Ben-Gurion University of the Negev is a university in Beersheba, Israel. (Wikipedia)


Eichler D.,Ben-Gurion University of the Negev
Astrophysical Journal | Year: 2014

Particle acceleration by cascading Alfvén wave turbulence was suggested as being responsible for energetic particle populations in 3He-rich solar flares. In particular, it was noted that the damping of the turbulence by the tail of the particle distribution in rigidity naturally leads to the dramatic enhancement of a pre-accelerated species - as 3He is posited to be - and superheavy elements. The subsequent detection of large enrichment of ultraheavies, relative to iron, has apparently confirmed this prediction, lending support to the original idea. It is shown here that this picture could be somewhat sharpened by progress in understanding the three-dimensional geometrical details of cascading Alfvén turbulence. The mechanism may be relevant in other astrophysical environments where the source of turbulence is nonmagnetic, such as clusters of galaxies. © 2014 The American Astronomical Society. All rights reserved. Source


Shore H.,Ben-Gurion University of the Negev
Quality and Reliability Engineering International | Year: 2014

Statistical process control monitoring of nonlinear relationships (profiles) has been the subject of much research recently. While attention is primarily given to the statistical aspects of the monitoring techniques, little effort has been devoted to developing a general modeling approach that would introduce 'uniformity of practice' in modeling nonlinear profiles (analogously with the three-sigma limits of Shewhart control charts). In this article, we use response modeling methodology (RMM) to demonstrate implementation of this approach to statistical process control monitoring of ecological relationships. Using 10 ecological models that have appeared in the literature, it is first shown that RMM models can replace (approximate) current ecological models with negligible loss in accuracy. Computer simulation is then used to demonstrate that estimated RMM models and estimated data generating ecological models achieve goodness-of-fit that is practically indistinguishable from one another. A regression-adjusted control scheme, based on control charts for the predicted median and for residuals variation, is developed and demonstrated for three types of 'out of control' scenarios. © 2013 John Wiley & Sons, Ltd. Source


Shabtay D.,Ben-Gurion University of the Negev
Journal of Scheduling | Year: 2011

This paper studies two closely related online-list scheduling problems of a set of n jobs with unit processing times on a set of m multipurpose machines. It is assumed that there are k different job types, where each job type can be processed on a unique subset of machines. In the classical definition of online-list scheduling, the scheduler has all the information about the next job to be scheduled in the list while there is uncertainty about all the other jobs in the list not yet scheduled. We extend this classical definition to include lookahead abilities, i.e., at each decision point, in addition to the information about the next job in the list, the scheduler has all the information about the next h jobs beyond the current one in the list. We show that for the problem of minimizing the makespan there exists an optimal (1-competitive) algorithm for the online problem when there are two job types. That is, the online algorithm gives the same minimal makespan as the optimal offline algorithm for any instance of the problem. Furthermore, we show that for more than two job types no such online algorithm exists. We also develop several dynamic programming algorithms to solve a stochastic version of the problem, where the probability distribution of the job types is known and the objective is to minimize the expected makespan. © 2011 Springer Science+Business Media, LLC. Source


Balatsky A.V.,Los Alamos National Laboratory | Nishijima M.,Kyoto University | Manassen Y.,Ben-Gurion University of the Negev
Advances in Physics | Year: 2012

Electron spin resonance-scanning tunneling microscopy (ESR-STM) is a rapidly developing surface-science technique that is sensitive to a single spin existing on or near a solid surface. The single spin is detected through elevated noise at the Larmor frequency that appears when the single spin participates in the tunneling process between the tip and the surface. In this review, the experimental and theoretical work performed to date on ESR-STM is surveyed. The remaining experimental problems that have to be solved, possible approaches to differentiating between different mechanisms, and the future of ESR-STM are discussed. PACS: 72.25.Dc Spin polarized transport in semiconductors, 72.70.+m Noise processes and phenomena, 73.20.Hb Impurity and defect levels; energy states of adsorbed species, 73.40.Gk Tunneling, 75.70.Rf Surface magnetism, 75.76.+j Spin transport effects, 76.30.-v Electron paramagnetic resonance and relaxation, 78.47.-p Spectroscopy of solid state dynamics. © 2012 Taylor and Francis. Source
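A hedged aside, not drawn from the abstract above: for a free electron spin, the Larmor frequency at which the excess tunneling-current noise is expected scales with the applied magnetic field B roughly as

    f_L = \frac{g_e \mu_B B}{h} \approx 28\ \mathrm{GHz} \times \left(\frac{B}{1\ \mathrm{T}}\right),

so a field of a few hundred gauss already places the signal in the sub-GHz radio-frequency range.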


Horisaki R.,Osaka University | Tanida J.,Osaka University | Stern A.,Ben-Gurion University of the Negev | Javidi B.,University of Connecticut
Optics Letters | Year: 2012

We propose a generalized framework for single-shot acquisition of multidimensional objects using compressive Fresnel holography. A multidimensional object with spatial, spectral, and polarimetric information is propagated with the Fresnel diffraction, and the propagated signal of each channel is observed by an image sensor with randomly arranged optical elements for filtering. The object data are reconstructed using a compressive sensing algorithm. This scheme is verified with numerical experiments. The proposed framework can be applied to imageries for spectrum, polarization, and so on. © 2012 Optical Society of America. Source


Panich A.M.,Ben-Gurion University of the Negev
Journal of Physics Condensed Matter | Year: 2014

We report on a proton magnetic resonance study of a powder nanodiamond sample. We show that 1H spin-lattice relaxation in this compound is mainly driven by the interaction of nuclear spins with unpaired electron spins of paramagnetic defects. We measured the spin-lattice relaxation time T1 by means of a saturation comb pulse sequence followed by dipolar dephasing, and plotted T1 as a function of the dephasing time in different external magnetic fields. The resulting dependence of T1 on the dephasing time provides a striking manifestation of the spin diffusion-assisted relaxation regime. The obtained experimental data allow us to estimate the spin diffusion coefficient and the spin diffusion barrier radius. © 2014 IOP Publishing Ltd. Source


Ashkenazy Y.,Ben-Gurion University of the Negev | Gildor H.,Hebrew University of Jerusalem
Journal of Physical Oceanography | Year: 2011

Insights into the probability distribution of ocean currents are important for various applications, such as estimating the chance of encountering extreme events, which may affect, for example, marine construction, and for estimating the energy that can be extracted from the ocean. In addition, for devising better parameterizations for submesoscale mixing, which present climate models cannot resolve, one should understand the velocity distribution and its relation to the various forcings of surface ocean circulation. Here, the authors investigate the probability distribution of surface currents measured by high-frequency radar. Their results show that the distribution of ocean current speeds can be approximated by a Weibull distribution. Moreover, the authors demonstrate the existence of spatial variations of the scale and shape parameters of the Weibull distribution over a relatively small region of only a few kilometers. They use a simple surface Ekman layer model to investigate this spatial variability. They find that, when forced by local winds, this model does not reproduce the observations. The addition of Gaussian noise to the zonal and meridional components of the bottom geostrophic currents has only a slight effect on the surface current distribution. However, noise added to the components of the local wind (mimicking wind gusts) has a much greater effect on the distribution of surface currents, suggesting that wind spatial and temporal variability underlies the observed spatial variability of the parameters of the Weibull distribution. © 2011 American Meteorological Society. Source
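A minimal illustration of the distributional claim above, not the authors' code: it fits a two-parameter Weibull distribution to a set of current speeds using SciPy, with synthetic placeholder values standing in for the HF-radar measurements.

    # Fit a Weibull distribution to surface current speeds (illustrative sketch).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    speeds = 0.25 * rng.weibull(1.6, size=5000)   # synthetic speeds in m/s

    # Fixing the location at zero yields the usual two-parameter (shape, scale) form.
    shape, loc, scale = stats.weibull_min.fit(speeds, floc=0)
    print(f"Weibull shape k = {shape:.2f}, scale lambda = {scale:.3f} m/s")

Spatial variability of the kind reported in the paper would show up as different fitted (shape, scale) pairs for speeds binned by radar grid cell.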


Rubin O.D.,Ben-Gurion University of the Negev | Babcock B.A.,Iowa State University
Energy | Year: 2013

The remarkable expansion of global wind power capacity in many countries brings forward several key economic questions regarding the performance of impacted electricity markets. Wind forecast uncertainties and rules that penalize scheduling deviations often discourage wind energy producers from participating in day-ahead electricity markets. Instead, wind energy commonly is priced either in the real-time balancing market or via a Feed-in-tariff (FIT) arrangement. In this paper we extend a recently developed equilibrium modeling approach to examine how the integration of wind energy impacts premiums in day-ahead electricity markets. In particular, we compare premiums as wind power capacity expands for different pricing mechanisms. By considering operational costs related to ramping conventional generators we are able to characterize the efficient level of premium needed to cope with the intermittent nature of wind. We find that the way that wind energy is priced is critical. We show that in the presence of imperfect competition pricing wind energy in the market increases firms' ability to extract oversized day-ahead premiums while in the case of market-independent FIT, market power is reduced as wind power capacity expands. © 2013 Elsevier Ltd. Source


Fishov I.,Ben-Gurion University of the Negev | Norris V.,University of Rouen
Current Opinion in Microbiology | Year: 2012

The bacterial membrane is characterized by a heterogeneous distribution of lipids and proteins and of higher level structures termed hyperstructures. The causes of this heterogeneity include lipid-lipid, protein-protein and protein-lipid interactions. The coupling of transcription, translation and insertion of nascent proteins into the membrane, termed transertion, creates large membrane domains that are proposed to be important in the regulation and execution of the cell cycle and in other functions. In describing membrane heterogeneity, we suggest here that transertion is a global regulator coupling metabolism to the cell cycle. © 2012 Elsevier Ltd. Source


Segev R.,Ben-Gurion University of the Negev
Mathematical Methods in the Applied Sciences | Year: 2013

A metric independent geometric framework for some fundamental objects of continuum mechanics is presented. In the geometric setting of general differentiable manifolds, balance principles for extensive properties are formulated and Cauchy's theorem for fluxes is proved. Fluxes in an n-dimensional space are represented as differential (n - 1)-forms. In an analogous formulation of stress theory, a distinction is made between the traction stress, enabling the evaluation of the traction on the boundaries of the various regions, and the variational stress, which acts on the derivative of a virtual velocity field to produce the virtual power density. The relation between the two stress fields is examined as well as the resulting differential balance law. As an application, metric-invariant aspects of electromagnetic theory are presented within the framework of the foregoing flux and stress theory. Copyright © 2012 John Wiley & Sons, Ltd. Source


Milo R.,Ben-Gurion University of the Negev
Expert Opinion on Pharmacotherapy | Year: 2015

Introduction: Multiple sclerosis (MS) is a chronic inflammatory disease of the CNS of a putative autoimmune origin characterized by neurologic dysfunction disseminated in space and time due to demyelination and axonal loss that results in progressive disability. Recent advances in understanding the immune pathogenesis of the disease resulted in the introduction of numerous effective immunomodulatory drugs having diverse mechanisms of action, modes of administration and risk-benefit profiles. This results in more complex albeit more promising treatment selection and choices. Areas covered: The epidemiology, clinical features, pathogenesis and diagnosis of the disease are discussed. The mode of action and main characteristics of current immunomodulatory drugs for MS and their place in the therapeutic algorithm of the disease based on evidence from clinical trials are described. Speculation on new paradigms, treatment goals and outcome measures aimed at improving the landscape of MS treatment is presented. Expert opinion: Multiple disease, drug and patient-related factors should be taken into consideration when selecting the appropriate drug and treatment strategy for the appropriate patient, thus paving the road for personalized medicine in MS. © 2014 Informa UK, Ltd. Source


Abdulhalim I.,Ben-Gurion University of the Negev
Applied Physics Letters | Year: 2012

Short pitch deformable helix ferroelectric liquid crystals are shown to become optically isotropic at a certain critical tilt angle, with an average dielectric constant (ε1 + ε2 + ε3)/3. For a uniaxial medium, this angle equals θd = cos⁻¹(1/√3), and it depends weakly on the local biaxial anisotropy. The degeneracy is removed upon the application of a small electric field and the medium becomes optically biaxial. High-contrast, fast electro-optic modulation and display devices can be built using such materials even with multi-domain alignment. For tilt angles very near the degeneracy angle, ultrasensitivity of the eigenaxes rotation to the applied field is found. © 2012 American Institute of Physics. Source
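For reference (a numeric evaluation only, not additional content from the paper), the critical tilt angle quoted above is the standard magic angle at which cos²θ = 1/3:

    \theta_d = \cos^{-1}\!\left(\frac{1}{\sqrt{3}}\right) \approx 54.74^{\circ}.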


Kozuch S.,Ben-Gurion University of the Negev
Physical Chemistry Chemical Physics | Year: 2015

C(CH3)5+, a pentacoordinate carbocation, was recently described as a "fleeting" molecule, that is, stable only at low temperatures. Herein it is shown using theoretical methods that, owing to carbon tunneling, the molecule is not persistent even at 0 K. One possible way to lengthen its lifespan is by deuterium substitution, in spite of the fact that no bond to hydrogen is broken. This journal is © the Owner Societies 2015. Source


Gutman E.M.,Ben-Gurion University of the Negev
Journal of Solid State Electrochemistry | Year: 2014

The main problem in electrocapillarity of solid electrodes is the lack of clarity in determining the surface stress and basic equations. Within the framework of the Gibbs concept of a geometrical dividing surface, the “surface stress” cannot be defined, because methods of continuum mechanics can be applied to a physical surface layer (of finite thickness) but not to a mathematical surface. Gibbs never used the concept of surface stress, introducing only “surface tension” for a liquid electrode and a “closely related quantity” for a solid electrode. Revisiting the derivation of the Gibbs adsorption equation, we prove its applicability to solid surfaces without the limiting requirement of a constant state of strain, which was undeservedly interpreted by Eriksson as a shortcoming of the Gibbs theory and prompted the search for other approaches to the surface stress problem. A critical analysis shows that the attempts (Shuttleworth, Eriksson, Couchman, Gokhstein, Weissmüller, etc.) to create a thermodynamic definition of the surface stress (as well as the formulation of fundamental thermodynamic equations and Maxwell relations operating with surface stresses) contain mathematical defects. It is shown that confusing interpretations of some of Gibbs’ concepts encountered in the literature have led to “modifications” of the Lippmann equation based on a critical error in the Gibbs–Duhem relation due to the occurrence of an extensive variable, which is inadmissible. The famous Lippmann equation should not be modified, and it remains a unique electrocapillary relation applicable to liquid and solid electrodes. © 2014, Springer-Verlag Berlin Heidelberg. Source


Sukoriansky S.,Ben-Gurion University of the Negev | Galperin B.,University of South Florida
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2013

The buoyancy subrange of stably stratified turbulence is defined as an intermediate range of scales larger than those in the inertial subrange. This subrange encompasses the crossover from internal gravity waves (IGWs) to small-scale turbulence. The energy exchange between the waves and small-scale turbulence is communicated across this subrange. At the same time, it features progressive anisotropization of flow characteristics on increasing spatial scales. Despite many observational and computational studies of the buoyancy subrange, its theoretical understanding has been lagging. This article presents an investigation of the buoyancy subrange using the quasi-normal scale elimination (QNSE) theory of turbulence. This spectral theory uses a recursive procedure of small-scale modes elimination based upon a quasi-normal mapping of the velocity and temperature fields using the Langevin equations. In the limit of weak stable stratification, the theory becomes completely analytical and yields simple expressions for horizontal and vertical eddy viscosities and eddy diffusivities. In addition, the theory provides expressions for various one-dimensional spectra that quantify turbulence anisotropization. The theory reveals how the dispersion relation for IGWs is modified by turbulence, thus alleviating many unique waves' features. Predictions of the QNSE theory for the buoyancy subrange are shown to agree well with various data. © 2012 The Author(s) Published by the Royal Society. Source


Granot R.,Ben-Gurion University of the Negev | Dyment J.,University Paris Diderot
Earth and Planetary Science Letters | Year: 2015

The separation of South America from Africa during the Cretaceous is poorly understood due to the long period of stable polarity of the geomagnetic field, the Cretaceous Normal Superchron (CNS, which lasted between ~121 and 83.6 Myr ago). We present a new identification of magnetic anomalies located within the southern South Atlantic magnetic quiet zones that have arisen due to past variations in the strength of the dipolar geomagnetic field. Using these anomalies, together with fracture zone locations, we calculate the first set of magnetic anomaly-based finite rotation parameters for South America and Africa during that period. The kinematic solutions are generally consistent with fracture zone traces and magnetic anomalies outside the area used to construct them. The rotations indicate that seafloor spreading rates increased steadily throughout most of the Cretaceous and decreased sharply at around 80 Myr ago. A change in plate motion took place in the middle of the superchron, roughly 100 Myr ago, around the time of the final breakup (i.e., separation of the continental-oceanic boundary in the Equatorial Atlantic). A prominent misfit between the calculated synthetic flowlines (older than Anomaly Q1) and the fracture zones straddling the African Plate in the central South Atlantic could only be explained by a combination of seafloor asymmetry and internal dextral motion (<100 km) within South America, west of the Rio Grande fracture zone. This process lasted until ~92 Myr ago, after which both Africa and South America (south of the equator) behaved rigidly. The clearing of the continental-oceanic boundaries within the Equatorial Atlantic Gateway was probably completed by ~95 Myr ago. The clearing was followed by a progressive widening and deepening of the passageway, leading to the emergence of north-south flow of intermediate and deep water, which might have triggered the global cooling of bottom water and the end of the Cretaceous greenhouse period. © 2015 Elsevier B.V. Source


Jiang J.-H.,Weizmann Institute of Science | Entin-Wohlman O.,Ben-Gurion University of the Negev | Imry Y.,Weizmann Institute of Science
Physical Review B - Condensed Matter and Materials Physics | Year: 2012

A two-site nanostructure (e.g., a molecule) bridging two conducting leads and connected to a phonon bath is considered. The two relevant levels closest to the Fermi energy are each connected to its lead. The leads have slightly different temperatures and chemical potentials, and the nanostructure is also coupled to a thermal (third) phonon bath. The 3×3 linear transport (Onsager) matrix is evaluated, along with the ensuing figure of merit, and found to be very favorable for thermoelectric energy conversion. © 2012 American Physical Society. Source


Avieli N.,Ben-Gurion University of the Negev
Food, Culture and Society | Year: 2013

Barbequing meat is the main activity for most Israeli Jews celebrating the nation's Independence Day. It is a ritual without which the festival is incomplete, and beyond which not much is done. Identification with the nation-state is embodied through the consumption of meat that represents processed and refined chunks of Israeliness. But what is the meaning of this ritual and what does al ha'esh ([meat] over fire) stand for? Anthropological theory tends to discuss roasted meat as an extreme expression of power, potency and masculinity, and highlights its relationship to territory and to the modern nation-state. In this article, based on ethnographic research conducted from 2002 to 2009, I stress the Israeli features of this food event. I analyze the two main features of Israeli Independence Day barbeques, handling the meat and managing the space, and argue that these features expose the ambivalent sense of power and weakness, and of stability and influx, underlying Israeli barbeques and, perhaps, Israeli society at large. © Association for the Study of Food and Society 2013. Source


Yochelis A.,Ben-Gurion University of the Negev
Physical Chemistry Chemical Physics | Year: 2014

Intense investigations of room temperature ionic liquids have revealed not only their advantages in a wide range of technological applications but also triggered scientific debates about charge distribution properties within the bulk and near the solid-liquid interfaces. While many observations report on an alternating charge layering (i.e., spatially extended decaying charge density oscillations), there are recent conjectures that ionic liquids bear similarity to dilute electrolytes. Using a modified Poisson-Nernst-Planck model for ionic liquids (after Bazant et al., Phys. Rev. Lett. 2011, 106, 046102), we show that both behaviors are fundamental properties of ionic liquids. The transition from the non-monotonic (oscillatory) to the monotonic structure of electrical diffuse layers appears to non-trivially depend on ionic density in the bulk, electrostatic correlation length, confinement and surface properties. Consequently, the results not only reconcile the empirical results but also provide a powerful methodology to gain insights into the nonlinear aspects of concentrated electrolytes. This journal is © the Owner Societies 2014. Source
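For orientation, a hedged note not taken from the abstract: the Bazant et al. model cited above augments the Poisson equation with a term governed by an electrostatic correlation length \ell_c, commonly written as

    \varepsilon\left(\ell_c^{2}\nabla^{2} - 1\right)\nabla^{2}\phi = \rho,

which recovers the standard Poisson-Nernst-Planck description as \ell_c \to 0 and produces the overscreening (alternating charge layering) behavior at larger \ell_c.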


Cruz-Cabeza A.J.,University of Amsterdam | Bernstein J.,Abu Dhabi University | Bernstein J.,Ben-Gurion University of the Negev
Chemical Reviews | Year: 2014

An unambiguous definition of conformational change and conformational polymorphism, as well as a quantitative basis for the likelihood of its appearance, is reviewed. Conformational adjustment and conformational change are different phenomena. Two polymorphs are conformational polymorphs only if their conformations are related by conformational change. Conformational change requires a change of gas-phase conformer and, hence, crossing of an energy barrier. Energy differences associated with conformational variations of small organic molecules in different polymorphs are usually small. Higher-energy conformations in crystals are rare but possible for molecules that are able to break an intramolecular interaction in favor of a strong intermolecular interaction and molecules that crystallize in special symmetry positions. Polymorphic molecules containing R-bonds that are prone to both change and adjust are likely to display a rich polymorphic landscape. Source


Kissinger M.,Ben-Gurion University of the Negev
Ecological Indicators | Year: 2013

In recent years an increasing number of social and environmental signals has raised a discussion on issues of food security, exploring the extent to which the current prevailing food system is sustainable. It follows that the development of food sustainability indicators is essential. One such indicator is Ecological Footprint Analysis. This paper presents and analyses four approaches to ecological footprint calculation - a conventional global hectare approach, a domestic versus international approach, a place-oriented approach, and a place-oriented global hectares approach - and examines each for the case of Canada. The research documented food consumption, major sources of supply, and the land and energy inputs required to supply Canadians with a wide variety of food products over a year. Overall, the research reveals an agricultural and food footprint in the range of 63,312,000-64,600,000 gha, or the average equivalent of 2.0 gha per capita, integrating agricultural land (1.46 gha per capita) and energy land (0.54 gha per capita). It then highlights unique merits and shortcomings of each approach and argues that integrating all four can generate a better, more comprehensive indicator of sustainability, one that can support sustainable resource supply management and guide individuals' food and dietary choices. © 2012 Elsevier Ltd. All rights reserved. Source


Abu-Shakra M.,Ben-Gurion University of the Negev
Israel Medical Association Journal | Year: 2016

Physical, mental and social well-being are important outcomes in patients with chronic rheumatic diseases, including systemic lupus erythematosus (SLE). The MOS SF-36 and the WHO QoL Bref are appropriate for assessing quality of life (QoL) in patients with SLE. The QoL of patients with SLE is impaired compared with that of controls. Fibromyalgia adversely affects the QoL of SLE patients. Women with SLE had significantly lower scores on subscales of the sense of coherence (SoC) compared with matched controls. This reduced SoC in SLE women represents impaired adaptive coping and is independently associated with reduced QoL in women with SLE. Depression and anxiety are common among SLE patients, and the frequency is similar to that in patients with rheumatoid arthritis. A reciprocal longitudinal relationship between depression and illness intrusiveness was found in patients with SLE. Disease activity and damage are not associated with depression. The subjective experience, not the illness per se, causes depression. © 2016, Israel Medical Association. All rights reserved. Source


Shani G.,Ben-Gurion University of the Negev
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics | Year: 2010

Recent scaling up of partially observable Markov decision process solvers toward realistic applications is largely due to point-based methods, which quickly provide approximate solutions for midsized problems. New multicore machines offer an opportunity to scale up to larger domains. These machines support parallel execution and can speed up existing algorithms considerably. In this paper, we evaluate several ways in which point-based algorithms can be adapted to parallel computing. We overview the challenges and opportunities and present experimental results, providing evidence for the usability of our suggestions. © 2006 IEEE. Source


Bashiri A.,Ben-Gurion University of the Negev
Harefuah | Year: 2011

Women with recurrent spontaneous pregnancy loss (RPL) are in a complex scenario, due to the work-up evaluation that is negative in about 50% of the cases, and the stress associated with the outcome of the next pregnancy. Therefore, these patients experience psychological conditions including anxiety and depression. The basic examination includes: parental karyotype, anatomical evaluation of the uterus, hormonal profile (including TSH, prolactin level and glucose) and APLA evaluation. In recent years, several changes have been reported in the recurrent pregnancy loss definition, the preferred mode of uterine cavity evaluation and the need for anti-thrombotic treatment in idiopathic RPL. In addition, new concepts in the pathophysiology and etiology of RPL have been described, as well as the extent of the evaluation. It is important to emphasize that women with RPL in general, and those with idiopathic RPL specifically, will experience about a 75% live birth rate in the next pregnancy when they are followed in a specialized RPL clinic. In this review, we discuss these issues according to recent studies, including the evaluation and treatment implemented in the RPL clinic at the Soroka University Medical Center. Source


Ganel T.,Ben-Gurion University of the Negev | Goodale M.A.,University of Western Ontario
Experimental Brain Research | Year: 2014

Garner's speeded classification task has been used as an effective tool to probe holistic processing of object shape. This is achieved by comparing classification performance of a given object dimension between two experimental conditions. Worse performance in a "filtering" condition in which a second, irrelevant dimension of the same object varies on a trial-to-trial basis, compared to a "baseline" condition in which the irrelevant dimension is held constant, is labeled Garner interference, and indicates that the two dimensions are processed in a holistic manner. About a decade ago, we used Garner's task to provide evidence for different frames of processing mediating action and perception. Unlike perceptual estimations, visually guided grasping showed no Garner interference when subjects were asked to reach out and grasp an object along a given dimension. In other words, slower reaction times were observed in the filtering compared to the baseline condition only for perceptual estimates but not for grasping. In two experiments, we extend these findings to kinematic measures beyond simple reaction times. The results showed that Garner interference is also expressed in the variability of the response, with more variable within-subject performance in the filtering compared to the baseline condition for perceptual estimates but not for grasping. These findings provide converging evidence for the idea that, unlike perception, which processes objects holistically, visually guided action is performed in an analytic manner. © 2014 Springer-Verlag. Source


Shpungin H.,University of Calgary | Segal M.,Ben-Gurion University of the Negev
IEEE/ACM Transactions on Networking | Year: 2010

In this paper, we study asymmetric power assignments that induce a low-energy k-strongly connected communication graph with spanner properties. We address two spanner models: energy and distance. The former serves as an indicator for the energy consumed in a message propagation between two nodes, while the latter reflects the geographic properties of routing in the induced communication graph. We consider a random wireless ad hoc network with |V| = n nodes distributed uniformly and independently in a unit square. For k ∈ {1, 2}, we propose several power assignments that obtain a good bicriteria approximation on the total cost and stretch factor under the two models. For k > 2, we analyze a power assignment developed by Carmi et al. and derive some interesting bounds on the stretch factor for both models as well. We also describe how to compute all the power assignments distributively, and we provide simulation results. To the best of our knowledge, these are the first provable theoretical bounds for low-cost spanners in wireless ad hoc networks. © 1993-2012 IEEE. Source


Ben-Bassat T.,Sami Shamoon College of Engineering | Shinar D.,Ben-Gurion University of the Negev
Accident Analysis and Prevention | Year: 2011

Roadway design is one of the most significant factors that affect driving behavior and perceived safety. The current study tests the combined effects of three roadway design elements - shoulder width, guardrail existence and roadway geometry (curvature) - on objective driving measures (speed and lane position) and subjective measures (perceived safe driving speed and estimated road safety). Twenty-two drivers participated in an experiment with a driving simulator. In the first part, objective driving data were collected; in the second part, subjective paper-and-pencil evaluations were requested of the perceived safety of 30 different scenarios that were previously experienced in the simulator. The scenarios consisted of the various combinations of the three roadway design elements. The results showed a significant effect of roadway geometry on both objective and subjective measures. Shoulder width had a significant effect on actual speed, on lane position, and on perceived safe driving speed, but only when a guardrail was present. These findings illustrate the perceptual role of a guardrail in defining the perceived safety margins that various shoulder widths provide. When a guardrail is absent, the width of the shoulder loses much of its benefits and effects on driving behavior. The results also demonstrate that roadway geometry can be used to reduce driving speeds, but at the same time it can have a negative effect on maintaining a stable lane position in sharp curves. Thus, controlling the width of road shoulders and the placement of guardrails seems to be a safer approach to speed and lane position control. © 2011 Elsevier Ltd. Source


Armon R.,Ben-Gurion University of the Negev
Journal of the History of Biology | Year: 2012

The discovery by Hans Spemann of the "organizer" tissue and its ability to induce the formation of the amphibian embryo's neural tube inspired leading embryologists to attempt to elucidate embryonic induction's underlying mechanism. Joseph Needham, who during the 1930s conducted research in biochemical embryology, proposed that embryonic induction is mediated by a specific chemical entity embedded in the inducing tissue, surmising that chemical to be a hormone of sterol-like structure. Together with embryologist Conrad H. Waddington, he conducted research aimed at the isolation and functional characterization of the underlying agent. As historians clearly pointed out, embryologists came to question Needham's biochemical approach; he failed to locate the hormone he sought and eventually abandoned his quest. Yet, this study finds that the difficulties he ran into resulted primarily from the limited conditions for conducting his experiments at his institute. In addition, Needham's research reflected the interests of leading biochemists in hormone and cancer research, because it offered novel theoretical models and experimental methods for engaging with the function of the hormones and carcinogens they isolated. Needham and Waddington were deterred neither by the mounting challenges nor by the limited experimental infrastructure. Like their colleagues in hormone and cancer research, they anticipated difficulties in attempting to establish causal links between complex biological phenomena and simple chemical triggering. © 2010 Springer Science+Business Media B.V. Source


Palii A.,Moldova Academy of Sciences | Tsukerblat B.,Ben-Gurion University of the Negev | Klokishner S.,Moldova Academy of Sciences | Dunbar K.R.,Texas A&M University | And 2 more authors.
Chemical Society Reviews | Year: 2011

In this critical review we review the problem of exchange interactions in polynuclear metal complexes involving orbitally degenerate metal ions. The key feature of these systems is that, in general, they carry an unquenched orbital angular momentum that manifests itself in all their magnetic properties. Thus, interest in degenerate systems involves fundamental problems related to basic models in magnetism. In particular, the conventional Heisenberg-Dirac-Van Vleck model becomes inapplicable even as an approximation. In the first part we attempt to answer two key questions, namely which theoretical tools are to be used in the case of degeneracy, and how these tools can be employed. We demonstrate that the exchange interaction between orbitally degenerate metal ions can be described by the so-called orbitally-dependent exchange Hamiltonian. This approach has been shown to reveal an anomalously strong magnetic anisotropy that can be considered as the main physical manifestation of the unquenched orbital angular momentum in magnetic systems. Along with the exchange coupling, a set of other interactions (such as crystal field effects, spin-orbit and Zeeman coupling), which are specific for the degenerate systems, need to be considered. All these features will be discussed in detail using a pseudo-spin-1/2 Hamiltonian approach. In the second part, the described theoretical background will be used to account for the magnetic properties of several magnetic metal clusters and low-dimensional systems: (i) the dinuclear face-sharing unit [Ti2Cl9]3-, which exhibits a large magnetic anisotropy; (ii) the rare-earth compounds Cs3Yb2Cl9 and Cs3Yb2Br9, which, surprisingly, exhibit a full magnetic isotropy; (iii) a zig-zag CoII chain exhibiting an unusual combination of single-chain magnet behavior and antiferromagnetic exchange coupling; (iv) a trigonal bipyramidal Ni3Os2 complex; (v) various CoII clusters encapsulated by polyoxometalate ligands. In the last two examples a pseudo-spin-1/2 Hamiltonian approach is applied to account for the presence of exchange anisotropy (150 references). © The Royal Society of Chemistry 2011. Source


Abraham I.,Microsoft | Neiman O.,Ben-Gurion University of the Negev
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2012

We prove that any graph G=(V,E) with n points and m edges has a spanning tree T such that Σ_{(u,v)∈E(G)} d_T(u,v) = O(m log n log log n). Moreover, such a tree can be found in time O(m log n log log n). Our result is obtained using a new petal-decomposition approach which guarantees that the radius of each cluster in the tree is at most 4 times the radius of the induced subgraph of the cluster in the original graph. © 2012 ACM. Source
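A minimal sketch of the quantity being bounded (not the authors' petal-decomposition algorithm): given any graph and any spanning tree of it, the sum Σ_{(u,v)∈E(G)} d_T(u,v) can be computed directly; networkx and the example graph are assumptions used for illustration only.

    # Compute the total stretch of a spanning tree T of a graph G.
    import networkx as nx

    G = nx.random_regular_graph(3, 20, seed=1)   # placeholder input graph
    T = nx.minimum_spanning_tree(G)              # any spanning tree works for the sketch

    total = sum(nx.shortest_path_length(T, u, v) for u, v in G.edges())
    print("sum of d_T(u, v) over E(G):", total)

The theorem above says that for a suitably chosen tree this total is O(m log n log log n); a tree picked arbitrarily, as in the sketch, carries no such guarantee.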


Deichmann U.,Ben-Gurion University of the Negev
Developmental Biology | Year: 2011

Three early 20th-century attempts at unifying separate areas of biology, in particular development, genetics, physiology, and evolution, are compared in regard to their success and fruitfulness for further research: Jacques Loeb's reductionist project of unifying approaches by physico-chemical explanations; Richard Goldschmidt's anti-reductionist attempts to unify by integration; and Sewall Wright's combination of reductionist research and vision of hierarchical genetic systems. Loeb's program, demanding that all aspects of biology, including evolution, be studied by the methods of the experimental sciences, proved highly successful and indispensable for higher level investigations, even though evolutionary change and properties of biological systems up to now cannot be fully explained on the molecular level alone. Goldschmidt has been appraised as a pioneer of physiological and developmental genetics and of a new evolutionary synthesis which transcended neo-Darwinism. However, this study concludes that his anti-reductionist attempts to integrate genetics, development and evolution have to be regarded as failures or dead ends. His grand speculations were based on the one hand on concepts and experimental systems that were too vague to stimulate further research, and on the other on experiments which in their core parts turned out not to be reproducible. In contrast, Sewall Wright, apart from being one of the architects of the neo-Darwinian synthesis of the 1930s, opened up new paths of testable quantitative developmental genetic investigations. He placed his research within a framework of logical reasoning, which resulted in the farsighted speculation that examinations of biological systems should be related to the regulation of hierarchical genetic subsystems, possibly providing a mechanism for development and evolution. I argue that his suggestion of basing the study of systems on clearly defined properties of the components has proved superior to Goldschmidt's approach of studying systems as a whole, and that attempts to integrate different fields at a too early stage may prove futile or worse. © 2011 Elsevier Inc. Source


Averbukh M.,Ben-Gurion University of the Negev | Lineykin S.,Ariel University | Kuperman A.,Ariel University
Progress in Photovoltaics: Research and Applications | Year: 2013

The paper presents a simple approach to deriving I-V curves of photovoltaic panels and small arrays for arbitrary environmental conditions on the basis of three points of a single operating curve and the short-circuit current temperature coefficient only. The proposed method does not employ fitting of any type and is solely based on a numerical solution of a system of transcendental equations. The equations are expressed in a dimensionless form, simplifying both the solution and the representation of the photovoltaic panel parameters. The solution is used to find the values of the normalized equivalent circuit elements for the available data and then perform an appropriate adjustment to obtain the operating curves for arbitrary conditions. The proposed method was applied to monocrystalline and polycrystalline commercial solar panels and was compared with both manufacturer-provided and experimentally measured operating curves to analyze the approach's applicability and accuracy. Copyright © 2012 John Wiley & Sons, Ltd. Source
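A hedged illustration of the kind of transcendental relation involved, not the paper's dimensionless formulation: in the generic single-diode model the panel current is implicit in the voltage, so each point of the I-V curve is obtained by a numerical root search. All parameter values below are placeholders.

    # Solve the implicit single-diode equation for current at a given voltage.
    import numpy as np
    from scipy.optimize import brentq

    I_ph, I_0 = 8.0, 1e-9        # photocurrent and diode saturation current [A]
    R_s, R_sh = 0.2, 300.0       # series and shunt resistances [ohm]
    n_Vt = 1.3 * 0.0259 * 72     # ideality factor x thermal voltage x cell count [V]

    def residual(I, V):
        # I = I_ph - I_0*(exp((V + I*R_s)/n_Vt) - 1) - (V + I*R_s)/R_sh
        return I_ph - I_0 * (np.exp((V + I * R_s) / n_Vt) - 1.0) \
               - (V + I * R_s) / R_sh - I

    def current_at(V):
        return brentq(lambda I: residual(I, V), -1.0, I_ph + 1.0)

    for V in (0.0, 20.0, 35.0):
        print(f"V = {V:5.1f} V  ->  I = {current_at(V):6.3f} A")

Repeating the root search over a voltage sweep traces the full I-V curve for one set of environmental conditions.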


Kidron G.J.,Hebrew University of Jerusalem | Vonshak A.,Ben-Gurion University of the Negev
Geoderma | Year: 2012

Previous work at the Hallamish dunefield in the western Negev Desert defined and mapped 5 mature types of microbiotic crusts (MCs). Crust definition and sampling were based on 4 × 4 m pixels. Yet within these pixels, microhabitats with different crust types could be noted. This was also the case within crust type A, which extends over the south-facing slope and the sandy interdune. There, close examination mainly at <0.5 × 0.5 m pixels revealed a high variability in crust types, as was also the case at fine-grained playa surfaces scattered within the sandy interdune. These microhabitats belonged to two groups: habitats that received water from an additional source (runoff or subsurface flow), i.e., water-enriched habitats (WE), and habitats having a better water regime due to shading (SH), such as by shrubs. Daylight wetness duration and crust biomass were monitored during 1991-1994 in 12 habitats. WE habitats, which exhibited the highest biomass, also exhibited longer wetness duration in comparison to SH habitats. Both groups exhibited higher biomass and wetness duration than the control. When the relation between daylight wetness duration and the chlorophyll content of the crust was compared, a positive linear relation with a good correlation (r² = 0.74) was found. The findings indicate that crust biomass may be used as a biomarker for surface wetness duration and may thus point to habitats subjected to particular hydrological processes such as water ponding and subsurface flow. They may also point to soil patches having high moisture content and therefore long wetness duration. Higher moisture content that lasted for a longer duration was indeed recorded during preliminary measurements carried out at 0-40 cm depth in soil patches subjected to subsurface flow (as indicated by their MC) during the end of the winter seasons of 1994/95 and 1996/97. The findings may attest to differential rates of pedogenesis, thus highlighting the use of MCs as possible biomarkers for small soil habitats with variable pedogenesis. © 2012 Elsevier B.V. Source


As Tversky and Kahneman (1971) noted, effect sizes in smaller samples are inherently unstable. Donellan et al. (2014), in a large sample, show that the relation between trait loneliness and warmth extraction through bathing activities is much smaller than in our initial smaller samples. We report further replications of our original findings in samples from India, Israel, and North America, again showing significant correlations between loneliness and physical warmth extraction from bathing and showering, with the overall effect being reliable across all three samples, although, consistent with Donellan et al.'s conclusions, smaller than in our original studies. We also respond to criticisms of the original data analyses, noting that removal of the problematic 'bathing frequency' item from the warmth index did not substantially change the results and thus our conclusions from them. We also note that in the 2 studies in which Donellan et al. attempted to most closely follow our original procedure, they did replicate our original results, but not in the other 7 studies in which considerable procedural changes were made. As our new replications reveal variability in bathing and showering preferences and habits around the world, we recommend the inclusion of a wider sample of cultures beyond North America in future research. This research should also focus not only on the narrower question of how loneliness relates to bathing activities but also on the broader relation between feelings of social coldness (e.g., after rejection or exclusion) and the seeking of physical warmth (e.g., warm food and drink, thermostat settings). © 2014 American Psychological Association. Source


Gelbstein Y.,Ben-Gurion University of the Negev
Journal of Applied Physics | Year: 2012

Multi-phase thermoelectric materials are widely investigated due to their high potential for thermoelectric efficiency enhancement as a result of lattice thermal conductivity reduction by phonon scattering from the involved interfaces. Yet, the electronic mutual influence of the involved phases on the thermoelectric efficiency has to date usually been neglected in designing novel multi-phase thermoelectric materials. In the current research, the influence of distribution and morphology alignment on the electronic thermoelectric properties of three-phase composite materials was analyzed. Such an analysis, based on defining interaction volumes of the relevant thermoelectric properties, for any distribution and relative amount of any three phases with known individual properties, can be utilized for thermoelectric efficiency enhancement by intentional alignment of the phases along the preferred thermoelectric direction. For the p-type Gex(SnyPb1-y)1-xTe thermoelectric alloys, following a phase separation reaction, it was found that a serial relative alignment of the involved phases is optimal from the thermoelectric point of view. © 2012 American Institute of Physics. Source


Cohen S.,Ben-Gurion University of the Negev
American Journal of Bioethics | Year: 2013

Libertarian paternalism's notion of "nudging" refers to steering individual decision making so as to make choosers better off without breaching their free choice. If successful, this may offer an ideal synthesis between the duty to respect patient autonomy and that of beneficence, which at times favors paternalistic influence. A growing body of literature attempts to assess the merits of nudging in health care. However, this literature deals almost exclusively with health policy, while the question of the potential benefit of nudging for the practice of informed consent has escaped systematic analysis. This article focuses on this question. While it concedes that nudging could amount to improper exploitation of cognitive weaknesses, it defends the practice of nudging in a wide range of other conditions. The conclusion is that, when ethically legitimate, nudging offers an important new paradigm for informed consent, with a special potential to overcome the classical dilemma between paternalistic beneficence and respect for autonomy. © 2013 Copyright Taylor and Francis Group, LLC. Source


Melamed T.,Ben-Gurion University of the Negev
Journal of the Optical Society of America A: Optics and Image Science, and Vision | Year: 2011

The present contribution is concerned with applying a beam-type expansion to a planar aperture time-harmonic electromagnetic field distribution in which the propagating elements, the electromagnetic beam-type wave objects, are decomposed into transverse electric (TE) and transverse magnetic (TM) field constituents. This procedure is essential for applying Maxwell's boundary conditions for solving different scattering problems. The propagating field is described as a discrete superposition of tilted and shifted TE and TM electromagnetic beams over the frame-based spatial-directional expansion lattice. These vector wave objects are evaluated either by applying differential operators to scalar beam propagators, or by using plane-wave spectral representations. Explicit asymptotic expressions for scalar, as well as for electromagnetic, Gaussian beam propagators are presented as well. ©2011 Optical Society of America. Source


Fink L.,Ben-Gurion University of the Negev
Journal of Strategic Information Systems | Year: 2010

This paper explores the value of a configurational approach to IT outsourcing by developing a framework for IT outsourcing effectiveness. Taking a process view of outsourcing and drawing on the relational view of the firm, the framework identifies four high-level dimensions that correspond to an organization's resource position in four key areas: organizational IT value position, organizational IT asset position, relational asset position, and relational capability position. A novel structured method is used to identify the congruent outsourcing configurations within the range of possible outsourcing configurations based on the interdependencies among the four dimensions. Three congruent outsourcing configurations, designated asset dependence, relational dependence, and independence, emerge from this analysis. Drawing on the assumptions of configurational theory about organizational change and taking a dynamic perspective, the framework is extended to describe how organizations transition between outsourcing configurations over time. This paper demonstrates how a configurational approach can address three potential problems of the outsourcing literature and advance outsourcing research. © 2010 Elsevier B.V. All rights reserved. Source


Argaman N.,Nuclear Research Center - Negev | Band Y.B.,Ben-Gurion University of the Negev
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

The thermodynamic approach to density-functional theory (DFT) is used to derive a versatile theoretical framework for the treatment of finite-temperature (and in the limit, zero-temperature) Bose-Einstein condensates (BECs). The simplest application of this framework, using the overall density of bosons alone, would yield the DFT of Nunes. It is argued that a significant improvement in accuracy may be obtained by using additional density fields: the condensate amplitude and the anomalous density. Thus, two advanced schemes are suggested, one corresponding to a generalized two-fluid model of condensate systems, and another scheme, which explicitly accounts for anomalous density contributions and anomalous effective potentials. The latter reduces to the Hartree-Fock-Bogoliubov approach in the limit of weak interactions. For stronger interactions, a local-density approximation is suggested, but its implementation requires accurate data for the thermodynamic properties of uniform interacting BEC systems, including fictitious perturbed states of such systems. Provided that such data become available (e.g., from quantum Monte Carlo computation), DFT can be used to obtain high-accuracy theoretical results for the equilibrium states of BECs of various geometries and external potentials. © 2011 American Physical Society. Source


Keshet U.,Ben-Gurion University of the Negev | Balberg S.,Hebrew University of Jerusalem
Physical Review Letters | Year: 2012

The explosion of a core-collapse supernova can be approximated by the breakdown of steady-state solutions for accretion onto a proto-neutron star (PNS). We analytically show that as the neutrino luminosity exceeds a critical value Lc, the neutrinosphere pressure exceeds the hydrostatic limit even for an optimal shock radius R. This yields Lc ∝ M²T² (with logarithmic corrections) and R ∝ M/T, in agreement with numerical results, where M and T are the PNS mass and neutrino temperature, respectively. The near-critical flow can be approximated as a ballistic shell on top of an isothermal layer. © 2012 American Physical Society. Source


Regev O.,Ben-Gurion University of the Negev | Grunlan J.C.,Texas A&M University
ACS Applied Materials and Interfaces | Year: 2014

In an effort to reduce the flammability of polyurethane foam, a thin film of renewable inorganic nanoparticles (i.e., anionic vermiculite [VMT] and cationic boehmite [BMT]) was deposited on polyurethane foam via layer-by-layer (LbL) assembly. One, two, and three bilayers (BL) of BMT-VMT resulted in foam with retained shape after being exposed to a butane flame for 10 s, while uncoated foam was completely consumed. Cone calorimetry confirmed that the coated foam exhibited a 55% reduction in peak heat release rate with only a single bilayer deposited. Moreover, this protective nanocoating reduced total smoke release by 50% relative to untreated foam. This study revealed that 1 BL, adding just 4.5 wt % to PU foam, is an effective and conformal flame retardant coating. These results demonstrate one of the most efficient and renewable nanocoatings prepared using LbL assembly, taking this technology another step closer to commercial viability. © 2014 American Chemical Society. Source


Maimon N.,Ben-Gurion University of the Negev | Hanly P.J.,University of Calgary
Journal of Clinical Sleep Medicine | Year: 2010

Study Objectives: It is commonly believed that louder snoring is associated with more severe obstructive sleep apnea (OSA). We evaluated the association between snoring intensity and the severity of OSA to better understand this clinical correlation. We also investigated the relationships between body mass index (BMI), neck size, sleep stage, and body position with the intensity of snoring. Methods: Overnight polysomnography, including objective measurement of snoring intensity, in 1643 habitual snorers referred for evaluation of sleep apnea. Results: Sixty-five percent of patients were male; the cohort had a mean age of 48.7 ± 13.7 y and BMI of 30.9 ± 8.8 kg/m². The mean apnea-hypopnea index (AHI) was 28.2 ± 26. The severity of OSA was graded as no OSA (AHI < 5), mild (AHI 5 to 15), moderate (AHI 15 to 30), severe (AHI 30 to 50), and very severe OSA (AHI > 50). Snoring intensity increased progressively across all 5 categories of AHI frequency and ranged from 46.3 ± 3.6 dB in patients with AHI < 5 to 60.5 ± 6.4 dB in those with AHI > 50. Furthermore, there was a positive correlation between the intensity of snoring and the AHI (r = 0.66, p < 0.01). Conclusions: The intensity of snoring increases as OSA becomes more severe. Source
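A small helper restating the severity grading quoted above as code; the function and its name are illustrative, not from the paper.

    # Map an apnea-hypopnea index (AHI) to the severity grade used in the abstract.
    def osa_severity(ahi: float) -> str:
        if ahi < 5:
            return "no OSA"
        if ahi < 15:
            return "mild"
        if ahi < 30:
            return "moderate"
        if ahi <= 50:
            return "severe"
        return "very severe"

    print(osa_severity(28.2))   # the cohort's mean AHI falls in the "moderate" band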


Reuveni S.,Tel Aviv University | Klafter J.,Tel Aviv University | Granek R.,Ben-Gurion University of the Negev
Physical Review Letters | Year: 2012

Motivated by novel experimental work and the lack of an adequate theory, we study the dynamic structure factor S(k,t) of large vibrating fractal networks at large wave numbers k. We show that the decay of S(k,t) is dominated by the spatially averaged mean square displacement of a network node, which evolves subdiffusively in time, ⟨(u_i(t) - u_i(0))²⟩ ∼ t^ν, where ν depends on the spectral dimension d_s and fractal dimension d_f. As a result, S(k,t) decays as a stretched exponential, S(k,t) ≈ S(k) e^(-(Γ_k t)^ν), with Γ_k ∼ k^(2/ν). Applications to a variety of fractal-like systems are elucidated. © 2012 American Physical Society. Source
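A minimal numerical sketch of the stretched-exponential form quoted above; the exponent, prefactor, and wave numbers are illustrative assumptions, not values from the paper.

    # Evaluate S(k, t) ~ S(k) * exp(-(Gamma_k * t)**nu) with Gamma_k ~ k**(2/nu).
    import numpy as np

    def structure_factor_decay(k, t, nu=0.5, S_k=1.0, c=1.0):
        gamma_k = c * k ** (2.0 / nu)          # Gamma_k ~ k^(2/nu)
        return S_k * np.exp(-(gamma_k * t) ** nu)

    t = np.array([0.0, 0.01, 0.1, 1.0])
    print(structure_factor_decay(k=2.0, t=t))

Larger k gives a faster decay through Gamma_k, while nu sets how strongly the decay is stretched relative to a simple exponential.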


Silberstein T.,Ben-Gurion University of the Negev
Harefuah | Year: 2013

Prostate cancer is one of the most common types of cancer and is the second most common cause of cancer-related death among men in the western world. Microscopic prostate cancer was found in up to 30% of men as early as 20-40 years of age. Lycopene is a lipid-soluble carotenoid molecule. It is found in high concentration in red fruit and vegetables. Lycopene has significant anti-oxidative activity. There is laboratory evidence that explains its function as an inhibitor of various tumors, as well as epidemiological evidence showing that lycopene-rich foods reduce the incidence of a few types of cancer, and especially prostate cancer. This paper reviews the evidence explaining lycopene activity, its biological effect and availability, and its effect on the prevention of prostate cancer. Source


Mayseless M.,Ben - Gurion University of the Negev
Journal of Applied Mechanics, Transactions ASME | Year: 2011

Explosive reactive armor (ERA) is a type of add-on armor that usually consists of tiles made of two metal plates with an explosive layer in between. The ERA is placed at a certain distance from the main armor to enhance its performance. ERA design is optimized based on the required effectiveness of the tiles. Various methods of defining ERA effectiveness are described. The effectiveness parameters of the mass-flux model and its derivatives, the effect of material properties, the escape length of the jet tip precursor, the explosive layer thickness, and the edge effects are analyzed, and correlations between them are presented. Analysis results are compared with available experimental data and a very good correlation is found. © 2011 American Society of Mechanical Engineers. Source


Shlivinski A.,Ben - Gurion University of the Negev
Progress in Electromagnetics Research | Year: 2011

The kinematic properties of an array of transmitting antennas that are transiently excited by a sequence of modulated pulses, with high repetition rate, are explored. The array's parameterization is carried out via the energy radiation pattern. It is shown that the energy radiation pattern can be decomposed into a set of different types of beam contributions, defined over a beam-skeleton, which is determined by the array's physical and excitation parameters. The different types of beams are main beams, grating-lobe beams and cross-pulsed lobe beams, each corresponding to a different pulsed interference mechanism. While grating lobes are time-harmonic phenomena, cross-pulsed lobes are unique for excitation with a pulsed sequence. The different beam types set limits for array sparsity in terms of the array's physical and excitation parameters. The array's directivity is introduced as a figure of merit of its performance and to demonstrate the resulting effect of the time-domain excitation characteristics. The array's parameterization can be used with any type of excitation - from extreme narrow band (time-harmonic) to extreme ultra-wideband (transient/short pulsed) excitation. For time-harmonic excitation, the resulting characterization matches that of the classical frequency domain antenna theory. Source
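For reference, the classical time-harmonic limit mentioned at the end can be sketched with the textbook array factor of a uniform linear array; the element count and spacing below are arbitrary illustrative choices, and this is the standard frequency-domain expression rather than the paper's energy-pattern formalism:

    import numpy as np

    def array_factor(theta, n_elements, d_over_lambda):
        """Normalized array factor of a uniform linear array with uniform excitation;
        theta is the angle from broadside in radians."""
        n = np.arange(n_elements)
        phase = 2.0 * np.pi * d_over_lambda * np.sin(theta)
        return np.abs(np.exp(1j * np.outer(phase, n)).sum(axis=1)) / n_elements

    theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
    # For element spacing above one wavelength, grating lobes (secondary maxima as
    # strong as the main beam) appear -- the familiar time-harmonic sparsity limit.
    af = array_factor(theta, n_elements=8, d_over_lambda=1.5)
    print(np.degrees(theta[af > 0.99]))  # main-beam and grating-lobe directions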


Yip E.C.,Cornell University | Yip E.C.,Ben - Gurion University of the Negev | Rayor L.S.,Cornell University
Biological Reviews | Year: 2014

While most spiders are solitary and opportunistically cannibalistic, a variety of social organisations has evolved in a minority of spider species. One form of social organisation is subsociality, in which siblings remain together with their parent for some period of time but disperse prior to independent reproduction. We review the literature on subsocial and maternal behaviour in spiders to highlight areas in which subsocial spiders have informed our understanding of social evolution and to identify promising areas of future research. We show that subsocial behaviour has evolved independently at least 18 times in spiders, across a wide phylogenetic distribution. Subsocial behaviour is diverse in terms of the form of care provided by the mother, the duration of care and sibling association, the degree of interaction and cooperation among siblings, and the use of vibratory and chemical communication. Subsocial spiders are useful model organisms to study various topics in ecology, such as kin recognition and the evolution of cheating and its impact on societies. Further, why social behaviour evolved in some lineages and not others is currently a topic of debate in behavioural ecology, and we argue that spiders offer an opportunity to untangle the ecological causes of parental care, which forms the basis of many other animal societies. © 2013 Cambridge Philosophical Society. Source


Issar A.S.,Ben - Gurion University of the Negev
Global and Planetary Change | Year: 2010

Since the very beginning of the human race, the Middle East served as a bridge between Africa-where our species first evolved-and the rest of the world. The passage over this bridge opened and closed with the global fluctuations of climate. The first glacial periods at the beginning of the Quaternary caused the greenhouse of equatorial Africa to become less hospitable, while making the desert belt of the Middle East more humid, green, and thus passable. Flint tools found along the shores of dried up lakes and swamps in the Negev Desert provide evidence that members of the first wave, Homo erectus, as well as the last wave, Homo sapiens, camped there en route to all the other continents. © 2009 Elsevier B.V. Source


Gruenbaum Y.,Hebrew University of Jerusalem | Medalia O.,University of Zurich | Medalia O.,Ben - Gurion University of the Negev
Current Opinion in Cell Biology | Year: 2015

Lamins are nuclear intermediate filament (IF) proteins. They assemble into fibrous structures that are positioned between the inner nuclear membrane and the peripheral chromatin. A small fraction of lamins is also present in the nucleoplasm. Lamins are required to maintain the nuclear structure and, together with their associated proteins, are involved in most nuclear activities. Mutations in lamins cause >14 distinct diseases, called laminopathies, that include heart, muscle, fat and early-aging diseases. However, it is not clear how lamins are organized in vivo and how the disease mutations affect lamin organization and functions. Here, we will review structural aspects of lamin assembly, discuss differences between peripheral and nucleoplasmic lamins and describe the protein complexes that lamins form. © 2014 Elsevier Ltd. Source


Palchik V.,Ben - Gurion University of the Negev
Rock Mechanics and Rock Engineering | Year: 2010

The mechanical behavior of heterogeneous carbonate rocks exhibiting total volumetric strain curves of type 2 was studied. The rock samples, which exhibit a wide range of mechanical properties, were collected from different geological settings in Israel. The results show that the crack initiation stress of these heterogeneous carbonate rocks has a significant standard deviation across all studied samples. Values of the difference (D) between the uniaxial compressive strength and the crack initiation stress, and of the ratio (k) between the crack initiation stress and the uniaxial compressive strength, are not constant even for samples within the same rock formation. The standard deviation of the mean k within the same rock formation varies between 0.03 and 0.12. Values of the maximum axial strain and maximum volumetric strain range from 0.19% to 1.36% and from 0.084% to 0.9%, respectively. Source


Schoknecht K.,Charite - Medical University of Berlin | David Y.,Ben - Gurion University of the Negev | Heinemann U.,Charite - Medical University of Berlin
Seminars in Cell and Developmental Biology | Year: 2015

The blood-brain barrier is part of the neurovascular unit and serves as a functional and anatomical barrier between the blood and the extracellular space. It controls the flow of solutes in and out of the brain thereby providing an optimal environment for neuronal functioning. Paracellular transport between endothelial cells is restricted by tight junctions and transendothelial transport is reduced and more selective compared to capillaries of other organs. Further, the blood-brain barrier is involved in controlling blood flow and it is the site for signaling damage of the nervous system to the peripheral immune system. As an important player in brain homeostasis, blood-brain barrier dysfunction has been implicated in the pathophysiology of many brain diseases including stroke, traumatic brain injury, brain tumors, epilepsy and neurodegenerative disorders. In this article - highlighting recent advances in basic science - we review the features of the blood-brain barrier and their significance for neuronal homeostasis to discuss clinical implications for neurological complications following cerebral ischemia. © 2014 Elsevier Ltd. Source


Guendelman E.I.,Ben - Gurion University of the Negev
International Journal of Modern Physics A | Year: 2010

A gravitational theory involving a vector field Xμ, which has the properties of a dynamical space-time, is studied. The variation of the action with respect to Xμ gives the covariant conservation of an energy-momentum tensor Tμν(X). Studying the theory in a background that has Killing vectors and Killing tensors, we find appropriate shift symmetries of the field Xμ which lead to conservation laws. The energy-momentum tensor that is the source of gravity, Tμν(G), is different from but related to Tμν(X), and the covariant conservation of Tμν(G) determines in general the vector field Xμ. When Tμν(X) is chosen to be proportional to the metric, the theory coincides with the Two Measures Theory, which has been studied before in relation to the Cosmological Constant Problem. When the matter model consists of point particles or strings, solutions for Xμ are found for the corresponding form of Tμν(G). In a locally inertial frame, the vector field corresponds to the locally flat coordinates. For the case of a string gas cosmology, we find that the Milne universe can be a solution, where the gas of strings does not curve the space-time since, although Tμν(X) ≠ 0, Tμν(G) = 0; as a model for the early universe, this solution is also free of the horizon problem. There may also be an application to the "time problem" of quantum cosmology. © World Scientific Publishing Company. Source


Palty R.,University of California at Berkeley | Sekler I.,Ben - Gurion University of the Negev
Cell Calcium | Year: 2012

Powered by the steep mitochondrial membrane potential, Ca2+ permeates into the mitochondria via the Ca2+ uniporter and is then extruded by a mitochondrial Na+/Ca2+ exchanger. This mitochondrial Ca2+ shuttling regulates the rate of ATP production and participates in cellular Ca2+ signaling. Despite the fact that the exchanger was functionally identified 40 years ago, its molecular identity remained a mystery. Early studies on isolated mitochondria and intact cells characterized the functional properties of a mitochondrial Na+/Ca2+ exchanger and showed that it possesses unique functional fingerprints, such as Li+/Ca2+ exchange, and displays selective sensitivity to inhibitors. Purification of mitochondrial proteins combined with functional reconstitution led to the isolation of a polypeptide candidate of the exchanger but failed to molecularly identify it. A turning point in the search for the exchanger molecule came with the recent cloning of the last member of the Na+/Ca2+ exchanger superfamily, termed NCLX (Na+/Ca2+/Li+ exchanger). NCLX is localized in the inner mitochondrial membrane and its expression is linked to mitochondrial Na+/Ca2+ exchange, matching the functional fingerprints of the putative mitochondrial Na+/Ca2+ exchanger. Thus NCLX emerges as the long-sought mitochondrial Na+/Ca2+ exchanger and provides a critical molecular handle to study mitochondrial Ca2+ signaling and transport. Here we summarize some of the main topics related to the molecular properties of the Na+/Ca2+ exchanger, beginning with the early days of its functional identification, its kinetic properties and regulation, and culminating in its molecular identification. © 2012 Elsevier Ltd. Source


Elkin M.,Ben - Gurion University of the Negev
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2010

The problem of constructing dense subsets S of {1, 2, . . . , n} that contain no three-term arithmetic progression was introduced by Erdős and Turán in 1936. They presented a construction with |S| = Ω(n^(log_3 2)) elements. Their construction was improved by Salem and Spencer, and further improved by Behrend in 1946. The lower bound of Behrend is (Chemical Equation Presented) Since then the problem became one of the most central, most fundamental, and most intensively studied problems in additive number theory. Nevertheless, no improvement of the lower bound of Behrend has been reported since 1946. In this paper we present a construction that improves the result of Behrend by a factor of Θ(√log n), and shows that (Chemical Equation Presented) In particular, our result implies that the construction of Behrend is not optimal. Our construction and proof are elementary and self-contained. Also, the construction can be implemented by an efficient algorithm. Behrend's construction has numerous applications in Theoretical Computer Science. In particular, it is used for fast matrix multiplication, for property testing, and in the area of communication complexity. Plugging in our construction instead of Behrend's construction in the matrix multiplication algorithm of Coppersmith and Winograd improves the state-of-the-art upper bound on the complexity of matrix multiplication by a factor of log^υ n, for some fixed constant υ > 0. We also present an application of our technique in Computational Geometry. Copyright © by SIAM. Source
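For context, the classical Erdős–Turán construction mentioned above (integers whose base-3 expansion avoids the digit 2) can be generated and checked in a few lines; this is the Ω(n^(log_3 2)) baseline, not Behrend's bound or the paper's improved construction:

    def erdos_turan_ap3_free(n):
        """Return the integers in {0, ..., n-1} whose base-3 representation uses only
        digits 0 and 1; such a set contains no 3-term arithmetic progression."""
        def only_digits_01(x):
            while x:
                if x % 3 == 2:
                    return False
                x //= 3
            return True
        return [x for x in range(n) if only_digits_01(x)]

    # Sanity check: no pair x != z in the set has its midpoint (x + z) / 2 in the set.
    S = erdos_turan_ap3_free(200)
    members = set(S)
    assert all((x + z) % 2 == 1 or (x + z) // 2 not in members
               for x in S for z in S if x != z)
    print(len(S), S[:8])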


Guendelman E.I.,Ben - Gurion University of the Negev
International Journal of Modern Physics A | Year: 2010

A general coordinate invariant theory is constructed where confinement of gauge fields, and gauge dynamics in general, is governed by the spontaneous symmetry breaking (s.s.b.) of scale invariance. The model uses two measures of integration in the action, the standard √(−g), where g is the determinant of the metric, and another measure Φ independent of the metric. To implement scale invariance, a dilaton field is introduced. Using the first-order formalism, curvature terms (ΦR and √(−g)R²), gauge field terms (Φ√(−F^a_μν F^a_αβ g^μα g^νβ) and √(−g) F^a_μν F^a_αβ g^μα g^νβ) and dilaton kinetic terms are introduced in a conformally invariant way. Exponential potentials for the dilaton break the conformal invariance down (softly) to global scale invariance, which also suffers s.s.b. after integrating the equations of motion. The model has a well-defined flat space limit. As a result of the s.s.b. of scale invariance, phases with different vacuum energy density appear. Inside the bags, that is, in the regions of larger vacuum energy density, the gauge dynamics is normal, that is nonconfining, while in the region of smaller vacuum energy density the gauge field dynamics is confining. Likewise, the dynamics of scalars, like would-be Goldstone bosons, is suppressed inside the bags. © World Scientific Publishing Company. Source


Guendelman E.I.,Ben - Gurion University of the Negev | Nishino H.,California State University, Long Beach | Rajpoot S.,California State University, Long Beach
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014

The use in the action integral of totally divergent densities in generally coordinate invariant theories can lead to interesting mechanisms of spontaneous symmetry breaking of scale invariance. With dependence in the action on a metric-independent density Φ, in 4D, we can define Φ = ε^μναβ ∂_μ A_ναβ, which gives a new interesting mechanism for breaking scale symmetry in 4D theories of gravity plus matter fields: the A_ναβ equations of motion lead to an integration constant that breaks the scale symmetry, while introducing terms of the form e G ln K, where e is the determinant of the vierbein, G is the Gauss-Bonnet scalar and K is a scalar function of the fields transforming as K → cK (where c is a constant) under a scale transformation. Such a term is invariant only up to a total divergence and therefore leads to breaking of scale invariance due to gravitational instantons. The topological density constructed out of gauge field strengths, ε^μναβ F^a_μν F^a_αβ, can be coupled to the dilaton field linearly to produce a scale invariant term up to a total divergence. The scale symmetry can be broken by Yang-Mills instantons, which lead to a very small vacuum energy for our Universe. © 2014 The Authors. Source


Pascal R.,Max Mousseron Institute of Biomolecules | Pross A.,Ben - Gurion University of the Negev
Journal of Systems Chemistry | Year: 2014

The conceptual divide separating the physical and biological sciences continues to challenge modern science. In this perspective it is proposed that the two sciences can be directly connected through the fundamental concept of stability. Physicochemical stability is shown to have a logical, rather than an empirical, basis and to manifest itself in two distinct and often contrary ways: one thermodynamic, reflecting energetic considerations, and the other kinetic, reflecting time/persistence considerations. Each stability kind is shown to rest on a particular mathematical truism. Thermodynamic stability, the energetic expression, has a probabilistic/statistical basis due to Boltzmann, and leads to the Second Law of Thermodynamics. Dynamic kinetic stability (DKS), the time/persistence expression, is attributed to the stability associated with persistent replicating systems, and derives from the mathematics of exponential growth. The existence of two distinct stability kinds, each mathematically based, leads to two distinct organizational forms of matter, animate and inanimate. That understanding offers insight into the reasons for the observation of just those two organizational forms and their different material characteristics, and provides a logical basis for understanding the nature of chemical and biological transformations, both within, and between, the two forms. © 2014 Pascal and Pross; licensee Chemistry Central Ltd. Source


Diamond G.M.,Ben - Gurion University of the Negev
Psychotherapy | Year: 2014

Attachment-Based Family Therapy is a treatment model designed specifically for depressed and suicidal adolescents. The primary goal of the treatment is to promote developmentally appropriate adolescent-parent attachment. Three core interventions are discussed: relational reframes; focusing on primary emotions and unmet attachment needs; and facilitating corrective attachment episodes. For each intervention, the theoretical/clinical rationale is presented followed by a brief illustration and relevant research findings. © 2013 American Psychological Association. Source


Brustein R.,Ben - Gurion University of the Negev
Fortschritte der Physik | Year: 2014

It is argued that the black hole information paradox originates from treating the black hole geometry as strictly classical. It is further argued that the theory of quantum fields in a classical curved space with a horizon is an ill-posed problem. If the geometry is allowed to fluctuate quantum mechanically, then the horizon effectively disappears. The sharp horizon emerges only in the classical limit, when the ratio of the Compton wavelength of the black hole to its Schwarzschild radius vanishes. The region of strong gravity that develops when matter collapses to form the black hole remains visible to the whole of spacetime and has to be described by a microscopic theory of strong gravity. The arguments imply that the information paradox is demoted from a paradox involving fundamental principles of physics to the problem of describing how matter at the highest densities gravitates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Sekler I.,Ben - Gurion University of the Negev
Biochemical and Biophysical Research Communications | Year: 2015

It is now the 40th anniversary of the Journal of Molecular and Cellular Cardiology paper by Ernesto Carafoli and colleagues. This seminal study described for the first time mitochondrial Ca2+ extrusion and its coupling to Na+. This short review describes the profound impact that this work had on mitochondrial signaling and the cross talk between the mitochondria, the ER, and the plasma membrane. It further describes how the functional identification of the exchanger, and in particular its unique cation selectivity for both Li+ and Na+, eventually contributed to the identification of the mitochondrial Na+/Ca2+ exchanger gene NCLX many years later. The last part describes how molecular tools derived from the identification of NCLX are used to study novel physiological aspects of Ca2+ signaling. © 2015 Elsevier Inc. All rights reserved. Source


Eichler D.,Ben - Gurion University of the Negev
Astrophysical Journal Letters | Year: 2015

None of the UHE neutrinos detected thus far by ICECUBE came from identifiable GRBs. It is noted here that, because the GRB luminosity function is top heavy, this limits the fraction of GRB energy that goes into neutrinos (in the detectability range of ICECUBE) to a small fraction (≲0.15) of the GRB photon energy, but does not yet constrain any scenario that converts baryonic outflow from GRB to UHE neutrinos. © 2015. The American Astronomical Society. All rights reserved.. Source


McCue M.D.,Ben - Gurion University of the Negev
Comparative Biochemistry and Physiology - A Molecular and Integrative Physiology | Year: 2010

All animals face the possibility of limitations in food resources that could ultimately lead to starvation-induced mortality. The primary goal of this review is to characterize the various physiological strategies that allow different animals to survive starvation. The ancillary goals of this work are to identify areas in which investigations of starvation can be improved and to discuss recent advances and emerging directions in starvation research. The ubiquity of food limitation among animals, inconsistent terminology associated with starvation and fasting, and rationale for scientific investigations into starvation are discussed. Similarities and differences with regard to carbohydrate, lipid, and protein metabolism during starvation are also examined in a comparative context. Examples from the literature are used to underscore areas in which reporting and statistical practices, particularly those involved with starvation-induced changes in body composition and starvation-induced hypometabolism can be improved. The review concludes by highlighting several recent advances and promising research directions in starvation physiology. Because the hundreds of studies reviewed here vary so widely in their experimental designs and treatments, formal comparisons of starvation responses among studies and taxa are generally precluded; nevertheless, it is my aim to provide a starting point from which we may develop novel approaches, tools, and hypotheses to facilitate meaningful investigations into the physiology of starvation in animals. © 2010 Elsevier Inc. All rights reserved. Source


Pross A.,Ben - Gurion University of the Negev | Pascal R.,Montpellier University
Open Biology | Year: 2013

The origin of life (OOL) problem remains one of the more challenging scientific questions of all time. In this essay, we propose that following recent experimental and theoretical advances in systems chemistry, the underlying principle governing the emergence of life on the Earth can in its broadest sense be specified, and may be stated as follows: all stable (persistent) replicating systems will tend to evolve over time towards systems of greater stability. The stability kind referred to, however, is dynamic kinetic stability, and quite distinct from the traditional thermodynamic stability which conventionally dominates physical and chemical thinking. Significantly, that stability kind is generally found to be enhanced by increasing complexification, since added features in the replicating system that improve replication efficiency will be reproduced, thereby offering an explanation for the emergence of life's extraordinary complexity. On the basis of that simple principle, a fundamental reassessment of the underlying chemistry-biology relationship is possible, one with broad ramifications. In the context of the OOL question, this novel perspective can assist in clarifying central ahistoric aspects of abiogenesis, as opposed to the many historic aspects that have probably been forever lost in the mists of time. © 2013 The Authors. Source


Medalia O.,Ben - Gurion University of the Negev | Geiger B.,Weizmann Institute of Science
Current Opinion in Cell Biology | Year: 2010

Focal adhesions (FAs) are highly dynamic multi-protein complexes, through which cells interact with the extracellular matrix (ECM) via integrin receptors. These large assemblies, which typically measure several micrometers in diameter, mediate interactions of cells with external surfaces, and are linked at their cytoplasmic faces with F-actin bundles. Over the last four decades, the molecular diversity of these adhesions and their roles in cell migration and matrix sensing have been extensively studied. Microscopy-based research is considered critical for characterizing and understanding the nature of these assemblies. Here, we review the contributions of advanced microscopy to the characterization of the functional architecture of integrin-mediated cell-matrix adhesions. "Knowledge of structure is critical to an understanding of function" (Dorothy Hodgkin). © 2010 Elsevier Ltd. Source


Tal A.,Ben - Gurion University of the Negev
Journal of Arid Environments | Year: 2015

Despite many important success stories around the planet, there is general disappointment at the overall impact of the United Nations Convention to Combat Desertification (hereinafter: UNCCD) during its first 20 years, with soil degradation still a challenge for land managers throughout much of the world's drylands. Calls for a new protocol under the UNCCD that will ensure "Zero Net Land Degradation" (ZNLD) have gained momentum since the tacit endorsement of the concept at the 2012 Rio+20 summit. There is great conceptual appeal to a framework that implicitly allows for development by balancing associated soil fertility loss with commensurate gains resulting from restoration activities. Trading programs which seek to reach "zero net" degradation of other natural resources have been in place for many years now internationally and offer an important basis for assessing the practical and theoretical problems that are likely to arise under a ZNLD framework. This article summarizes the relevant experiences garnered in "offsetting" regulatory schemes in the areas of wetland preservation, biodiversity, forestry, greenhouse gas emissions mitigation, real estate zoning, and conventional air pollution control. While many of these initiatives take place in environments with completely different climatic conditions, they offer important lessons for ZNLD advocates. Pitfalls in offset programs are identified in the areas of: reliability of trades; clear quantifiable units of measure; equivalence given land heterogeneity; and delayed benefits. The article contains a series of recommendations for land degradation offsets based on this diverse international experience. Proven implementation strategies should inform any future ZNLD policies as part of national and regional regulatory programs to combat desertification and arid land soil degradation. © 2014. Source


Palchik V.,Ben - Gurion University of the Negev
Engineering Geology | Year: 2012

The width and length of weathering-induced fractures, horizontal distances between through channels formed due to weathering, the number of through channels in a very weak sandy shale surface layer, and methane emission through these fractures to the atmosphere are studied. Sampling from the studied weathered sandy shale exhibiting very low rock quality (RQD < 12%) is impossible and, for this reason, fracture characteristics of this rock were measured in situ directly in the walls of five surface vertical boreholes 1.5 m deep using an original measurement device. The radial expansion of the measurement device under air pressure is used to press wet gypsum into the fractures in the borehole surface in order to make gypsum casts of the fractures. Semi-empirical relations between the methane volumetric flow rate into the atmosphere through weathering-induced fractures and the dimensions, area and number of weathering-induced fractures and the gas pressure difference are obtained. A comparison between the values of gas seepage through the fractures and through the rock matrix is performed. © 2011 Elsevier B.V. Source


Spitz I.M.,Ben - Gurion University of the Negev
Contraception | Year: 2010

Administration of mifepristone followed by the prostaglandin misoprostol has been used successfully in the medical termination of pregnancy for over 25 years, and the method is registered in 35 countries. Single doses of mifepristone are also effective as an emergency postcoital contraceptive. Mifepristone administered for 3 months or longer to women with uterine leiomyomas is associated with a reduction in pain and bleeding, with improvement in quality of life and a decrease in fibroid size. Mifepristone is also effective in decreasing pain in women with endometriosis. In both these conditions, serum estradiol levels are in the range of those in the early follicular phase. A daily dose of at least 2 mg mifepristone blocks ovulation. In contrast, weekly administration of 25 or 50 mg does not consistently block ovulation but has contraceptive potential by delaying endometrial development. Mifepristone in a dose of 200 mg, administered 48 h after the Luteinizing Hormone (LH) surge, also acts as a contraceptive, but this strategy is not practical for widespread use. Administration of mifepristone for 4-6 months or longer may lead to endometrial thickening. Endometrial histology reveals cystic glandular dilation together with admixed estrogen (mitotic) and progestin (secretory) epithelial effects. This histological pattern does not represent endometrial hyperplasia. © 2010 Elsevier Inc. All rights reserved. Source


Kissinger M.,Ben - Gurion University of the Negev
Food Policy | Year: 2012

At the beginning of the 21st century imports of agricultural and food commodities have become a major part of many nations' food baskets. Indeed the global food system has several merits for nations, businesses and individual consumers' well-being. However, as increasing evidence suggests that we are approaching an era of climate change and scarcity of cheap energy sources, the sustainability of that system must be examined. One part of any food commodity chain is its 'food miles' - the distance the commodity travels from point of production to point of consumption, the required energy and resulting emissions. This paper presents a 1 year 'snapshot' of Canada's total import related food miles. It presents an analysis of the distance imported foods traveled from around the world to major points of consumption in Canada and documents the equivalent carbon dioxide emissions related to those imports. It presents both a macro scale picture of the equivalent emissions related to transportation of imported food and a micro scale picture which focuses on specific commodities consumed in various parts of the country. It then discusses policy implications for food sustainability. Overall the research highlights that about 30% of the agricultural and food commodities consumed in Canada are imported, resulting in 'food miles' of over 61 billion tonne-km, leading to annual emissions of 3.3 million metric tonnes of CO2. Of the various agriculture and food commodities studied, fruits and vegetables had the highest food miles related emissions. © 2012 Elsevier Ltd. Source
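A minimal sketch of the underlying tonne-kilometre arithmetic; the shipments and the emission factor below are hypothetical placeholders, not the Canadian data or the factors used in the study:

    # Emissions = sum over shipments of mass (t) * distance (km) * factor (kg CO2 per tonne-km).
    shipments = [
        ("bananas", 500.0, 4000.0),   # (commodity, tonnes, km) -- illustrative values
        ("tomatoes", 200.0, 2500.0),
    ]
    EMISSION_FACTOR = 0.05  # kg CO2 per tonne-km, assumed average across transport modes

    def total_emissions_tonnes_co2(shipments, factor=EMISSION_FACTOR):
        """Sum mass * distance * factor over all shipments and convert kg to tonnes of CO2."""
        return sum(mass * dist * factor for _, mass, dist in shipments) / 1000.0

    print(total_emissions_tonnes_co2(shipments))  # 125.0 tonnes CO2 for these toy numbers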


Kamenetskii E.O.,Ben - Gurion University of the Negev
Journal of Physics Condensed Matter | Year: 2010

Small ferrite-disk particles with magnetostatic (magneto-dipole) oscillations are characterized by the topological-phase states - the vortex states. In a recently published paper (Kamenetskii et al 2010 Phys. Rev. A 81 053823), it was shown that such magnetic vortices act as traps, providing purely subwavelength confinement of electromagnetic fields. The symmetry properties of magnetostatic-vortex ferrite disks allow one to propose new-type subwavelength microwave structures. In this paper it is demonstrated that the unique topological properties of the fields in a ferrite disk are intimately related to the symmetry breaking effects of magnetostatic oscillations. This analysis is based on postulates about a physical meaning of the magnetostatic-potential function ψ(r⃗, t) as a complex scalar wavefunction, which presumes a long-range phase coherence in magnetic dipole-dipole interactions. The proper solutions are found based on an analysis of magnetostatic-wave propagation in a helical coordinate system. It is shown that while a composition of two helical waves may acquire a geometrical phase over-running of 2π during a period, every separate helical wave has a dynamical phase over-running of π and so behaves as a double-valued function. This results in the appearance of helical-mode magnetostatic resonances in quasi-2D ferrite disks. The solutions give magnetostatic-wave power-flow-density vortices with cores at the disk center and azimuthally running waves of magnetization. The near fields of magnetostatic-vortex ferrite-disk particles are characterized by space-time symmetry violation. For incident electromagnetic waves, such particles, with sizes much less than the free-space electromagnetic wavelength, appear as local singular regions. From the properties of a composition of magnetostatic-vortex ferrite-disk particles, one may propose novel metamaterials - singular metamaterials. © 2010 IOP Publishing Ltd. Source


Peretz M.M.,University of Toronto | Ben-Yaakov S.,Ben - Gurion University of the Negev
IEEE Transactions on Power Electronics | Year: 2012

A time-domain design method for the digital controller of pulsewidth modulation dc-dc converters was developed. The proposed approach is based on the fact that the closed-loop response of a digitally controlled system is largely determined by the first few samples of the compensator. This concept is used to fit a digital PID template to the desired response. The proposed controller design method is carried out in the time domain and, thus, bypasses errors related to the transformation from the continuous to discrete domain and to discretization. The method was tested by simulations and experimentally. Digital PID controllers for experimental buck- and boost-type converters were designed according to the proposed method and implemented on a TMS320LF2407 DSP core. The measured closed-loop attributes were found to be in good agreement with the design goals. The study was further expanded to investigate the possible realistic closed-loop performance that can be obtained from a system that is controlled by a PID template controller, as well as the stability boundaries of the proposed time-domain controller design approach. The results of the study delineate a normalized map of deviation from the target closed-loop performance goals possible for PID control of switch-mode converters and the areas in which the use of this control law is feasible. © 2011 IEEE. Source
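The kind of compensator template the abstract refers to can be illustrated with a generic velocity-form discrete PID difference equation; the gains and sampling period below are placeholders, and this is a textbook form rather than the paper's template-fitting procedure:

    class DiscretePID:
        """Velocity-form discrete PID: u[n] = u[n-1] + a*e[n] + b*e[n-1] + c*e[n-2]."""
        def __init__(self, kp, ki, kd, ts):
            self.a = kp + ki * ts + kd / ts
            self.b = -kp - 2.0 * kd / ts
            self.c = kd / ts
            self.e1 = self.e2 = self.u1 = 0.0

        def update(self, error):
            u = self.u1 + self.a * error + self.b * self.e1 + self.c * self.e2
            self.e2, self.e1, self.u1 = self.e1, error, u
            return u

    pid = DiscretePID(kp=0.5, ki=200.0, kd=1e-4, ts=1e-5)  # placeholder gains, 100 kHz sampling
    duty = pid.update(error=0.1)  # one control step, e.g. error = v_ref - v_out
    print(duty)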


Feintuch A.,Ben - Gurion University of the Negev
Mathematics of Control, Signals, and Systems | Year: 2011

This paper considers the strong stabilization problem: given a linear time-varying system which is stabilizable by dynamic feedback, when can the stabilizer be chosen to be itself stable? We consider here the case of algebras of discrete time, time-varying systems which are asymptotically time-invariant, in the sense that as time evolves the time-varying transfer operator converges to a time-invariant transfer operator. Convergence here is in the sense of uniform or strong convergence of sequences of operators on an appropriate Hilbert space of input-output signals. © 2011 Springer-Verlag London Limited. Source


Eichler D.,Ben - Gurion University of the Negev
Astrophysical Journal | Year: 2011

It is shown that the contribution of faint gamma-ray bursts (GRBs) to the total all-sky GRB energy flux is small. The all-sky flux of GRBs appears to be 5.3 × 10⁻³ erg cm⁻² yr⁻¹, with little additional component hidden within weak or otherwise undetectable GRBs. This significantly constrains physical models of GRBs and dirty fireballs, suggesting a rather sharp dichotomy between them should the latter exist. © 2011. The American Astronomical Society. All rights reserved. Source


Shmulevich E.,Ben - Gurion University of the Negev
Canadian journal of cardiovascular nursing = Journal canadien en soins infirmiers cardio-vasculaires | Year: 2011

Dual antiplatelet therapy with aspirin and clopidogrel is among the most efficacious treatments for patients after acute coronary syndromes and for those who have had a percutaneous coronary intervention and coronary stent implantation. Patients who are treated with dual antiplatelet therapy are usually also prescribed medications that reduce the secretion of gastric acid (such as H2 receptor blockers or proton pump inhibitors [PPIs]) in order to decrease the risk of gastrointestinal bleeding and dyspepsia. Numerous observational studies reported that omeprazole (a PPI) attenuates the antiplatelet activity and clinical effectiveness of clopidogrel and causes adverse cardiovascular events. Based on these findings, several medical agencies in the world have issued communications regarding the negative interaction between clopidogrel and PPIs, urging clinicians to evaluate the need for starting treatment with a PPI in patients taking clopidogrel. There are studies that reported contradicting findings, suggesting that there is no significant interaction between clopidogrel and PPIs. Only one prospective, randomized, double-blind, placebo-controlled clinical trial examined the interaction between clopidogrel and omeprazole, and it did not demonstrate cardiovascular harm among the patients who were treated with clopidogrel and omeprazole, as compared to those who were treated with clopidogrel and placebo. In this article, the authors review the current studies that reported a possible drug-drug interaction between clopidogrel and PPIs, particularly omeprazole. Source


Berezansky L.,Ben - Gurion University of the Negev | Braverman E.,University of Calgary
Automatica | Year: 2012

For the system with a distributed delay Ẋ(t) + Σ_{k=1}^{m} ∫_{h_k(t)}^{t} [d_s R_k(t,s)] X(s) = 0 and a nonnegative fundamental matrix, solution estimates and stability results are obtained. The results are applied to integrodifferential equations and equations of mixed type including both distributed and concentrated delays. © 2012 Elsevier Ltd. All rights reserved. Source
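As a toy illustration of the class of equations covered (here with a single concentrated delay rather than a distributed kernel), a forward-Euler simulation; the coefficient, delay and constant pre-history are illustrative assumptions, not results from the paper:

    import numpy as np

    # Simulate x'(t) = -a * x(t - tau) with pre-history x(t) = 1 for t <= 0.
    a, tau, dt, t_end = 1.0, 0.5, 0.001, 10.0
    n_steps, n_delay = int(t_end / dt), int(tau / dt)
    x = np.ones(n_steps + 1)
    for n in range(n_steps):
        delayed = x[n - n_delay] if n >= n_delay else 1.0
        x[n + 1] = x[n] - dt * a * delayed
    print(x[-1])  # decays towards zero, consistent with stability for a * tau < pi / 2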


Vashitz G.,Ben - Gurion University of the Negev
Harefuah | Year: 2011

Second opinion is a decision-support tool for ratification or modification of a suggested treatment by another physician. A second opinion may have a critical influence on the diagnosis, treatment and prognosis. The patient can benefit from treatment optimization and avoid unnecessary risks. The physician can benefit from less exposure to legal claims, and healthcare organizations can benefit from improved treatment quality assurance and from cost savings by avoiding unnecessary surgery and treatments. Nevertheless, injudicious use of this tool can generate unnecessary medical costs. In recent years, many patients prefer to seek a second opinion on their disease and available treatments. Private and public insurance companies are trying to control surgery costs by urging and even demanding a second opinion before surgery. Although second opinions are common in medical practice, relatively little is known on this subject. Most of the studies reviewed in this article evaluated the clinical benefit of second opinions, the reasons patients seek a second opinion and the characteristics of these patients, as well as technological interventions to promote second opinions, and ethical or legal issues related to second opinions. Yet, there are opportunities for further studies about physicians' attitudes and barriers towards second opinions, their effect on patient-physician communication and cost-effectiveness analyses of second opinions. Due to the relevance of second opinions for public health, this review aims to summarize the current research on second opinions. Source


Eichler D.,Ben - Gurion University of the Negev
Astrophysical Journal Letters | Year: 2014

It is suggested that many gamma-ray bursts (GRBs) are cloaked by an ultrarelativistic baryonic shell that has high optical depth when the photons are manufactured. Such a shell would not fully block photons reflected or emitted from its inner surface, because the radial velocity of the photons can be less than that of the shell. This avoids the standard problem associated with GRBs that the thermal component should be produced where the flow is still obscured by high optical depth. The radiation that escapes high optical depth obeys the Amati relation. Observational implications may include (1) anomalously high ratios of afterglow to prompt emission, such as may have been the case in the recently discovered PTF 11agg, and (2) ultrahigh-energy neutrino pulses that are non-coincident with detectable GRB. It is suggested that GRB 090510, a short, very hard GRB with very little afterglow, was an exposed GRB, in contrast to those cloaked by baryonic shells. © 2014. The American Astronomical Society. All rights reserved. Source


Rosenkrantz E.,Ilse Katz Institute for Nanoscale Science and Technology | Arnon S.,Ilse Katz Institute for Nanoscale Science and Technology | Arnon S.,Ben - Gurion University of the Negev
Optics Letters | Year: 2014

The emerging technology of visible light communications (VLC) will provide a new modality of communication. This technology uses illumination lighting to carry information. We propose to add a smart capability to mitigate interference from unwanted light sources. This is achieved by adaptively filtering interfering light using a tunable filter that blocks interference dynamically. In this Letter, we present an innovative concept for a tunable notch filter based on ferroelectric thin films embedded with noble metal nanoparticles. The adaptivity of the filter is achieved by controlling the externally applied voltage. This voltage creates an electric field that changes the refractive index of the host film through the linear electro-optic effect. Moreover, the fundamental characteristics of the filter are determined by the layer's parameters, such as film thickness, nanoparticle concentration and geometry, and the material of both the host thin film and the nanoparticles. We study the tunability of lead zirconate titanate (PZT) embedded with Ag nanoparticles, which reaches approximately 50 nm, between 530 and 590 nm. Moreover, we show that a PZT notch filter embedded with Ag nanoshells has its stop band shifted to shorter wavelengths. These tunable filters can be used as mode selectors inside a laser resonator and as spatial light filters for imaging and for communication, both VLC and infrared. © 2014 Optical Society of America. Source


Localized kilometer-sized propeller features, recently detected in CASSINI images of Saturn's A ring, may be interpreted as signatures of small hundred-meter-sized moonlets embedded within the broad ring. We propose that these moonlets are formed due to the constructive interference of the gravitationally unstable axisymmetric and nonaxisymmetric perturbations of the self-gravitating Saturnian ring disk. The hydrodynamical slab model of the disk is studied to determine its instability against low-frequency compression-type, or Lin-Shu-type, oscillations whose propagation vector is perpendicular to the axis of the disk's rotation. The linear instability analysis is performed in the following way. First, we derive the general dispersion relation for the gravity perturbations inside the slab within the standard local short-wavelength approach. Next, we match the solutions to the solutions of the Poisson equation outside the slab to arrive at the particular dispersion relation. The particular dispersion relation describes the development of symmetric Jeans' perturbations which cause density enhancements. From this dispersion relation, the growth rate of Jeans-unstable oscillations developing in the spatially homogeneous slab model is obtained. The dispersion relation shows that perturbations of both types, axisymmetric and nonaxisymmetric, are excited simultaneously due to the Jeans' gravitational instability. As a result, circular and spiral patterns develop. We suggest that clumping occurs due to the enhanced density increase at the intersections of the patterns. We also speculate that these clumps further develop into propeller moonlets. © 2011. The American Astronomical Society. All rights reserved.. Source


Myers S.S.,Harvard University | Wessells K.R.,University of California at Davis | Kloog I.,Ben - Gurion University of the Negev | Zanobetti A.,Harvard University | Schwartz J.,Harvard University
The Lancet Global Health | Year: 2015

Background: Increasing concentrations of atmospheric carbon dioxide (CO2) lower the content of zinc and other nutrients in important food crops. Zinc deficiency is currently responsible for large burdens of disease globally, and the populations who are at highest risk of zinc deficiency also receive most of their dietary zinc from crops. By modelling dietary intake of bioavailable zinc for the populations of 188 countries under both an ambient CO2 and elevated CO2 scenario, we sought to estimate the effect of anthropogenic CO2 emissions on the global risk of zinc deficiency. Methods: We estimated per capita per day bioavailable intake of zinc for the populations of 188 countries at ambient CO2 concentrations (375-384 ppm) using food balance sheet data for 2003-07 from the Food and Agriculture Organization. We then used previously published data from free air CO2 enrichment and open-top chamber experiments to model zinc intake at elevated CO2 concentrations (550 ppm, which is the concentration expected by 2050). Estimates developed by the International Zinc Nutrition Consultative Group were used for country-specific theoretical mean daily per-capita physiological requirements for zinc. Finally, we used these data on zinc bioavailability and population-weighted estimated average zinc requirements to estimate the risk of inadequate zinc intake among the populations of the different nations under the two scenarios (ambient and elevated CO2). The difference between the population at risk at elevated and ambient CO2 concentrations (ie, population at new risk of zinc deficiency) was our measure of impact. Findings: The total number of people estimated to be placed at new risk of zinc deficiency by 2050 was 138 million (95% CI 120-156). The people likely to be most affected live in Africa and South Asia, with nearly 48 million (32-63) residing in India alone. Global maps of increased risk show significant heterogeneity. Interpretation: Our results indicate that one heretofore unquantified human health effect associated with anthropogenic CO2 emissions will be a significant increase in the human population at risk of zinc deficiency. Our country-specific findings can be used to help guide interventions aimed at reducing this vulnerability. Funding: Bill & Melinda Gates Foundation, Winslow Foundation. © 2015 Myers et al. Source
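The population-level risk calculation can be illustrated with a generic EAR cut-point style estimate (the fraction of a population whose usual intake falls below the average requirement); the lognormal assumption and all numbers below are placeholders, not the study's actual model or data:

    import math

    def fraction_below_ear(mean_intake, cv, ear):
        """Fraction of a population with usual intake below the estimated average
        requirement, assuming a lognormal intake distribution (illustrative only)."""
        sigma = math.sqrt(math.log(1.0 + cv ** 2))
        mu = math.log(mean_intake) - 0.5 * sigma ** 2
        z = (math.log(ear) - mu) / sigma
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Toy example: lowering mean bioavailable intake from 10.0 to 9.5 mg/day against an
    # assumed 8.2 mg/day requirement raises the estimated fraction at risk.
    print(fraction_below_ear(10.0, 0.25, 8.2), fraction_below_ear(9.5, 0.25, 8.2))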


Ganel T.,Ben - Gurion University of the Negev
Psychonomic Bulletin and Review | Year: 2015

People smile in social interactions to convey different types of nonverbal communication. However, smiling can potentially change the way a person is perceived along different facial dimensions, including perceived age. It is commonly assumed that smiling faces are perceived as younger than faces carrying a neutral expression. In the series of experiments reported here, I describe an unintuitive and robust effect in the opposite direction. Across different experimental conditions and stimulus sets, smiling faces were consistently perceived as older compared to neutral face photos of the same persons. I suggest that this effect is due to observer failure to ignore smile-associated wrinkles, mainly along the region of the eyes. These findings point to a misconception regarding the relationship between facial smile and perceived age and shed new light on the processes underlying human age perception. © 2015, Psychonomic Society, Inc. Source


Sverdlik N.,Ben - Gurion University of the Negev | Oreg S.,Hebrew University of Jerusalem
Journal of Personality | Year: 2015

Using a person-situation perspective, we explain what happens to individuals' identification with a collective in the context of a change. We propose that given the anxiety that often emerges during change, individuals' personal values (conservation and openness to change) interact with type of change (imposed vs. voluntary) in predicting identification following change. In a pilot, longitudinal field study (N=61, 67% female) of an imposed university campus relocation, we measured employees' values and identification with the university before and several months after the relocation. In two lab experiments (Study 1: N=104, 91.3% female; Study 2: N=113, 75.2% female), we manipulated a change to be either imposed or voluntary and compared the relationships between values and identification across types of change. In Study 2, we also measured anxiety from the change. When change was imposed (all three studies), but not when voluntary (Studies 1 and 2), individuals' conservation was positively, and openness negatively, related to individuals' post-change identification. The effects emerged only for individuals who experienced change-related anxiety (Study 2). Our findings demonstrate that individuals' identification with a changing collective depends on the amount of anxiety change elicits and on the particular combination of their values and type of change. © 2014 Wiley Periodicals, Inc. Source


Golub A.,Ben - Gurion University of the Negev
Physical Review B - Condensed Matter and Materials Physics | Year: 2015

We calculate the current as a function of applied voltage in a nontopological s-wave superconductor-quantum dot-topological superconductor (TS) tunnel junction. We consider the type of TS which hosts two Majorana bound states (MBSs) at the ends of a semiconductor quantum wire or of a chain of magnetic atoms in proximity with an s-wave superconductor. We find that the I-V characteristic of such a system at large voltages has a typical two-dot shape and is ornamented by peaks of multiple Andreev reflections. We also consider the other options, in which the zero-energy states are created by disorder (Shiba states) or by Andreev zero-energy bound states at the surface of a quantum dot and a superconductor; the latter are obtained by tuning the magnetic field to a specific value. Unlike in the last two cases, the MBS I-V curves are robust to changes in the magnetic field. Therefore, the magnetic-field dependence of the tunneling current can serve as a unique signature for the presence of a MBS. © 2015 American Physical Society. Source


Rosen J.,Johns Hopkins University | Rosen J.,Ben - Gurion University of the Negev | Siegel N.,Johns Hopkins University | Brooker G.,Johns Hopkins University
Optics Express | Year: 2011

Fresnel Incoherent Correlation Holography (FINCH) enables holograms to be recorded from incoherent light with just a digital camera and spatial light modulator. We previously described its application to general three dimensional incoherent imaging and specifically to fluorescence microscopy, wherein one complex hologram contains the three dimensional information in the field of view, obviating the need for scanning or serial sectioning. We have now further analyzed FINCH in view of linear system theory and in comparison to conventional coherent and incoherent two dimensional imaging systems. We demonstrate, theoretically and experimentally, improved resolution by FINCH, when compared to conventional imaging. © 2011 Optical Society of America. Source


Weizman Z.,Ben - Gurion University of the Negev
Beneficial Microbes | Year: 2015

Infants and children attending day-care centres demonstrate a notably higher risk of gastrointestinal as well as of respiratory tract infections. The present short review evaluates various well-controlled clinical trials analysing the effect of probiotics and prebiotics in the prevention of such infections. In most of the 12 studies identified, probiotic supplementation was found to be a safe and effective therapeutic tool in preventing gastrointestinal and respiratory infection in this population. © 2014 Wageningen Academic Publishers. Source


Lublinsky M.,Ben - Gurion University of the Negev
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014

We discuss high energy diffractive dissociation in DIS at next-to-leading order (NLO). In the large-Nc dipole limit we derive the NLO version of the Kovchegov-Levin equation. We argue that the original structure of the equation is preserved, that is, it coincides with the Balitsky-Kovchegov equation at NLO. © 2014. Source


Guendelman E.I.,Ben - Gurion University of the Negev
International Journal of Modern Physics A | Year: 2011

We consider a nonsingular origin for the universe starting from an Einstein static universe, the so-called "emergent universe" scenario, in the framework of a theory which uses two volume elements √(−g) d⁴x and Φ d⁴x, where Φ is a metric-independent density used as an additional measure of integration. Curvature and curvature-squared terms and, for scale invariance, a dilaton field φ are also considered in the action. The first-order formalism is applied. The integration of the equations of motion associated with the new measure gives rise to the spontaneous symmetry breaking of scale invariance. After spontaneous symmetry breaking of scale invariance, it is found that a nontrivial potential for the dilaton is generated. In the Einstein frame we also add a cosmological term that parametrizes the zero point fluctuations. The resulting effective potential for the dilaton contains two flat regions: one for φ → ∞, relevant for the nonsingular origin of the universe followed by an inflationary phase, and one for φ → −∞, describing our present universe. The dynamics of the scalar field becomes nonlinear, and these nonlinearities are instrumental in the stability of some of the emergent universe solutions, which exist for a range of values of the vacuum energy at φ → −∞; this vacuum energy must be positive but not very large, avoiding the extreme fine tuning required to keep the vacuum energy density of the present universe small. Zero vacuum energy density for the present universe defines the threshold for the creation of the universe. © 2011 World Scientific Publishing Company. Source


Tal A.,Ben - Gurion University of the Negev
Water Research | Year: 2016

Broad utilization of drip irrigation technologies in Israel has contributed to the 1600 percent increase in the value of produce grown by local farmers over the past sixty-five years. The recycling of 86% of Israeli sewage now provides 50% of the country's irrigation water and is the second, idiosyncratic component in Israel's strategy to overcome water scarcity and maintain agriculture in a dryland region. The sustainability of these two practices is evaluated in light of decades of experience and ongoing research by the local scientific community. The review confirms the dramatic advantages of drip irrigation over time, relative to flood, furrow and sprinkler irrigation and its significance as a central component in agricultural production, especially under arid conditions. In contrast, empirical findings increasingly report damage to soil and to crops from salinization caused by irrigation with effluents. To be environmentally and agriculturally sustainable over time, wastewater reuse programs must ensure extremely high quality treated effluents and ultimately seek the desalinization of recycled sewage. © 2015 Elsevier Ltd. Source


Panich A.M.,Ben - Gurion University of the Negev | Aleksenskii A.E.,RAS Ioffe Physical - Technical Institute
Diamond and Related Materials | Year: 2012

Nanodiamond (ND) particles are usually bound in aggregates that are difficult to separate into a single nanoparticle form, which restrains their applications. Recently, a stable suspension of deaggregated NDs [1] has been prepared using annealed ND samples. We discuss the mechanism of deaggregation of these NDs based on our 1H NMR data. We show that in contrast with the spectra of the initial sample, those of the annealed sample evidence fast proton exchange between the surface carboxyl groups and adsorbed water molecules and the formation of charged ND-COO⁻ ↔ H₃O⁺ structures, which results in repulsion of ND particles and yields a stable aqueous suspension of isolated NDs. © 2012 Elsevier B.V. Source


Bruins H.J.,Ben - Gurion University of the Negev
Journal of Arid Environments | Year: 2012

Thousands of ancient terraces in the valleys of the Negev desert show that agriculture was conducted here in the past, based on the utilization of runoff and floodwater from local rainfall. A comprehensive collation and mapping is presented of the geographic distribution of such farming remains in the northern, central and southern Negev. The time range of these remains is also evaluated in detail with the inclusion of new data from the Neolithic to the present. Farming was and is conducted on a seasonal or annual timescale. However, proxy palaeoclimatic indicators in the southern Levant do not have such resolution. How do the ancient agricultural remains relate to climate? How do we define climate in order to make comparisons through time? How dry is dry? The conventional Köppen and Thornthwaite climatic classifications are cumbersome in both spatial and time-series analysis. This article presents for the first time the spatial positions of the climate zones in southern Israel based on the innovative P/PET climatic classification approach (P = annual precipitation; PET = annual potential evapotranspiration). Instrumental data from 13 meteorological stations were used for the required calculations and mapping. The decade 1990-2000 was selected because of the extreme climatic variations in this period, including the coldest and wettest year ever recorded, 1991-92, as well as the warm and severe drought year 1998-99. The main conclusions are as follows: 1. The majority of remains of ancient runoff/floodwater farming are located south of Beer Sheva in the arid zone. 2. Only a few sites are situated in the hyper-arid zone in the southern Negev. 3. The southern and eastern borders of the ancient agricultural region in the central Negev coincide with the average decadal climatic boundary between the arid and hyper-arid zones (P/PET = 0.05). 4. The extremely wet year 1991-92 did not cause a significant southward displacement of the arid and hyper-arid zones. Most runoff-farming areas remained within the arid zone. However, north of Beer Sheva the climatic zones shifted dramatically, as the humid zone, usually not extant in the southern Levant, and the sub-humid zone moved into southern Israel. 5. The severe drought year 1998-99, on the other hand, caused a dramatic northward displacement of all climate zones. The boundary between the arid and hyper-arid zones (P/PET = 0.05) moved north of Beer Sheva and west of Arad. Most runoff/floodwater farming areas were situated in the hyper-arid zone. 6. An area with terraced valleys beyond the Negev, in the southern foothills near Hebron, experienced an arid climate in 1998/99 instead of average semi-arid conditions. This underlines the rationale of runoff capture for drought mitigation in the semi-arid zone. 7. The investigation sets a modern standard of defined climate zones in the Negev and their shifts in wet and drought years as a basis for comparison with past climatic changes in relation to ancient agricultural remains. © 2012 Elsevier Ltd. Source
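
To make the P/PET classification used above concrete, the following is a minimal illustrative sketch (not taken from the paper). Only the arid/hyper-arid boundary of P/PET = 0.05 is quoted in the abstract; the remaining thresholds follow the commonly used UNEP aridity-index convention and are assumptions included for illustration only.

def aridity_zone(p_mm: float, pet_mm: float) -> str:
    """Classify a station by its annual P/PET ratio.

    Only the arid/hyper-arid boundary (P/PET = 0.05) is taken from the
    abstract; the other thresholds follow the common UNEP aridity-index
    convention and are illustrative assumptions.
    """
    ratio = p_mm / pet_mm
    if ratio < 0.05:
        return "hyper-arid"
    elif ratio < 0.20:
        return "arid"
    elif ratio < 0.50:
        return "semi-arid"
    elif ratio < 0.65:
        return "dry sub-humid"
    else:
        return "humid"

# Example: a hypothetical Negev station with 150 mm annual rain and
# 2000 mm annual potential evapotranspiration (P/PET = 0.075)
print(aridity_zone(150, 2000))  # -> "arid"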


Arnon S.,Ben - Gurion University of the Negev
Optical Engineering | Year: 2010

The growing need for underwater observation and subsea monitoring systems has stimulated considerable interest in advancing the enabling technologies of underwater wireless communication and underwater sensor networks. This communication technology is expected to play an important role in investigating climate change, in monitoring biological, biogeochemical, evolutionary, and ecological changes in the sea, ocean, and lake environments, and in helping to control and maintain oil production facilities and harbors using unmanned underwater vehicles (UUVs), submarines, ships, buoys, and divers. However, the present technology of underwater acoustic communication cannot provide the high data rate required to investigate and monitor these environments and facilities. Optical wireless communication has been proposed as the best alternative to meet this challenge. Models are presented for three kinds of optical wireless communication links: (a) a line-of-sight link, (b) a modulating retroreflector link, and (c) a reflective link, all of which can provide the required data rate. We analyze the link performance based on these models. From the analysis, it is clear that as the water absorption increases, the communication performance decreases dramatically for the three link types. However, by using the scattered light it was possible to mitigate this decrease in some cases. It is concluded from the analysis that a high-data-rate underwater optical wireless network is a feasible solution for emerging applications such as UUV-to-UUV links and networks of sensors, and extended ranges in these applications could be achieved by applying a multi-hop concept. © 2010 Society of Photo-Optical Instrumentation Engineers. Source
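
As a rough illustration of why water absorption dominates the link budget, the sketch below computes received power for a line-of-sight link under a simple Beer-Lambert extinction model. This is an assumption-laden toy calculation (illustrative extinction coefficient, uniform beam footprint, single receiver-efficiency factor), not the link models analyzed in the paper.

import math

def los_received_power(p_tx_w: float, c_ext: float, d_m: float,
                       rx_aperture_m2: float, beam_area_m2: float,
                       rx_efficiency: float = 0.9) -> float:
    """Received optical power for a line-of-sight underwater link.

    Assumes a Beer-Lambert exponential extinction law exp(-c*d) and a
    uniform beam footprint at the receiver plane; an illustrative sketch
    only, not the paper's link model.
    """
    geometric_loss = min(1.0, rx_aperture_m2 / beam_area_m2)
    return p_tx_w * math.exp(-c_ext * d_m) * geometric_loss * rx_efficiency

# Illustrative numbers: 0.5 W source, extinction ~0.15 1/m, 30 m range,
# 10 cm^2 aperture within a 0.05 m^2 beam footprint
print(los_received_power(0.5, 0.15, 30.0, 1e-3, 0.05))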


Milo R.,Ben - Gurion University of the Negev
Therapeutic Advances in Neurological Disorders | Year: 2014

Daclizumab is a humanized monoclonal antibody of the immunoglobulin G1 (IgG1) isotype that binds to the α-subunit (CD25) of the high-affinity interleukin-2 (IL-2) receptor expressed on activated T cells and CD4+CD25+FoxP3+ regulatory T cells. Based on the assumption that it would block the activation and expansion of autoreactive T cells that are central to the immune pathogenesis of multiple sclerosis (MS), daclizumab was tested in several small open-label clinical trials in MS and demonstrated a profound inhibition of inflammatory disease activity. Surprisingly, accompanying mechanistic studies revealed that the most important biological effect of daclizumab was rather a dramatic expansion and activation of immunoregulatory CD56bright natural-killer (NK) cells that correlated with treatment response, while there was no or only minor effect on peripheral T-cell activation and function. These CD56bright NK cells were able to gain access to the central nervous system in MS and kill autologous activated T cells. Additional and relatively large phase IIb clinical trials showed that daclizumab, as add-on or monotherapy in relapsing-remitting (RR) MS, was highly effective in reducing relapse rate, disability progression, and the number and volume of gadolinium-enhancing, T1 and T2 lesions on brain magnetic resonance imaging (MRI), and reproduced the expansion of CD56bright NK cells as a biomarker for daclizumab activity. Daclizumab is generally very well tolerated and has shown a favorable adverse event (AE) profile in transplant recipients. However, several potentially serious and newly emerging AEs (mainly infections, skin reactions, elevated liver function tests and autoimmune phenomena in several body organs) may require strict safety monitoring programs in future clinical practice and place daclizumab together with other new and highly effective MS drugs as a second-line therapy. Ongoing phase III clinical trials in RRMS are expected to provide definite information on the efficacy and safety of daclizumab and to determine its place in the fast-growing armamentarium of MS therapies. © The Author(s), 2013. Source


Granek L.,Ben - Gurion University of the Negev
Journal of Loss and Trauma | Year: 2013

In the last few decades, grief and loss research in the psychological domain has focused almost exclusively on its dysfunctional nature. I examine what is underneath these questions about pathology and suggest that our discipline is suffering from an attachment wound where we have dissociated from our historical roots when it comes to the study of grief and loss. I argue that we need to ask new questions about grief and loss and present two examples of my collaborative work to illustrate innovative ways of thinking about and researching grief. © 2013 Copyright Taylor and Francis Group, LLC. Source


Netz H.,Ben - Gurion University of the Negev
Journal of Pragmatics | Year: 2014

In white Anglo-American society, disagreement has traditionally been considered a face-threatening act, which interactants seek to avoid. Conversation analysts have noted that interactants orient toward agreement, and express disagreements as dispreferred responses, via mitigation markers such as delays, prefacing, and hedges. However, recent studies indicate that disagreement is highly influenced by culture and context. These studies have shown that in some cultures/contexts, disagreements are not necessarily mitigated, and are often shaped as preferred responses and perceived as unmarked. In this paper I offer quantitative and qualitative analyses of data from a fully transcribed corpus of 15 h of interactions in gifted classes, grades 5-8, in Virginia, USA. It will be shown that in comparison to disagreements in other contexts and communities of practice in the US, disagreements in the gifted classes observed were significantly less mitigated. Gifted classes, therefore, constitute another interactional context in which disagreements are unmarked and do not seem to undermine solidarity among interactants. This study provides further empirical support to the claim that disagreement is not inherently face-threatening and needs to be contextualized. © 2013 Elsevier B.V. Source


Cohen S.,Ben - Gurion University of the Negev | Shapiro H.,Haifa University
Journal of Medicine and Philosophy (United Kingdom) | Year: 2013

Recent research, especially with functional brain imaging, has demonstrated cases where the administration of a placebo produces objective effects in tissues that are indistinguishable from those of the real therapeutic agents. This phenomenon has been shown in treatments of pain, depression, Parkinsonism, and more. The main ethical complaint against placebo treatment is that it is a kind of deception, where supposedly we substitute what works just psychologically for a real drug that actually works on the tissue level. We claim that the scientific findings bring to a new level the seeming deconstruction of the distinction between "placebo" and "real" drugs, and that instances of placebo treatment which fulfill this criterion should be recognized as a unique category, which we call "comparable placebo treatment" (CPT). The paper uses an analysis of the notion of deception to argue that CPT does not amount to deception; that it can preserve patient autonomy; and that it is therefore morally legitimate. © The Author 2013. Source


Iecovich E.,Ben - Gurion University of the Negev
Educational Gerontology | Year: 2013

The purpose of this study was to evaluate the reliability and validity of the Hebrew version of the De Jong Gierveld Loneliness Scale. Cross-sectional research methodology was used to measure loneliness felt by older adults. Data of this study were drawn from three samples that included altogether 2,100 older persons in Israel. The De Jong Gierveld Loneliness Scale demonstrated good psychometric properties similar to those of the English version. Internal consistency was high (Cronbach's α = .86), and the measure showed good convergent and exploratory validity, due to significant correlations with frequency of feeling loneliness, quality of life, satisfaction with life, value of life, and perceived health, functional, and economic status. Factor analysis showed three loading factors instead of the two in the original version. The scale can be useful for comparative research as well as a screening instrument for professionals working with older adults to assess the degree of loneliness felt by them. The scale also provides indications of the need for professional intervention when loneliness is great and negatively affects the older person's quality of life. © 2013 Copyright Taylor and Francis Group, LLC. Source


Solomon S.,Ben - Gurion University of the Negev
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2011

In STOC'95 [5] Arya et al. showed that for any set of n points in ℝ^d a (1 + ε)-spanner with diameter at most 2 (respectively, 3) and O(n log n) edges (resp., O(n log log n) edges) can be built in O(n log n) time. Moreover, Arya et al. [5] conjectured that one can build in O(n log n) time a (1 + ε)-spanner with diameter at most 4 and O(n log* n) edges. Since then, this conjecture has become a central open problem in this area. Nevertheless, very little progress on this problem has been reported to date. In particular, the previous state-of-the-art subquadratic-time construction of (1 + ε)-spanners with o(n log log n) edges, due to Arya et al. [5], produces spanners with diameter 8. In addition, general tradeoffs between the diameter and number of edges were established [5, 26]. Specifically, it was shown in [5, 26] that for any k ≥ 4, one can build in O(n(log n)2kα_k(n)) time a (1 + ε)-spanner with diameter at most 2k and O(n2kα_k(n)) edges. The function α_k is the inverse of a certain Ackermann-style function at the ⌊k/2⌋th level of the primitive recursive hierarchy, where α_0(n) = ⌈n/2⌉, α_1(n) = ⌈√n⌉, α_2(n) = ⌈log n⌉, α_3(n) = ⌈log log n⌉, α_4(n) = log* n, α_5(n) = ⌊(1/2) log* n⌋, etc. It is also known [26] that if one allows quadratic time then these bounds can be improved: specifically, for any k ≥ 4, a (1 + ε)-spanner with diameter at most k and O(nkα_k(n)) edges can be constructed in O(n²) time [26]. A major open question in this area is whether one can construct within time O(n log n + nkα_k(n)) a (1 + ε)-spanner with diameter at most k and O(nkα_k(n)) edges. This question in the particular case of k = 4 coincides with the aforementioned conjecture of Arya et al. [5]. In this paper we answer this long-standing question in the affirmative. Moreover, we provide a stronger result: we show that for any k ≥ 4, a (1 + ε)-spanner with diameter at most k and O(nα_k(n)) edges can be built in optimal time O(n log n). In particular, our tradeoff for k = 4 provides an O(n log n)-time construction of (1 + ε)-spanners with diameter at most 4 and O(n log* n) edges, thus settling the conjecture of Arya et al. [5]. The tradeoff between the diameter and number of edges of our spanner construction is tight up to constant factors in the entire range of parameters, even if one allows the spanner to use (arbitrarily many) Steiner points. Source
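
To give a feel for how slowly the α_k functions in this tradeoff grow, the following sketch (not from the paper) evaluates the first few levels exactly as listed in the abstract, taking all logarithms to base 2 as an assumption.

import math

def log_star(n: float) -> int:
    """Iterated logarithm: how many times log2 must be applied until the value is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

def alpha(k: int, n: int) -> float:
    """First levels of the inverse-Ackermann-style hierarchy quoted in the abstract."""
    if k == 0:
        return math.ceil(n / 2)
    if k == 1:
        return math.ceil(math.sqrt(n))
    if k == 2:
        return math.ceil(math.log2(n))
    if k == 3:
        return math.ceil(math.log2(math.log2(n)))
    if k == 4:
        return log_star(n)
    if k == 5:
        return math.floor(0.5 * log_star(n))
    raise NotImplementedError("higher levels need the full Ackermann-style recursion")

n = 10**9
print([alpha(k, n) for k in range(6)])  # -> [500000000, 31623, 30, 5, 5, 2]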


In the recent papers (Varshney et al. [1] and Mansuri et al. [2]) the authors reported data on structure and electrical transport, i.e., resistivity (ρ) versus temperature (T) and magnetic field (H), in the polycrystalline manganite systems La1-xNaxMnO3+y and La1-xBaxMnO3. In particular, the ρ(T) dependences show a shallow minimum at Tmin ∼ 35 K and ∼40 K for the Na- and Ba-doped samples, respectively. Application of an external field H of up to 8 T leads to marked flattening of this minimum and to a notable lowering of Tmin on the ρ(T,H) curves. The authors attributed this low-temperature anomaly to a competition between the quantum Kondo effect and three-dimensional electron-electron interaction, using the results of numerical fittings of ρ versus T and H in the interval of about 5-50 K. It is suggested in the comment that such a claim seems doubtful due to the fundamental inapplicability of this approach to the analysis of low-temperature conductivity in polycrystalline manganites. It is noted that a natural reason for both the low-temperature minimum on the ρ(T) curve and its evolution upon application of a magnetic field is the well-known grain-boundary effect. © 2012 Elsevier B.V. Source


Daniel Y.,Technion - Israel Institute of Technology | Lavan O.,Technion - Israel Institute of Technology | Levy R.,Ben - Gurion University of the Negev
Journal of Structural Engineering (United States) | Year: 2012

This paper deals with the allocation and sizing of multiple-tuned mass dampers (MTMDs) in an attempt to retrofit footbridges with multiple critical modes resulting from the excitation of pedestrian traffic. The simple and practical methodology proposed herein includes the use of an iterative analysis/redesign type procedure that converges to a given allowable level of accelerations. An example that uses this methodology to retrofit an existing footbridge is presented. © 2012 American Society of Civil Engineers. Source


Leibovitz E.,Ben - Gurion University of the Negev
Pediatrics and Neonatology | Year: 2012

Invasive fungal infections represent the third-leading cause of late-onset sepsis in very-low-birth-weight infants (VLBWI) and have a high rate of infection-associated mortality. The infants at high risk for fungal sepsis are VLBWI with presence of additional risk factors that contribute to increased colonization and concentration of fungal organisms. Colonization with Candida spp. in neonates is secondary to either maternal vertical transmission or nosocomial acquisition in the nursery. Multiple sites may become colonized and a direct correlation between fungal colonization and subsequent progression to invasive candidemia was determined. Randomized, single and multiple-center, placebo-controlled trials found intravenous fluconazole prophylaxis to be effective in decreasing fungal colonization and sepsis for at-risk preterm infants <1500 g birth weight. The prophylactic use of fluconazole was found to be safe with no significant development of fungal resistance. Fluconazole prophylaxis administered to preterm neonates with birth weight <1000 g and/or 27 weeks' gestation or less has the potential of reducing and potentially eliminating invasive fungal infections and Candida-related mortality. Copyright © 2012, Taiwan Pediatric Association. Published by Elsevier Taiwan LLC. All rights reserved. Source


Bykhovsky D.,Ben - Gurion University of the Negev | Arnon S.,Sami Shamoon College of Engineering
Journal of Lightwave Technology | Year: 2015

The growing demand for high interconnection speed in next-generation computers is driving the technology shift for communication from the electronic to the optical domain. One of the favored interconnection technologies for this task is the free-space optical interconnect (FSOI). FSOI technology uses laser links between computer components and provides a lower bound on propagation delay due to the low index of refraction of air, when compared with the indexes common in waveguide technologies. FSOIs based on dc-biased optical orthogonal frequency-division multiplexing (DCO-OFDM) may provide excellent data throughput in intensity modulation/direct detection systems. However, the main drawback limiting the implementation of FSOIs is the inevitable tradeoff between interconnection density and the crosstalk level, resulting from the diffraction effect and from optical misalignment. The purpose of this paper is to promote improved interconnection density of such FSOIs by use of inherent DCO-OFDM resource allocation capabilities. The crosstalk-induced interference was handled by formulating a joint multilink bit-and-power allocation optimization. The theoretical analysis reveals general guidelines for dense FSOI. Further, a reduced-complexity numerical suboptimal algorithm for joint multilink bit-and-power allocation was proposed. The simulation results show that the proposed suboptimal algorithm performs close to the theoretical optimum. © 2015 IEEE. Source
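
For readers unfamiliar with bit-and-power allocation, the following is a minimal single-link, greedy (Hughes-Hartogs-style) bit-loading sketch assuming unit noise power. It is an illustrative assumption, not the joint multilink suboptimal algorithm proposed in the paper.

def greedy_bit_loading(channel_gains, total_power, max_bits=8, gamma=1.0):
    """Margin-adaptive greedy bit loading for a single multicarrier link.

    Bits are added one at a time to the subcarrier whose next bit costs the
    least additional power. Incremental power to go from b to b+1 bits on a
    subcarrier with gain g: gamma * (2**(b+1) - 2**b) / g (unit noise power).
    """
    n = len(channel_gains)
    bits = [0] * n
    used = 0.0
    while True:
        best, best_cost = None, float("inf")
        for i, g in enumerate(channel_gains):
            if bits[i] >= max_bits:
                continue
            cost = gamma * (2 ** (bits[i] + 1) - 2 ** bits[i]) / g
            if cost < best_cost:
                best, best_cost = i, cost
        if best is None or used + best_cost > total_power:
            break
        bits[best] += 1
        used += best_cost
    return bits, used

# Four subcarriers with decreasing gains, illustrative power budget
print(greedy_bit_loading([1.0, 0.5, 0.1, 0.05], total_power=50.0))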


Yagupsky P.,Ben - Gurion University of the Negev
Clinical Microbiology Reviews | Year: 2015

Kingella kingae is a common etiology of pediatric bacteremia and the leading agent of osteomyelitis and septic arthritis in children aged 6 to 36 months. This Gram-negative bacterium is carried asymptomatically in the oropharynx and disseminates by close interpersonal contact. The colonized epithelium is the source of bloodstream invasion and dissemination to distant sites, and certain clones show significant association with bacteremia, osteoarthritis, or endocarditis. Kingella kingae produces an RTX (repeats-in-toxin) toxin with broad-spectrum cytotoxicity that probably facilitates mucosal colonization and persistence of the organism in the bloodstream and deep body tissues. With the exception of patients with endocardial involvement, children with K. kingae diseases often show only mild symptoms and signs, necessitating clinical acumen. The isolation of K. kingae on routine solid media is suboptimal, and detection of the bacterium is significantly improved by inoculating exudates into blood culture bottles and by the use of PCR-based assays. The organism is generally susceptible to antibiotics that are administered to young patients with joint and bone infections. β-Lactamase production is clonal, and the local prevalence of β-lactamase-producing strains is variable. If adequately and promptly treated, invasive K. kingae infections with no endocardial involvement usually run a benign clinical course. © 2015, American Society for Microbiology. All Rights Reserved. Source


Bilik I.,University of Massachusetts Dartmouth | Tabrikian J.,Ben - Gurion University of the Negev
IEEE Transactions on Aerospace and Electronic Systems | Year: 2010

The problem of maneuvering target tracking in the presence of glint noise is addressed in this work. The main challenge in this problem stems from its nonlinearity and non-Gaussianity. A new estimator, named the nonlinear Gaussian mixture Kalman filter (NL-GMKF), is derived based on the minimum mean-square error (MMSE) criterion and applied to the problem of maneuvering target tracking in the presence of glint. The tracking performance of the NL-GMKF is evaluated and compared with the interacting multiple model (IMM) filter implemented with the extended Kalman filter (EKF), unscented Kalman filter (UKF), particle filter (PF) and Gaussian sum PF (GSPF). It is shown that the NL-GMKF outperforms these algorithms in several examples with maneuvering targets and/or glint noise measurements. © 2006 IEEE. Source


Bilik I.,University of Massachusetts Dartmouth | Tabrikian J.,Ben - Gurion University of the Negev
IEEE Transactions on Aerospace and Electronic Systems | Year: 2010

The problem of sequential Bayesian estimation in linear non-Gaussian problems is addressed. In the Gaussian sum filter (GSF), the non-Gaussian system noise, the measurement noise, and the posterior state densities are modeled by the Gaussian mixture model (GMM). The GSF is optimal under the minimum mean-square error (MMSE) criterion; however, it is impractical due to the exponential model-order growth of the system probability density function (pdf). The proposed recursive estimator, named the Gaussian mixture Kalman filter (GMKF), combines the GSF with a model-order reduction procedure. The posterior state density at each iteration is approximated by a lower-order density. This model-order reduction procedure minimizes the estimated Kullback-Leibler divergence (KLD) of the reduced-order density from the original density at each step. The estimation performance of the proposed GMKF is compared with the interacting multiple model (IMM) filter, particle filter (PF), Gaussian sum PF (GSPF), and the GSF with mixture reduction (MR) via simulations. It is shown in several examples that the proposed GMKF outperforms the other tested algorithms in terms of estimation accuracy. The superior estimation performance of the GMKF is obtained at the expense of its computational complexity, which is higher than that of the IMM and MR algorithms. © 2010 IEEE. Source
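
The flavor of such a model-order reduction step can be illustrated with a scalar Gaussian mixture: pairs of components are merged by moment matching until the target order is reached. The pair-selection cost below is a simple weighted mean-distance heuristic standing in, as an assumption, for the KLD-minimizing criterion used in the paper.

def merge(w1, m1, v1, w2, m2, v2):
    """Moment-matching merge of two scalar Gaussian components (weight, mean, variance)."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
    return w, m, v

def reduce_mixture(components, target_order):
    """Greedily merge the closest pair (by weighted mean distance) until
    the mixture has target_order components. components: list of (w, m, v)."""
    comps = list(components)
    while len(comps) > target_order:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                wi, mi, _ = comps[i]
                wj, mj, _ = comps[j]
                cost = wi * wj / (wi + wj) * (mi - mj) ** 2
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        merged = merge(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

# Three-component posterior reduced to order two
gmm = [(0.4, 0.0, 1.0), (0.3, 0.2, 1.2), (0.3, 5.0, 0.5)]
print(reduce_mixture(gmm, 2))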


Guendelman E.I.,Ben - Gurion University of the Negev
International Journal of Modern Physics A | Year: 2013

It is shown that an alternative to standard scalar quantum electrodynamics (QED) is possible. In this new version, there is only global gauge invariance as far as the charged scalar fields are concerned, although local gauge invariance is kept for the electromagnetic field. The electromagnetic coupling has the form j^μ(A_μ + ∂_μB), where B is an auxiliary field and the current j^μ is A_μ-independent, so that no seagull terms are introduced. As a consequence of the absence of seagulls, it is seen that no Klein paradox appears in the presence of a strong square-well potential. In a model of this kind, spontaneous breaking of symmetry does not lead to photon mass generation; instead the Goldstone boson becomes a massless source for the electromagnetic field. Infrared questions concerning the theory when spontaneous symmetry breaking takes place, and generalizations to global vector QED, are discussed. In this framework, Q-balls and other nontopological solitons that owe their existence to a global U(1) symmetry can be coupled to electromagnetism and could represent the multiply charged particles now being searched for at the Large Hadron Collider (LHC). Furthermore, we give an example where an emergent global scalar QED can appear from an axion-photon system in an external magnetic field. Finally, formulations of global scalar QED that allow perturbative expansions without seagulls are developed. © 2013 World Scientific Publishing Company. Source


Kontorovich L.,Ben - Gurion University of the Negev
Statistics and Computing | Year: 2012

We investigate bounded-memory estimators of statistical functionals. It is shown that, for nondegenerate functionals and stochastic processes, it is impossible to achieve consistent estimation with bounded memory. In the positive direction, we show that O(log(1/ε)) states suffice to achieve ε-consistent estimation for a natural class of functionals. A canonical optimal construction is conjectured for arbitrary statistical functionals. © 2011 Springer Science+Business Media, LLC. Source


Meng L.,California Institute of Technology | Inbal A.,California Institute of Technology | Inbal A.,Ben - Gurion University of the Negev | Ampuero J.-P.,California Institute of Technology
Geophysical Research Letters | Year: 2011

The 2011 Mw 9 Tohoku-Oki earthquake, recorded by over 1000 near-field stations and multiple large-aperture arrays, is by far the best recorded earthquake in the history of seismology and provides unique opportunities to address fundamental issues in earthquake source dynamics. Here we conduct a high-resolution array analysis based on recordings from USArray and the European network. The mutually consistent results from both arrays reveal rupture complexity with unprecedented resolution, involving phases of diverse rupture speed and intermittent high-frequency bursts within slow-speed phases, which suggests spatially heterogeneous material properties. The earthquake initially propagates down-dip, with a slow initiation phase followed by sustained propagation at speeds of 3 km/s. The rupture then slows down to 1.5 km/s for 60 seconds. A rich sequence of bursts is generated along the down-dip rim of this slow and roughly circular rupture front. Before the end of the slow phase an extremely fast rupture front detaches at about 5 km/s towards the north. Finally a rupture front propagates towards the south, running at about 2.5 km/s for over 100 km. Key features of the rupture process are confirmed by the strong-motion data recorded by K-NET and KiK-net. The energetic high-frequency radiation episodes within a slow rupture phase suggest a patchy image of the brittle-ductile transition zone, composed of discrete brittle asperities within a ductile matrix. The high frequencies are generated mainly at the down-dip edge of the principal slip regions constrained by geodesy, suggesting an along-dip variation of the mechanical properties of the megathrust fault, or a spatial heterogeneity that affects rise time. Copyright 2011 by the American Geophysical Union. Source


Cohen D.,Ben - Gurion University of the Negev
Physica Scripta | Year: 2012

The analysis of the response to driving in the case of weakly chaotic or weakly interacting systems should go beyond linear response theory. Due to the 'sparsity' of the perturbation matrix, a resistor-network picture of transitions between energy levels is essential. The Kubo formula is modified, replacing the 'algebraic' average over the squared matrix elements by a 'resistor-network' average. Consequently, the response becomes semi-linear rather than linear. Some novel results have been obtained in the context of two prototype problems: the heating rate of particles in billiards with vibrating walls; and the Ohmic Joule conductance of mesoscopic rings driven by electromotive force. The results obtained are contrasted with the 'Wall formula' and the 'Drude formula'. © 2012 The Royal Swedish Academy of Sciences. Source
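
A toy numerical illustration of the difference between the two averages, assuming the simplest possible case in which the transitions are arranged in series so that the 'resistor-network' average reduces to a harmonic mean; the actual calculation involves the full network of levels, so this is only meant to show why a sparse, widely distributed set of squared matrix elements suppresses the response relative to the plain (algebraic) Kubo average.

import numpy as np

rng = np.random.default_rng(0)
# Widely distributed squared matrix elements ("sparse" perturbation matrix)
rates = rng.lognormal(mean=0.0, sigma=3.0, size=1000)

algebraic = rates.mean()                        # plain Kubo-style average
series_network = 1.0 / np.mean(1.0 / rates)     # series-resistor (harmonic) average

# The network-style average is dominated by the weak links and is far smaller
print(algebraic, series_network)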


Allen R.M.,University of California at Berkeley | Ziv A.,Ben - Gurion University of the Negev
Geophysical Research Letters | Year: 2011

We explore the use of real-time high-rate GPS displacement data for earthquake early warning using 1 Hz displacement waveforms from the April 4, 2010, Mw 7.2 El Mayor-Cucapah earthquake. We compare these data to those provided by the broadband velocity and accelerometer instrumentation of the Southern California Seismic Network. The unique information provided by the GPS-based displacement time series is the permanent/static displacement. Using a simple algorithm that can be applied in real time, we extract the static offset shortly after the S-wave arrival, around the time of the observed peak shaking at the same site, and before shaking at more distant locations. These data can be used, as they become available, to provide a robust estimate of the earthquake magnitude, which ranges from 6.8 to 7.0 in this case. We therefore conclude that real-time high-rate GPS can provide a useful and independent assessment of earthquake magnitude for the purpose of earthquake early warning and real-time earthquake information systems in general, including tsunami warning systems. Copyright 2011 by the American Geophysical Union. Source
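
A minimal sketch of how a static offset can be pulled from a 1 Hz displacement time series (a post-shaking mean minus the pre-event mean). The window lengths and the synthetic ramp are assumptions for illustration, not the paper's algorithm, and the subsequent magnitude estimate would come from an empirical offset-distance-magnitude scaling relation that is not reproduced here.

import numpy as np

def static_offset(t, disp, s_arrival, settle=10.0, pre_window=30.0, post_window=30.0):
    """Estimate the permanent (static) displacement from a 1 Hz GPS series.

    t, disp: 1-D arrays (seconds, metres); s_arrival: S-wave arrival time.
    The offset is the mean displacement in a window after shaking settles,
    minus the pre-event mean. Window lengths are illustrative assumptions.
    """
    pre = disp[(t >= s_arrival - pre_window) & (t < s_arrival)]
    post = disp[(t >= s_arrival + settle) & (t < s_arrival + settle + post_window)]
    return post.mean() - pre.mean()

# Synthetic example: a smooth ramp to 0.3 m of permanent offset plus noise
t = np.arange(0.0, 120.0, 1.0)
disp = 0.3 / (1 + np.exp(-(t - 60))) + 0.005 * np.random.randn(t.size)
print(static_offset(t, disp, s_arrival=55.0))  # ~0.3 m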


Braun-Lewensohn O.,Ben - Gurion University of the Negev
Current Psychiatry Reports | Year: 2015

The goal of this paper was to critically evaluate the literature on children coping with mass trauma published between the years 2011 and 2014 and to emphasize interesting and important findings with the aim of proposing a new comprehensive model for better understanding the process of coping with these events in this unique developmental stage. Using a variety of databases, 26 research papers were selected. The papers were divided into two main categories, natural and manmade disasters. The findings suggest that several areas in this context still lack foundational knowledge and should be further investigated. Thus, it has been suggested that future research should emphasize the developmental stage of the children, the cultural context and atmosphere in which the investigated children grow up and live, and the type of event (acute vs. chronic; natural vs. manmade). A more comprehensive coping model which addresses these omissions and combines main theories is suggested for use in future research as well. © 2015, Springer Science+Business Media New York. Source


Deutch D.,Ben - Gurion University of the Negev | Milo T.,Tel Aviv University
Journal of Computer and System Sciences | Year: 2012

A business process consists of multiple business activities which, when combined in a flow, achieve some particular goal. These processes usually operate in a distributed environment and the software implementing them is fairly complex. Thus, effective tools for analyzing the possible executions of such processes are extremely important for companies (Beeri et al., 2006, 2007 [4,5]); (Deutch and Milo, 2008 [13]); such tools allow companies to debug and optimize the processes and to make optimal use of them. The goal of the present paper is to consider a formal model underlying business processes and to study query languages over such processes. We study in detail the relationship of the proposed model with previously suggested formalisms for process modeling and querying. In particular we propose a query evaluation algorithm of polynomial data complexity that can be applied uniformly to two kinds of common queries over processes, namely queries on the structure of the process specification as well as temporal queries on the potential behavior of the defined process. We show that unless P=NP the efficiency of our algorithm is asymptotically optimal. Source


Bremler-Barr A.,The Interdisciplinary Center | Hendler D.,Ben - Gurion University of the Negev
IEEE Transactions on Computers | Year: 2012

Ternary content-addressable memories (TCAMs) are increasingly used for high-speed packet classification. TCAMs compare packet headers against all rules in a classification database in parallel and thus provide high throughput unparalleled by software-based solutions. TCAMs are not well-suited, however, for representing rules that contain range fields. Such rules typically have to be represented (or encoded) by multiple TCAM entries. The resulting range expansion can dramatically reduce TCAM utilization. A TCAM range-encoding algorithm 𝒜 is database-independent if, for all ranges r, it encodes r independently of the database in which it appears; otherwise, we say that 𝒜 is database-dependent. Typically, when storing a classification database in TCAM, a few dozen so-called extra bits in each TCAM entry remain unused. These extra bits are used by some (both database-dependent and database-independent) prior algorithms to reduce range expansion. The majority of real-life database ranges are short. We present a novel database-independent algorithm called Short Range Gray Encoding (SRGE) for the efficient representation of short range rules. SRGE encodes range endpoints as binary-reflected Gray codes and then represents the resulting range by a minimal set of ternary strings. To the best of our knowledge, SRGE is the first algorithm that achieves a reduction in range expansion in general, and a significant expansion reduction for short ranges in particular, without resorting to the use of extra bits. The "traditional" database-independent technique for representing range entries in TCAM is prefix expansion. As we show, SRGE significantly reduces the expansion of short ranges in comparison with prefix expansion. We also prove that the SRGE algorithm's range expansion is at least as good as that of prefix expansion for any range. Real-world classification databases contain a small number of unique long ranges, some of which appear in numerous rules. These long ranges cause high expansion which is not significantly reduced by any database-independent range encoding scheme that we are aware of, including SRGE. We introduce hybrid SRGE, a database-dependent encoding scheme that uses SRGE for reducing the expansion of short ranges and uses extra bits for reducing the expansion caused by long ones. Our comparative analysis establishes that hybrid SRGE utilizes TCAM more efficiently than previously published range-encoding algorithms. This work also makes a more theoretical contribution. Prefix expansion for ranges defined by W-bit endpoints has a worst-case expansion ratio of 2W - 2. It follows from the work of Schieber et al. [1] that the SRGE algorithm has a slightly better worst-case expansion ratio of 2W - 4. We prove that any database-independent TCAM encoding scheme has a worst-case expansion ratio of at least W. © 2006 IEEE. Source
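
The Gray-coding step that SRGE builds on is the standard binary-reflected Gray code, sketched below; the selection of the minimal covering set of ternary (0/1/*) strings, which is the paper's contribution, is not reproduced here.

def gray_encode(x: int) -> int:
    """Binary-reflected Gray code of x (standard construction)."""
    return x ^ (x >> 1)

def gray_decode(g: int) -> int:
    """Inverse of gray_encode."""
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

# Adjacent integers differ in exactly one bit after Gray encoding, which is
# what lets a short range of endpoints be covered by few ternary strings.
for v in range(16, 20):
    print(v, format(gray_encode(v), "05b"))
assert all(gray_decode(gray_encode(v)) == v for v in range(1024))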


Iecovich E.,Ben - Gurion University of the Negev
Journals of Gerontology - Series B Psychological Sciences and Social Sciences | Year: 2016

Objectives: (a) To assess the extent to which there are ambivalent dyadic relationships between older care recipients and their primary family caregivers (PFCs) and migrant care workers (MCWs), and (b) to examine the extent to which ambivalence explains loneliness among the care recipients. Method: A sample of 279 triads of respondents (care recipients, their PFCs, and their MCWs) was recruited and interviewed face-to-face. The Dyadic Relationship Scale was used to assess quality of relationships and ambivalence. The de Jong Gierveld Loneliness Scale was used to assess loneliness. Results: Mean scores for dyadic ambivalence between the care recipients and both types of caregivers were moderate. Lower levels of ambivalence in dyadic relationships with PFCs were associated with decreased loneliness. Ambivalent dyadic relationships explained 6% of the variance in loneliness. Discussion: Some degree of ambivalent relationship exists between care recipients and both types of caregivers. The strength of ambivalence was significantly associated with the level of loneliness reported by functionally disabled older adults. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. Source


Alfonta L.,Ben - Gurion University of the Negev
Electroanalysis | Year: 2010

This article reviews the advances that were made towards the understanding and the improvement of electron transfer and communication between living cells and electrodes with a specific emphasis on microbial fuel cells and bioelectrical systems. It summarizes the efforts that were made thus far to improve electron transfer between microorganisms and electrodes using the genetically based understanding of electron transfer in such organisms and the manipulations that can be performed to improve the transfer and subsequently control over power output. Future directions in the field are also reviewed and suggested in this article. © 2010 Wiley-VCH Verlag GmbH &Co. KGaA. Source


Sobol-Shikler T.,Ben - Gurion University of the Negev | Robinson P.,University of Cambridge
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2010

We present a classification algorithm for inferring affective states (emotions, mental states, attitudes, and the like) from their nonverbal expressions in speech. It is based on the observations that affective states can occur simultaneously and different sets of vocal features, such as intonation and speech rate, distinguish between nonverbal expressions of different affective states. The input to the inference system was a large set of vocal features and metrics that were extracted from each utterance. The classification algorithm conducted independent pairwise comparisons between nine affective-state groups. The classifier used various subsets of metrics of the vocal features and various classification algorithms for different pairs of affective-state groups. Average classification accuracy of the 36 pairwise machines was 75 percent, using 10-fold cross validation. The comparison results were consolidated into a single ranked list of the nine affective-state groups. This list was the output of the system and represented the inferred combination of co-occurring affective states for the analyzed utterance. The inference accuracy of the combined machine was 83 percent. The system automatically characterized over 500 affective state concepts from the Mind Reading database. The inference of co-occurring affective states was validated by comparing the inferred combinations to the lexical definitions of the labels of the analyzed sentences. The distinguishing capabilities of the system were comparable to human performance. © 2010 IEEE. Source
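
A minimal sketch of how 36 pairwise decisions over nine groups can be consolidated into a single ranked list by win counting. The toy "classifier" below is just a fixed preference order standing in for the trained pairwise machines, and simple vote counting is an assumption rather than the paper's exact consolidation rule.

from itertools import combinations

def rank_from_pairwise(labels, pairwise_winner):
    """Consolidate pairwise comparison outcomes into a ranked list of labels.

    pairwise_winner(a, b) returns the label judged more likely of the pair.
    Win counting is an illustrative assumption; the paper's consolidation
    scheme may differ.
    """
    wins = {label: 0 for label in labels}
    for a, b in combinations(labels, 2):        # 9 groups -> 36 pairs
        wins[pairwise_winner(a, b)] += 1
    return sorted(labels, key=lambda l: wins[l], reverse=True)

# Toy example: a fixed preference order stands in for the pairwise classifiers
groups = [f"state_{i}" for i in range(9)]
preference = {g: i for i, g in enumerate(groups)}
print(rank_from_pairwise(groups, lambda a, b: a if preference[a] < preference[b] else b))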


David A.,Ben - Gurion University of the Negev
Israel Journal of Chemistry | Year: 2010

A variety of strategies and carrier molecules have been used to direct therapeutic agents to tumor sites. The incorporation of a specific targeting moiety into a drug carrier may result in active drug uptake by malignant cells. Carbohydrates are important mediators of cell-cell recognition events and have been implicated in related processes such as cell signaling regulation, cellular differentiation, and immune response. The biocompatibility of carbohydrates and their ability to be specifically recognized by cell-surface receptors indicate their potential utility as ligands in targeted drug delivery for therapeutic applications. Yet, carbohydrates are not ideal targeting ligands because they are difficult to synthesize, bind weakly to carbohydrate receptors, and are prone to enzymatic degradation due to labile glycosidic linkages. This review describes the design and development of HPMA-based biomedical copolymers to facilitate the selective delivery of drugs to tumor tissues via carbohydrate-endogenous lectin interactions. Various carbohydrate-decorated HPMA copolymer-drug conjugates are presented and the application of the copolymers for drug delivery is discussed. Current efforts to increase the affinity of carbohydrate ligands for their target receptors through multivalent display are also discussed. These novel HPMA copolymer-carbohydrate conjugates hold promise as clinically relevant drug delivery systems for cancer therapy. © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Maric I.,Aviat Networks | Dabora R.,Ben - Gurion University of the Negev | Goldsmith A.J.,Stanford University
IEEE Transactions on Information Theory | Year: 2012

The smallest network model that captures relaying in the presence of multiple communicating pairs causing interference to each other is the interference channel with a relay. In this paper, an achievable rate region for the interference channel with a relay is derived. Special cases of strong interference under which this region is the capacity region are presented. The results obtained demonstrate the benefits of interference forwarding at a relay. By forwarding interfering messages, the relay can improve their reception at unintended receivers and, thus, facilitate interference cancellation. We show that intentionally forwarding interfering messages can improve the achievable rates. The achievable rates and interference forwarding gains are also illustrated by numerical results in Gaussian channels. Finally, a sum-rate outer bound to the capacity region of the Gaussian interference channel with a relay is derived and compared with the achievable rate region. The cut-set bound for this channel is also derived and shown to be much looser than the new sum-rate outer bound. © 2012 IEEE. Source


Dabora R.,Ben - Gurion University of the Negev
IEEE Transactions on Information Theory | Year: 2012

The interference channel with a relay (ICR) is the fundamental building block of cooperation in wireless networks where there are multiple communicating pairs interfering with each other. This paper considers ICRs in which the links are subject to i.i.d. fading, and each node has channel state information (CSI) only on its incoming links (receive CSI). Two channel models are considered: phase fading and Rayleigh fading. Strong interference conditions are derived for the case where the links from the sources to the relay are good in the sense that the achievable region for decoding both messages at the relay contains the maximal achievable region at the destinations. This leads to the characterization of the capacity region for such scenarios. This is the first time the capacity region of the ICR is characterized for a nondegraded, noncognitive scenario, with a causal relay when all links are active. © 1963-2012 IEEE. Source


Vardi A.,Ben - Gurion University of the Negev
Romanian Reports in Physics | Year: 2015

I describe two minimal Bose-Hubbard models which exhibit classical chaos and demonstrate its relation to phase-space ergodization and thermalization. The first model is a realization of a kicked top via driving of the mode-coupling term in a Bose-Hubbard dimer. Coherent preparations in the chaotic phase-space regions of this model constitute far-from-equilibrium wavepackets with Floquet participation numbers which scale linearly with the size of the Hilbert space. Consequently, such preparations exhibit ergodization and effectively lose their one-particle coherence irreversibly, as compared to the collapse and revival dynamics obtained for coherent preparations in the integrable dimer limit. The second model is a Bose-Hubbard trimer, which is the minimal system for obtaining chaos without driving. Considering two weakly coupled trimers, thermalization between them is attained via the linear diffusive response of each subsystem to the effective drive exerted on it by the other. This energy diffusion is captured well by a Fokker-Planck equation, which implies thermodynamical Einstein relations. © 2015, Editura Academiei Romane. All rights reserved. Source


Mader R.,Rheumatic Diseases Unit | Verlaan J.-J.,University Utrecht | Buskila D.,Ben - Gurion University of the Negev
Nature Reviews Rheumatology | Year: 2013

Diffuse idiopathic skeletal hyperostosis (DISH) is a systemic condition characterized by the ossification and calcification of ligaments and entheses. DISH is observed on all continents and in all races, but most commonly in men over 50 years of age. Although DISH is asymptomatic in most individuals, the condition is often an indicator of underlying metabolic disease, and the presence of spinal or extraspinal ossifications can sometimes lead to symptoms including pain, stiffness, a reduced range of articular motion, and dysphagia, as well as increasing the risk of unstable spinal fractures. The aetiology of DISH is poorly understood, and the roles of the many factors that might be involved in the development of excess bone are not well delineated. The study of pathophysiological aspects of DISH is made difficult by the formal diagnosis requiring the presence of multiple contiguous fully formed bridging ossifications, which probably represent advanced stages of DISH. In this Review, the reader is provided with an up-to-date discussion of the epidemiological, aetiological and clinical aspects of DISH. Existing classification criteria (which, in the absence of diagnostic criteria, are used to establish a diagnosis of DISH) are also considered, together with the need for modified criteria that enable timely identification of early phases in the development of DISH. © 2013 Macmillan Publishers Limited. All rights reserved. Source


Karp B.,Ben - Gurion University of the Negev
International Journal of Solids and Structures | Year: 2011

The dynamic response of an elastic, semi-infinite strip with sliding surfaces, subjected to various forms of end excitation, was solved analytically employing the property of bi-orthogonality of wave modes. An explicit relation between the amplitudes of evanescent waves and the form of the excitation was obtained. A quantitative measure for dynamic end effects was suggested, termed the Saint-Venant ratio (SVR). It was shown that two qualities of that ratio make it useful for structural health monitoring (SHM) of joints: the ratio is not affected by the intensity of the end excitation, and it is highly sensitive to small variations in the form of the excitation. The axial behavior of the strip subjected to several forms of end excitation was further used to demonstrate the validity of a previously suggested dynamic version of Saint-Venant's principle. © 2010 Elsevier Ltd. All rights reserved. Source


Ablin J.N.,Tel Aviv University | Buskila D.,Ben - Gurion University of the Negev
Current Rheumatology Reports | Year: 2013

Despite advances made in its understanding and treatment, chronic pain remains an unsolved and all too common problem. One of the main obstacles to successful management of pain is the high variability among patients regarding both response to treatment and susceptibility to adverse effects, which curtails the utility of therapeutic intervention. Understanding the causes of this variability is an important challenge which may lead to a new era in rational pain management. As described in this review, however, there currently seems to be more than one possible explanation of this variability. Rational personalized pain management must take into consideration both the ever-increasing knowledge of pharmacogenetics and pharmacokinetics and a broad, clinically based attitude incorporating co-morbidities, both physical and psychiatric, and concomitant medications. Novel models for testing in-vivo pain processing, for example assessment of conditioned pain modulation (CPM), are also promising approaches to the use of rational data for empirical treatment of pain. Last, listening to the patient and understanding the context in which pain has affected his or her life is an important part of maintaining the personal nature of the therapeutic interaction with patients suffering from pain. © 2012 Springer Science+Business Media New York. Source


Moreh R.,Ben - Gurion University of the Negev
Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms | Year: 2012

In some recent studies it is claimed that the electron elastic scattering intensities at keV energies from atoms and molecules at high momentum transfers do not conform to the Rutherford relation. Huge reductions in the ratios of the electron scattering intensities were reported in the following binary gas mixtures: H2/D2, H2/He, H2/Ar, D2/Ar and He/Ar, where the scattering intensities from the light partners were found to be lower than those from the heavier ones by ∼30% and more. Of particular interest is the case of the H2/D2 isotopic mixture, found to deviate from the Rutherford relation by 30%. Similar intensity reductions were reported in samples of solid polymers where the scattering intensity from H was compared to that of C; the strong anomalous H-scattering intensity reduction was attributed to short-lived quantum entanglement of a proton pair in the solid sample. It was stated that a quantum mechanical treatment of this scattering process in the framework of the Born approximation could not yield an explanation for the above observation. Here a critical examination of the above reports is given, with the conclusion that the origin of all the above deviations is very likely instrumental and not due to any real deviation from the Rutherford formula. © 2011 Elsevier B.V. All rights reserved. Source


Kalichman L.,Ben - Gurion University of the Negev | Hernandez-Molina G.,Instituto Nacional Of Ciencias Medicas Y Nutrion Salvador Zubiran
Seminars in Arthritis and Rheumatism | Year: 2010

Objectives: Hand osteoarthritis (OA) is a highly prevalent condition with a wide spectrum of clinical presentations. We review herein the prevalence, impact on hand function, and various risk factors related to hand OA. Methods: PubMed and MEDLINE databases (1950-2009) were searched for the keywords: "hand," "hand osteoarthritis," "distal interphalangeal," "proximal interphalangeal," "metacarpophalangeal," and "carpometacarpal." Published material emphasizing cohort, cross-sectional, and case-control studies regarding epidemiology, clinical features, functional impairment, and associated risk factors of hand OA were included. Results: Hand OA is a heterogeneous, age- and gender-dependent disorder, occurring more frequently in women over 50 years of age. In the elderly population, the prevalence of radiographic hand OA can reach 80%. OA has a strong genetic predisposition, apparently gender- and phenotype-specific. A history of heavy manual labor or a repetitive use of the hand also has been linked to OA. Other variables such as weight, smoking, joint hyperlaxity, age of menarche, bone and cartilage mineralization factors, grip strength, and handedness may play a role. Symptomatic hand OA may cause functional impairment due to loss of strength, thus limiting the individual's ability to perform daily tasks. Conclusions: Several risk factors for hand OA have been identified; however, their interrelationship is not clearly understood. The development of preventive strategies and future research goals is needed. © 2010 Elsevier Inc. Source


Basha O.,Ben - Gurion University of the Negev
Nucleic acids research | Year: 2013

Genome sequencing and transcriptomic profiling are two widely used approaches for the identification of human disease pathways. However, each approach typically provides a limited view of disease pathways: Genome sequencing can identify disease-related mutations but rarely reveals their mode-of-action, while transcriptomic assays do not reveal the series of events that lead to the transcriptomic change. ResponseNet is an integrative network-optimization approach that we developed to fill these gaps by highlighting major signaling and regulatory molecular interaction paths that connect disease-related mutations and genes. The ResponseNet web-server provides a user-friendly interface to ResponseNet. Specifically, users can upload weighted lists of proteins and genes and obtain a sparse, weighted, molecular interaction subnetwork connecting them, that is biased toward regulatory and signaling pathways. ResponseNet2.0 enhances the functionality of the ResponseNet web-server in two important ways. First, it supports analysis of human data by offering a human interactome composed of proteins, genes and micro-RNAs. Second, it offers a new informative view of the output, including a randomization analysis, to help users assess the biological relevance of the output subnetwork. ResponseNet2.0 is available at http://netbio.bgu.ac.il/respnet . Source


Dahan A.,Ben - Gurion University of the Negev | Lennernas H.,Uppsala University | Amidon G.L.,University of Michigan
Molecular Pharmaceutics | Year: 2012

The drug intestinal permeability (Peff) measure has been widely used as one of the main factors governing both the rate and/or extent of drug absorption (Fabs) in humans following oral administration. In this communication we emphasize the complexity behind and the care that must be taken with this in vivo Peff measurement. Intestinal permeability, considering the whole of the human intestine, is more complex than generally recognized, and this can lead to misjudgment regarding Fabs and Peff in various settings, e.g. drug discovery, formulation design, drug development and regulation. Setting the adequate standard for the low/high permeability class boundary, the different experimental methods for the permeability measurement, and segmental-dependent permeability throughout the human intestine due to different mechanisms are some of the main points that are discussed. Overall, the use of jejunal Peff as a surrogate for extent of absorption is sound and scientifically justified; a compound with high jejunal Peff will have high Fabs, eliminating the risk for misclassification as a BCS class I drug. Much more care should be taken, however, when jejunal Peff does not support a high-permeability classification; a thorough examination may reveal high-permeability after all, attributable to e.g. segmental-dependent permeability due to degree of ionization or transporter expression. In this situation, the use of multiple permeability experimental methods, including the use of metabolism, which except for luminal degradation requires absorption, is prudent and encouraged. © 2012 American Chemical Society. Source


Barak M.,Ben - Gurion University of the Negev
European Journal of Engineering Education | Year: 2012

Project-based learning is undoubtedly one of the best instructional methods for developing students' broad learning capabilities, beyond teaching specific subject matter. However, experienced engineering teachers often tend to concentrate on merely teaching pupils the technical side of project work. This paper describes a programme aimed at fostering self-regulated learning among high schools pupils working on projects in electronics, control systems and robotics. The programme sought, for example, to promote pupils' conceptual knowledge regarding the subjects they dealt with in their projects, enabling the learners to gain experience using simulation, laboratory testing and troubleshooting in a system that they were constructing. The change in project work also required pupils to document systematically all stages of system development and reflect on their learning by preparing a printed or web-based ePortfolio on the project. Outcomes of the in-service training course given to teachers countrywide and the first signs of change in project work in schools are reported. © 2012 SEFI. Source


Moreh R.,Ben - Gurion University of the Negev | Nemirovsky D.,Sami Shamoon College of Engineering
Journal of Chemical Physics | Year: 2010

We calculated the kinetic energies of the H atom in ice Ih between 5 and 269 K and in water in the range 293-673 K. To do so we used the literature optical vibration frequencies of ice and water in the different phases, assuming the harmonic approximation and decoupling between the translational, rotational (librational), and internal vibrational degrees of freedom. Apart from ice at 269 K, good agreement was obtained with published experimental values for both cases. Similar agreement with experiment was found for the case of water confined in 14 Å diameter carbon nanotubes at 268 K. However, serious deviations from measured values were found in nanotube water in the range 5-230 K. Possible reasons for those deviations are discussed. © 2010 American Institute of Physics. Source
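
Under the stated harmonic, decoupled-mode assumptions, the per-mode mean kinetic energy follows the standard quantum harmonic-oscillator result; the mode-by-mode weighting that projects the energy onto the H atom (denoted f_i below) is introduced here only as a placeholder and is not detailed in the abstract:

% Mean kinetic energy of a harmonic mode of frequency \nu at temperature T,
% and its (assumed) mass-weighted sum over modes for the H atom
\langle E_{\mathrm{kin}}\rangle_{\nu} = \frac{h\nu}{4}\,\coth\!\left(\frac{h\nu}{2k_{B}T}\right),
\qquad
\langle E_{\mathrm{kin}}\rangle_{\mathrm{H}} \approx \sum_{i} f_{i}\,\frac{h\nu_{i}}{4}\,\coth\!\left(\frac{h\nu_{i}}{2k_{B}T}\right).

At low temperature each mode contributes its zero-point value hν/4, which is why the calculated kinetic energies remain large even at 5 K.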


Schwartz M.,Ben - Gurion University of the Negev
IEEE Transactions on Information Theory | Year: 2014

Gray codes for vector spaces are considered in two graphs: the Grassmann graph and the projective-space graph, both of which have recently found applications in network coding. For the Grassmann graph, constructions of cyclic optimal codes are given for all parameters. As for the projective-space graph, two constructions for specific parameters are provided, as well as some nonexistence results. Furthermore, encoding and decoding algorithms are given for the Grassmannian Gray code, which induce an enumerative-coding scheme. The computational complexity of the algorithms is at least as low as that of known schemes, and for certain parameter ranges the new scheme outperforms previously known ones. © 1963-2012 IEEE. Source
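For context, the Grassmann graph referred to here is the standard one: its vertices are the k-dimensional subspaces of the vector space F_q^n, and two subspaces are adjacent exactly when their intersection has dimension k-1,

    V = \{ U \le \mathbb{F}_q^n : \dim U = k \}, \qquad U \sim W \iff \dim(U \cap W) = k - 1,

so a Gray code in this setting is a listing of all k-subspaces in which consecutive entries are adjacent (a Hamiltonian path, or a Hamiltonian cycle in the cyclic case). This is the usual graph-theoretic framing; consult the paper for its precise optimality criteria.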


Permuter H.H.,Ben - Gurion University of the Negev | Kim Y.-H.,University of California at San Diego | Weissman T.,Stanford University
IEEE Transactions on Information Theory | Year: 2011

We investigate the role of directed information in portfolio theory, data compression, and statistics with causality constraints. In particular, we show that directed information is an upper bound on the increment in growth rates of optimal portfolios in a stock market due to causal side information. This upper bound is tight for gambling in a horse race, which is an extreme case of stock markets. Directed information also characterizes the value of causal side information in instantaneous compression and quantifies the benefit of causal inference in joint compression of two stochastic processes. In hypothesis testing, directed information evaluates the best error exponent for testing whether a random process Y causally influences another process X or not. These results lead to a natural interpretation of directed information I(Y^n → X^n) as the amount of information that a random sequence Y^n = (Y_1, Y_2, ..., Y_n) causally provides about another random sequence X^n = (X_1, X_2, ..., X_n). A new measure, directed lautum information, is also introduced and interpreted in portfolio theory, data compression, and hypothesis testing. © 2011 IEEE. Source
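For readers unfamiliar with the notation, Massey's standard definition of directed information, on which the interpretation above rests (a reference form; the paper's own conventions may differ in detail), is

    I(Y^n \to X^n) = \sum_{i=1}^{n} I(Y^i ; X_i \mid X^{i-1}),

in contrast with the (symmetric) mutual information I(X^n ; Y^n), whose chain-rule expansion conditions each term on the entire sequence Y^n rather than only on its causally available prefix Y^i.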


Shahar Y.,Ben - Gurion University of the Negev
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

Currently, most clinical knowledge is in free text and is not easily accessible to clinicians and medical researchers. A major grand challenge for medical informatics is the creation of a distributed, universal, formal, sharable, reusable, and computationally accessible medical knowledge base. The required knowledge consists of both procedural knowledge, such as clinical guidelines, and declarative knowledge, such as context-sensitive interpretations of longitudinal patterns of raw clinical data accumulating from several sources. In this position paper, I first demonstrate the feasibility of such an enterprise, and explain in detail the overall lifecycle of a clinical guideline, by reviewing the main current components, and their respective evaluations, of one such comprehensive architecture for the management of clinical guidelines: the Digital Electronic Guideline Library (DeGeL), a Web-based, modular, distributed architecture that facilitates gradual conversion of clinical guidelines from text to a formal representation in a chosen target guideline ontology. The architecture supports guideline classification, semantic markup, context-sensitive search, browsing, run-time application to a specific patient at the point of care, and retrospective quality assessment. The DeGeL architecture operates closely with a declarative-knowledge temporal-abstraction architecture, IDAN. Thus, there is significant evidence that building a distributed, multiple-ontology architecture that caters for the full life cycle of a significant portion of current clinical procedural and declarative knowledge, which I refer to as "the Human Clin-knowme Project," has become a feasible task for a joint, coordinated, international effort involving clinicians and medical informaticians. © 2012 Springer-Verlag. Source


Wisniak J.,Ben - Gurion University of the Negev
Educacion Quimica | Year: 2013

Pierre-Jean Robiquet (1780-1840), a French pharmacist, made important contributions in the areas of mineral chemistry, mineral and vegetable pigments, and extractive and analytical chemistry. Alone or with his collaborators, he discovered asparagine (with Vauquelin), alizarin and purpurin in madder (with Colin), orcin and orcein in lichens, glycyrrhizin in licorice, cantharidin in cantharides, amygdalin in bitter almonds (with Boutron-Charlard), caffeine (independently of Pelletier, Caventou, and Runge), and narcotine and codeine in opium. Source


Shahar G.,Ben - Gurion University of the Negev
Psychotherapy | Year: 2013

This article presents the factors on which I focus as an integrative psychotherapist when treating self-critical patients. I first describe my personal version of psychotherapy integration. Drawing principally from Wachtel's cyclical psychodynamic model, I also incorporate existential and neurocognitive elements highlighting patients' future-oriented thinking and goal-directed action. I then relate this integrative model to the vexing clinical problem of self-criticism. Finally, I outline three types of interventions I attempt to implement in each session: (1) Multiple-Selves Analysis (MSA); (2) Behavioral Activation (BA), conceptualized integratively; and (3) use of therapist's presence. © 2013 American Psychological Association. Source


Garin-Shkolnik T.,Weizmann Institute of Science | Rudich A.,Ben - Gurion University of the Negev | Hotamisligil G.S.,Harvard University | Rubinstein M.,Weizmann Institute of Science
Diabetes | Year: 2014

Fatty acid binding protein 4 (FABP4, also known as aP2) is a cytoplasmic fatty acid chaperone expressed primarily in adipocytes and myeloid cells and implicated in the development of insulin resistance and atherosclerosis. Here we demonstrate that FABP4 triggers the ubiquitination and subsequent proteasomal degradation of peroxisome proliferator-activated receptor γ (PPARγ), a master regulator of adipogenesis and insulin responsiveness. Importantly, FABP4-null mouse preadipocytes as well as macrophages exhibited increased expression of PPARγ, and complementation of FABP4 in the macrophages reversed the increase in PPARγ expression. The FABP4-null preadipocytes exhibited remarkably enhanced adipogenesis compared with wild-type cells, indicating that FABP4 regulates adipogenesis by downregulating PPARγ. We found that the FABP4 level was higher and the PPARγ level lower in human visceral fat and mouse epididymal fat compared with their subcutaneous fat. Furthermore, FABP4 was higher in the adipose tissues of obese diabetic individuals compared with healthy individuals. Suppression of PPARγ by FABP4 in visceral fat may explain the reported role of FABP4 in the development of obesity-related morbidities, including insulin resistance, diabetes, and atherosclerosis. © 2014 by the American Diabetes Association. Source


Skoge M.,University of California at San Diego | Meir Y.,Ben - Gurion University of the Negev | Wingreen N.S.,Princeton University
Physical Review Letters | Year: 2011

Cooperative interactions among sensory receptors provide a general mechanism to increase the sensitivity of signal transduction. In particular, bacterial chemotaxis receptors interact cooperatively to produce an ultrasensitive response to chemoeffector concentrations. However, cooperativity between receptors in large macromolecular complexes is necessarily based on local interactions and consequently is fundamentally connected to slowing of receptor-conformational dynamics, which increases intrinsic noise. Therefore, it is not clear whether or under what conditions cooperativity actually increases the precision of the concentration measurement. We explicitly calculate the signal-to-noise ratio (SNR) for sensing a concentration change using a simple, Ising-type model of receptor-receptor interactions, generalized via scaling arguments, and find that the optimal SNR is always achieved by independent receptors. © 2011 American Physical Society. Source


Barrett J.W.,Imperial College London | Prigozhin L.,Ben - Gurion University of the Negev
Superconductor Science and Technology | Year: 2012

We derive a variational formulation for thin film magnetization problems for type-II superconductors written in terms of two variables: the electric field and the magnetization function. A numerical method, based on this formulation, makes it possible to accurately compute all variables of interest, including the electric field, for any value of the power in the power law current-voltage relation characterizing the superconducting material. For high power values we obtain a good approximation to the critical state model solution. Numerical simulation results are presented for simply and multiply connected films, and also for an inhomogeneous film. © 2012 IOP Publishing Ltd. Source


Zaritsky A.,Ben - Gurion University of the Negev | Woldringh C.L.,University of Amsterdam
Frontiers in Microbiology | Year: 2015

The origins of Molecular Biology and Bacterial Physiology are reviewed, from our personal standpoints, emphasizing the coupling between bacterial growth, chromosome replication and cell division, and cell dimensions and shape. Current knowledge is discussed with historical perspective, summarizing past and present achievements and enlightening ideas for future studies. An interactive simulation program of the bacterial cell division cycle (BCD), described as "The Central Dogma in Bacteriology," is briefly presented. The coupled process of transcription/translation of genes encoding membrane proteins and insertion into the membrane (so-called transertion) is invoked as the functional relationship between the only two unique macromolecules in the cell, DNA and peptidoglycan, embodying the nucleoid and the sacculus, respectively. We envision that the total amount of DNA associated with the replication terminus, so-called "nucleoid complexity," is directly related to cell size and shape through the transertion process. Accordingly, the primary signal for cell division transmitted by DNA dynamics (replication, transcription and segregation) to the peptidoglycan biosynthetic machinery is of a physico-chemical nature, e.g., stress in the plasma membrane, relieving nucleoid occlusion in the cell's center, hence enabling the divisome to assemble and function between segregated daughter nucleoids. © 2015 Zaritsky and Woldringh. Source


Bronstein J.,Bar - Ilan University | Tzivian L.,Ben - Gurion University of the Negev
Library and Information Science Research | Year: 2013

This research investigated the self-efficacy perceptions of Israeli library and information science (LIS) professionals regarding their information retrieval skills, examining the judgments that participants make about their own searching abilities. The study was based on Bandura's four sources of self-efficacy information: (a) past performance or mastery experiences; (b) vicarious observation of others' experiences; (c) verbal or social feedback; and (d) affective states. An online survey presenting the Information Retrieval Self-Efficacy Scale was distributed among three existing Israeli LIS discussion groups. The questionnaire was completed by 201 LIS professionals. Findings show that participants reported a high level of self-efficacy regarding information retrieval and that all four sources of self-efficacy information influenced the construction of self-efficacy beliefs. Correlations between self-efficacy perceptions and several socio-demographic variables were investigated. The data analysis revealed that men and women are impacted differently by self-efficacy information: women reported a higher score for affective states, and men were more prone to frustration. A significant relation was also found between age and years of experience and the sources that exerted more influence on participants: older and more experienced participants reported being more impacted by their mastery experiences and their affective states, whereas participants in the middle of their careers reported a greater influence of social feedback on their self-perception of self-efficacy. © 2013 Elsevier Inc. Source


Horowitz Y.S.,Ben - Gurion University of the Negev | Moscovitch M.,Georgetown University
Radiation Protection Dosimetry | Year: 2013

The technical and dosimetric aspects of computerised glow curve analysis are described in detail including a review of the current 'state-of-the-achieved' in applications to environmental and personal dosimetry, clinical dosimetry, quality control, characterisation of new materials, continuing characterisation of 'old' materials, heavy charged particle dosimetry, mixed field n-gamma dosimetry, X-ray dosimetry and other aspects of thermoluminescence dosimetry. Fearless emphasis is placed on 'pitfalls' as well as successes. © The Author 2012. Published by Oxford University Press. All rights reserved. Source


Background and Objective: The pharmacokinetic models that are applied to describe the disposition of therapeutic antibodies assume that the interaction between an antibody and its target takes place in the central compartment. However, an increasing number of therapeutic antibodies are directed towards soluble/mobile targets. A flawed conclusion can be reached if the pharmacokinetic and pharmacodynamic analysis assumes that the interaction between the therapeutic antibody and its target takes place in the central compartment. The objective of this study was to assess the relative importance of local versus systemic interactions between adalimumab and tumour necrosis factor (TNF)-α in rheumatoid arthritis (RA), identify localization of the site of adalimumab action and assess the efficacy of local (intra-articular) versus systemic adalimumab administration for treatment of RA. Methods: The clinical and preclinical data on adalimumab and TNFα disposition were analysed using a pharmacokinetic modelling and simulation approach. The disposition of adalimumab and TNFα and the interaction between them at the individual compartments (the synovial fluid of the affected joints, central and peripheral compartments) following different routes of adalimumab administration were studied. Results: Outcomes of modelling and simulation using the pharmacokinetic model developed indicate that adalimumab can efficiently permeate from the diseased joints to the central circulation in RA patients. Permeability of TNFα, which is excessively secreted in the joints, is even higher than that of adalimumab. As a result, subcutaneous, intravenous and intra-articular administration of the clinically used dose of adalimumab (40 mg) exert similar effects on the time course of TNFα concentrations at different locations in the body and efficiently deplete the TNFα in all of the compartments for a prolonged period of time (8-10 weeks). At this dose, adalimumab exhibits predominantly systemic anti-TNFα effects at the central and peripheral compartments (∼93% of the overall effect) and the contribution of the local effects in the rheumatic joints is ∼7% for all of the studied routes, including the local intra-articular injections. The major pathway of TNFα elimination from the synovial fluid (∼77% for subcutaneous administration, and ∼72% for intravenous and intra-articular administration of adalimumab 40 mg) is interaction with adalimumab, which reaches the joints following local or systemic administration. Conclusions: The kinetics of adalimumab permeation to the synovial fluid (0.00422 L/h clearance of permeation) versus the rate of TNFα turnover in the affected joints (1.84 pmol/h synthesis rate and 0.877 h⁻¹ degradation rate constant) are apparently the major parameters that determine the time course of TNFα concentrations in the synovial fluid and the TNFα-neutralizing effects of adalimumab in RA patients. Outcomes of this study suggest that intra-articular administration of adalimumab is not preferable to subcutaneous or intravenous treatment. Local and systemic permeability, turnover and interactions between the drug and the target should be taken into account for optimization of the use of drugs acting on soluble targets (growth factors, interferons, interleukins, immunoglobulins, etc.). © 2012 Springer International Publishing AG. All rights reserved. Source
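As a purely illustrative aid (not the authors' model, which couples drug and target kinetics across several compartments), the quoted turnover parameters can be plugged into a minimal zero-order-synthesis / first-order-degradation equation for free TNFα in the synovial fluid; the sketch below uses only values stated above and a simple Euler integration:

    # Minimal turnover sketch for free TNF-alpha in synovial fluid (illustrative only;
    # ignores binding to adalimumab and exchange with other compartments).
    K_SYN = 1.84      # pmol/h, TNF-alpha synthesis rate quoted in the abstract
    K_DEG = 0.877     # 1/h, first-order degradation rate constant quoted in the abstract

    def tnf_amount(hours=24.0, dt=0.01, a0=0.0):
        """Integrate dA/dt = K_SYN - K_DEG * A with a forward-Euler step."""
        a = a0
        for _ in range(int(hours / dt)):
            a += (K_SYN - K_DEG * a) * dt
        return a

    # The simulated amount approaches the analytic baseline K_SYN / K_DEG (about 2.1 pmol).
    print(tnf_amount(), K_SYN / K_DEG)

Adding the drug would introduce a binding loss term to the same equation, which is the mechanism by which, per the abstract, interaction with adalimumab becomes the major elimination pathway for synovial TNFα.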


Mashov R.,Ben - Gurion University of the Negev
Harefuah | Year: 2011

Musculoskeletal conditions are common reasons for consultation in primary care and constitute 14-28% of primary care visits and visits to emergency services. General practitioners (GPs) diagnose and treat the majority of patients with musculoskeletal problems. Surveys have confirmed the discrepancy between the number of GP musculoskeletal consultations and the amount of time spent on orthopedic and musculoskeletal teaching in undergraduate and postgraduate education in different countries. It would be considered negligent for a GP to be incompetent in assessing the function of the heart or lungs, yet it is quite common for students to leave medical school without being able to make a general assessment of the musculoskeletal system. This review analyses the forms and duration of medical teaching on musculoskeletal disorders in several parts of the world and in Israel. Some studies have investigated the current situation in undergraduate musculoskeletal teaching. A recent survey by the Bone and Joint Decade of undergraduate teaching in different specialties in 32 countries considered the average length of medical teaching time devoted to orthopedics, rheumatology and physical medical rehabilitation to be insufficient, and found that such teaching usually emphasizes surgically managed musculoskeletal problems that are not relevant for the future practice of most doctors. The surveys that investigated postgraduate training tested the confidence of GPs in performing regional musculoskeletal examinations and managing specific conditions. They found different levels of confidence among GPs in the UK, Canada, the USA (including Hawaii) and developing countries, with the tests showing deficient knowledge and skills in the assessment and treatment of musculoskeletal conditions. CONCLUSIONS: It should be the personal obligation of GPs to update themselves regularly and monitor their performance to ensure the appropriate care of musculoskeletal problems. This will be possible through increasing the curriculum time devoted to musculoskeletal diseases to at least 6 weeks and developing a CME musculoskeletal program. Different CME musculoskeletal programs are being established in Family Medicine departments in Israel. It is important to investigate all musculoskeletal programs and to develop a universal musculoskeletal program for postgraduate education. Source


Toker L.,Ben - Gurion University of the Negev
Expert review of neurotherapeutics | Year: 2012

Lithium salts are among the drugs of choice for the treatment of bipolar disorder. Despite six decades of intensive research and an accumulating number of known cellular targets, lithium's mechanism of action still needs to be unraveled. The evolution of large-scale gene-expression analysis methodologies has provided a promising tool to understand the cellular events underlying the mood-stabilizing effect of the drug. However, despite great improvement achieved in transcriptome studies, findings of genes differentially expressed by lithium treatment exhibit, so far, a low reproducibility rate. This review discusses the different design and data analysis strategies applied in the studies and summarizes the possible reasons for the discrepancies among the reports. Source


Gaita-Arino A.,University of British Columbia | Schechter M.,Ben - Gurion University of the Negev
Physical Review Letters | Year: 2011

Tunneling two-level systems (TLSs) are believed to be the source of phenomena such as the universal low temperature properties in disordered and amorphous solids, and 1/f noise. The existence of these phenomena in a large variety of dissimilar physical systems testifies to the universal nature of the TLSs, which, however, is not yet known. Following a recent suggestion that attributes the low temperature TLSs to inversion pairs, we calculate explicitly the TLS-phonon coupling of inversion symmetric and asymmetric TLSs in a given disordered crystal. Our work (a) estimates parameters that support the theory of M. Schechter and P.C.E. Stamp, arXiv:0910.1283, in its general form, and (b) positively identifies, for the first time, the relevant TLSs in a given system. © 2011 American Physical Society. Source


Keshet U.,Ben - Gurion University of the Negev
Astrophysical Journal | Year: 2012

We argue that bulk spiral flows are ubiquitous in the cool cores (CCs) of clusters and groups of galaxies. Such flows are gauged by spiral features in the thermal and chemical properties of the intracluster medium, by the multiphase properties of CCs, and by X-ray edges known as cold fronts. We analytically show that observations of piecewise-spiral fronts impose strong constraints on the CC, implying the presence of a cold, fast flow, which propagates below a hot, slow inflow, separated by a slowly rotating, trailing, quasi-spiral, tangential discontinuity surface. This leads to the nearly logarithmic spiral pattern, two-phase plasma, ρ ∝ r^-1 density (or T ∝ r^0.4 temperature) radial profile, and 100 kpc size, characteristic of CCs. By advecting heat and mixing the gas, such flows can eliminate the cooling problem, provided that a feedback mechanism regulates the flow. In particular, we present a quasi-steady-state model for an accretion-quenched, composite flow, in which the fast phase is an outflow, regulated by active galactic nucleus bubbles, reproducing the observed low star formation rates and explaining some features of bubbles such as their R_b ∝ r size. The simplest two-component model reproduces several key properties of CCs, so we propose that all such cores harbor a spiral flow. Our results can be tested directly in the next few years, for example by ASTRO-H. © 2012. The American Astronomical Society. All rights reserved. Source


Guan Z.,Duke University | Eichler J.,Ben - Gurion University of the Negev
Biochimica et Biophysica Acta - Molecular and Cell Biology of Lipids | Year: 2011

Across evolution, dolichols and polyprenols serve as sugar carriers in biosynthetic processes that include protein glycosylation and lipopolysaccharide biogenesis. Liquid chromatography coupled with electrospray ionization mass spectrometry offers a powerful tool for studying dolichols and polyprenols in their alcohol or glycan-modified forms in members of all three domains of life. In the following, we present recent examples of how different versions of this analytical approach, namely reverse phase liquid chromatography-multiple reaction monitoring, normal phase liquid chromatography/tandem mass spectrometry and normal phase liquid chromatography-precursor ion scan detection, have respectively served to address novel aspects of dolichol or polyprenol biology in Eukarya, Archaea and Bacteria. This article is part of a Special Issue entitled Lipidomics and Imaging Mass Spectrometry. © 2011 Elsevier B.V. Source


Bin Nun G.,Ben - Gurion University of the Negev
Israel Journal of Health Policy Research | Year: 2013

The private health insurance (commercial and supplementary health insurance) sector has undergone a revolutionary transformation in recent years, both in the number of individuals who own private plans and in the financial scope of these plans. With these developments in the background, leaders of the Israeli healthcare system convened in December 2012 at the Dead Sea for a discussion on "Private healthcare insurance plans in Israel: Developments, concerns, and directions for a solution." This meeting report summarizes the main issues discussed at the conference. © 2013 Bin Nun; licensee BioMed Central Ltd. Source


Farago O.,Ben - Gurion University of the Negev
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010

We use scaling arguments and coarse-grained Monte Carlo simulations to study the fluctuation-mediated interactions between a pair of adhesion sites of a bilayer membrane and a supporting surface. We find that the potential of mean force is an infinitely long-range attractive potential that grows logarithmically with the pair distance r: φ(r)/k_BT = c ln r, where the constant c = 2 and c = 1 for nonstressed and stressed membranes, respectively. When, in addition to excluded volume repulsion, the membrane also interacts with the underlying surface through a height-dependent attractive potential, the potential φ(r) is screened at large pair distances. © 2010 The American Physical Society. Source


Suarez A.M.,Pennsylvania State University | Radovic L.R.,Pennsylvania State University | Radovic L.R.,University of Concepcion | Bar-Ziv E.,Ben - Gurion University of the Negev | Sofo J.O.,Pennsylvania State University
Physical Review Letters | Year: 2011

We analyze the diffusion of oxygen atoms on graphene and its dependence on the carrier density controlled by a gate voltage. We use density functional theory to determine the equilibrium adsorption sites, the transition state, and the attempt frequency for different carrier densities. The ease of diffusion is strongly dependent on carrier density. For neutral graphene, we calculate a barrier of 0.73 eV; however, upon electron doping the barrier decreases almost linearly to reach values as low as 0.15 eV for densities of -7.6×10¹³ cm⁻². This implies an increase of more than 9 orders of magnitude in the diffusion coefficient at room temperature. This dramatic change is due to a combined effect of bonding reduction in the equilibrium state and bonding increase at the transition state and can be used to control the patterning of oxidized regions by an adequate variation of the gate voltage. © 2011 American Physical Society. Source


Dubi Y.,Ben - Gurion University of the Negev
Journal of Physical Chemistry C | Year: 2014

Self-assembled-monolayer (SAM) molecular junctions (MJs) constitute a promising building block candidate for future molecular electronic devices. Transport properties of SAM-MJs are usually calculated using either the phenomenological Simmons model or a fully coherent transport theory employing the SAM's periodicity. We suggest that dephasing plays an important role in determining the transport properties of SAM-MJs. We present an approach for calculating the transport properties of SAM-MJs that inherently takes into account in-plane dephasing in the electron motion as it traverses the SAM plane. The calculation is based on the nonequilibrium Green's function formalism, with a local dynamics approximation that describes incoherent motion along the SAM plane. Our approach describes well the two hallmarks of transport through SAM-MJs, namely, the exponential decay of current with molecular chain length and the reduction of the current per molecule as compared to single-molecule junctions. Specifically, we show that dephasing leads to an exponential decay of the current as a function of molecular length, even for resonant tunneling, where the fully coherent calculation shows little or no length-dependence of the current. The dephasing is also shown to lead to a substantial reduction of the current in a SAM-MJ as compared to the single molecule junction, in a realistic parameter regime, where the coherent calculation shows only a very small reduction of the current. Finally, we discuss the effect of dephasing on more subtle transport phenomena such as the conductance even-odd effect and negative differential resistance. © 2014 American Chemical Society. Source


Yochelis A.,Ben - Gurion University of the Negev
Journal of Physical Chemistry C | Year: 2014

Studies of room-temperature ionic liquids showed that electrical diffuse layers in these highly concentrated electrolytes may exhibit spatially extended nonmonotonic (oscillatory) and monotonic decays. These unconventional properties are fundamentally different from those of traditional (dilute) electrolytes and demonstrate the limited mechanistic understanding of highly concentrated electrolytes. Moreover, the behavior of an electrolyte placed in close proximity to two charged surfaces becomes even more unclear due to the possible overlap between diffuse layers. The latter is important as many applications require confinement into narrow spaces, e.g., energy- and lubrication-related applications. To advance the understanding of electrical diffuse layers in highly concentrated electrolytes (and ionic liquids) we use a semiphenomenological modified Poisson-Nernst-Planck equation and regulate weak dilutions. Using spatial dynamics methods and numerical computations, we analyze distinct diffuse layer characteristics (nonmonotonic and monotonic) and provide for each type the analytic conditions and the validity limits in terms of applied voltage, domain size, molecular packing, and short-range electrostatic correlations. We also discuss the qualitative generality of the results and thus believe that these insights will allow us to advance the electrochemical understanding of confined highly concentrated electrolytes and their technological applications. © 2014 American Chemical Society. Source
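For orientation, the classical Poisson-Nernst-Planck system that such modified formulations extend can be written (a textbook reference form for a symmetric binary electrolyte, not necessarily the exact semiphenomenological model used in the paper) as

    \partial_t c_\pm = \nabla \cdot \left[ D \left( \nabla c_\pm \pm \frac{ze}{k_B T}\, c_\pm \nabla \phi \right) \right], \qquad -\nabla \cdot (\varepsilon \nabla \phi) = ze\,(c_+ - c_-),

where c_± are the ion concentrations, φ the electrostatic potential, D the diffusivity, and z the valence. One common modification in the ionic-liquid literature (e.g., the Bazant-Storey-Kornyshev approach) replaces the Poisson equation with the higher-order form ε(l_c²∇⁴ − ∇²)φ = ze(c_+ − c_-), in which the correlation length l_c controls the crossover between monotonic and oscillatory (overscreening) diffuse-layer profiles.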


Shahaf G.,Ben - Gurion University of the Negev
Molecular medicine (Cambridge, Mass.) | Year: 2011

Antiinflammatory clinical-grade, plasma-derived human α-1 antitrypsin (hAAT) protects islets from allorejection as well as from autoimmune destruction. hAAT also interferes with disease progression in experimental autoimmune encephalomyelitis (EAE) and in collagen-induced arthritis (CIA) mouse models. hAAT increases IL-1 receptor antagonist expression in human mononuclear cells and T-regulatory (Treg) cell population size in animal models. Clinical-grade hAAT contains plasma impurities, multiple hAAT isoforms and various states of inactive hAAT. We thus wished to establish islet-protective activities and effect on Treg cells of plasmid-derived circulating hAAT in whole animals. Islet function was assessed in mice that received allogeneic islet transplants after mice were given hydrodynamic tail-vein injection with pEF-hAAT, a previously described Epstein-Barr virus (EBV) plasmid construct containing the EBV nuclear antigen 1 (EBNA1) and the family of repeat EBNA1 binding site components (designated "EF") alongside the hAAT gene. Sera collected from hAAT-expressing mice were added to lipopolysaccharide (LPS)-stimulated macrophages to assess macrophage responsiveness. Also, maturation of peritoneal cells from hAAT-expressing mice was evaluated. hAAT-expressing mice accepted islet allografts (n = 11), whereas phosphate-buffered saline-injected animals (n = 11), as well as mice treated with truncated-hAAT-plasmid (n = 6) and untreated animals (n = 20) rapidly rejected islet allografts. In hAAT-expressing animals, local Treg cells were abundant at graft sites, and the IL-1 receptor antagonist was elevated in grafts and circulation. Sera from hAAT-expressing mice, but not control mice, inhibited macrophage responses. Finally, peritoneal cells from hAAT-expressing mice exhibited a semimature phenotype. We conclude that plasmid-derived circulating hAAT protects islet allografts from acute rejection, and human plasma impurities are unrelated to islet protection. Future studies may use this in vivo approach to examine the structure-function characteristics of the protective activities of AAT by manipulation of the hAAT plasmid. Source


Harapin J.,University of Zurich | Eibauer M.,University of Zurich | Medalia O.,University of Zurich | Medalia O.,Ben - Gurion University of the Negev
Structure | Year: 2013

Structural analysis of macromolecular assemblies in their physiological environment is a challenging task that is instrumental in answering fundamental questions in cellular and molecular structural biology. The continuous development of computational and analytical tools for cryo-electron tomography (cryo-ET) enables the study of these assemblies at a resolution of a few nanometers. Through the implementation of thinning procedures, cryo-ET can now be applied to the reconstruction of macromolecular structures located inside thick regions of vitrified cells and tissues, thus becoming a central tool for structural determinations in various biological disciplines. Here, we focus on the successful in situ applications of cryo-ET to reveal structures of macromolecular complexes within eukaryotic cells. © 2013 Elsevier Ltd. Source


Farago O.,Ben - Gurion University of the Negev
Advances in Planar Lipid Bilayers and Liposomes | Year: 2011

Supported lipid membranes are useful and important model systems for studying cell membrane properties and membrane-mediated processes. One attractive application of supported membranes is the design of phantom cells exhibiting well-defined adhesive properties and receptor densities. Adhesion of membranes may be achieved by specific and nonspecific interactions and typically requires the clustering of many adhesion bonds into "adhesion domains." One potential mediator of the early stages of the aggregation process is the Casimir-type force between adhesion sites induced by the membrane thermal fluctuations. In this review, I will present a theoretical analysis of fluctuation-induced aggregation of adhesion sites in supported membranes. I will first discuss the influence of a single attachment point on the spectrum of membrane thermal fluctuations, from which the free energy cost of the attachment point will be deduced. I will then analyze the problem of a supported membrane with two adhesion points. Using scaling arguments and Monte Carlo simulations, I will demonstrate that two adhesion points attract each other via an infinitely long-range effective potential that grows logarithmically with the pair distance. Finally, I will discuss the many-body nature of the fluctuation-induced interactions. I will show that while these interactions alone are not sufficient to allow the formation of aggregation clusters, they greatly reduce the strength of the residual interactions required to facilitate cluster formation. Specifically, for adhesion molecules interacting via a short-range attractive potential, the strength of the direct interactions required for aggregation is reduced by about a factor of two, to below the thermal energy k_BT. © 2011 Elsevier Inc. Source


Rokach L.,Ben - Gurion University of the Negev
Artificial Intelligence Review | Year: 2010

The idea of ensemble methodology is to build a predictive model by integrating multiple models. It is well known that ensemble methods can be used to improve prediction performance. Researchers from various disciplines such as statistics and AI have considered the use of ensemble methodology. This paper reviews existing ensemble techniques and can serve as a tutorial for practitioners who are interested in building ensemble-based systems. © 2009 Springer Science+Business Media B.V. Source
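To make the combination step concrete, here is a minimal, dependency-free sketch of the simplest ensemble rule covered in such tutorials, majority voting over independently trained classifiers (libraries such as scikit-learn expose the same idea through ready-made ensemble classes):

    from collections import Counter

    def majority_vote(labels):
        """Return the most common class label predicted by the base models."""
        return Counter(labels).most_common(1)[0][0]

    def ensemble_predict(models, x):
        """Each 'model' is any callable mapping a sample to a class label."""
        return majority_vote([m(x) for m in models])

    # Toy example: two of three weak "models" vote for class 1, so the ensemble returns 1.
    models = [lambda x: 1, lambda x: 1, lambda x: 0]
    print(ensemble_predict(models, x=None))

More elaborate schemes discussed in the ensemble literature (bagging, boosting, stacking) differ in how the base models are trained and in how their outputs are weighted or combined, but the principle of aggregating several imperfect predictors is the same.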


Gao Y.,University of Alabama at Birmingham | Riklin-Raviv T.,Ben - Gurion University of the Negev | Bouix S.,Harvard University
Human Brain Mapping | Year: 2014

In the last two decades, the statistical analysis of shape has become an actively studied field and finds applications in a wide range of areas. In addition to algorithmic development, many researchers have distributed end-user oriented toolboxes, which further enable the utilization of the algorithms in an "off the shelf" fashion. However, there is little work on the evaluation and validation of these techniques, which poses a rather serious challenge when interpreting their results. To address this lack of validation, we design a validation framework and then use it to test some of the most widely used toolboxes. Our initial results show inconsistencies and disagreement among four different methods. We believe this type of analysis to be critical not only for the community of algorithm designers but also, perhaps more importantly, for researchers who use these tools without knowing the algorithm details and who seek objective criteria for tool selection. © 2014 Wiley Periodicals, Inc. Source


Krasnov B.R.,Ben - Gurion University of the Negev | Poulin R.,University of Otago | Mouillot D.,Montpellier University
Ecography | Year: 2011

The non-independence of traits among closely related species is a well-documented phenomenon underpinning modern methods for comparative analyses or prediction of trait values in new species. Surprisingly such studies have mainly focused on life-history or morphological traits of free-living organisms, ignoring ecological attributes of parasite species in spite of the fact that they are critical for conservation and human health. We tested for a phylogenetic signal acting on two ecological traits, abundance and host specificity, using data for 218 flea species parasitic on small mammals in 19 regions of the Palaearctic and Nearctic, and a phylogenetic tree for these species. We tested for the presence of a phylogenetic signal at both regional and continental scales using three measures (Abouheif/Moran's I, Pagel's λ, and Blomberg et al.'s K). Our results show 1) a consistent positive phylogenetic signal for flea abundance, but only a weaker and erratic signal for host specificity, and 2) a clear dependence on scale, with the signals being stronger at the continental scale and relatively weaker or inconsistent at the regional scale. Whenever values of Blomberg et al.'s K were found significant, they were <1 suggesting that the effects of phylogeny on the evolution of abundance and host specificity in fleas are weaker than expected from a Brownian motion model. The most striking finding is that, within a continental fauna, closely-related flea species are characterized by similar levels of abundance, though this pattern is weaker within local assemblages, possibly eroded by local biotic or abiotic conditions. We discuss the link between history (represented by phylogeny) and pattern of variation among species in morphological and ecological traits, and use comparisons between the Palaearctic and Nearctic to infer a role of historical events in the probability of detecting phylogenetic signals. © 2011 The Authors. Source


Friedman A.,Ben - Gurion University of the Negev
Epilepsia | Year: 2011

Status epilepticus is often associated with endothelial dysfunction and increased vessel permeability. We discuss here the direct role of blood-brain barrier (BBB) dysfunction in epileptogenesis and brain damage. On the cellular level, astrocytes respond early to the efflux of serum proteins in the presence of a dysfunctional BBB, with activation of the innate immune system and disturbed homeostasis of extracellular potassium and glutamate. In turn, there is enhanced excitability of neurons and altered network connectivity. Transforming growth factor beta (TGF-β) signaling appears to be a potential new target for the prevention of epileptogenesis and secondary damage following status epilepticus. © Wiley Periodicals, Inc. © 2011 International League Against Epilepsy. Source


Friedman A.,Ben - Gurion University of the Negev | Dingledine R.,Emory University
Epilepsia | Year: 2011

Experimental evidence strongly indicates a significant role for inflammatory and immune mediators in the initiation of seizures and in epileptogenesis. Here we summarize data supporting the involvement of IL-1β, TNF-α and toll-like receptor 4 in seizure generation and the process of epileptogenesis. The physiological homeostasis and control over the brain immune response depend on the integrity of the blood-brain barrier, transforming growth factor (TGF)-β signaling and leukocyte migration. To what extent targeting the immune system can successfully prevent epileptogenesis, and which signaling pathways should be targeted, are still under intensive research. © 2011 International League Against Epilepsy. Source


Gelbstein Y.,Ben - Gurion University of the Negev
Acta Materialia | Year: 2013

Seeking novel thermoelectric materials for power generation applications, the p-type Pb0.25Sn0.25Ge0.5Te composition is currently being investigated. This composition, lying in the miscibility gap of the quasi-ternary phase diagram of PbTe-GeTe-SnTe, exhibits a tendency for phase separation into fine submicron domains in addition to nearly optimized electronic properties. The maximal figure of merit, ZT, of ∼1.2 was obtained at 450 °C, showing a high thermoelectric potential. Possible routes for further enhancement of ZT up to ∼1.8, upon precise controlling of the morphological alignment of the involved phases, are described. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. Source
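For reference, the dimensionless thermoelectric figure of merit quoted here has its conventional definition (a standard relation, not specific to this paper):

    ZT = \frac{S^2 \sigma}{\kappa}\, T,

where S is the Seebeck coefficient, σ the electrical conductivity, κ the total (electronic plus lattice) thermal conductivity, and T the absolute temperature. Phase separation into fine submicron domains is attractive in thermoelectrics generally because it can scatter phonons and lower κ without severely degrading the S²σ power factor.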


Ginossar E.,University of Surrey | Grosfeld E.,Ben - Gurion University of the Negev
Nature Communications | Year: 2014

Solid-state Majorana fermions are generating intensive interest because of their unique properties and possible applications in fault tolerant quantum memory devices. Here we propose a method to detect signatures of Majorana fermions in hybrid devices by employing the sensitive apparatus of the superconducting charge-qubit architecture and its efficient coupling to microwave photons. In the charge and transmon regimes of this device, we find robust signatures of the underlying Majorana fermions that are, remarkably, not washed out by the smallness of the Majorana contribution to the Josephson current. It is predicted that at special gate bias points the photon-qubit coupling can be switched off via quantum interference, and in other points it is exponentially dependent on the control parameter EJ/EC. We propose that this device could be used to manipulate the quantum state of the Majorana fermion and realize a tunable high coherence four-level system in the superconducting-circuit architecture. © 2014 Macmillan Publishers Limited. All rights reserved. Source


Liesa M.,Boston University | Shirihai O.S.,Boston University | Shirihai O.S.,Ben - Gurion University of the Negev
Cell Metabolism | Year: 2013

Mitochondrial fusion, fission, and mitophagy form an essential axis of mitochondrial quality control. However, quality control might not be the only task carried out by mitochondrial dynamics. Recent studies link mitochondrial dynamics to the balance between energy demand and nutrient supply, suggesting changes in mitochondrial architecture as a mechanism for bioenergetic adaptation to metabolic demands. By favoring either connected or fragmented architectures, mitochondrial dynamics regulates bioenergetic efficiency and energy expenditure. Placement of bioenergetic adaptation and quality control as competing tasks of mitochondrial dynamics might provide a new mechanism, linking excess nutrient environment to progressive mitochondrial dysfunction, common to age-related diseases. © 2013 Elsevier Inc. Source


Skoge M.,University of California at San Diego | Naqvi S.,Princeton University | Meir Y.,Ben - Gurion University of the Negev | Wingreen N.S.,Princeton University
Physical Review Letters | Year: 2013

Cooperativity arising from local interactions in equilibrium receptor systems provides gain, but does not increase sensory performance, as measured by the signal-to-noise ratio (SNR) due to a fundamental tradeoff between gain and intrinsic noise. Here we allow sensing to be a nonequilibrium process and show that energy dissipation cannot circumvent the fundamental tradeoff, so that the SNR is still optimal for independent receptors. For systems requiring high gain, nonequilibrium 2D-coupled receptors maximize the SNR, revealing a new design principle for biological sensors. © 2013 American Physical Society. Source


Hirshberg I.,Ben - Gurion University of the Negev | Winter W.,University of Munster | Zacharias J.,University of Glasgow
Communications in Mathematical Physics | Year: 2015

We develop the concept of Rokhlin dimension for integer and for finite group actions on C*-algebras. Our notion generalizes the so-called Rokhlin property, which can be thought of as Rokhlin dimension 0. We show that finite Rokhlin dimension is prevalent and appears in cases in which the Rokhlin property cannot be expected: the property of having finite Rokhlin dimension is generic for automorphisms of Z-stable C*-algebras, where Z denotes the Jiang–Su algebra. Moreover, crossed products by automorphisms with finite Rokhlin dimension preserve the property of having finite nuclear dimension, and under a mild additional hypothesis also preserve Z-stability. In topological dynamics our notion may be interpreted as a topological version of the classical Rokhlin lemma: automorphisms arising from minimal homeomorphisms of finite dimensional compact metrizable spaces always have finite Rokhlin dimension. The latter result has by now been generalized by Szabó to the case of free and aperiodic Z-actions on compact metrizable and finite dimensional spaces. © 2015, The Author(s). Source


The Levant is an important migration bridge, harboring border-zones between Afrotropical and Palearctic species. Accordingly, Chamaeleo chamaeleon, a common species throughout the Mediterranean basin, is morphologically divided in the southern Levant (Israel) into two subspecies, Chamaeleo chamaeleon recticrista (CCR) and C. c. musae (CCM). CCR mostly inhabits the Mediterranean climate (northern Israel), while CCM inhabits the sands of the north-western Negev Desert (southern Israel). AFLP analysis of 94 geographically well dispersed specimens indicated moderate genetic differentiation (PhiPT = 0.097), consistent with the classical division into the two subspecies, CCR and CCM. In contrast, sequence analysis of a 637 bp coding mitochondrial DNA (mtDNA) fragment revealed two distinct phylogenetic clusters which were not consistent with the morphological division: one mtDNA cluster consisted of CCR specimens collected in regions north of the Jezreel Valley, and another harbored specimens pertaining to both the CCR and CCM subspecies but collected south of the Jezreel Valley. AMOVA indicated clear mtDNA differentiation between specimens collected north and south of the Jezreel Valley (PhiPT = 0.79), which was further supported by a very low coalescent-based estimate of effective migration rates. Whole chameleon mtDNA sequencing (∼17,400 bp) generated from 11 well dispersed geographic locations revealed 325 mutations sharply differentiating the two mtDNA clusters, suggesting a long allopatric history further supported by BEAST. This separation correlated temporally with the existence of an at least 1-million-year-old marine barrier at the Jezreel Valley, exactly where the mtDNA clusters meet. We discuss possible involvement of gender-dependent life history differences in maintaining such mtDNA genetic differentiation and suggest that it reflects (ancient) local adaptation to mitochondrial-related traits. Source


Schwartz M.,Ben - Gurion University of the Negev
IEEE International Symposium on Information Theory - Proceedings | Year: 2011

We consider lattice tilings of ℝ^n by a shape we call a (k+, k-, n)-quasi-cross. Such lattices form perfect error-correcting codes which correct a single limited-magnitude error with prescribed maximal magnitudes of positive error and negative error (the ratio of which is called the balance ratio). These codes can be used to correct both disturb and retention errors in flash memories, which are characterized by having limited magnitudes and different signs. We construct infinite families of perfect codes for any rational balance ratio, and provide a specific construction for a (2, 1, n)-quasi-cross lattice tiling. The constructions are related to group splitting and modular B1 sequences. We also study bounds on the parameters of lattice tilings by quasi-crosses, connecting the arm lengths of the quasi-crosses and the dimension, and prove constraints on group splitting, a specific case of which shows that the parameters of the lattice tiling by (2, 1, n)-quasi-crosses are the only ones possible. © 2011 IEEE. Source
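For readers outside the coding-theory community, the quasi-cross can be pictured as follows (paraphrasing the standard description from the lattice-tiling literature; the paper's formal definition should be consulted for details): the shape consists of a unit cube at the origin together with, along each of the n coordinate axes, an arm of k₊ unit cubes in the positive direction and k₋ unit cubes in the negative direction, so its volume is

    |(k_+, k_-, n)\text{-quasi-cross}| = n(k_+ + k_-) + 1.

A lattice tiling of ℝ^n by this shape is what makes the associated code perfect: every integer point is covered by exactly one translate, so every error pattern with a single coordinate disturbed by at most k₊ upward or k₋ downward is corrected uniquely.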


Greenberg J.,Ben - Gurion University of the Negev
PloS one | Year: 2012

Two experiments examined the relation between mindfulness practice and cognitive rigidity by using a variation of the Einstellung water jar task. Participants were required to use three hypothetical jars to obtain a specific amount of water. Initial problems were solvable by the same complex formula, but in later problems ("critical" or "trap" problems) solving was possible by an additional much simpler formula. A rigidity score was compiled through perseverance of the complex formula. In Experiment 1, experienced mindfulness meditators received significantly lower rigidity scores than non-meditators who had registered for their first meditation retreat. Similar results were obtained in randomized controlled Experiment 2 comparing non-meditators who underwent an eight meeting mindfulness program with a waiting list group. The authors conclude that mindfulness meditation reduces cognitive rigidity via the tendency to be "blinded" by experience. Results are discussed in light of the benefits of mindfulness practice regarding a reduced tendency to overlook novel and adaptive ways of responding due to past experience, both in and out of the clinical setting. Source


Kratsch S.,TU Berlin | Philip G.,Max Planck Institute for Informatics | Ray S.,Ben - Gurion University of the Negev
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2014

The input to the NP-hard Point Line Cover problem (PLC) consists of a set V of n points on the plane and a positive integer k, and the question is whether there exists a set of at most k lines which pass through all points in V. By straightforward reduction rules one can efficiently reduce any input to one with at most k^2 points. We show that this easy reduction is already essentially tight under standard assumptions. More precisely, unless the polynomial hierarchy collapses to its third level, for any ε > 0, there is no polynomial-time algorithm that reduces every instance (P, k) of PLC to an equivalent instance with O(k^(2-ε)) points. This answers, in the negative, an open problem posed by Lokshtanov (PhD Thesis, 2009). Our proof uses the notion of a kernel from parameterized complexity, and the machinery for deriving lower bounds on the size of kernels developed by Dell and van Melkebeek (STOC 2010). It has two main ingredients: We first show, by reduction from Vertex Cover, that, unless the polynomial hierarchy collapses, PLC has no kernel of total size O(k^(2-ε)) bits. This does not directly imply the claimed lower bound on the number of points, since the best known polynomial-time encoding of a PLC instance with n points requires ω(n^2) bits. To get around this hurdle we build on work of Goodman, Pollack and Sturmfels (STOC 1989) and devise an oracle communication protocol of cost O(n log n) for PLC; its main building blocks are a bound of n^(O(n)) for the order types of n points that are not necessarily in general position and an explicit (albeit slow) algorithm that enumerates a superset of size n^(O(n)) of all possible order types of n points. This protocol, together with the lower bound on the total size (which also holds for such protocols), yields the stated lower bound on the number of points. While a number of essentially tight polynomial lower bounds on total sizes of kernels are known, our result is, to the best of our knowledge, the first to show a nontrivial lower bound for structural/secondary parameters. Copyright © 2014 by the Society for Industrial and Applied Mathematics. Source
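The "straightforward reduction rules" mentioned above are easy to state and to sketch in code (an illustrative rendering of the folklore k² kernel, not the authors' lower-bound machinery): any line passing through at least k+1 of the points must be part of every solution of size at most k, because any other line can cover at most one of those points; once no such line exists, each of the at most k chosen lines covers at most k points, so more than k² remaining points certify a no-instance.

    from itertools import combinations

    def collinear(p, q, r):
        """True if the three points are collinear (integer cross-product test)."""
        return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

    def kernelize(points, k):
        """Reduce a Point Line Cover instance to at most k*k points, or decide it outright."""
        points = set(points)
        while k > 0:
            heavy = None
            for p, q in combinations(points, 2):
                on_line = {r for r in points if collinear(p, q, r)}
                if len(on_line) >= k + 1:      # this line is forced into any cover of size <= k
                    heavy = on_line
                    break
            if heavy is None:
                break
            points -= heavy                    # take the forced line, continue with budget k-1
            k -= 1
        if not points:
            return "YES", points, k
        if k == 0 or len(points) > k * k:      # <= k lines remain, each covering <= k points
            return "NO", points, k
        return "KERNEL", points, k             # at most k*k points remain

    # Example: five points, budget 2; the reduction alone already resolves the instance.
    print(kernelize([(0, 0), (1, 1), (2, 2), (3, 0), (0, 3)], k=2))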


Kozuch S.,Ben - Gurion University of the Negev | Kozuch S.,University of North Texas
Journal of Chemical Theory and Computation | Year: 2015

The π bond shifting (automerization) by carbon tunneling of cyclopropenyl anions was computationally analyzed by the small curvature tunneling methodology. As in other antiaromatic cases, the process is hindered by substituents departing from planarity, since these groups must be realigned along with the π bond shifting. With hydrogens as substituents the tunneling is extremely fast, in a case of both heavy- and light-atom tunneling. However, with more massive substituents (such as Me and F), and especially with longer groups (such as CN), the tunneling probability is reduced or even virtually canceled. The automerization of the triphenylcyclopropenyl anion by tunneling was supposed to be impossible due to the high mass of the phenyl groups. However, it was found that the ground state of this species is actually a D3h aromatic triplet, a single-well system that cannot undergo automerization. For this and other systems with π acceptor groups, the superposition of states that generates the second-order Jahn-Teller distortion is diminished, and by Hund's rule the triplet becomes the ground state. © 2015 American Chemical Society. Source


Granot R.,CNRS Paris Institute of Global Physics | Granot R.,Ben - Gurion University of the Negev | Dyment J.,CNRS Paris Institute of Global Physics | Gallet Y.,CNRS Paris Institute of Global Physics
Nature Geoscience | Year: 2012

Prolonged periods of stable polarity in the Earth's magnetic field are termed superchrons. The most recent of these intervals, the Cretaceous Normal Superchron, lasted from approximately 121 to 83 million years ago [1,2] and is most commonly observed as the lack of a prominent stripe pattern [3] in the sea-surface magnetic anomaly above the oceanic crust formed during this period. The exact behaviour of the geomagnetic field during this interval, however, remains unclear, as palaeomagnetic data from igneous [4-6] and sedimentary [7,8] sections yield conflicting results. Here we report a deep-tow magnetic profile from the Central Atlantic Ocean, African flank, spanning the entire Cretaceous Normal Superchron. We suggest that this profile, along with widely distributed sea-surface magnetic anomaly data, records the rising variability of the dipolar geomagnetic field at the beginning of the interval, which culminates in a highly fluctuating field between 110 and 100 million years ago. We interpret the subdued magnetic signal in the last 9 million years of the superchron as the return to a more stable geomagnetic field. This variability allows us to define two internal time markers valuable for plate reconstructions. Based on the degree of variability observed, we conclude that geodynamo models that call for low field variability may provide an oversimplified view of superchrons. © 2012 Macmillan Publishers Limited. All rights reserved. Source


Grafi G.,Ben - Gurion University of the Negev
Biogerontology | Year: 2013

Stem cells have become a major topic, both publicly and scientifically, owing to their potential to cure diseases and repair damaged tissues. Particular attention has been given to the so-called "induced pluripotent stem cells" (iPSCs) in which somatic cells are induced by the expression of transcription factor encoding transgenes - a methodology first established by Takahashi and Yamanaka (Cell 126:663-676, 2006) - to acquire pluripotent state. This methodology has captured researchers' imagination as a potential procedure to obtain patient-specific therapies while also solving both the problem of transplant rejection and the ethical concerns often raised regarding the use of embryonic stem cells in regenerative medicine. The study of the biology of stem cells/iPSCs, in recent years, has uncovered some fundamental weaknesses that undermine their potential use in transplantation therapies. © 2013 Springer Science+Business Media Dordrecht. Source


Kalichman L.,Ben - Gurion University of the Negev
Rheumatology International | Year: 2010

Massage therapy is widely used by patients with fibromyalgia seeking symptom relief. We performed a review of all available studies, with an emphasis on randomized controlled trials, to determine whether massage therapy can be a viable treatment of fibromyalgia symptoms. Extensive narrative review. PubMed, PsychInfo, CINAHL, PEDro, ISI Web of Science, and Google Scholar databases (inception-December 2009) were searched for the key words "massage", "massotherapy", "self-massage", "soft tissue manipulation", "soft tissue mobilization", "complementary medicine", "fibromyalgia", "fibrositis", and "myofascial pain". No language restrictions were imposed. The reference lists of all articles retrieved in full were also searched. The effects of massage on fibromyalgia symptoms have been examined in two single-arm studies and six randomized controlled trials. All reviewed studies showed short-term benefits of massage, and only one single-arm study demonstrated long-term benefits. All reviewed studies had methodological problems. The existing literature provides modest support for the use of massage therapy in treating fibromyalgia. Additional rigorous research is needed in order to establish massage therapy as a safe and effective intervention for fibromyalgia. For massage therapy of fibromyalgia, we suggest that massage be painless, that its intensity be increased gradually from session to session in accordance with the patient's symptoms, and that sessions be performed at least 1-2 times a week. © Springer-Verlag 2009. Source


Seligmann H.,Hebrew University of Jerusalem | Seligmann H.,Ben - Gurion University of the Negev
Journal of Theoretical Biology | Year: 2012

Suppressor tRNAs induce expression of additional (off-frame) genes coded by stopless genetic codes without lengthening genomes, decreasing DNA replication costs. RNA 3'-to-5' polymerization by tRNAHis guanylyltransferase suggests a further cryptic code: hypothetical 'invertases' polymerizing in the 3'-to-5' direction while advancing in the 5'-to-3' direction would produce non-complementary RNA templated by regular genes, with different coding properties. Assuming 'invertase' activity, BLAST analyses detect GenBank-stored RNA ESTs and proteins (some potentially coding for the hypothesized invertase) for human mitochondrial genes. These peptides' predicted secondary structures resemble their GenBank homologues'. 3'-to-5' EST lengths increase with their self-hybridization potential: single-stranded RNA degradation perhaps limits 3'-to-5' elongation. Independent methods confirm predicted 3'-to-5' overlapping genes: (a) presumed 3'-to-5' overlapping genes avoid codons belonging to circular codes; (b) spontaneous replicational deamination (mutation) gradients occur at 3rd codon positions, unless these are involved in overlap coding, because mutations are counter-selected in overlapping genes. Tests a and b converge on predicted 3'-to-5' gene expression levels. Highly expressed ones also include fewer stops, and mitochondrial genomes (in Primates and Drosophila) adapt to avoid dependence of 3'-to-5' coding upon antitermination tRNA activity. Secondary structure, circular code, gradient and coevolution analyses each yield clear positive results that independently confirm each other. These positive results (including physical evidence for 3'-to-5' ESTs) indicate that 3'-to-5' coding and invertase activity, although an a priori improbable working hypothesis, cannot be dismissed. Note that RNAs produced by invertases potentially produce triple-stranded DNA:RNA helices by antiparallel Hoogsteen pairings at physiological pH, as previously observed for mitochondrial genomes. © 2012 Elsevier Ltd. Source


Eichler J.,Ben - Gurion University of the Negev | Maupin-Furlow J.,University of Florida
FEMS Microbiology Reviews | Year: 2013

As an ever-growing number of genome sequences appear, it is becoming increasingly clear that factors other than genome sequence impart complexity to the proteome. Of the various sources of proteomic variability, post-translational modifications (PTMs) most greatly serve to expand the variety of proteins found in the cell. Likewise, modulating the rates at which different proteins are degraded also results in a constantly changing cellular protein profile. While both strategies for generating proteomic diversity are adopted by organisms across evolution, the responsible pathways and enzymes in Archaea are often less well described than are their eukaryotic and bacterial counterparts. Studies on halophilic archaea, in particular Haloferax volcanii, originally isolated from the Dead Sea, are helping to fill the void. In this review, the latest findings on the post-translational modification and regulated degradation of archaeal proteins, as learned from studies of the haloarchaea, in particular Haloferax volcanii, are presented. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved. Source


Folman R.,Ben - Gurion University of the Negev
Quantum Information Processing | Year: 2011

In its most general form, the atom chip is a device in which neutral or charged particles are positioned in an isolating environment such as vacuum (or even a carbon solid state lattice) near the chip surface. The chip may then be used to interact in a highly controlled manner with the quantum state. I outline the importance of material science to quantum computing (QC) with atom chips, where the latter may be utilized for many, if not all, suggested implementations of QC. Material science is important both for enhancing the control coupling to the quantum system for preparation and manipulation as well as measurement, and for suppressing the uncontrolled coupling giving rise to low fidelity through static and dynamic effects such as potential corrugations and noise. As a case study, atom chips for neutral ground state atoms are analyzed and it is shown that nanofabricated wires will allow for more than 10^4 gate operations when considering spin-flips and decoherence. The effects of fabrication imperfections and the Casimir-Polder force are also analyzed. In addition, alternative approaches to current-carrying wires are briefly described. Finally, an outlook of what materials and geometries may be required is presented, as well as an outline of directions for further study. © Springer Science+Business Media, LLC 2011. Source


Miller Y.,Ben - Gurion University of the Negev | Ma B.,SAIC | Nussinov R.,SAIC | Nussinov R.,Tel Aviv University
Coordination Chemistry Reviews | Year: 2012

Neurodegenerative diseases constitute a worldwide health problem. Metal ions are essential for life, but they are also involved in several neurodegenerative mechanisms such as protein aggregation, free radical generation and oxidative stress. Here, we address the role of metal ions and their pathological mechanisms in common neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease (PD), diabetes type II and dialysis-related amyloidosis. In some diseases the metal ions accelerate the aggregation of the amyloids, whereas in others they inhibit it. In particular, we focus on amyloid heterogeneity and the consequent range of possible metal binding modes in amyloids, and the effects of metal ion binding. Together, this leads to an overview of the structural variability and the underlying mechanisms of oligomeric amyloids complexed with metal ions. Knowledge of the metal-amyloid interactions and understanding the mechanism of the metal-induced oligomerization in amyloids are important for effective drug design to prevent and alleviate aggregation. © 2012 Elsevier B.V. Source


Harel D.,Weizmann Institute of Science | Marron A.,Weizmann Institute of Science | Weiss G.,Ben - Gurion University of the Negev
Communications of the ACM | Year: 2012

Behavioral programming is a novel, language-independent paradigm for programming reactive systems, centered on natural and incremental specification of behavior, and implemented in the visual formalism of live sequence charts (LSC) and in the BPJ Java package. To facilitate full behavioral modularity via the independent coding of separate facets of behavior, all scenarios (behavior threads, or b-threads) run simultaneously, and all are consulted at every decision point during execution. Naturally, composing behaviors that were programmed without direct consideration of mutual dependencies raises questions about conflicting requirements, under-specification, and synchronization. When all b-threads are at a synchronization point, an event is chosen that is requested by at least one b-thread and is not blocked by any b-thread. The selected event is then triggered by resuming all the b-threads that either requested it or are waiting for it. Source
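As an illustration only - a minimal sketch, not the BPJ package or the LSC formalism - the following Python fragment mimics the event-selection rule described above: each b-thread is a generator that, at every synchronization point, declares which events it requests, waits for, and blocks; the scheduler picks an event that is requested and not blocked, then resumes every b-thread that requested or waits for it. The hot/cold water scenario is the standard toy example used to introduce the paradigm.

```python
# Minimal behavioral-programming sketch (illustrative only; not the BPJ API).
# Each b-thread is a generator yielding a synchronization statement: a dict
# with the events it requests, waits for, and blocks.

def run(bthreads):
    """Pick an event requested by some b-thread and blocked by none, then
    resume every b-thread that requested or waits for that event."""
    stmts = {bt: next(bt) for bt in list(bthreads)}   # advance to first sync point
    while stmts:
        blocked = set().union(*(s.get("block", set()) for s in stmts.values()))
        requested = set().union(*(s.get("request", set()) for s in stmts.values()))
        candidates = requested - blocked
        if not candidates:
            break                               # nothing selectable: stop (or deadlock)
        event = sorted(candidates)[0]           # any selection policy works for the sketch
        print("selected:", event)
        for bt, s in list(stmts.items()):
            if event in s.get("request", set()) | s.get("waitFor", set()):
                try:
                    stmts[bt] = bt.send(event)  # resume the b-thread with the chosen event
                except StopIteration:
                    del stmts[bt]

def add_hot():                                  # scenario: add hot water three times
    for _ in range(3):
        yield {"request": {"HOT"}}

def add_cold():                                 # scenario: add cold water three times
    for _ in range(3):
        yield {"request": {"COLD"}}

def interleave():                               # scenario: force strict alternation
    for _ in range(3):
        yield {"waitFor": {"HOT"}, "block": {"COLD"}}
        yield {"waitFor": {"COLD"}, "block": {"HOT"}}

run([add_hot(), add_cold(), interleave()])      # prints HOT, COLD, HOT, COLD, HOT, COLD
```

Composed this way, the interleaving requirement is added without modifying the two original scenarios, which is the modularity point the abstract makes.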


Levitas A.,Ben - Gurion University of the Negev
European Journal of Human Genetics | Year: 2015

Dilated cardiomyopathy (DCM) and malignant ventricular arrhythmias are important causes of congestive heart failure, heart transplantation, and sudden cardiac death in young patients. Cypher/ZASP is a cytoskeletal protein localized in the sarcomeric Z-line that has a pivotal role in maintaining adult cardiac structure and function. The putative mutation p.(D117N) in Cypher/ZASP has been suggested to cause systolic dysfunction, dilated left ventricle with hypertrabeculated myocardium, and intraventricular conduction disturbance, based on two reported sporadic cases. In two unrelated Bedouin families, one with pediatric DCM and the other with DCM and ventricular arrhythmias in young adulthood, we searched for the causative mutation by exome sequencing and identified the p.(D117N) variant in Cypher/ZASP. However, p.(D117N) did not segregate as the causative mutation in these families, i.e., it was not present in some patients and was found in several individuals who had no clinical manifestations. Furthermore, the carrier frequency in the Bedouin population of origin is estimated to be 5.2%, which is much higher than the incidence of idiopathic DCM in this population. Thus, our data support the notion that the p.(D117N) variant in Cypher/ZASP is not a causative mutation in the families tested by us. The results also indicate that, at least in some cases, p.(D117N) in Cypher/ZASP is not a causative mutation, and its role in cardiac pathologies should be further clarified and re-evaluated. European Journal of Human Genetics advance online publication, 30 September 2015; doi:10.1038/ejhg.2015.195. © 2015 Macmillan Publishers Limited Source


Lyubarsky Y.E.,Ben - Gurion University of the Negev
Monthly Notices of the Royal Astronomical Society | Year: 2010

The acceleration of relativistic jets from the Poynting- to the matter-dominated stage is considered. There are generally two collimation regimes, which we call equilibrium and non-equilibrium respectively. In the first regime, the jet is efficiently accelerated up to the equipartition between the kinetic and electromagnetic energy. We show that, once equipartition is reached, the jet ceases to be Poynting-dominated and the ratio of electromagnetic to kinetic energy decreases only logarithmically, so that such jets become truly matter-dominated only at extremely large distances. Non-equilibrium jets remain generally Poynting-dominated until logarithmically large distances. In the only case when a non-equilibrium jet is accelerated to the equipartition level, we found that the flow is not continued to infinity but is focused towards the axis at a finite distance from the origin. © 2009 The Author. Journal compilation © 2009 RAS. Source


Manor E.,Ben - Gurion University of the Negev
International journal of medical sciences | Year: 2012

Three hundred and twenty-two patients (192 male and 130 female) with cystic lesions of the jaw were successfully diagnosed and treated. One hundred and fifty-five (48%) were radicular cysts, 80 (25%) were dentigerous cysts, 23 (7%) were odontogenic keratocysts (=keratocystic odontogenic tumors), 19 (6%) were eruption cysts, 16 (5%) were traumatic bone cysts, and 29 (9%) were non-odontogenic cysts. There were 95 in the pediatric age group (1 month to 16 years) and 227 in the adult age group (17 years and older). The male to female ratio was 1 in the pediatric age group and 1.7 in the adult age group. The treatment modalities were: marsupialization, enucleation, enucleation with bone grafting, or resection. The distribution and characteristics of jaw cysts in children are different from those in adults. In children there is a relatively high rate of developmental cysts, whereas in adults the inflammatory cysts are more common. Following enucleation of a cystic jaw lesion, the entire surgical specimen, and not only a biopsy specimen, should be examined histopathologically so that an intramural squamous cell carcinoma is not overlooked. The differences in prevalence of each type of jaw cyst during a lifetime may point toward a multifactorial polygenic pattern rather than a monogenic pattern. Source


Tzivian L.,Ben - Gurion University of the Negev
Journal of Asthma | Year: 2011

Background. Asthma, a chronic inflammatory disease of the airways, is associated with reversible airway obstruction and hyperresponsiveness to triggers; clinical symptoms include wheezing, episodic cough, shortness of breath, and increased mucous production. Ambient or outdoor environmental exposure to ozone, particulate matter, sulfur dioxide, and nitrogen oxides has been well documented to exacerbate asthma. Children appear to be most vulnerable to the harmful effects of ambient air pollutants. As their lungs are not completely developed, children may experience greater exposure to environmental pollutants than adults and the higher doses of varied composition may remain in their lungs for a greater duration. Altogether, the negative effects of air pollutants on pulmonary function place children at a greater risk of air pollutant-induced exacerbation of asthma for the duration of their lives. Objectives. The aim of this review was to assess recently published literature regarding the influence of air pollution on asthma in children. Methods. For this work, we reviewed articles found in PubMed using the key words "outdoor air pollution, asthma, and children" which were published between 2006 and 2009. Only those articles that had a full version available in PubMed were analyzed. Results. We reviewed studies published between 2006 and 2009 examining the effect of outdoor air pollution on asthma in children. In total, we evaluated 25 articles; of these, 9 were published in 2006, 3 in 2007, 8 in 2008, and 5 in 2009. Of these 25 studies, 1 was a clinical trial, 6 were cross-sectional, 4 were case-control (2 with a case-crossover design), 12 were cohort prospective, and 2 were cohort retrospective studies with varied follow-up times ranging from 10 days to 7 years. The ages of children also differed, ranging from birth to 18 years of age. Conclusions. All studies reviewed in this work indicate that outdoor air pollution affects the appearance and exacerbation of asthma in children. Although these findings are of great interest, the limitations of noted works make future investigations of the effect of air pollution on asthma in children essential. © 2011 Informa Healthcare USA, Inc. Source


Brustein R.,Ben - Gurion University of the Negev | Medved A.J.M.,Rhodes University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

Lovelock gravity is a class of higher-derivative gravitational theories for which the linearized equations of motion have no more than two time derivatives. Here, it is shown that any Lovelock theory can be effectively described as Einstein gravity coupled to a p-form gauge field. This extends the known example of an f(R) theory of gravity, which can be described as Einstein gravity coupled to a scalar field. © 2013 American Physical Society. Source


Tikotzky L.,Ben - Gurion University of the Negev
Behavioral Sleep Medicine | Year: 2016

This study examined the links between maternal sleep, maternal depressive symptoms, and mothers’ perceptions of their emotional relationship with their infant in a self-recruited sample of mothers. Eighty mothers of infants 3–18 months old completed sleep diaries for 5 consecutive nights, and questionnaires assessing sleep (Insomnia Severity Index [ISI]), depressive symptom severity (Edinburgh Postnatal Depression Scale [EPDS]), and perceived mother–infant relationship (Postpartum Bonding Questionnaire [PBQ] and Maternal Postnatal Attachment Questionnaire [MPAQ]). Significant correlations, controlling for depression severity, were found between more disturbed maternal sleep and more negative maternal perceptions of the mother-infant relationship. Regression analyses revealed that EPDS showed the strongest association with PBQ, whereas ISI demonstrated the strongest association with MPAQ. The present study highlights the importance of deepening and expanding our understanding of the negative implications of maternal sleep problems. Copyright © Taylor & Francis Group, LLC. Source


Brafman R.I.,Ben - Gurion University of the Negev
Artificial Intelligence | Year: 2011

Value functions are defined over a fixed set of outcomes. In work on preference handling in AI, these outcomes are usually a set of assignments over a fixed set of state variables. If the set of variables changes, a new value function must be elicited. Given that in most applications the state variables are properties (attributes) of objects in the world, this implies that the introduction of new objects requires re-elicitation of preferences. However, often, the user has in mind preferential information that is much more generic, and which is relevant to a given type of domain regardless of the precise number of objects of each kind and their properties. Such information requires the introduction of relational models. Following in the footsteps of work on probabilistic relational models (PRMs), we suggest in this work a rule-based, relational language of preferences. This language extends regular rule-based languages and leads to a much more flexible approach for specifying control rules for autonomous systems. It also extends standard generalized-additive value functions to handle a dynamic universe of objects. Given any specific set of objects this specification induces a generalized-additive value function over assignments to the controllable attributes associated with these objects. We then describe a prototype of a decision support system for command and control centers we developed to illustrate and study the use of these rules. © 2010 Elsevier B.V. All rights reserved. Source
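As a toy illustration only - the rule format, object types, attributes, and weights below are invented, and this is not the paper's actual preference language - the following sketch shows how generic, object-level rules can induce a generalized-additive value function once a concrete set of objects is given:

```python
# Toy sketch: generic, object-level preference rules inducing a
# generalized-additive value function over whatever objects currently exist.
# (Illustrative only; rule format, types, attributes, and weights are invented.)

RULES = [
    # (object type, attribute, preferred value, local value contribution)
    ("uav",   "altitude", "high", 2.0),
    ("uav",   "camera",   "on",   1.0),
    ("truck", "speed",    "slow", 0.5),
]

def induced_value(objects):
    """objects: dict mapping object name -> (type, {attribute: value}).
    The induced value is the sum of local rule contributions over all objects."""
    total = 0.0
    for otype, attrs in objects.values():
        for rtype, attr, preferred, weight in RULES:
            if otype == rtype and attrs.get(attr) == preferred:
                total += weight
    return total

# The same rules apply no matter how many objects the universe contains:
state = {
    "uav1":   ("uav",   {"altitude": "high", "camera": "off"}),
    "uav2":   ("uav",   {"altitude": "high", "camera": "on"}),
    "truck1": ("truck", {"speed": "slow"}),
}
print(induced_value(state))  # 2.0 + (2.0 + 1.0) + 0.5 = 5.5
```

Adding or removing objects changes the induced value function without any re-elicitation of the rules, which is the kind of flexibility the abstract describes.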


Filc D.,Ben - Gurion University of the Negev
Social Science and Medicine | Year: 2014

The present paper argues that current mainstream understandings of civil society as ontologically different from the state and essentially positive (either normatively or functionally) are problematic for understanding the development of health care reforms. The paper proposes to ground an explanation of the role of civil society in health care reforms in a Gramscian understanding of civil society as analytically different from the state, and as an arena for hegemonic struggles. The study of health care reform in Israel serves as a case study for this claim. © 2014 Elsevier Ltd. Source


Deichmann U.,Ben - Gurion University of the Negev
Perspectives in Biology and Medicine | Year: 2015

The concept of chromatin as a complex of nucleic acid and proteins in the cell nucleus was developed by cytologists and biochemists in the late 19th century. It was the starting point for biochemical research on DNA and nuclear proteins. Although interest in chromatin declined rapidly at the beginning of the 20th century, a few decades later a new focus on chromatin emerged, which was not only related to its structure, but also to its function in gene regulatory processes in the development of higher organisms. Since the late 20th century, research on chromatin modifications has also been conducted under the label of epigenetics. This article highlights the major phases of chromatin research until the present time and introduces major investigators and their scientific and philosophical outlooks. © 2016 by Johns Hopkins University Press. Source


Negev M.,Ben - Gurion University of the Negev
Environmental Impact Assessment Review | Year: 2012

Stakeholder participation is considered an integral part of HIA. However, the challenges that participation implies in a multi-disciplinary and multi-ethnic society are less studied. This paper presents the manifestations of the multiplicity of sectors and population groups in HIA and discusses the challenges that such diversity imposes. Specifically, there is no common ground between participants, as their positions entail contradictory knowledge regarding the current situation, reliance on distinct data and conflicting interests. This entails usage of multiple professional and ethnic languages, disagreements regarding the definition of health and prioritizing health issues in HIA, and divergent perceptions of risk. These differences between participants are embedded culturally, socially, individually and, maybe most importantly, professionally. This complex picture of diverse stakeholder attributes is grounded in a case study of stakeholder participation in HIA, regarding zoning of a hazardous industry site in Israel. The implication is that participatory HIAs should address the multiplicity of stakeholders and types of knowledge, data and interests in a more comprehensive way. © 2011 Elsevier Inc. Source


Vurm I.,Hebrew University of Jerusalem | Lyubarsky Y.,Ben - Gurion University of the Negev | Piran T.,Hebrew University of Jerusalem
Astrophysical Journal | Year: 2013

The low-energy spectral slopes of the prompt emission of most gamma-ray bursts (GRBs) are difficult to reconcile with radiatively efficient optically thin emission models irrespective of the radiation mechanism. An alternative is to ascribe the radiation around the spectral peak to a thermalization process occurring well inside the Thomson photosphere. This quasi-thermal spectrum can evolve into the observed non-thermal shape by additional energy release at moderate to small Thomson optical depths, which can readily give rise to the hard spectral tail. The position of the spectral peak is determined by the temperature and Lorentz factor of the flow in the thermalization zone, where the total number of photons carried by the jet is established. To reach thermalization, dissipation alone is not sufficient and photon generation requires an efficient emission/absorption process in addition to scattering. We perform a systematic study of all relevant photon production mechanisms searching for possible conditions in which thermalization can take place. We find that a significant fraction of the available energy should be dissipated at intermediate radii, ≳10^10 to a few ×10^11 cm, and the flow there should be relatively slow: the bulk Lorentz factor could not exceed a few tens for all but the most luminous bursts with the highest E_pk values. The least restrictive constraint for successful thermalization, Γ ≲ 20, is obtained if synchrotron emission acts as the photon source. This requires, however, a non-thermal acceleration deep below the Thomson photosphere transferring a significant fraction of the flow energy to relativistic electrons with Lorentz factors between 10 and 100. Other processes require bulk flow Lorentz factors of order of a few for typical bursts. We examine the implications of these results to different GRB photospheric emission models. © 2013. The American Astronomical Society. All rights reserved. Source


Gil M.,Ben - Gurion University of the Negev
Mathematics of Control, Signals, and Systems | Year: 2013

We consider a homogeneous neutral-type functional differential vector equation of a certain class. It is proved that, if the corresponding nonhomogeneous equation with the zero initial conditions and an arbitrary free term bounded on the positive half-line has a bounded solution, then the considered homogeneous equation is exponentially stable. © 2012 Springer-Verlag London. Source


Asnani H.,Stanford University | Permuter H.,Ben - Gurion University of the Negev | Weissman T.,Stanford University
IEEE Transactions on Information Theory | Year: 2011

We consider the problem of optimal probing of states of a channel by transmitter and receiver for maximizing the rate of reliable communication. The channel is discrete memoryless (DMC) with i.i.d. states. The encoder takes probing actions dependent on the message. It then uses the state information obtained from probing causally or noncausally to generate channel input symbols. The decoder may also take channel probing actions as a function of the observed channel output and use the channel state information thus acquired, along with the channel output, to estimate the message. We refer to the maximum achievable rate for reliable communication for such systems as the Probing Capacity. We characterize this capacity when the encoder and decoder actions are cost constrained. To motivate the problem, we begin by characterizing the trade-off between the capacity and the fraction of channel states the encoder is allowed to observe, while the decoder is aware of channel states. In this setting of 'to observe or not to observe' state at the encoder, we compute certain numerical examples which exhibit a pleasing phenomenon, where the encoder can observe a relatively small fraction of states and yet communicate at the maximum rate, i.e., the rate attained when observing states at the encoder is not cost constrained. © 2011 IEEE. Source


Bujoczek M.,Agricultural University of Krakow | Ciach M.,Agricultural University of Krakow | Yosef R.,Ben - Gurion University of the Negev
Biological Conservation | Year: 2011

Owing to the extensive development of road networks, millions of animals are killed annually. This impact on the natural environment raises two questions: is there selection of victims in car collisions, and do road-kills, as do predators, influence a population by eliminating individuals in poor condition? We compared road-killed individuals to those killed by predators in SE Poland in three bird species: Yellowhammer Emberiza citrinella, Barn Swallow Hirundo rustica, and Chaffinch Fringilla coelebs. We applied ptilochronology, in which the width of feather growth bars represents an individual's relative nutritional condition. Our results show that road-killed individuals of the analyzed species were in significantly better body condition than those killed by raptors. Our study does not concur with previous studies which concluded that weaker individuals are more vulnerable on the road. Raptors select prey in poor condition, but road-killed individuals are in significantly better condition, so apparently road-kill results in the random elimination of healthy individuals. The conservation implications of this study are far-reaching; future road construction and safety regulations must take wildlife into consideration. If these human-created habitats are killing a significant proportion of the healthier part of natural populations in a non-selective manner, this could result in situations where fragmented populations could be driven to a critical stage, and/or the situation of declining or endangered populations worsened further. © 2010 Elsevier Ltd. Source


Herzberg M.,Ben - Gurion University of the Negev | Berry D.,University of Michigan | Raskin L.,University of Michigan
Water Research | Year: 2010

The effects of microfiltration (MF) as pretreatment for reverse osmosis (RO) on biofouling of RO membranes were analyzed with secondary wastewater effluents. MF pretreatment reduced permeate flux decline two- to three-fold, while increasing salt rejection. Additionally, the oxygen uptake rate (OUR) in the biofouling layer of the RO membrane was higher for an RO system that received pretreated secondary wastewater effluent compared to a control RO system that received untreated secondary effluent, likely due to the removal of inert particulate/colloidal matter during MF. A higher cell viability in the RO biofilm was observed close to the membrane surface irrespective of pretreatment, which is consistent with the biofilm-enhanced concentration polarization effect. Bacterial 16S rRNA gene clone library analysis revealed dominant biofilm communities of Proteobacteria and Bacteroidetes under all conditions. The Cramer-von Mises test statistic showed that MF pretreatment did not significantly change the bacterial community structure of RO membrane biofilms, though it affected bacterial community structure of non-membrane-associated biofilms (collected from the feed tank wall). The finding that the biofilm community developed on the RO membrane was not influenced by MF pretreatment may imply that RO membranes select for a conserved biofilm community. © 2009 Elsevier Ltd. All rights reserved. Source


Bar-Sadan M.,Julich Research Center | Bar-Sadan M.,Ben - Gurion University of the Negev | Barthel J.,Julich Research Center | Shtrikman H.,Weizmann Institute of Science | Houben L.,Julich Research Center
Nano Letters | Year: 2012

Incorporation of catalyst atoms during the growth process of semiconductor nanowires reduces the electron mean free path and degrades their electronic properties. Aberration-corrected scanning transmission electron microscopy (STEM) is now capable of directly imaging single Au atoms within the dense matrix of a GaAs crystal, by slightly tilting the GaAs lattice planes with respect to the incident electron beam. Au doping values on the order of 10^17-10^18 cm^-3 were measured, making ballistic transport through the nanowires practically inaccessible. © 2012 American Chemical Society. Source


Willett S.D.,ETH Zurich | McCoy S.W.,ETH Zurich | McCoy S.W.,Massachusetts Institute of Technology | McCoy S.W.,University of Nevada, Reno | And 4 more authors.
Science | Year: 2014

River networks evolve as migrating drainage divides reshape river basins and change network topology by capture of river channels. We demonstrate that a characteristic metric of river network geometry gauges the horizontal motion of drainage divides. Assessing this metric throughout a landscape maps the dynamic states of entire river networks, revealing diverse conditions: Drainage divides in the Loess Plateau of China appear stationary; the young topography of Taiwan has migrating divides driving adjustment of major basins; and rivers draining the ancient landscape of the southeastern United States are reorganizing in response to escarpment retreat and coastal advance. The ability to measure the dynamic reorganization of river basins presents opportunities to examine landscape-scale interactions among tectonics, erosion, and ecology. Source


Solovchenko A.,Moscow State University | Khozin-Goldberg I.,Ben - Gurion University of the Negev
Biotechnology Letters | Year: 2013

Recent developments in the field of microalgal biotechnology, including CO2 biomitigation and the discovery of new species of microalgae that are tolerant to extremely high CO2 levels (40-100 vol%), have renewed interest in the physiological effects and mechanisms of high-CO2 tolerance in photoautotrophs. Photosynthetic apparatus state transitions that increase ATP generation, upregulation of H+-ATPases pumping protons out of the cell, rapid shutdown of CO2-concentrating mechanisms, and adjustment of membranes' fatty acid composition are currently believed to be the key mechanisms governing cellular pH homeostasis and hence microalgae's tolerance to high CO2 levels, which is especially characteristic of extremophile and symbiotic species. The mechanisms governing acclimation to high CO2 comprise the subject of this review and are discussed in view of the use of CO2 enrichment to increase the productivity of microalgal cultures, as well as the practice of carbon capture from flue gases. © 2013 Springer Science+Business Media Dordrecht. Source


Abrahamson N.A.,Pacific Gas and Electric Company | Silva W.J.,Pacific Engineering and Analysis | Kamai R.,Ben - Gurion University of the Negev
Earthquake Spectra | Year: 2014

Empirical ground motion models for the average horizontal component from shallow crustal earthquakes in active tectonic regions are derived using the PEER NGA-West2 database. The model is applicable to magnitudes 3.0-8.5, distances 0-300 km, and spectral periods of 0-10 s. The model input parameters are the same as those used by Abrahamson and Silva (2008), with the following exceptions: the loading level for nonlinear effects is based on the spectral acceleration at the period of interest rather than the PGA; and the distance scaling for hanging wall (HW) effects off the ends of the rupture includes a dependence on the source-to-site azimuth. Regional differences in large-distance attenuation and VS30 scaling between California, Japan, China, and Taiwan are included. The scaling for the HW effect is improved using constraints from numerical simulations. The standard deviation is magnitude-dependent, with smaller magnitudes leading to larger standard deviations at short periods, but smaller standard deviations at long periods. Directivity effects are not included through explicit parameters, but are captured through the variability of the empirical data. © 2014, Earthquake Engineering Research Institute. Source


Ablin J.N.,Tel Aviv University | Buskila D.,Ben - Gurion University of the Negev
Maturitas | Year: 2013

Fibromyalgia is a syndrome characterized by the presence of chronic widespread pain, representing sensitization of the central nervous system. The pathophysiology of fibromyalgia is complex and remains in evolution, encompassing diverse issues such as disturbed patterns of sleep, altered pain processing and decreased conditioned pain modulation at the spinal level, as well as increased connectivity between various pain-processing areas of the brain. This evolution is continuously uncovering potential novel therapeutic targets. Treatment of fibromyalgia is a multi-faceted endeavor, inevitably combining pharmacological as well as non-pharmacological approaches. α2δ ligands and selective norepinephrine-serotonin reuptake inhibitors are the current mainstays of pharmacological treatment. Novel reuptake inhibitors targeting both norepinephrine and dopamine are potential additions to this armamentarium, as are substance P antagonists; opioid antagonism is another intriguing possibility. Cannabinoid agonists hold promise in the treatment of fibromyalgia, although current evidence is incomplete. Sodium oxybate is a unique sleep-promoting medication, while drugs that promote arousal, such as modafinil, are also under investigation. In the current review, current and emerging therapeutic options for the syndrome of fibromyalgia are covered. © 2013 Elsevier Ireland Ltd. Source


Hoshino M.,University of Tokyo | Lyubarsky Y.,Ben - Gurion University of the Negev
Space Science Reviews | Year: 2012

This chapter mainly deals with magnetic reconnection and particle acceleration in relativistic astrophysical plasmas, where the temperature of the current sheet exceeds the rest mass energy and the Alfvén velocity is close to the speed of light. Magnetic reconnection now receives a great deal of interest for its role in many astrophysical systems such as pulsars, magnetars, galaxy clusters, and active galactic nucleus jets. We review recent advances that emphasize the roles of reconnection in high-energy astrophysical phenomena. © 2012 Springer Science+Business Media B.V. Source
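For orientation only (a standard relation quoted here, not a result of this review), the statement that the Alfvén velocity approaches the speed of light can be made quantitative through the magnetization parameter σ; the notation below is the conventional one and is our addition.

```latex
% Relativistic Alfvén speed in terms of the magnetization sigma
% (standard textbook relation, shown for orientation only)
\[
  v_A = c\,\sqrt{\frac{\sigma}{1+\sigma}},
  \qquad
  \sigma \equiv \frac{B^{2}}{4\pi w},
\]
% w is the relativistic enthalpy density; for sigma >> 1, v_A approaches c.
```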


Dubi Y.,Ben - Gurion University of the Negev
Journal of Physical Chemistry C | Year: 2015

The striking efficiency of energy transfer in natural photosynthetic systems and the recent evidence of long-lived quantum coherence in biological light harvesting complexes have triggered much excitement, due to the evocative possibility that these systems - essential to practically all life on Earth - use quantum mechanical effects to achieve optimal functionality. A large body of theoretical work has addressed the role of local environments in determining the transport properties of excitons in photosynthetic networks and the survival of quantum coherence in a classical environment. Nonetheless, understanding the connection between quantum coherence, exciton network geometry, and energy transfer efficiency remains a challenge. Here we address this connection from the perspective of heat transfer within the exciton network. Using a nonequilibrium open quantum system approach and focusing on the Fenna-Matthews-Olson complex, we demonstrate that finite local dephasing can be beneficial to the overall power output. The mechanism for this enhancement of power output is identified as a gentle balance between quantum and classical contributions to the local heat flow, such that the total heat flow is directed along the shortest paths and dissipation is minimized. Strongly related to the spatial network structure of the exciton transfer complex, this mechanism elucidates how energy flows in photosynthetic excitonic complexes. © 2015 American Chemical Society. Source
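As generic background (a textbook form shown for orientation; the paper's specific master equation may differ in detail), local dephasing in such nonequilibrium open-quantum-system treatments is commonly modeled by a Lindblad term that damps coherences between excitonic sites:

```latex
% Generic Lindblad master equation with local (pure) dephasing on each site n
% (standard form; the rate gamma_deph and projectors P_n are our notation)
\[
  \frac{d\rho}{dt}
  = -\frac{i}{\hbar}\,[H,\rho]
  + \gamma_{\mathrm{deph}} \sum_n \left( P_n \rho\, P_n
      - \tfrac{1}{2}\{P_n,\rho\} \right),
  \qquad P_n = |n\rangle\langle n| .
\]
% Varying gamma_deph interpolates between fully coherent and fully classical
% (hopping-like) exciton transport, which is the balance discussed above.
```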


Shahar G.,Ben - Gurion University of the Negev
Journal of Psychotherapy Integration | Year: 2012

Ms. T.'s case is a rich and moving example of a complex clinical case requiring integrative psychotherapy. In this brief commentary, I highlight the centrality of intentional processes in Ms. T.'s life, symptoms, and treatment, using an existential-psychoanalytic-integrative framework. Drawing from previous work on the tension between projectuality, or goal-directedness, and eventuality, or life stress blocking goal-directedness (Shahar, 2011), I argue that Ms. T.'s somatization conveys a subjective experience of life as not worth living ("I don't want to be here"). Such an experience is brought about by an accumulation of traumatic stress, culminating with Ms. T.'s loss of custody over her children. As an older adult, Ms. T. exhibits characteristics of Depressive Personality Disorder (DPD), a diagnosis that sheds light on her dejection, mistrust of others, and conversion symptoms. I concur with Dr. Kores that Behavioral Activation should lead the way in this integrative treatment, but I also call for the "existentializing" and psycho-dynamizing of this powerful intervention. © 2012 American Psychological Association. Source


Xiao X.,University of Connecticut | Javidi B.,University of Connecticut | Martinez-Corral M.,University of Valencia | Stern A.,Ben - Gurion University of the Negev
Applied Optics | Year: 2013

Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed. © 2013 Optical Society of America. Source


Brustein R.,Ben - Gurion University of the Negev | Brustein R.,Ludwig Maximilians University of Munich | Medved A.J.M.,Rhodes University
Journal of High Energy Physics | Year: 2014

In a recent article, we have shown how quantum fluctuations of the background geometry modify Hawking's density matrix for black hole (BH) radiation. Hawking's diagonal matrix picks up small off-diagonal elements whose influence becomes larger with the number of emitted particles. We have calculated the "time-of-first-bit", when the first bit of information comes out of the BH, and the "transparency time", when the rate of information release becomes order unity. We have found that the transparency time is equal to the "Page time", when the BH has lost half of its initial entropy to the radiation, in agreement with Page's results. Here, we improve our previous calculation by keeping track of the time of emission of the Hawking particles and their back-reaction on the BH. Our analysis reveals a new time scale, the radiation "coherence time", which is equal to the geometric mean of the evaporation time and the light crossing time. We find, as for our previous treatment, that the time-of-first-bit is equal to the coherence time, which is much shorter than the Page time. But the transparency time is now much later than the Page time, just one coherence time before the end of evaporation. Close to the end, when the BH is parametrically of Planckian dimensions but still large, the coherence time becomes parametrically equal to the evaporation time, thus allowing the radiation to purify. We also determine the time dependence of the entanglement entropy of the early and late-emitted radiation. This entropy is small during most of the lifetime of the BH, but our qualitative analysis suggests that it becomes parametrically maximal near the end of evaporation. © 2014 The Authors. Source


Nimrod G.,Ben - Gurion University of the Negev
International Journal on Disability and Human Development | Year: 2012

The cybernetic revolution changed the way individuals suffering from depression look for and receive help. Many have begun turning towards online communities for help in understanding and dealing with symptoms. This article aims to examine the contents and characteristics of the discussions on depression in dedicated online communities and to explore the potential benefits these communities offer people with depression. For that purpose, quantitative content analysis of a full year's data from 25 leading online communities was performed, using a novel computerized system, the Forum Monitoring System. The overall database included 149,216 messages. Results indicated that there was an increase in the daily activity level during the winter. Content analysis identified nine main subjects discussed in the communities, including (in descending order) 'symptoms', 'relationships', 'coping', 'life', 'formal care', 'medications', 'causes', 'suicide', and 'work'. The overall tone was rather balanced. Findings indicated that online depression communities serve as a sphere for knowledge exchange, sharing the experience of living with depression, and getting inspiration for coping. Involvement in these communities seems to inspire and empower participants by enhancing better understanding of their condition and by encouraging them to fight depression. Therefore, it is suggested that the communities can complement formal care. Participating in these communities could be regarded as a coping technique, and more fragile depressed people should be encouraged to visit them and participate. © 2012 by Walter de Gruyter Berlin Boston. Source


Permuter H.,Ben - Gurion University of the Negev | Permuter H.,Israel Electric | Weissman T.,Stanford University
IEEE Transactions on Information Theory | Year: 2011

We study source coding in the presence of side information, when the system can take actions that affect the availability, quality, or nature of the side information. We begin by extending the Wyner-Ziv problem of source coding with decoder side information to the case where the decoder is allowed to choose actions affecting the side information. We then consider the setting where actions are taken by the encoder, based on its observation of the source. Actions may have costs that are commensurate with the quality of the side information they yield, and an overall per-symbol cost constraint may be imposed. We characterize the achievable tradeoffs between rate, distortion, and cost in some of these problem settings. Among our findings is the fact that even in the absence of a cost constraint, greedily choosing the action associated with the best side information is, in general, suboptimal. A few examples are worked out. © 2011 IEEE. Source


Ben-Hail D.,Ben - Gurion University of the Negev
Cold Spring Harbor protocols | Year: 2014

To make biophysical measurements of functions such as the pore-forming activity of mitochondrial voltage-dependent anion-selective channel protein 1 (VDAC1), it is first necessary to obtain a source of purified VDAC protein. In this protocol, we present a method for obtaining rat liver mitochondria as a source of VDAC1 and then describe two methods, one using a nonionic detergent and the other an ionic detergent, for purifying VDAC1 from the isolated mitochondria. This produces a source of VDAC1 proteins that are suitable for subsequent incorporation into artificially prepared phospholipid bilayers. Furthermore, the isolated mitochondria can be used for assaying the mitochondrial permeability transition pore (MPTP). Source


Gutman E.M.,Ben - Gurion University of the Negev
Journal of Solid State Electrochemistry | Year: 2014

It is shown that the so-called "equation of solid-state electrocapillarity" derived by Gokhshtein from the simplest thermodynamic model, earlier used by Lippmann, is an incorrect modification that reduces to the classic Lippmann equation once the usual definition of the differential capacity of the electrical double layer is taken into account. Consequently, Gokhshtein's experiments can actually confirm only the validity of the Lippmann equation (with deviations caused by physicochemical processes in the double layer, which could not be taken into account in the thermodynamic equations). The thermodynamic model is insufficient for the interpretation of these experimental results, whose understanding requires physical models of the different phenomena occurring at the electrode surface. © 2014 Springer-Verlag Berlin Heidelberg. Source
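For reference, the classic Lippmann equation and the usual definition of the differential double-layer capacity referred to in the abstract are the standard ones (quoted here for the reader's convenience, not reproduced from the paper):

```latex
% Classic Lippmann equation and differential capacity of the double layer
% (standard electrocapillarity definitions)
\[
  \left(\frac{\partial \gamma}{\partial E}\right)_{\mu} = -q,
  \qquad
  C \equiv \frac{\partial q}{\partial E}
        = -\frac{\partial^{2} \gamma}{\partial E^{2}},
\]
% gamma: interfacial tension, E: electrode potential,
% q: surface charge density per unit area (at constant composition mu).
```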


Schechter M.,Ben - Gurion University of the Negev | Stamp P.C.E.,University of British Columbia
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

The low-temperature universal properties in disordered and amorphous solids are considered. We introduce a model that includes two types of two-level systems (TLSs), which, based on their local symmetry, interact weakly or strongly with the phonon field. This accounts well for the experimental results and addresses some long-standing questions: the nature of the TLSs, the smallness and universality of the phonon attenuation, and the energy scale of 3 K below which universality is observed. Our model describes disordered lattices; we also discuss its application to amorphous solids. © 2013 American Physical Society. Source


Lyubarsky Y.,Ben - Gurion University of the Negev
Astrophysical Journal Letters | Year: 2010

Reconnection of alternating magnetic fields is an important energy transformation mechanism in Poynting-dominated outflows. We show that the reconnection is facilitated by the Kruskal-Schwarzschild instability of current sheets separating the oppositely directed fields. This instability, which is a magnetic counterpart of the Rayleigh-Taylor instability, develops if the flow is accelerated. Then the plasma drips out of the current sheet, providing conditions for rapid reconnection. Since the magnetic dissipation leads to the flow acceleration, the process is self-sustaining. In pulsar winds, this process could barely compete with the earlier proposed dissipation mechanisms. However, the novel mechanism turns out to be very efficient at active galactic nucleus and gamma-ray burst conditions. © 2010. The American Astronomical Society. All rights reserved. Source


Granek R.,Ben - Gurion University of the Negev
Soft Matter | Year: 2011

We study the dynamics of a nearly flat membrane bilayer that is surrounded by two semi-infinite viscoelastic fluids, and its response to local transverse forces. By modeling the surrounding fluids as continuous media with frequency-dependent shear moduli G1(ω) and G2(ω) we derive the dispersion relation for undulations. We deduce the frequency-dependent transverse mean square displacement of a membrane segment and find that it is proportional to κ^(-1/3)[G1(ω) + G2(ω)]^(-2/3), where κ is the membrane bending modulus. We then consider the linear response of a membrane to external forces. Possible implications are elucidated for experiments probing the viscoelasticity of cells and vesicles encapsulating and/or embedded in viscoelastic fluids, for the dynamic structure factor of such systems, and for lamellipodia dynamics. © 2011 The Royal Society of Chemistry. Source
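Typeset explicitly (the symbol used for the mean square displacement is our addition), the scaling quoted above reads:

```latex
% Frequency-dependent transverse mean square displacement of a membrane
% segment, as stated in the abstract (left-hand symbol is our notation)
\[
  \langle \Delta h^{2}(\omega) \rangle
  \;\propto\;
  \kappa^{-1/3}\,\bigl[\,G_{1}(\omega) + G_{2}(\omega)\,\bigr]^{-2/3},
\]
% kappa: membrane bending modulus; G_1, G_2: shear moduli of the two
% surrounding viscoelastic media.
```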


Gur E.,Ben - Gurion University of the Negev
Sub-cellular biochemistry | Year: 2013

As the first ATP-dependent protease to be identified, Lon holds a special place in the history of cellular biology. In fact, the concept of ATP-dependent protein degradation was established through the findings that led to the discovery of Lon. Therefore, this chapter begins with a historical perspective, describing the milestones that led to the discovery of Lon and ATP-dependent proteolysis, starting from the early findings in the 1960s until the demonstration of Lon's ATP-dependent proteolytic activity in vitro, in 1981. Most of our knowledge on Lon derives from studies of the Escherichia coli Lon ortholog, and, therefore, most of this chapter relates to this particular enzyme. Nonetheless, Lon is not only found in most bacterial species, it is also found in Archaea and in the mitochondrion and chloroplast of eukaryotic cells. Therefore many of the conclusions gained from studies on the E. coli enzyme are relevant to Lon proteases in other organisms. Lon, more than any other bacterial or organellar protease, is associated with the degradation of misfolded proteins and protein quality control. In addition, Lon also degrades many regulatory proteins that are natively folded, thus it also plays a prominent role in regulation of physiological processes. Throughout the years, many Lon substrates have been identified, confirming its role in the regulation of diverse cellular processes, including cell division, DNA replication, differentiation, and adaptation to stress conditions. Some examples of these functions are described and discussed here, as is the role of Lon in the degradation of misfolded proteins and in protein quality control. Finally, this chapter deals with the exquisite sensitivity of protein degradation inside a cell. How can a protease distinguish so many substrates from cellular proteins that should not be degraded? Can the specificity of a protease be regulated according to the physiological needs of a cell? This chapter thus broadly discusses the substrate specificity of Lon and its allosteric regulation. Source


Horowitz Y.S.,Ben - Gurion University of the Negev
Radiation Measurements | Year: 2014

The state-of-the-art in the use of thermoluminescence for the measurement of energy imparted by ionizing radiation is discussed. Emphasis is on the advantages obtainable by the use of computerized glow curve analysis in (i) quality control, (ii) low dose environmental dosimetry, (iii) medical applications (especially precision) and microdosimetric applications, and (iv) mixed field ionization-density-dosimetry. Possible frontiers of future research are highlighted: (i) vector representation in glow curve analysis, (ii) combined OSL/TL measurements, (iii) detection of sub-ionization electrons, (iv) requirements for new TL materials and (v) theoretical subjects involving kinetic modeling invoking localized/delocalized recombination applied to dose response and track structure theory including creation of defects. © 2014 Elsevier Ltd. All rights reserved. Source
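For context (a standard kinetic expression, not taken from this review), computerized glow curve deconvolution most often builds on the first-order Randall-Wilkins peak shape:

```latex
% First-order (Randall-Wilkins) glow peak, the usual building block of
% computerized glow curve analysis (standard expression, shown for context)
\[
  I(T) = n_{0}\, s\, e^{-E/kT}
         \exp\!\left[-\frac{s}{\beta}\int_{T_{0}}^{T} e^{-E/kT'}\,dT'\right],
\]
% n_0: initial trapped-charge population, s: frequency factor, E: trap depth
% (activation energy), k: Boltzmann constant, beta: linear heating rate.
```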


Pascal R.,Montpellier University | Pross A.,Ben - Gurion University of the Negev
Chemical Communications | Year: 2015

Bridging between the phenomenologically distinct biological and physical worlds has been a major scientific challenge since Boltzmann's probabilistic formulation of the second law of thermodynamics. In this review we summarize our recent theoretical attempts to bridge that divide through analysis of the thermodynamic-kinetic interplay in chemical processes and the manner in which that interplay impacts on material stability. Key findings are that the term 'stability' manifests two facets - time and energy - and that stability's time facet, expressed as persistence, is more general than its energy facet. That idea, together with the proposed existence of a logical law of nature, the persistence principle, leads to the mathematically-based insight that stability can come about through either Boltzmann's probabilistic considerations or Malthusian kinetics. Two mathematically-based forms of material persistence then lead directly to the physical likelihood of two material forms, animate and inanimate. Significantly, the incorporation of kinetic considerations into the stability concept appears to bring us closer to enabling two of the central theories in science - the second law of thermodynamics and Darwin's theory of evolution - to be reconciled within a single conceptual framework. © The Royal Society of Chemistry. Source


Negative differential resistance - a decrease in current with increasing bias voltage - is a counter-intuitive effect that is observed in various molecular junctions. Here, we present a novel mechanism that may be responsible for such an effect, based on strong Coulomb interaction between electrons in the molecule and electrons on the atoms closest to the molecule. The Coulomb interaction induces electron-hole binding across the molecule-electrode interface, resulting in a renormalized and enhanced molecule-electrode coupling. Using a self-consistent non-equilibrium Green's function approach, we show that the effective coupling is non-monotonic in bias voltage, leading to negative differential resistance. The model is in accord with recent experimental observations that showed a correlation between the negative differential resistance and the coupling strength. We provide detailed suggestions for experimental tests which may help to shed light on the origin of the negative differential resistance. Finally, we demonstrate that the interface Coulomb interaction affects not only the I-V curves but also the thermoelectric properties of molecular junctions. © 2013 AIP Publishing LLC. Source


Gelbstein Y.,Ben - Gurion University of the Negev
Journal of Electronic Materials | Year: 2011

The search for alternative energy sources is at the forefront of applied research. In this context, thermoelectricity, i.e., direct conversion of thermal into electrical energy, plays an important role, particularly for exploitation of waste heat. Materials for such applications should exhibit thermoelectric potential and mechanical stability. PbTe-based compounds include well-known n-type and p-type compounds for thermoelectric applications in the 50°C to 600°C temperature range. This paper is concerned with the mechanical and transport properties of p-type Pb0.5Sn0.5Te:Te and PbTe samples, both of which have a hole concentration of ∼1 × 10^20 cm^-3. The ZT values of PbTe were found to be higher than those of Pb0.5Sn0.5Te:Te, and they exhibited a maximal value of 0.8 compared with 0.5 for Pb0.5Sn0.5Te:Te at 450°C. However, the microhardness value of 49 HV found for Pb0.5Sn0.5Te:Te was closer to that of the mechanically stable n-type PbTe (30 HV) than to that of PbTe (71 HV). Thus, although lower ZT values were obtained, from a mechanical point of view Pb0.5Sn0.5Te:Te is preferable over PbTe for practical applications. © 2010 TMS. Source
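For readers outside the field (a standard definition, not specific to this paper), the dimensionless thermoelectric figure of merit ZT quoted above is defined as:

```latex
% Thermoelectric figure of merit (standard definition)
\[
  ZT = \frac{S^{2}\,\sigma\,T}{\kappa},
\]
% S: Seebeck coefficient, sigma: electrical conductivity,
% kappa: total thermal conductivity, T: absolute temperature.
```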


Iecovich E.,Ben - Gurion University of the Negev
Gerontologist | Year: 2011

Purpose: The study aims to examine job satisfaction of migrant live-in home care workers who provide care to frail older adults and to examine the extent to which quality of relationships between the care provider and care recipient and workplace characteristics is associated with job satisfaction. Design and Methods: A convenience sample that included 335 dyads of Philippine workers and their frail care recipients were recruited through 2 national home care agencies and snowballing. Multiple regression analyses examined the extent to which workplace characteristics, quality of relationships, care recipient characteristics, and care worker characteristics explain job satisfaction. Results: Scores for job satisfaction, quality of relationships, and workplace characteristics were strongly positive. Overall and intrinsic job satisfactions were explained by workers' qualifications, workplace characteristics, and quality of relationships from the perspective of care recipients, whereas satisfaction with benefits was affected by workplace characteristics and quality of relationships from the perspective of the care workers. Implications: Findings suggest that workers who were better qualified in terms of more years of formal education and more years as care workers and who reported improved workplace characteristics, in particular more job decision authority and variety, reported increased job satisfaction. Therefore, enabling migrant live-in care workers more job decision authority and variety may increase their job satisfaction. More research is needed to deepen our understanding of additional job-related characteristics that explain job satisfaction among this group of care workers. © 2011 The Author. Source


Achiam-Montal M.,Ben - Gurion University of the Negev
Child psychiatry and human development | Year: 2013

Adults with panic disorder (PD) often present to medical settings with noncardiac chest pain (NCCP), but less is known about children and adolescents with this complaint. We sought to characterize PD in youth with NCCP and to compare its features with those of PD in youth in psychiatric outpatient settings. Using a semi-structured diagnostic interview, we evaluated 132 youth (ages 8-17) with NCCP recruited from two medical settings. Twenty-seven (20.5%) met full DSM-IV criteria for PD, eleven of whom were children (<13 years). The most frequent panic symptoms were somatic complaints, although cognitive symptoms were also common. Only 14.8% had clinically significant agoraphobia. Comorbid anxiety disorders and major depression were common. Overall, the clinical features of PD among youth with NCCP are similar to those of PD in psychiatric settings. Interventions for PD may benefit youth who present initially with NCCP. Systematic psychiatric screening could increase detection of PD and improve care for this population. Source


Kessler Y.,Ben - Gurion University of the Negev | Oberauer K.,University of Zurich
Journal of Experimental Psychology: Learning Memory and Cognition | Year: 2014

Updating and maintenance of information are 2 conflicting demands on working memory (WM). We examined the time required to update WM (updating latency) as a function of the sequence of updated and not-updated items within a list. Participants held a list of items in WM and updated a variable subset of them in each trial. Four experiments that vary the number of to-be-updated and to-be-maintained items, as well as their positions in the list, are reported. The pattern of latencies was best explained by a model assuming forward scanning of the list, updating modified items, and maintaining nonmodified items. Switching between updating and maintenance incurred a response time cost, which increased with overall set-size. The formation of new item-position associations accounted for an additional response time component. The finding of an update-switch cost provides novel behavioral support for a class of physiologically inspired computational models, in which updating and maintenance require 2 different states of WM. © 2014 American Psychological Association. Source
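A hypothetical sketch of the serial forward-scanning account described above: total updating latency is accumulated item by item, with separate time costs for updating vs. maintaining an item, a switch cost whenever the scan changes between the two operations (scaled by set size), and a cost for forming a new item-position binding. All parameter values and names are illustrative assumptions, not estimates from the paper.

```python
def predicted_latency(ops, t_update=300.0, t_maintain=50.0,
                      switch_cost_per_item=40.0, binding_cost=120.0):
    """Predict updating latency (ms) for a list scanned left to right.

    ops -- sequence of 'U' (update this position) / 'M' (maintain this position).
    """
    set_size = len(ops)
    latency = 0.0
    prev = None
    for op in ops:
        latency += t_update if op == 'U' else t_maintain
        if op == 'U':
            latency += binding_cost                     # new item-position association
        if prev is not None and op != prev:
            latency += switch_cost_per_item * set_size  # switch cost grows with set size
        prev = op
    return latency

if __name__ == "__main__":
    # An alternating sequence is predicted to be slower than a blocked one with
    # the same number of updated items, because it incurs more switches.
    print(predicted_latency("UUMM"), predicted_latency("UMUM"))
```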


Ivensky G.,Ben - Gurion University of the Negev | Bronshtein S.,Sami Shamoon College of Engineering | Abramovitz A.,University of California at Irvine
IEEE Transactions on Power Electronics | Year: 2011

This paper presents an approximate analysis of an LLC resonant converter with a capacitive filter operating above and below resonance. An equivalent ac resistance model of the rectifier, valid for discontinuous as well as continuous conduction modes, is proposed. The dc voltage conversion ratio is then obtained using the fundamental harmonic approximation method. Based on the analysis, LLC converter design plots and guidelines are suggested. The theory is verified by simulation and experiment. © 2011 IEEE. Source
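A brief sketch of the fundamental-harmonic-approximation (FHA) voltage gain that this kind of analysis builds on. The expression below is the widely used FHA gain of an LLC tank driving an equivalent ac load resistance; it is given here only for illustration and does not reproduce the paper's capacitive-filter rectifier model or its discontinuous-conduction corrections.

```python
import math

def llc_gain_fha(fn, Ln, Q):
    """FHA voltage gain of an LLC resonant tank.

    fn -- switching frequency normalized to the series resonant frequency
    Ln -- ratio of magnetizing to series resonant inductance, Lm / Lr
    Q  -- quality factor, sqrt(Lr / Cr) / Rac
    """
    real_part = 1.0 + (1.0 / Ln) * (1.0 - 1.0 / fn**2)
    imag_part = Q * (fn - 1.0 / fn)
    return 1.0 / math.hypot(real_part, imag_part)

if __name__ == "__main__":
    # Gain below, at, and above the series resonance for an illustrative tank design.
    for fn in (0.7, 0.9, 1.0, 1.1, 1.3):
        print(f"fn = {fn:.1f}  gain = {llc_gain_fha(fn, Ln=5.0, Q=0.4):.3f}")
```

At fn = 1 the gain is unity regardless of load, with boost below resonance and attenuation above it, which is the qualitative behavior the design plots in such papers capture.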


Tarasiuk A.,Health Science University | Reuveni H.,Ben - Gurion University of the Negev
Current Opinion in Pulmonary Medicine | Year: 2013

PURPOSE OF REVIEW: Obstructive sleep apnea (OSA) has a substantial economic impact on healthcare systems. We reviewed the effect of parameters that influence healthcare costs (race, low education, and socioeconomic status) on OSA comorbidity and costs, as well as the effect of OSA treatment on medical costs. RECENT FINDINGS: OSA is associated with increased cardiovascular disease (CVD) morbidity and substantially increased medical costs. Risk for OSA and the resulting CVD is associated with obesity, tobacco smoking, black race, and low socioeconomic status; all of these are also associated with poor continuous positive airway pressure (CPAP) adherence. Healthcare costs are not normally distributed: the costliest and sickest upper third of patients consume 65-82% of all medical costs. Only a limited number of studies have explored the effect of CPAP on medical costs. SUMMARY: Untreated OSA may double medical expenses, mainly because of CVD. Identifying the costliest, sickest upper third of OSA patients will reduce expenses to healthcare systems. Studies exploring the effect of CPAP on medical costs are essential. In addition, tailoring intervention programs to reduce barriers to adherence has the potential to improve CPAP treatment, especially in at-risk populations that are sicker and incur higher healthcare costs. © 2013 Wolters Kluwer Health | Lippincott Williams & Wilkins. Source


Henrich C.C.,Georgia State University | Shahar G.,Ben - Gurion University of the Negev
Journal of the American Academy of Child and Adolescent Psychiatry | Year: 2013

Objective: The effects of Israeli adolescents' exposure to rocket attacks over time were examined, focusing on anxiety, depression, aggression, and violence commission. Method: A sample of 362 adolescents from southern Israel was followed from 2008 through 2011 with four annual assessments. Measures included exposure to rocket attacks (gauging whether children were affected by rocket attacks, both directly and indirectly, through friends and family), anxiety (items from the State Anxiety Inventory), depression (the Center for Epidemiological Studies Child Depression Scale), aggression (the Orpinas Aggression Scale), and violence commission (from the Social and Health Assessment). Results: Concurrent and longitudinal findings differed. Wave 1 exposure to rocket attacks was associated with Wave 1 anxiety, depression, and aggression. Longitudinal results evinced only modest effects of exposure on anxiety and depression, no effects on aggression, but robust effects on violence commission. Exposure to terror attacks before the study predicted increased odds of violence commission at the fourth and final wave, controlling for violence commission at the first, second, and third waves. Exposure to rocket attacks in the second wave predicted increased odds of violence commission at the third wave. Conclusion: This is the first longitudinal study attesting to the prospective effect of exposure to terrorism on adolescent violence. Findings should serve as a red flag for health care practitioners working in civilian areas afflicted by terrorism and political violence. © 2013 American Academy of Child and Adolescent Psychiatry. Source
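For illustration only, the lagged analysis described in the Results could be specified as a logistic regression of later violence commission on earlier exposure, controlling for violence commission at the earlier waves. The file name and column names below are placeholders, not the study's actual variables or code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical wide-format panel: one row per adolescent, one column per wave.
df = pd.read_csv("panel_waves.csv")  # placeholder file name

# Odds of violence commission at wave 4 as a function of pre-study exposure,
# controlling for violence commission at waves 1-3 (as described above).
model = smf.logit(
    "violence_w4 ~ exposure_pre + violence_w1 + violence_w2 + violence_w3",
    data=df,
).fit()
print(model.summary())
```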


Gur E.,Ben - Gurion University of the Negev
Sub-cellular biochemistry | Year: 2013

Bacteria are frequently exposed to changes in environmental conditions, such as fluctuations in temperature, pH, or the availability of nutrients. These assaults can be detrimental to the cell, as they often result in proteotoxic stress, which can cause the accumulation of unfolded proteins. In order to restore a productive folding environment in the cell, bacteria have evolved a network of proteins, known as the protein quality control (PQC) network, which is composed of both chaperones and AAA+ proteases. The AAA+ proteases form a major part of this PQC network, as they are responsible for the removal of unwanted and damaged proteins. They also play an important role in the turnover of specific regulatory or tagged proteins. In this review, we describe the general features of an AAA+ protease and, using two of the best-characterised AAA+ proteases in Escherichia coli (ClpAP and ClpXP) as a model for all AAA+ proteases, provide a detailed mechanistic description of how these machines work. Specifically, the review examines the physiological role of these machines, as well as the substrates and the adaptor proteins that modulate their substrate specificity. Source


As originally shown by King and Altman, graph theory, and specifically the use of spanning trees, provides the means to solve the kinetics of any catalytic network in a steady state regime, taking as input data all the rate constants. Herein, it is shown that the translation of the rate constants to Gibbs energies provides a simpler way to estimate the energy span (i.e., the apparent activation energy of the full reaction), the determining states, and the turnover frequency (TOF) of any and all catalytic networks. By re-examining the concepts of chemical kinetics through rigorous mathematical treatment, an alternative definition is suggested for the term "chemical mechanism". In addition, and in analogy to electrical circuits, the chemical resistor terms (called here "kinestors") are identified for parallel and series chemical circuits, providing a new Ohmic interpretation for catalysis. © 2015 American Chemical Society. Source
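A compact sketch of the energy-span idea mentioned above, restricted to a single catalytic cycle: the turnover frequency is estimated from the Gibbs energies of intermediates and transition states via the Eyring-like expression TOF ≈ (kB·T/h)·exp(−δE/RT), with the energy span δE taken as the largest gap between a transition state and an intermediate (adding the reaction energy when the transition state precedes the intermediate in the cycle). The example energies are invented; the graph-theoretic treatment in the paper handles arbitrary networks, not just a single cycle.

```python
import math

KB_H = 2.0836612e10  # kB/h in 1/(s*K)
R = 8.314462618e-3   # gas constant in kJ/(mol*K)

def energy_span_tof(ts_energies, int_energies, dG_reaction, T=298.15):
    """TOF of a single catalytic cycle in the energy-span approximation.

    ts_energies  -- Gibbs energies of the transition states, in cycle order [kJ/mol]
    int_energies -- Gibbs energies of the intermediates, in cycle order [kJ/mol]
    dG_reaction  -- Gibbs energy of the net reaction (negative if exergonic) [kJ/mol]
    """
    spans = []
    for i, Ti in enumerate(ts_energies):
        for j, Ij in enumerate(int_energies):
            # If the transition state follows the intermediate in the cycle, the
            # span is Ti - Ij; otherwise the reaction energy is added.
            spans.append(Ti - Ij if i >= j else Ti - Ij + dG_reaction)
    dE = max(spans)  # the energy span (apparent activation energy)
    return KB_H * T * math.exp(-dE / (R * T))

if __name__ == "__main__":
    # Invented three-step cycle, slightly exergonic overall.
    tof = energy_span_tof([20.0, 35.0, 25.0], [0.0, -5.0, 10.0], dG_reaction=-8.0)
    print(f"TOF ~ {tof:.3e} 1/s")
```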


Ram M.,Ben - Gurion University of the Negev
Antipode | Year: 2014

Analysis of the role of mimicry in the construction and deconstruction of social identities has considerably enriched our understanding of power relations. However, as a spatial practice, mimicry has received scant consideration. In what ways can space itself become an object of mimicry? What strategies and practices are involved in this process, and with what political objectives? The current paper addresses these questions by analyzing processes of mimetic spatial production aiming to transform the Israeli-occupied territory of Mount Hermon into an "ordinary" western ski resort. Yet this concerted effort produces a variety of tensions and contradictions that ultimately undo the normalization of the colonial space, comprising a test case of the convoluted ways in which mimicry of space, not merely in space, generates various forms of slippage, excess and ambivalence. © 2013 Antipode Foundation Ltd. Source


Gluskin E.,Ben - Gurion University of the Negev
International Journal of Circuit Theory and Applications | Year: 2015

Summary It is suggested that, for some problems, nonlinear state equations be written not as dx/dt = F(x,u,t) but as dx/dt = [A(t,x)]x + [B(t,x)]u(t), a form that is more 'constructive' with respect to system perception and possible structural generalizations; the suggestion is supported by arguments related to the classification of switched circuits as linear or nonlinear. The main point of the distinction is that, when solving dx/dt = F(x,u,t), one is immediately drawn into analytical problems of pure mathematics, whereas for dx/dt = [A(t,x)]x + [B(t,x)]u(t) one first considers a constant matrix [A], which introduces the system's physical structure, and then considers [A(x)], which presents the nonlinearity of the system as a dependence of that structure on the processes in it or on the system's input (this might be named a structural response). Thinking in terms of structure better respects the engineering and physical degrees of freedom that are relevant to applications. Some electronic and physical (e.g., hydrodynamic) systems are considered in these terms. The logical side is always the focus, and the pedagogical (even philosophical) side is not ignored. © 2013 John Wiley & Sons, Ltd. Source
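A toy numerical illustration of the distinction drawn above, for a hypothetical scalar system that is not taken from the paper: the same nonlinear dynamics written once in the generic form dx/dt = F(x,u,t) and once in the 'structural' form dx/dt = A(x)·x + B·u, where the nonlinearity appears as a state-dependent coefficient (the structure responding to the process). Both forms are integrated with a simple Euler step to confirm they coincide.

```python
# Hypothetical nonlinear element: a damping coefficient that grows with the state.
def F(x, u, t):
    """Generic right-hand-side form dx/dt = F(x, u, t)."""
    return -(1.0 + 0.5 * x**2) * x + 2.0 * u

def A(x):
    """State-dependent 'structure': the same dynamics as dx/dt = A(x)*x + B*u."""
    return -(1.0 + 0.5 * x**2)

B = 2.0

def euler(rhs, x0, u, dt=1e-3, steps=5000):
    """Forward-Euler integration of dx/dt = rhs(x, u, t) with constant input u."""
    x = x0
    for k in range(steps):
        x = x + dt * rhs(x, u, k * dt)
    return x

if __name__ == "__main__":
    u = 1.0  # constant input
    x_generic = euler(F, 0.0, u)
    x_structural = euler(lambda x, u, t: A(x) * x + B * u, 0.0, u)
    print(x_generic, x_structural)  # identical trajectories, two descriptions
```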


Brafman R.I.,Ben - Gurion University of the Negev | Domshlak C.,Technion - Israel Institute of Technology
Artificial Intelligence | Year: 2013

If the complexity of planning for a single agent is described by some function f of the input, how much more difficult is it to plan for a team of n cooperating agents? If these agents are completely independent, we can simply solve n single agent problems, scaling linearly with the number of agents. But if all the agents interact tightly, we really need to solve a single problem that is n times larger, which could be exponentially (in n) harder to solve. Is a more general characterization possible? To formulate this question precisely, we minimally extend the standard STRIPS model to describe multi-agent planning problems. Then, we identify two problem parameters that help us answer our question. The first parameter is independent of the precise task the multi-agent system should plan for, and it captures the structure of the possible direct interactions between the agents via the treewidth of a graph induced by the team. The second parameter is task-dependent, and it captures the minimal number of interactions by the "most interacting" agent in the team that is needed to solve the problem. We show that multi-agent planning problems can be solved in time exponential only in these parameters. Thus, when these parameters are bounded, the complexity scales only polynomially in the size of the agent team. These results also have direct implications for the single-agent case: by casting single-agent planning tasks as multi-agent planning tasks, we can devise novel methods for decomposition-based planning for single agents. We analyze one such method, and use the techniques developed to provide some of the strongest tractability results for classical single-agent planning to date. © 2012 Elsevier B.V. All rights reserved. Source
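A small sketch of the first, task-independent parameter described above: build the agent interaction graph (agents as nodes, an edge wherever two agents can directly interact) and bound its treewidth. The example team and the use of networkx's heuristic treewidth routine are illustrative assumptions, not the paper's construction.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

# Hypothetical team: edges mark pairs of agents that can directly interact.
interaction_graph = nx.Graph()
interaction_graph.add_edges_from([
    ("a1", "a2"), ("a2", "a3"), ("a3", "a4"),  # a chain of loosely coupled agents
    ("a4", "a5"), ("a2", "a4"),
])

# Heuristic upper bound on the treewidth of the interaction graph.
width, _decomposition = treewidth_min_degree(interaction_graph)
print("treewidth upper bound:", width)
# A small bound suggests planning time exponential only in this width (and in the
# per-agent interaction count), while scaling polynomially with the team size.
```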


Zarmi Y.,Ben - Gurion University of the Negev
Nonlinearity | Year: 2014

A functional of the solution of the Kadomtsev-Petviashvili II equation maps multi-soliton solutions onto systems of vertices - structures that are localized around soliton junctions. A solution with one junction is mapped onto a single vertex, which emulates a free, spatially extended, particle. In solutions with several junctions, each junction is mapped onto a vertex. Moving in the x-y plane, the vertices collide, coalesce upon collision and then split up. When well separated, they emulate free particles. Multi-soliton solutions, whose structure does not change under space-time inversion as |t|→∞, are mapped onto vertex systems that undergo elastic collisions. Solutions, whose structure does change, are mapped onto systems that undergo inelastic collisions. The inelastic vertex collisions generated from the infinite family of (M,1) solutions (M external solitons, (M-2) Y-shaped soliton junctions, M ≥ 4) play a unique role: the only definition of vertex mass consistent with momentum conservation in these collisions is the spatial integral of the vertex profile. This definition ensures, in addition, that, in these collisions, the total mass and kinetic energy due to the motion in the y-direction are conserved. In general, the kinetic energy due to the motion in the x-direction is not conserved in these collisions. © 2014 IOP Publishing Ltd & London Mathematical Society. Source


Liang Z.,University of Michigan | Tzfira T.,University of Michigan | Tzfira T.,Ben - Gurion University of the Negev
Nature Communications | Year: 2013