

The Hong Kong University of Science and Technology (HKUST) is a public research university on the Clear Water Bay Peninsula, Hong Kong. Established in 1991, it is one of the territory's youngest statutory universities. The University currently consists of four disciplinary schools, which offer degrees in Business, Engineering, Science, and Social Science & Humanities, alongside the Interdisciplinary Programs Office, which provides cross-disciplinary programs, and the Fok Ying Tung Graduate School/Fok Ying Tung Research Institute, which aims at technology transfer and commercialization. HKUST is consistently regarded as one of the top three higher-education institutions in Hong Kong. (Wikipedia)


Venkatesh V.,University of Arkansas | Chan F.K.Y.,Curtin University Australia | Thong J.Y.L.,Hong Kong University of Science and Technology
Journal of Operations Management | Year: 2012

Advances in Internet technologies have led to the popularity of technology-based self-services, with the design of such services becoming increasingly important. Using technology-based services in the public sector as the setting, we identified the key service attributes driving adoption and use of transactional e-government services, and citizens' preference structures across these attributes. After identifying four key attributes, i.e., usability, computer resource requirement, technical support provision and security provision, we conducted a Web-based survey and a conjoint experiment among 2465 citizens. In a two-stage Web-based survey, citizens reported their perceptions about a smartcard technology for transactional e-government services before use, and their use and satisfaction 4 months later. Results showed that the key attributes (noted above) influenced citizens' intentions, subsequent use and satisfaction. In the conjoint experiment, citizens reported their preferences for key service attributes for two transactional e-government services. Further, a cluster analysis uncovered four distinct citizen segments, i.e., balanced, usability-focused, risk-conscious and resource-conservative, that can inform efforts in designing e-government services. A post hoc analysis confirmed the appropriateness of the market segmentation in understanding citizens' adoption and use of transactional e-government services. © 2011 Published by Elsevier B.V.


Yang C.,Tsinghua University | Wong C.P.,Georgia Institute of Technology | Wong C.P.,Chinese University of Hong Kong | Yuen M.M.F.,Hong Kong University of Science and Technology
Journal of Materials Chemistry C | Year: 2013

Enormous efforts have been made towards the next generation of flexible, low-cost, environmentally benign printed electronics. In this regard, advanced materials for printed conductive lines and interconnects are of significant importance. To improve efficiency and effectiveness, conductive fillers have for several decades been dispersed into resins, yielding the so-called electrically conductive composites (ECCs), a key material for printed electronics, ranging from substitutes for traditional solders to new applications in the blooming field of flexible printed electronics. ECCs in various formulations have converged in the current efforts to develop platforms with the desired specifications of electrical and thermal conductance, mechanical strength, and other properties. Such a platform is highly versatile and valuable for emerging electronic devices, where processing conditions are tailored to the key functional materials to optimize outcomes. Noble metal fillers, such as silver flakes, have long been studied as active fillers for ECCs. Owing to recent progress in nanotechnology and surface modification, many new avenues have opened for them: by taking advantage of the well-developed surface chemistry of these materials, researchers are enhancing their electrical conductivity, which is essential for broader applications. In recent years, advances in ECCs have benefited applications in optoelectronics, e-paper, electromagnetic shielding, clinical diagnosis, radio-frequency devices, etc. Despite the various advantages that ECCs offer over traditional technologies, their limitations, e.g. low electrical conductivity, poor impact strength, increased contact resistance at elevated temperatures, and humidity aging, have been considered the major obstacles.
In this feature article, we introduce the surface engineering techniques for conductive filler materials that we and others have developed, with an emphasis on how these techniques influence the performance of ECCs, especially the improvement of filler-to-filler electron transfer in the resin matrix; some of the resulting composites are approaching the theoretical upper limit of electrical conductivity. We and others have developed a set of chemical and engineering methods to modify the conductive fillers, enabling tailor-made surface functionalities and charges. These features, in turn, can be harnessed to adjust the electrical properties and reliability of ECCs and, further, to cater to various novel printed electronics based on, e.g., low-temperature processing conditions. © The Royal Society of Chemistry 2013.


Shimokawa S.,Hong Kong University of Science and Technology
Food Policy | Year: 2013

Improving dietary knowledge has the potential to prevent obesity and overweight and, if effective, is a highly feasible policy measure. This paper proposes a new framework to examine the effects of dietary knowledge on nutrient intake and diet quality. The framework allows the effects to differ by one's expectation about food availability (EFA). Using data from China, we find that dietary knowledge affects mainly the quantity of diet (e.g., lowering total calorie intake) when EFA is increasing, while it affects mainly the quality of diet (e.g., lowering the share of calories from oils) when EFA is decreasing. The effect on the quantity is larger among overweight adults, while the effect on the quality is more significant among non-overweight adults. Without distinguishing the direction of changes in EFA as in previous studies, the estimated effects of dietary knowledge tend to be smaller. Thus, as an anti-obesity measure, dietary education may be more effective than indicated by previous studies under the situations where EFA increases (e.g., introducing food coupons), while only marginally effective under the situations where EFA decreases (e.g., increasing real food prices). © 2012 Elsevier Ltd.


Zhou J.,Shanghai University of Finance and Economics | Chen Y.-J.,Hong Kong University of Science and Technology
Operations Research | Year: 2016

As various firms initially make information and access to their products/services scarce within a social network, identifying influential players emerges as a pivotal step for their success. In this paper, we tackle this problem using a stylized model that features payoff externalities and local network effects. The network designer is allowed to release information to only a subset of players (leaders); these targeted players make their contributions first and the rest (followers) move subsequently after observing the leaders' decisions. In the presence of incomplete information, the signaling incentive drives the optimal selection of leaders and can lead to a first-order materialistic effect on equilibrium contributions. We propose a novel index for the key leader selection with incomplete information that can be substantially different from the key player index in Ballester et al. (2006) [Ballester C, Calvó-Armengol A, Zenou Y (2006) Who's who in networks. Wanted: The key player. Econometrica 74(5):1403-1417] and the key leader index with complete information proposed in Zhou and Chen (2015) [Zhou J, Chen Y-J (2015) Key leaders in social networks. J. Econom. Theory 157:212-235]. We also show that in undirected graphs, the optimal leader group identified in Zhou and Chen (2015) is exactly the optimal follower group when signaling is present. In particular, if the graphs are complete, the network designer ranks the players by the ascending order of their intrinsic valuations, and the leaders are those with lower intrinsic valuations. In the out-tree hierarchical structure, the key leader turns out to be the one that stays in the middle, and it is not necessarily exactly the central player in the network. © 2016 INFORMS.
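The key-player index of Ballester et al. cited above is built on Katz-Bonacich centrality, b = (I - δG)^{-1}·1, where G is the adjacency matrix and δ a decay factor. As a rough illustration of that underlying quantity (the toy graph and δ below are ours, not the paper's incomplete-information index), a fixed-point iteration suffices:

```python
# Illustrative Katz-Bonacich centrality: solve b = 1 + delta * G b by
# fixed-point iteration (converges when delta < 1 / spectral radius of G).
# Toy graph and delta are hypothetical; this is not the paper's leader index.

def katz_bonacich(adj, delta, iters=200):
    n = len(adj)
    b = [1.0] * n
    for _ in range(iters):
        b = [1.0 + delta * sum(adj[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# A 4-node line graph: 0 - 1 - 2 - 3
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
b = katz_bonacich(adj, delta=0.2)
# Interior nodes 1 and 2 come out most central, hence prime key-player candidates.
print([round(x, 3) for x in b])
```

The centrality of the interior nodes exceeds that of the endpoints, which is the intuition behind targeting "key" players first.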


Xiao N.,Nanyang Technological University | Xie L.,Nanyang Technological University | Qiu L.,Hong Kong University of Science and Technology
IEEE Transactions on Automatic Control | Year: 2012

This paper addresses the mean square stabilization problem for discrete-time networked control systems over fading channels. We show that there exists a requirement on the network over which an unstable plant can be stabilized. In the case of state feedback, necessary and sufficient conditions on the network for mean square stabilizability are derived. Under a parallel transmission strategy and the assumption that the overall mean square capacity of the network is fixed and can be assigned among parallel input channels, a tight lower bound on the overall mean square capacity for mean square stabilizability is presented in terms of the Mahler measure of the plant. The minimal overall capacity for stabilizability is also provided under a serial transmission strategy. For the case of dynamic output feedback, a tight lower bound on the capacity requirement for stabilization of SISO plants is given in terms of the anti-stable poles, nonminimum phase zeros and relative degree of the plant. Sufficient and necessary conditions are further derived for triangularly decoupled MIMO plants. The effect of pre- and post-channel processing and channel feedback is also discussed, where the channel feedback is identified as a key component in eliminating the limitation on stabilization induced by the nonminimum phase zeros and high relative degree of the plant. Finally, the extension to the case with output fading channels and the application of the results to vehicle platooning are presented. © 2012 IEEE.
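The Mahler measure in the capacity bound above is the product of the magnitudes of the plant's unstable poles; in the classical data-rate results the minimal rate for stabilizability is log2 M(A) bits per channel use, and the paper's mean-square capacity bound is stated in the same quantity. A minimal sketch with hypothetical pole locations:

```python
# Mahler measure of a discrete-time plant: product of magnitudes of the
# unstable (|lambda| >= 1) poles. Pole values below are hypothetical.
import math

def mahler_measure(poles):
    m = 1.0
    for lam in poles:
        m *= max(1.0, abs(lam))   # stable poles contribute a factor of 1
    return m

poles = [2.0, 0.5, -1.25]         # one stable pole, two unstable ones
M = mahler_measure(poles)         # 2 * 1 * 1.25 = 2.5
C_min = math.log2(M)              # classical minimal rate: log2 M(A) bits/use
print(M, round(C_min, 4))
```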


Ng C.W.W.,Hohai University | Ng C.W.W.,Hong Kong University of Science and Technology
Journal of Zhejiang University: Science A | Year: 2014

Geotechnical centrifuge modelling is an advanced physical modelling technique for simulating and studying geotechnical problems. It provides physical data for investigating mechanisms of deformation and failure and for validating analytical and numerical methods. Due to its reliability, time and cost effectiveness, centrifuge modelling has often been the preferred experimental method for addressing complex geotechnical problems. In this ZENG Guo-xi Lecture, the kinematics, fundamental principles and principal applications of geotechnical centrifuge modelling are introduced. The use of the state-of-the-art geotechnical centrifuge at the Hong Kong University of Science and Technology (HKUST), China to investigate four types of complex geotechnical problems is reported. The four geotechnical problems include correction of building tilt, effect of tunnel collapse on an existing tunnel, excavation effect on pile capacity and liquefied flow and non-liquefied slide of loose fill slopes. By reporting major findings and new insights from these four types of centrifuge tests, it is hoped to illustrate the role of state-of-the-art geotechnical centrifuge modelling in advancing the scientific knowledge of geotechnical problems. © 2014 Zhejiang University and Springer-Verlag Berlin Heidelberg.


Gao L.,Shanghai JiaoTong University | Wang X.,Shanghai JiaoTong University | Xu Y.,Shanghai JiaoTong University | Zhang Q.,Hong Kong University of Science and Technology
IEEE Journal on Selected Areas in Communications | Year: 2011

Cognitive radio is a promising paradigm to achieve efficient utilization of spectrum resources by allowing unlicensed users (i.e., secondary users, SUs) to access the licensed spectrum. Market-driven spectrum trading is an efficient way to achieve dynamic spectrum access/sharing. In this paper, we consider the problem of spectrum trading with a single primary spectrum owner (or primary user, PO) selling his idle spectrum to multiple SUs. We model the trading process as a monopoly market, in which the PO acts as a monopolist who sets the qualities and prices for the spectrum he sells, and the SUs act as consumers who choose the spectrum with the appropriate quality and price for purchase. We design a monopolist-dominated quality-price contract, which is offered by the PO and contains a set of quality-price combinations, each intended for a consumer type. A contract is feasible if it is incentive compatible (IC) and individually rational (IR) for each SU to purchase the spectrum with the quality-price combination intended for his type. We propose the necessary and sufficient conditions for the contract to be feasible. We further derive the optimal contract, which is feasible and maximizes the utility of the PO, for both the discrete-consumer-type model and the continuous-consumer-type model. Moreover, we analyze the social surplus, i.e., the aggregate utility of both the PO and SUs, and we find that, depending on the distribution of consumer types, the social surplus under the optimal contract may be less than or close to the maximum social surplus. © 2006 IEEE.


Li D.-Q.,Wuhan University | Qi X.-H.,Wuhan University | Phoon K.-K.,National University of Singapore | Zhang L.-M.,Hong Kong University of Science and Technology | Zhou C.-B.,Wuhan University
Structural Safety | Year: 2014

This paper studies the reliability of infinite slopes in the presence of spatially variable shear strength parameters that increase linearly with depth. The mean trend of the shear strength parameters increasing with depth is highlighted. The spatial variability in the undrained shear strength and the friction angle is modeled using random field theory. Infinite slope examples are presented to investigate the effect of spatial variability on the depth of critical slip line and the probability of failure. The results indicate that the mean trend of the shear strength parameters has a significant influence on clay slope reliability. The probability of failure will be overestimated if a linearly increasing trend underlying the shear strength parameters is ignored. The possibility of critical slip lines occurring at the bottom of the slope decreases considerably when the mean trend of undrained shear strength is considered. The linearly increasing mean trend of the friction angle has a considerable effect on the distribution of the critical failure depths of sandy slopes. The most likely critical slip line only lies at the bottom of the sandy slope under the special case of a constant mean trend. © 2013 Elsevier Ltd.
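A toy Monte Carlo version of the undrained infinite-slope check conveys the basic idea of such reliability calculations. Here a single unit-mean lognormal factor stands in for the paper's full spatially variable random field, the strength profile su(z) = su0 + su_grad*z mimics the linearly increasing mean trend, and all parameter values are ours, chosen only for illustration:

```python
# Toy Monte Carlo estimate of P(FS < 1) for an undrained infinite slope.
# FS = su / (gamma * z * sin(beta) * cos(beta)); su grows linearly with depth.
# A single lognormal variable replaces the paper's random field (assumption).
import math
import random

random.seed(0)

def prob_failure(trials=20000, beta_deg=20.0, gamma=18.0, depth=4.0,
                 su_mean0=15.0, su_grad=3.0, cov=0.3):
    beta = math.radians(beta_deg)
    tau = gamma * depth * math.sin(beta) * math.cos(beta)  # driving shear stress (kPa)
    sigma_ln = math.sqrt(math.log(1.0 + cov ** 2))
    mu_ln = -0.5 * sigma_ln ** 2                           # unit-mean lognormal
    fails = 0
    for _ in range(trials):
        su = (su_mean0 + su_grad * depth) * random.lognormvariate(mu_ln, sigma_ln)
        if su / tau < 1.0:
            fails += 1
    return fails / trials

pf = prob_failure()
print(round(pf, 3))
```

Ignoring the depth gradient (su_grad = 0) makes the simulated slope weaker at depth and raises the estimated probability of failure, which is the overestimation effect the abstract describes.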


Zhu X.,Jinan University | Xiang Y.,Hong Kong University of Science and Technology
Journal of the Mechanics and Physics of Solids | Year: 2014

We present a continuum framework for dislocation structure, energy and dynamics of dislocation arrays and low angle grain boundaries that are allowed to be nonplanar or nonequilibrium. In our continuum framework, we define a dislocation density potential function on the dislocation array surface or grain boundary to describe the orientation dependent continuous distribution of dislocations in a very simple and accurate way. The continuum formulations incorporate both the long-range dislocation interaction and the local dislocation line energy, and are derived from the discrete dislocation model. The continuum framework recovers the classical Read-Shockley energy formula when the long-range elastic fields of the low angle grain boundaries are canceled out. Applications of our continuum framework in this paper are focused on dislocation structures on static planar and nonplanar low angle grain boundaries and misfitting interfaces. We present two methods under our continuum framework for this purpose, including one based on Frank's formula.
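For reference, the Read-Shockley formula that the framework recovers gives the low-angle grain-boundary energy per unit area as gamma(theta) = gamma0 * theta * (A - ln theta), with theta the misorientation in radians and gamma0, A material constants. A quick numerical sketch (the constants below are illustrative, not fitted to any material):

```python
# Read-Shockley low-angle grain-boundary energy per unit area:
# gamma(theta) = gamma0 * theta * (A - ln(theta)), valid for small theta.
# gamma0 and A are illustrative placeholder constants.
import math

def read_shockley(theta, gamma0=1.0, A=0.3):
    return gamma0 * theta * (A - math.log(theta))

# Energy rises with misorientation in the low-angle regime.
for theta in (0.01, 0.05, 0.1):
    print(theta, round(read_shockley(theta), 4))
```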


Shi L.,Hong Kong University of Science and Technology | Xie L.,Nanyang Technological University
IEEE Transactions on Signal Processing | Year: 2012

We consider sensor power scheduling for estimating the state of a general high-order Gauss-Markov system. A sensor decides whether to use a high or low transmission power to communicate its local state estimate or raw measurement data with a remote estimator over a packet-dropping network. We construct the optimal sensor power schedule which minimizes the expected terminal estimation error covariance at the remote estimator under the constraint that the high transmission power can only be used m < T + 1 times, given the time-horizon from k = 0 to k = T. We also discuss how to extend the result to scheduling with multiple power levels. Simulation examples are then provided to demonstrate the results. © 1991-2012 IEEE.


Chasnov J.R.,Hong Kong University of Science and Technology
Theoretical Population Biology | Year: 2010

A fast algorithm for computing recombination is developed for model organisms with selection on haploids. Haplotype frequencies are transformed to marginal frequencies; random mating and recombination are computed; marginal frequencies are transformed back to haplotype frequencies. With L diallelic loci, this algorithm is theoretically a factor of a constant times (3/8)^L faster than standard computations with selection on diploids, and up to 16 recombining loci have been computed. This algorithm is then applied to model the opposing evolutionary forces of multilocus epistatic selection and recombination. Selection is assumed to favor haplotypes with specific alleles either all present or all absent. When the number of linked loci exceeds a critical value, a jump bifurcation occurs in the two-dimensional parameter space of the selection coefficient s and the recombination frequency r. The equilibrium solution jumps from high to low mean fitness with increasing r or decreasing s. These numerical results display an unexpected and dramatic nonlinear effect occurring in linkage models with a large number of loci. © 2010 Elsevier Inc. All rights reserved.
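The textbook two-locus deterministic recursion (shown here as orientation, not the paper's fast transform algorithm) is the basic recombination step such algorithms accelerate: linkage disequilibrium D = x_AB*x_ab - x_Ab*x_aB decays by a factor (1 - r) each generation of random mating:

```python
# One generation of random mating + recombination for two diallelic loci.
# x = (x_AB, x_Ab, x_aB, x_ab) haplotype frequencies; r = recombination freq.
def recombine(x, r):
    x_AB, x_Ab, x_aB, x_ab = x
    D = x_AB * x_ab - x_Ab * x_aB          # linkage disequilibrium
    return (x_AB - r * D, x_Ab + r * D, x_aB + r * D, x_ab - r * D)

x = (0.4, 0.1, 0.1, 0.4)                   # initial D = 0.16 - 0.01 = 0.15
for _ in range(3):                          # D shrinks by (1 - r) per generation
    x = recombine(x, r=0.5)
print([round(f, 4) for f in x])
```

With r = 0.5 (free recombination) D is halved every generation, so after three generations it equals 0.15 * (1/2)^3 = 0.01875.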


Yuan P.,South China Normal University | Ding C.,Hong Kong University of Science and Technology
Finite Fields and their Applications | Year: 2014

Permutation polynomials are an interesting subject of mathematics and have applications in other areas of mathematics and engineering. In this paper, we develop general theorems on permutation polynomials over finite fields. As a demonstration of the theorems, we present a number of classes of explicit permutation polynomials over Fq. © 2014 Elsevier Inc.
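For small prime fields, whether a given polynomial permutes F_p can be checked by brute force; for instance, the monomial x^k permutes F_q exactly when gcd(k, q - 1) = 1. A sketch (our own helper, not a construction from the paper):

```python
# Brute-force test of whether a polynomial permutes the prime field F_p.
# coeffs lists coefficients from constant term up to the leading term.
def is_permutation_poly(coeffs, p):
    def ev(x):
        acc = 0
        for c in reversed(coeffs):   # Horner's rule mod p
            acc = (acc * x + c) % p
        return acc
    return len({ev(x) for x in range(p)}) == p

# x^3 permutes F_5 since gcd(3, 4) = 1, but not F_7 since gcd(3, 6) = 3.
perm5 = is_permutation_poly([0, 0, 0, 1], 5)
perm7 = is_permutation_poly([0, 0, 0, 1], 7)
print(perm5, perm7)
```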


Zhang K.,Lawrence Berkeley National Laboratory | Kwok J.T.,Hong Kong University of Science and Technology
IEEE Transactions on Neural Networks | Year: 2010

Kernel (or similarity) matrices play a key role in many machine learning algorithms such as kernel methods, manifold learning, and dimension reduction. However, the cost of storing and manipulating the complete kernel matrix makes it infeasible for large problems. The Nyström method is a popular sampling-based low-rank approximation scheme for reducing the computational burden of handling large kernel matrices. In this paper, we analyze how the approximation quality of the Nyström method depends on the choice of landmark points, and in particular on the encoding power of the landmark points in summarizing the data. Our (non-probabilistic) error analysis justifies a "clustered Nyström method" that uses the k-means clustering centers as landmark points. Our algorithm can be applied to scale up a wide variety of algorithms that depend on the eigenvalue decomposition of the kernel matrix (or its variant), such as kernel principal component analysis, Laplacian eigenmaps, and spectral clustering, as well as those involving the kernel matrix inverse, such as least-squares support vector machines and Gaussian process regression. Extensive experiments demonstrate the competitive performance of our algorithm in both accuracy and efficiency. © 2010 IEEE.
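A minimal one-dimensional sketch of the clustered-Nyström idea, with a tiny hand-rolled k-means supplying the landmark points (the data and RBF kernel below are our own toy choices; the paper's analysis covers general kernels and landmark selections):

```python
# Clustered Nystrom sketch: approximate an RBF kernel matrix as
# K ~= C W^{-1} C^T, where the landmark points are k-means centers.
import math
import random

random.seed(1)

def rbf(a, b, s=1.0):
    return math.exp(-((a - b) ** 2) / (2 * s * s))

def kmeans_1d(xs, k, iters=25):
    """Tiny 1-D k-means; its centers serve as Nystrom landmarks."""
    centers = random.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# Data in two tight clumps; two landmarks summarize it well.
xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
c = kmeans_1d(xs, k=2)

W = [[rbf(a, b) for b in c] for a in c]           # 2x2 landmark kernel
det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
Winv = [[W[1][1] / det, -W[0][1] / det],
        [-W[1][0] / det, W[0][0] / det]]

def k_approx(x, y):
    cx = [rbf(x, cj) for cj in c]
    cy = [rbf(y, cj) for cj in c]
    return sum(cx[i] * Winv[i][j] * cy[j] for i in range(2) for j in range(2))

err = max(abs(k_approx(x, y) - rbf(x, y)) for x in xs for y in xs)
print(round(err, 4))   # small: landmarks at the clump centers encode the data
```

Because the k-means centers sit at the clump means, every data point is close to a landmark and the rank-2 approximation is accurate, which is the intuition behind the clustered landmark choice.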


Tan P.,National University of Singapore | Quan L.,Hong Kong University of Science and Technology | Zickler T.,Harvard University
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2011

Different materials reflect light in different ways, and this reflectance interacts with shape, lighting, and viewpoint to determine an object's image. Common materials exhibit diverse reflectance effects, and this is a significant source of difficulty for image analysis. One strategy for dealing with this diversity is to build computational tools that exploit reflectance symmetries, such as reciprocity and isotropy, that are exhibited by broad classes of materials. By building tools that exploit these symmetries, one can create vision systems that are more likely to succeed in real-world, non-Lambertian environments. In this paper, we develop a framework for representing and exploiting reflectance symmetries. We analyze the conditions for distinct surface points to have local view and lighting conditions that are equivalent under these symmetries, and we represent these conditions in terms of the geometric structure they induce on the Gaussian sphere and its abstraction, the projective plane. We also study the behavior of these structures under perturbations of surface shape and explore applications to both calibrated and uncalibrated photometric stereo. © 2011 IEEE.


Tumuluru V.K.,Hong Kong University of Science and Technology | Wang P.,Nanyang Technological University | Niyato D.,Nanyang Technological University | Song W.,University of New Brunswick
IEEE Transactions on Vehicular Technology | Year: 2012

Dynamic spectrum access (DSA) is an important design aspect for cognitive radio networks. Most existing DSA schemes govern unlicensed user (i.e., secondary user, SU) traffic in a licensed spectrum without compromising the transmissions of the licensed users, and typically treat all the unlicensed users equally. In this paper, prioritized unlicensed user traffic is considered. Specifically, the unlicensed user traffic is divided into two priority classes (i.e., high and low priority). We consider a general setting in which the licensed users' transmissions can happen at any time instant. Therefore, the DSA scheme should perform spectrum handoff to protect the licensed user's transmission. Different DSA schemes (i.e., centralized and distributed) are considered to manage the prioritized unlicensed user traffic. These DSA schemes use different handoff mechanisms for the two classes of unlicensed users. We also study the impact of subchannel reservation for high-priority SUs in both DSA schemes. Each of the proposed DSA schemes is analyzed using a continuous-time Markov chain. For performance measures, we derive the blocking probability, the probability of forced termination, the call completion rate, and the mean handoff delay for both high- and low-priority unlicensed users. The numerical results are verified using simulations. © 2012 IEEE.
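As a flavour of the loss-system quantities that fall out of such continuous-time Markov chains, the classical Erlang-B formula gives the blocking probability of an M/M/c/c system; this is an illustrative special case, not the paper's prioritized two-class model:

```python
# Erlang-B blocking probability of an M/M/c/c loss system:
# B(c, a) = (a^c / c!) / sum_{k=0}^{c} a^k / k!, with offered load a in Erlangs.
import math

def erlang_b(c, a):
    return (a ** c / math.factorial(c)) / \
           sum(a ** k / math.factorial(k) for k in range(c + 1))

b2 = erlang_b(2, 1.0)   # 2 subchannels, 1 Erlang offered
b3 = erlang_b(3, 1.0)   # an extra (e.g. reserved) subchannel lowers blocking
print(round(b2, 4), round(b3, 4))
```

Adding a subchannel drops the blocking probability sharply, which mirrors the effect of subchannel reservation for the high-priority class studied in the paper.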


Quadeer A.A.,King Fahd University of Petroleum and Minerals | Al-Naffouri T.Y.,Hong Kong University of Science and Technology | Al-Naffouri T.Y.,King Abdullah University of Science and Technology
IEEE Transactions on Signal Processing | Year: 2012

Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity. © 1991-2012 IEEE.
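For contrast with the greedy matching-pursuit baselines the abstract mentions, a bare-bones orthogonal matching pursuit looks like the following (a generic greedy method on a toy system of our own, not the authors' Bayesian algorithm):

```python
# Orthogonal matching pursuit: greedily pick the column most correlated with
# the residual, re-fit by least squares on the chosen support, repeat.
def solve(M, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c2 in range(col, n + 1):
                A[r][c2] -= f * A[col][c2]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def omp(A, y, k):
    m, n = len(A), len(A[0])
    support, r = [], y[:]
    for _ in range(k):
        j = max((j for j in range(n) if j not in support),
                key=lambda j: abs(sum(A[i][j] * r[i] for i in range(m))))
        support.append(j)
        # least squares on the selected columns via the normal equations
        G = [[sum(A[i][a] * A[i][b] for i in range(m)) for b in support]
             for a in support]
        rhs = [sum(A[i][a] * y[i] for i in range(m)) for a in support]
        coef = solve(G, rhs)
        r = [y[i] - sum(A[i][s] * coef[t] for t, s in enumerate(support))
             for i in range(m)]
    x = [0.0] * n
    for t, s in enumerate(support):
        x[s] = coef[t]
    return x

# Toy sensing matrix with (near) unit-norm columns; y = 3*col0 + 0.5*col2.
A = [[1, 0, 0, 0, 0.7071],
     [0, 1, 0, 0, 0.7071],
     [0, 0, 1, 0, 0.0],
     [0, 0, 0, 1, 0.0]]
y = [3.0, 0.0, 0.5, 0.0]
x = omp(A, y, k=2)
print([round(v, 4) for v in x])
```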


Liu Y.,Third Security | Ni L.,Hong Kong University of Science and Technology | Hu C.,Third Security
IEEE Journal on Selected Areas in Communications | Year: 2012

Topology control is an effective method to improve the energy efficiency and increase the communication capacity of Wireless Sensor Networks (WSNs). Traditional topology control algorithms are based on a deterministic model that fails to consider lossy links, which provide only probabilistic connectivity. Noticing this fact, we propose a novel probabilistic network model. We measure the network connectivity using network reachability, defined as the minimum, over all pairs of nodes in the network, of the upper limit of the end-to-end delivery ratio. We attempt to find a minimal transmission power for each node while the network reachability stays above a given application-specified threshold. The whole procedure is called probabilistic topology control (PTC). We prove that PTC is NP-hard and propose a fully distributed algorithm called BRASP. We prove that BRASP has guaranteed performance and that its communication overhead is O(|E| + |V|). The experimental results show that the network energy efficiency can be improved by up to 250% and the average node degree is reduced by 50%. © 2012 IEEE.
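With independent lossy links, the end-to-end delivery ratio of a multi-hop path is simply the product of the per-link success probabilities; this is the kind of quantity a reachability threshold constrains. A toy sketch (the link probabilities and threshold are ours, not from the paper):

```python
# End-to-end delivery ratio of a path with independent lossy links:
# the product of per-link success probabilities.
def path_delivery_ratio(link_ps):
    p = 1.0
    for q in link_ps:
        p *= q
    return p

# Raising transmit power typically raises per-link success probability;
# the goal is minimal power while the end-to-end ratio clears a threshold.
low_power = path_delivery_ratio([0.9, 0.9, 0.9])       # 0.9^3 = 0.729
high_power = path_delivery_ratio([0.99, 0.99, 0.99])   # ~0.9703
threshold = 0.95
print(low_power >= threshold, high_power >= threshold)
```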


Yuan P.,South China Normal University | Ding C.,Hong Kong University of Science and Technology
Finite Fields and their Applications | Year: 2011

Using a lemma proved by Akbary, Ghioca, and Wang, we derive several theorems on permutation polynomials over finite fields. These theorems give not only a unified treatment of some earlier constructions of permutation polynomials, but also new specific permutation polynomials over Fq. A number of earlier theorems and constructions of permutation polynomials are generalized. The results presented in this paper demonstrate the power of this lemma when it is employed together with other techniques. © 2011 Elsevier Inc. All Rights Reserved.


Ho E.S.L.,University of Edinburgh | Komura T.,University of Edinburgh | Tai C.-L.,Hong Kong University of Science and Technology
ACM Transactions on Graphics | Year: 2010

This paper presents a new method for editing and retargeting motions that involve close interactions between body parts of single or multiple articulated characters, such as dancing, wrestling, and sword fighting, or between characters and a restricted environment, such as getting into a car. In such motions, the implicit spatial relationships between body parts/objects are important for capturing the scene semantics. We introduce a simple structure called an interaction mesh to represent such spatial relationships. By minimizing the local deformation of the interaction meshes of animation frames, such relationships are preserved during motion editing while reducing the number of inappropriate interpenetrations. The interaction mesh representation is general and applicable to various kinds of close interactions. It also works well for interactions involving contacts and tangles as well as those without any contacts. The method is computationally efficient, allowing real-time character control. We demonstrate its effectiveness and versatility in synthesizing a wide variety of motions with close interactions. © 2010 ACM.


Ng C.Y.,Hong Kong University of Science and Technology
Applied Soft Computing Journal | Year: 2016

With the growing general awareness of the need to protect the environment as well as the increasingly stringent regulatory requirements imposed by various national and cross-national bodies, manufacturers have to minimise the environmental impacts of their products. Environmental considerations have therefore become a new key criterion for evaluating design alternatives during the product development stage. To facilitate non-Life Cycle Assessment (LCA) experts, such as most product designers, in evaluating the design alternatives in terms of environmental friendliness, this paper introduces a decision-making mechanism that combines the multiple criteria decision making (MCDM) approaches with LCA methodology. This evidential reasoning-based approach is a fast-track and objective tool which ranks the available design alternatives according to their potential environmental impacts. The environmental impacts of design alternatives assessed by the LCA are used for the weight elicitation processes of the proposed approach. A case application is conducted to illustrate the use of the proposed method to evaluate the environmental performances of design alternatives. © 2016 Elsevier B.V. All rights reserved.


Xu J.,University of Texas at Austin | Zhang J.,Hong Kong University of Science and Technology | Andrews J.G.,University of Texas at Austin
IEEE Transactions on Wireless Communications | Year: 2011

The Wyner model has been widely used to model and analyze cellular networks due to its simplicity and analytical tractability. Its key aspects include fixed user locations and a deterministic and homogeneous interference intensity. While clearly a significant simplification of a real cellular system, which has random user locations and interference levels that vary by several orders of magnitude over a cell, a common presumption by theorists is that the Wyner model nevertheless captures the essential aspects of cellular interactions. But is this true? To answer this question, we compare the Wyner model to a model that includes random user locations and fading. We consider both uplink and downlink transmissions and both outage-based and average-based metrics. For the uplink, for both metrics, we conclude that the Wyner model is in fact quite accurate for systems with a sufficient number of simultaneous users, e.g., a CDMA system. Conversely, it is broadly inaccurate otherwise. Turning to the downlink, the Wyner model becomes inaccurate even for systems with a large number of simultaneous users. In addition, we derive an approximation for the main parameter in the Wyner model, the interference intensity term, which depends on the path loss exponent. © 2006 IEEE.


Feng J.,Zhejiang University of Technology | Qin Z.,Zhejiang University of Technology | Yao S.,Hong Kong University of Science and Technology
Langmuir | Year: 2012

The coalescence-induced condensate drop motion on some superhydrophobic surfaces (SHSs) has attracted increasing attention because of its potential applications in sustained dropwise condensation, water collection, anti-icing, and anticorrosion. However, investigation of the mechanism of such self-propelled motion, including the factors for designing such SHSs, is still limited. In this article, we fabricated a series of superhydrophobic copper surfaces with nanoribbon structures using wet chemical oxidation followed by fluorination treatment. We then systematically studied the influence of surface roughness and the chemical properties of the as-prepared surfaces on the spontaneous motion of condensate drops. We quantified the "frequency" of the condensate drop motion based on microscopic sequential images and showed that the trend of this frequency varied with the nanoribbon structure and extent of fluorination. More obvious spontaneous condensate drop motion was observed on surfaces with a higher extent of fluorination and nanostructures possessing sufficiently narrow spacing and higher perpendicularity. We attribute this enhanced drop mobility to the stable Cassie state of condensate drops in the dynamic dropwise condensation process, which is determined by the nanoscale morphology and local surface energy. © 2012 American Chemical Society.


Wu R.,University of Science and Technology of China | Sin J.K.O.,Hong Kong University of Science and Technology
IEEE Transactions on Power Electronics | Year: 2012

In this paper, high-efficiency silicon-embedded coreless coupled inductors are demonstrated for power supply on chip applications. The embedded coupled inductors have two interleaved thick inductor coils embedded in the bottom layer of the Si substrate and four copper vias formed in the top substrate layer. The embedded coupled inductors can be stacked underneath the active circuitry for compact on-chip integration, while small resistances can be achieved with the thick embedded coils, which lead to high efficiency. As a demonstration, embedded coupled inductors with a small area of 0.5 mm² were designed and fabricated for the on-chip dc-dc converter with the highest reported efficiency. The fabricated embedded coupled inductors show a much higher efficiency of 93% compared to the 84% efficiency of the originally used on-substrate coupled inductors, allowing the total converter loss to be reduced by 38% and the converter efficiency to be improved from 78% to 85%. © 2012 IEEE. Source
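The reported loss reduction can be cross-checked with a line of arithmetic: for a fixed output power, the loss per delivered watt is 1/η − 1, so moving the converter efficiency η from 0.78 to 0.85 cuts losses by roughly the stated 38%. A quick check in plain Python (no project code assumed):

```python
# Converter loss relative to a fixed output power: P_loss = P_out * (1/eta - 1)
def loss_per_watt_out(eta):
    return 1.0 / eta - 1.0

loss_old = loss_per_watt_out(0.78)   # ~0.282 W of loss per W delivered
loss_new = loss_per_watt_out(0.85)   # ~0.176 W of loss per W delivered

reduction = 1.0 - loss_new / loss_old
print(f"loss reduction: {reduction:.1%}")   # close to the reported 38%
```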


Bian L.,CAS Institute of Theoretical Physics | Liu T.,Hong Kong University of Science and Technology | Shu J.,CAS Institute of Theoretical Physics
Physical Review Letters | Year: 2015

We present a class of cancellation conditions for suppressing the total contributions of Barr-Zee diagrams to the electron electric dipole moment (eEDM). Such a cancellation is of particular significance after the new eEDM upper limit was released by the ACME Collaboration, which strongly constrains the allowed magnitude of CP violation in Higgs couplings and hence the feasibility of electroweak baryogenesis (EWBG). Explicitly, if both the CP-odd Higgs-photon-photon (Z boson) and the CP-odd Higgs-electron-positron couplings are turned on, a cancellation may occur either between the contributions of a CP-mixing Higgs boson, with the other Higgs bosons being decoupled, or between the contributions of CP-even and CP-odd Higgs bosons. With a cancellation, large CP violation in the Higgs sector is still allowed, yielding successful EWBG. The reopened parameter regions would be probed by future neutron, mercury EDM measurements, and direct measurements of Higgs CP properties at the Large Hadron Collider Run II and future colliders. © 2015 American Physical Society. Source


Flowerdew L.,Hong Kong University of Science and Technology
ReCALL | Year: 2012

This paper illustrates how a freely available online corpus has been exploited in a module on teaching business letters covering the following four speech acts (functions) commonly found in business letters: invitations, requests, complaints and refusals. It is proposed that different strategies are required for teaching potentially non-face-threatening (invitations, requests) and face-threatening (complaints, refusals) speech acts. The hands-on pedagogic activities follow the 'guided inductive approach' advocated by Johansson (2009) and draw on practices and strategies covered in the literature on using corpora in language learning and teaching, viz. the need for 'pedagogic mediation', and the 'noticing' hypothesis from second language acquisition studies. © 2012 European Association for Computer Assisted Language Learning. Source


Rowell C.,Hong Kong University of Science and Technology | Lam E.Y.,University of Hong Kong
IEEE Transactions on Antennas and Propagation | Year: 2012

The performance characteristics of the capacitive slot, or a slot placed between the feed and ground connections, in a planar inverted-F antenna (PIFA) are comprehensively analyzed. The PIFA capacitive slot behavior is measured inside a two-antenna system within a mobile phone, where the first antenna is a multiple-band PIFA and the second antenna is a higher-frequency-band PIFA directly overlapping with the first antenna's higher frequency band. The dual-band PIFA in this paper is designed to be resonant in the quad-band GSM+3G/4G, and the second PIFA is resonant in the 3G/4G frequency bands. The capacitive slot has three types of behavior: it can affect the matching of existing frequency resonances, induce another frequency resonance, and improve the isolation between the two antennas. Together with optimal antenna ground and feed placement, the capacitive slot can act as a notched bandstop filter to decrease the S21 mutual coupling between the two antennas by over 20 dB and decrease the envelope correlation by almost one order of magnitude. © 1963-2012 IEEE. Source


McFerran B.,University of Michigan | Mukhopadhyay A.,Hong Kong University of Science and Technology
Psychological Science | Year: 2013

Obesity is a major public health problem, but despite much research into its causes, scientists have largely neglected to examine laypeople's personal beliefs about it. Such naive beliefs are important because they guide actual goal-directed behaviors. In a series of studies across five countries on three continents, we found that people mainly believed either that obesity is caused by a lack of exercise or that it is caused by a poor diet. Moreover, laypeople who indicted a lack of exercise were more likely to actually be overweight than were those who implicated a poor diet. This effect held even after controlling for several known correlates of body mass index (BMI), thereby explaining previously unexplained variance. We also experimentally demonstrated the mechanism underlying this effect: People who implicated insufficient exercise tended to consume more food than did those who indicted a poor diet. These results suggest that obesity has an important, pervasive, and hitherto overlooked psychological antecedent. © The Author(s) 2013. Source


Zhang S.,Huawei | Lau V.K.N.,Hong Kong University of Science and Technology
IEEE Transactions on Wireless Communications | Year: 2011

In this paper, we consider the problem of multi-relay selection for multi-stream cooperative MIMO systems with M relay nodes. Traditionally, relay selection approaches are primarily focused on selecting one relay node to improve the transmission reliability given a single-antenna destination node. As such, in the cooperative phase whereby both the source and the selected relay nodes transmit to the destination node, it is only feasible to exploit cooperative spatial diversity (for example by means of distributed space time coding). For wireless systems with a multi-antenna destination node, in the cooperative phase it is possible to opportunistically transmit multiple data streams to the destination node by utilizing multiple relay nodes. Therefore, we propose a low overhead multi-relay selection protocol to support multi-stream cooperative communications. In addition, we derive the asymptotic performance results at high SNR for the proposed scheme and discuss the diversity- multiplexing tradeoff as well as the throughput-reliability tradeoff. From these results, we show that the proposed multi-stream cooperative communication scheme achieves lower outage probability compared to existing baseline schemes. © 2011 IEEE. Source


Du S.,Hong Kong University of Science and Technology
Physical Review A - Atomic, Molecular, and Optical Physics | Year: 2011

We theoretically study nonlinear optical frequency conversion with time-frequency entangled paired photons whose sum frequency is on two-photon resonance of an atomic ensemble. Assisted by a strong coupling laser, two paired photons with wide spectrum are converted into a single monochromatic photon. The on-resonance nonlinear process is made possible due to the electromagnetically induced transparency that not only eliminates the on-resonance absorption but also enhances the nonlinear interaction between the single photons and atoms. Compared to this quantum-nonlinear conversion, the classical counterpart, single-photon counts from accidental two-photon coincidences, has a wide spectrum and experiences large absorption. As a result, the system can be used as an efficient two-photon quantum correlator in which the classical accidental coincidences can be suppressed. We perform numerical simulations based on a Rb atomic vapor cell with realistic operating parameters. © 2011 American Physical Society. Source


Sun L.,Alcatel - Lucent | McKay M.R.,Hong Kong University of Science and Technology
IEEE Transactions on Wireless Communications | Year: 2011

We propose new low complexity opportunistic relaying strategies for multiple-antenna relay networks. Assuming that a source communicates with a destination, both equipped with M antennas, assisted by K single-antenna relay terminals using an amplify-and-forward half-duplex protocol, we propose a new transmission strategy which employs linear zero-forcing transmission and reception. Integrated with this transmission strategy, we propose two low complexity opportunistic relay selection algorithms, referred to as the Maximum Sum Rate Relay Selection (MSR) and Greedy Semi-Orthogonal Relay Selection (GSO) algorithms, which select only a few very important relays to share the total power. For the GSO algorithm, we present a theoretical analysis of the sum capacity as K grows large, which is shown to be (M/2) log log K + O(1). This result is also shown to coincide with a fundamental cut-set upper bound on the sum capacity of MIMO networks with opportunistic relaying which we derive; thereby establishing a new exact scaling law for such networks, as well as demonstrating the asymptotic optimality of our proposed low complexity approach. Our numerical studies also indicate that our proposed opportunistic relaying techniques yield significant capacity benefits over the conventional approach without opportunistic selection, even when the number of relays is not large. © 2011 IEEE. Source


Leung S.,Hong Kong University of Science and Technology
Journal of Computational Physics | Year: 2011

We propose efficient Eulerian methods for approximating the finite-time Lyapunov exponent (FTLE). The idea is to compute the related flow map using the Level Set Method and the Liouville equation. There are several advantages of the proposed approach. Unlike the usual Lagrangian-type computations, the resulting method requires the velocity field defined only at discrete locations. No interpolation of the velocity field is needed. Also, the method automatically stops a particle trajectory in the case when the ray hits the boundary of the computational domain. The computational complexity of the algorithm is O(Δx^-(d+1)), with d the dimension of the physical space. Since there are the same number of mesh points in the x-t space, the computational complexity of the proposed Eulerian approach is optimal in the sense that each grid point is visited for only O(1) time. We also extend the algorithm to compute the FTLE on a co-dimension one manifold. The resulting algorithm does not require computation on any local coordinate system and is simple to implement even for an evolving manifold. © 2011 Elsevier Inc. Source
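The paper's contribution is the Eulerian computation of the flow map; the quantity being approximated is the standard FTLE, σ = (1/|T|) ln σ_max(∇Φ_T), the logarithm of the largest singular value of the flow-map Jacobian. As a reference, here is a minimal Lagrangian sketch (not the paper's method) on a linear saddle flow v = (x, −y), whose FTLE is exactly 1 for any integration time:

```python
import numpy as np

# FTLE = (1/|T|) * ln(largest singular value of the flow-map Jacobian).
# Reference check on the saddle flow v = (x, -y): its flow map is
# (x0*e^T, y0*e^(-T)), so the FTLE equals 1 for every T > 0.

def flow_map(p, T, n=1000):
    """Integrate dp/dt = (x, -y) with fixed-step RK4."""
    dt = T / n
    x, y = p
    v = lambda x, y: (x, -y)
    for _ in range(n):
        k1 = v(x, y)
        k2 = v(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
        k3 = v(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
        k4 = v(x + dt*k3[0], y + dt*k3[1])
        x += dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        y += dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
    return np.array([x, y])

def ftle(p0, T, h=1e-4):
    """Finite-difference Jacobian of the flow map, then largest singular value."""
    J = np.zeros((2, 2))
    for j, e in enumerate(np.eye(2)):
        J[:, j] = (flow_map(p0 + h*e, T) - flow_map(p0 - h*e, T)) / (2*h)
    return np.log(np.linalg.svd(J, compute_uv=False)[0]) / T

print(ftle(np.array([0.3, 0.7]), T=2.0))  # ~1.0
```

The Eulerian scheme of the paper recovers the same flow map on a grid without tracing individual trajectories, which is where the O(Δx^-(d+1)) complexity comes from.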


Mak H.-Y.,Hong Kong University of Science and Technology | Shen Z.-J.,University of California at Berkeley
IIE Transactions (Institute of Industrial Engineers) | Year: 2012

Recent research has pointed out that the optimal strategies to mitigate supply disruptions and demand uncertainty are often mirror images of each other. In particular, risk diversification is favorable under the threat of disruptions and risk pooling is favorable under demand uncertainty. This article studies how dynamic sourcing in supply chain design provides partial benefits of both strategies. Optimization models are formulated for supply chain network design with dynamic sourcing under the risk of temporally dependent and temporally independent disruptions of facilities. Using computational experiments, it is shown that supply chain networks that allow small to moderate degrees of dynamic sourcing can be very robust against both disruptions and demand uncertainty. Insights are attained on the optimal degree of dynamic sourcing under different conditions. © 2012 "IIE". Source


Chan A.L.S.,Hong Kong University of Science and Technology
Energy and Buildings | Year: 2011

In the past decade, the application of phase change material (PCM) wallboard in building façade has gained wide attention around the world. For successful application of PCM integrated building façade, a number of crucial factors including thermo-physical properties of PCM, outdoor climate condition, operating schedule of building, investment cost and tariff structure should be taken into account. In this study, a typical residential flat with PCM integrated external walls constructed in the living room and bedroom was modeled and the thermal/energy performance was investigated. The effect of PCM integrated wall's orientation was also evaluated. Through computer simulations, it was found that the living room of a residential flat with west-facing integrated external wall could perform better. It gave a comparatively higher decrease in the interior surface temperature up to a maximum of 4.14%. Moreover, an annual energy saving of 2.9% in air-conditioning system was achieved. However, a long cost payback period of 91 years makes the PCM integrated building façade economically infeasible. On the other hand, the energy payback period was estimated as 23.4 years for this building case, indicating that the energy saving can surpass the embodied energy of PCM wallboard and mitigate the greenhouse gas emission. © 2011 Elsevier B.V. All rights reserved. Source


Chan A.L.S.,Hong Kong University of Science and Technology
Energy and Buildings | Year: 2011

Concern over climate change leads to a growing demand for minimization of energy use. As buildings are one of the largest energy-consuming sectors, it is essential to study the impact of climate change on building energy performance. In this regard, building energy simulation software is a useful tool. A set of appropriate typical weather files is one of the key factors towards successful building energy simulation. This paper reports the work of developing a set of weather data files for subtropical Hong Kong, taking into account the effect of future climate change. Projected monthly mean climate changes from a selected General Circulation Model for three future periods under two emission scenarios were integrated into an existing typical meteorological year weather file by a morphing method. Through this work, six sets of future weather files for subtropical Hong Kong were produced. A typical office building and a residential flat were modeled using the building simulation program EnergyPlus. Hourly building energy simulations were carried out. The simulated results indicate that there will be substantial increase in A/C energy consumption under the impact of future climate change, ranging from 2.6% to 14.3% and from 3.7% to 24% for the office building and residential flat, respectively. © 2011 Elsevier B.V. All rights reserved. Source
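The "morphing" method referred to here, in its usual formulation (Belcher, Hacker, and Powell), adjusts each present-day hourly value with a monthly shift and stretch: T′ = T + Δ_m + α_m (T − ⟨T⟩_m), where ⟨T⟩_m is the present-day monthly mean. A minimal sketch with illustrative adjustment factors (the Δ and α values below are made up, not taken from the paper):

```python
# Morph a present-day hourly temperature series with a monthly shift (delta)
# and stretch (alpha): T' = T + delta[m] + alpha[m] * (T - monthly_mean[m]).
# The delta/alpha values below are illustrative placeholders only.

def morph(hourly, month_of_hour, delta, alpha):
    # monthly means of the present-day series
    buckets = {}
    for t, m in zip(hourly, month_of_hour):
        buckets.setdefault(m, []).append(t)
    means = {m: sum(v) / len(v) for m, v in buckets.items()}
    return [t + delta[m] + alpha[m] * (t - means[m])
            for t, m in zip(hourly, month_of_hour)]

hours = [28.0, 30.0, 26.0, 27.0]   # present-day temperatures (deg C)
months = [7, 7, 7, 7]              # all hours in July here
future = morph(hours, months, delta={7: 1.5}, alpha={7: 0.1})
print(future)
```

Note that the stretch term preserves the monthly mean, so the morphed monthly mean rises by exactly the projected shift Δ_m.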


Dangkulwanich M.,Howard Hughes Medical Institute | Ishibashi T.,Howard Hughes Medical Institute | Ishibashi T.,Hong Kong University of Science and Technology | Bintu L.,Howard Hughes Medical Institute | And 3 more authors.
Chemical Reviews | Year: 2014

This review covers the various aspects of transcription that have been addressed using methods of single-molecule detection and manipulation. Whereas single-subunit viral polymerases such as T7 and SP6 RNAP can start transcription at a promoter region without additional cofactors, multisubunit bacterial and eukaryotic RNA polymerases (RNAPs) require transcription factors that aid the enzyme to recognize and bind to the promoter. Through the combination of single-molecule manipulation and single-molecule fluorescence methods in the same experiment, it should be possible to follow the internal dynamics of the polymerase or the binding of a regulatory factor and simultaneously monitor the mechanical variables of position, force, and torque. The result of these efforts will be a multidimensional picture of transcription that will provide crucial information about the relative timing of various molecular events and therefore reveal their causal connection. Source


Huang K.,University of Hong Kong | Lau V.K.N.,Hong Kong University of Science and Technology
IEEE Transactions on Wireless Communications | Year: 2014

Microwave power transfer (MPT) delivers energy wirelessly from stations called power beacons (PBs) to mobile devices by microwave radiation. This provides mobiles practically infinite battery lives and eliminates the need of power cords and chargers. To enable MPT for mobile recharging, this paper proposes a new network architecture that overlays an uplink cellular network with randomly deployed PBs for powering mobiles, called a hybrid network. The deployment of the hybrid network under an outage constraint on data links is investigated based on a stochastic-geometry model where single-antenna base stations (BSs) and PBs form independent homogeneous Poisson point processes (PPPs) with densities λ_b and λ_p, respectively, and single-antenna mobiles are uniformly distributed in Voronoi cells generated by BSs. In this model, mobiles and PBs fix their transmission power at p and q, respectively; a PB either radiates isotropically, called isotropic MPT, or directs energy towards target mobiles by beamforming, called directed MPT. The model is used to derive the tradeoffs between the network parameters (p, λ_b, q, λ_p) under the outage constraint. First, consider the deployment of the cellular network. It is proved that the outage constraint is satisfied so long as the product pλ_b^(α/2) is above a given threshold, where α is the path-loss exponent. Next, consider the deployment of the hybrid network assuming infinite energy storage at mobiles. It is shown that for isotropic MPT, the product qλ_pλ_b^(α/2) has to be above a given threshold so that PBs are sufficiently dense; for directed MPT, z_m·qλ_pλ_b^(α/2), with z_m denoting the array gain, should exceed a different threshold to ensure short distances between PBs and their target mobiles. Furthermore, similar results are derived for the case of mobiles having small energy storage. © 2002-2012 IEEE. Source
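The stochastic-geometry setup is easy to reproduce numerically: a homogeneous PPP on a window is a Poisson-distributed number of points placed uniformly at random. The sketch below (illustrative only, not the paper's derivation) estimates the mean distance from a mobile to its nearest power beacon and compares it with the closed form 1/(2√λ), which is why denser PBs shorten transfer distances in the directed-MPT result:

```python
import math
import numpy as np

# Homogeneous PPP on a square window: draw a Poisson-distributed count,
# then place the points uniformly at random.
def ppp(density, side, rng):
    n = rng.poisson(density * side * side)
    return rng.uniform(0.0, side, size=(n, 2))

def mean_nearest_pb_distance(lam_p, side=20.0, trials=200):
    """Monte Carlo estimate of the mean distance from a mobile at the window
    centre to its nearest power beacon (PB), with PBs a PPP of density lam_p."""
    rng = np.random.default_rng(0)
    mobile = np.array([side / 2, side / 2])
    dists = []
    for _ in range(trials):
        pbs = ppp(lam_p, side, rng)
        if len(pbs):
            dists.append(np.linalg.norm(pbs - mobile, axis=1).min())
    return float(np.mean(dists))

# For a homogeneous PPP the nearest-point distance is Rayleigh distributed
# with mean 1/(2*sqrt(lambda)); the simulation should track this closely.
for lam in (0.5, 2.0):
    print(lam, mean_nearest_pb_distance(lam), 1 / (2 * math.sqrt(lam)))
```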


Huang Y.,Hong Kong Baptist University | Palomar D.P.,Hong Kong University of Science and Technology
IEEE Transactions on Signal Processing | Year: 2014

Quadratically constrained quadratic programming (QCQP) with double-sided constraints has plenty of applications in signal processing as have been addressed in recent years. QCQP problems are hard to solve, in general, and they are typically approached by solving a semidefinite programming (SDP) relaxation followed by a postprocessing procedure. Existing postprocessing schemes include Gaussian randomization to generate an approximate solution, rank reduction procedure (the so-called purification), and some specific rank-one matrix decomposition techniques to yield a globally optimal solution. In this paper, we propose several randomized postprocessing methods to output not an approximate solution but a globally optimal solution for some solvable instances of the double-sided QCQP (i.e., instances with a small number of constraints). We illustrate their applicability in robust receive beamforming, radar optimal code design, and broadcast beamforming for multiuser communications. As a byproduct, we derive an alternative (shorter) proof for the Sturm-Zhang rank-one matrix decomposition theorem. © 2014 IEEE. Source


Susarla A.,University of Washington | Subramanyam R.,University of Illinois at Urbana - Champaign | Karhade P.,Hong Kong University of Science and Technology
Information Systems Research | Year: 2010

The complexity and scope of outsourced information technology (IT) demands relationship-specific investments from vendors, which, when combined with contract incompleteness, may result in underinvestment and inefficient bargaining, referred to as the holdup problem. Using a unique data set of over 100 IT outsourcing contracts, we examine whether contract extensiveness, i.e., the extent to which firms and vendors can foresee contingencies when designing contracts for outsourced IT services, can alleviate holdup. While extensively detailed contracts are likely to include a greater breadth of activities outsourced to a vendor, task complexity makes it difficult to draft extensive contracts. Furthermore, extensive contracts may still be incomplete with respect to enforcement. We then examine the role of nonprice contractual provisions, contract duration, and extendibility terms, which give firms an option to extend the contract to limit the likelihood of holdup. We also validate the ex post efficiency of contract design choices by examining renewals of contracting agreements. © 2010 INFORMS. Source


Hong L.J.,Hong Kong University of Science and Technology | Liu G.,City University of Hong Kong
Operations Research | Year: 2010

A probability is the expectation of an indicator function. However, the standard pathwise sensitivity estimation approach, which interchanges the differentiation and expectation, cannot be directly applied because the indicator function is discontinuous. In this paper, we design a pathwise sensitivity estimator for probability functions based on a result of Hong [Hong, L. J. 2009. Estimating quantile sensitivities. Oper. Res. 57(1) 118-130]. We show that the estimator is consistent and follows a central limit theorem for simulation outputs from both terminating and steady-state simulations, and the optimal rate of convergence of the estimator is n^(-2/5), where n is the sample size. We further demonstrate how to use importance sampling to accelerate the rate of convergence of the estimator to n^(-1/2), which is the typical rate of convergence for statistical estimation. We illustrate the performances of our estimators and compare them to other well-known estimators through several examples. © 2010 INFORMS. Source


Ji R.,CAS Institute of Semiconductors | Xu J.,Hong Kong University of Science and Technology | Yang L.,CAS Institute of Semiconductors
IEEE Photonics Technology Letters | Year: 2013

We demonstrate a five-port optical router that is suitable for large-scale photonic networks-on-chip. The optical router is designed to passively route the optical signal travelling in one direction and actively route the optical signal making a turn. In the case that an XY dimension-order routing is used, the passive routing feature guarantees that the maximum power consumption to route the data through the network is a constant that is independent of the network size. The fabricated device has an efficient footprint of ∼460 × 1000 μm². The routing functionality of the device is verified by using a 12.5-Gbit/s optical signal. The capability of multiwavelength routing for the optical router is also explored and discussed. © 1989-2012 IEEE. Source


Zhu X.,Tianjin University | Chen Z.,Hong Kong University of Science and Technology | Tang C.,Tianjin University
Optics Letters | Year: 2013

In optical metrology, state-of-the-art algorithms for background and noise removal of fringe patterns are based on space-frequency analysis. In this Letter, an approach based on variational image decomposition is proposed to remove background and noise from a fringe pattern simultaneously. In the proposed method, a fringe image is directly decomposed into three components: the first containing the background, the second the fringes, and the third the noise; these components are described in different function spaces and are solved for by minimization of a functional. A simple technical process involved in the minimization algorithm improves the convergence performance. The proposed approach is verified with simulated and experimental fringe patterns. © 2013 Optical Society of America. Source


Lin F.,Hong Kong University of Science and Technology | Zhou Y.,University of Western Sydney
Artificial Intelligence | Year: 2011

We first embed Pearce's equilibrium logic and Ferraris's propositional general logic programs in Lin and Shoham's logic of GK, a nonmonotonic modal logic that has been shown to include as special cases both Reiter's default logic in the propositional case and Moore's autoepistemic logic. From this embedding, we obtain a mapping from Ferraris's propositional general logic programs to circumscription, and show that this mapping can be used to check the strong equivalence between two propositional logic programs in classical logic. We also show that Ferraris's propositional general logic programs can be extended to the first-order case, and our mapping from Ferraris's propositional general logic programs to circumscription can be extended to the first-order case as well to provide a semantics for these first-order general logic programs. © 2010 Elsevier B.V. All rights reserved. Source
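The strong-equivalence check described here maps general programs into classical logic; an equivalent and easily programmable characterization (Lifschitz, Pearce, and Valverde 2001) compares models in the logic of here-and-there (HT). A self-contained sketch of that check; the formula encoding and the example programs are ours, not from the paper:

```python
from itertools import combinations

# Propositional formulas as nested tuples:
#   ('atom', 'p'), ('bot',), ('and', f, g), ('or', f, g), ('imp', f, g)
# not f is encoded as ('imp', f, ('bot',)).

def classical(T, f):
    op = f[0]
    if op == 'atom': return f[1] in T
    if op == 'bot':  return False
    if op == 'and':  return classical(T, f[1]) and classical(T, f[2])
    if op == 'or':   return classical(T, f[1]) or classical(T, f[2])
    if op == 'imp':  return (not classical(T, f[1])) or classical(T, f[2])

def ht(H, T, f):
    """Satisfaction at the HT-world (H, T), with H a subset of T."""
    op = f[0]
    if op == 'atom': return f[1] in H
    if op == 'bot':  return False
    if op == 'and':  return ht(H, T, f[1]) and ht(H, T, f[2])
    if op == 'or':   return ht(H, T, f[1]) or ht(H, T, f[2])
    if op == 'imp':  # implication also requires classical truth at T
        return classical(T, f) and ((not ht(H, T, f[1])) or ht(H, T, f[2]))

def ht_models(f, atoms):
    models = set()
    for r in range(len(atoms) + 1):
        for T in combinations(atoms, r):
            for s in range(len(T) + 1):
                for H in combinations(T, s):
                    if ht(frozenset(H), frozenset(T), f):
                        models.add((frozenset(H), frozenset(T)))
    return models

def strongly_equivalent(f, g, atoms):
    # Two programs are strongly equivalent iff they have the same HT-models.
    return ht_models(f, atoms) == ht_models(g, atoms)

neg = lambda f: ('imp', f, ('bot',))
p, q = ('atom', 'p'), ('atom', 'q')
f1 = ('and', ('imp', neg(q), p), ('imp', neg(p), q))   # {p :- not q ; q :- not p}
f2 = ('or', p, q)                                      # {p v q}
print(strongly_equivalent(f1, f2, ('p', 'q')))         # False
```

The two example programs have the same answer sets but are not strongly equivalent: the HT-world (∅, {p, q}) satisfies the first program and not the second, which is the classic counterexample.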


Zheng Y.,Microsoft | Capra L.,University College London | Wolfson O.,University of Illinois at Chicago | Yang H.,Hong Kong University of Science and Technology
ACM Transactions on Intelligent Systems and Technology | Year: 2014

Urbanization's rapid progress has modernized many people's lives but also engendered big issues, such as traffic congestion, energy consumption, and pollution. Urban computing aims to tackle these issues by using the data that has been generated in cities (e.g., traffic flow, human mobility, and geographical data). Urban computing connects urban sensing, data management, data analytics, and service providing into a recurrent process for an unobtrusive and continuous improvement of people's lives, city operation systems, and the environment. Urban computing is an interdisciplinary field where computer sciences meet conventional city-related fields, like transportation, civil engineering, environment, economy, ecology, and sociology in the context of urban spaces. This article first introduces the concept of urban computing, discussing its general framework and key challenges from the perspective of computer sciences. Second, we classify the applications of urban computing into seven categories, consisting of urban planning, transportation, the environment, energy, social, economy, and public safety and security, presenting representative scenarios in each category. Third, we summarize the typical technologies that are needed in urban computing into four categories: urban sensing, urban data management, knowledge fusion across heterogeneous data, and urban data visualization. Finally, we give an outlook on the future of urban computing, suggesting a few research topics that are currently missing in the community. © 2014 ACM 2157-6904/2014/09-ART38 $15.00. Source


Li Y.,CAS Shanghai Institutes for Biological Sciences | Du X.-F.,CAS Shanghai Institutes for Biological Sciences | Liu C.-S.,CAS Shanghai Institutes for Biological Sciences | Wen Z.-L.,Hong Kong University of Science and Technology | Du J.-L.,CAS Shanghai Institutes for Biological Sciences
Developmental Cell | Year: 2012

Microglia are the primary immune cells in the brain. Under physiological conditions, they typically stay in a "resting" state, with ramified processes continuously extending to and retracting from surrounding neural tissues. Whether and how such highly dynamic resting microglia functionally interact with surrounding neurons remains unclear. Using in vivo time-lapse imaging of both microglial morphology and neuronal activity in the optic tectum of larval zebrafish, we found that neuronal activity steers resting microglial processes and facilitates their contact with highly active neurons. This process requires the activation of pannexin-1 hemichannels on neurons. Reciprocally, such resting microglia-neuron contact reduces both spontaneous and visually evoked activities of contacted neurons. Our findings reveal an instructive role for neuronal activity in resting microglial motility and suggest a role for microglia in the homeostatic regulation of neuronal activity in the healthy brain. Little is known about the role of resting microglia in the healthy brain. Looking in vivo in zebrafish, Li et al. uncover reciprocal regulation between neurons and resting microglia in which neuronal activity provokes the formation of microglial contacts that, in turn, downregulate the activity of contacted neurons. © 2012 Elsevier Inc. Source


Xu K.,Hong Kong University of Science and Technology
Acta Mechanica Sinica/Lixue Xuebao | Year: 2015

Abstract: All fluid dynamic equations are valid under their modeling scales, such as the particle mean free path and mean collision time scale of the Boltzmann equation and the hydrodynamic scale of the Navier–Stokes (NS) equations. The current computational fluid dynamics (CFD) focuses on the numerical solution of partial differential equations (PDEs), and its aim is to get the accurate solution of these governing equations. Under such a CFD practice, it is hard to develop a unified scheme that covers flow physics from kinetic to hydrodynamic scales continuously because there is no such governing equation which could make a smooth transition from the Boltzmann to the NS modeling. The study of fluid dynamics needs to go beyond the traditional numerical partial differential equations. The emerging engineering applications, such as air-vehicle design for near-space flight and flow and heat transfer in micro-devices, do require further expansion of the concept of gas dynamics to a larger domain of physical reality, rather than the traditional distinguishable governing equations. At the current stage, the non-equilibrium flow physics has not yet been well explored or clearly understood due to the lack of appropriate tools. Unfortunately, under the current numerical PDE approach, it is hard to develop such a meaningful tool due to the absence of valid PDEs. In order to construct multiscale and multiphysics simulation methods similar to the modeling process of constructing the Boltzmann or the NS governing equations, the development of a numerical algorithm should be based on the first principle of physical modeling. In this paper, instead of following the traditional numerical PDE path, we introduce direct modeling as a principle for CFD algorithm development. Since all computations are conducted in a discretized space with limited cell resolution, the flow physics to be modeled has to be done in the mesh size and time step scales. 
Here, the CFD is more or less a direct construction of discrete numerical evolution equations, where the mesh size and time step will play dynamic roles in the modeling process. With the variation of the ratio between mesh size and local particle mean free path, the scheme will capture flow physics from the kinetic particle transport and collision to the hydrodynamic wave propagation. Based on the direct modeling, a continuous dynamics of flow motion will be captured in the unified gas-kinetic scheme. This scheme can be faithfully used to study the unexplored non-equilibrium flow physics in the transition regime. Graphical Abstract: The most successful governing equations for gas dynamics are the Navier-Stokes (NS) equations in the hydrodynamic scale and the Boltzmann equation in the kinetic scale. Between these two limiting scales, there are no well-accepted equations for non-equilibrium flow description. As shown in Fig. 2, the kinetic equation identifies particle transport and collision, and the hydrodynamic ones capture wave propagation. The direct modeling for computational fluid dynamics is to construct a continuous spectrum of governing equations across all scales, from kinetic to hydrodynamic. [Figure not available: see fulltext.] © 2015, The Chinese Society of Theoretical and Applied Mechanics; Institute of Mechanics, Chinese Academy of Sciences and Springer-Verlag Berlin Heidelberg. Source


Low B.K.,Nanyang Technological University | Zhang J.,Tongji University | Tang W.H.,Hong Kong University of Science and Technology
Computers and Geotechnics | Year: 2011

Although the first-order reliability method (FORM) is a common procedure for estimating failure probability, the formulas derived for bimodal bounds of system failure probability have not been as widely used in present reliability analyses as might be expected. The reluctance to apply these formulas in practice may be partly due to the impression that the procedures for implementing system reliability theory are tedious. Among the methods for system reliability analysis, the approach suggested by Ditlevsen (1979) is considered here because it is a natural extension of the first-order reliability method commonly used to estimate the failure probability of a single failure mode, and it can often provide reasonably narrow failure probability bounds. To facilitate wider practical application, this paper provides a short program code on the ubiquitous Excel spreadsheet platform for efficiently calculating the bounds on system failure probability. The procedure is illustrated for a semi-gravity retaining wall with two failure modes, a soil slope with two and eight failure modes, and a loaded beam with three failure modes. In addition, simple equations are provided to relate the correlated but unrotated equivalent standard normals of the Low and Tang (2007) FORM procedure to the uncorrelated but rotated equivalent standard normals of the classical FORM procedure. Also demonstrated are the need to investigate different permutations of failure modes in order to obtain the narrowest bounds on system failure probability, and the use of the SORM reliability index for system reliability bounds in a case where the curvature of the limit state surface cannot be neglected. © 2010 Elsevier Ltd. Source


Dai H.,Central University of Finance and Economics | Tseng M.M.,Hong Kong University of Science and Technology
International Journal of Production Economics | Year: 2012

A mismatch between actual inventory and what is recorded in information systems is generally referred to as inventory inaccuracy. Its financial impact goes beyond the cost of direct inventory loss at each stage of the supply chain: the discrepancy also increases holding and shortage costs, because information distortion propagates along the supply chain. With the growing emphasis on responsiveness and inventory cost, inventory inaccuracy has become a critical hurdle to achieving high-performance supply chains. The emergence of RFID technology offers a possible solution to alleviate the growing cost of inventory inaccuracy. However, unlike tangible justifications based on shrinkage reduction, adoption of RFID technology must also be justified by improvements in intangible information flow. The objective of this paper is to present a systematic approach, with analytical models, to quantify the extent of savings from timely information as well as from the reduction in information distortion and its amplification. Given the increasing dynamics and complexity of global supply chains, this paper may shed new light on framing the discussion of investing in RFID technology. © 2012 Elsevier B.V. All rights reserved. Source
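The way unrecorded shrinkage propagates into shortage cost can be made concrete with a deliberately simple simulation. This is a toy base-stock model with made-up numbers, not one of the paper's analytical models.

```python
import random

def simulate(periods=52, shrink_rate=0.02, order_up_to=150,
             visible=False, seed=1):
    """Toy base-stock model in which orders are placed on *recorded* stock.

    A fraction of every period's sales walks out unrecorded (shrinkage).
    With visible=False the record drifts above the physical stock
    ("phantom inventory"), so replenishment under-orders and lost sales
    accumulate.  With visible=True, a stand-in for RFID item-level
    visibility, the record is realigned each period.
    """
    rng = random.Random(seed)
    physical = recorded = order_up_to
    lost_sales = 0
    for _ in range(periods):
        demand = rng.randint(80, 120)
        sales = min(demand, physical)
        lost_sales += demand - sales
        physical -= sales
        recorded -= sales
        physical -= min(int(shrink_rate * sales), physical)  # unrecorded loss
        if visible:
            recorded = physical          # visibility realigns the record
        order = max(0, order_up_to - recorded)
        physical += order
        recorded += order
    return lost_sales
```

Running `simulate(visible=False)` shows lost sales growing as the record drifts above the shelf, while `simulate(visible=True)` restores full service, a toy version of the intangible information-flow benefit the paper quantifies.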


Weng L.-T.,Hong Kong University of Science and Technology
Applied Catalysis A: General | Year: 2014

This paper provides a critical review of the applications of time-of-flight secondary ion mass spectrometry (ToF-SIMS) in heterogeneous catalysis, with a particular emphasis on examples published during the last decade. The areas covered include supported metal oxide catalysts, supported metal catalysts, electrocatalysts for the oxygen-reduction reaction, and organometallic clusters as precursors for the preparation of heterogeneous catalysts. The molecular specificity and surface sensitivity of ToF-SIMS have been shown to be extremely useful in the surface characterization of heterogeneous catalysts, in particular for assessing the formation of new compounds or interactions between different components (e.g. active phase-active phase or active phase-support), providing more precise information on the structure of surface species, and monitoring the different steps of catalyst preparation and/or activation. In some cases, ToF-SIMS is able to provide unique molecular information that is unattainable with other conventional techniques and thus gives a more precise characterization of heterogeneous catalysts. Finally, the advantages and limitations of ToF-SIMS with respect to more conventional techniques such as XPS are also discussed. © 2013 Elsevier B.V. Source


Lin F.,Hong Kong University of Science and Technology
Artificial Intelligence | Year: 2016

We consider the problem of representing and reasoning about computer programs, and propose a translation from a core procedural iterative programming language to first-order logic with quantification over the domain of natural numbers that includes the usual successor function and the "less than" linear order, essentially a first-order logic with a discrete linear order. Unlike Hoare's logic, our approach does not rely on loop invariants. Unlike the typical temporal logic specification of a program, our translation does not require a transition system model of the program, and is compositional on the structures of the program. Some non-trivial examples are given to show the effectiveness of our translation for proving properties of programs. © 2016 Elsevier B.V. Source


Zhou X.,Australian National University | McKay M.R.,Hong Kong University of Science and Technology
IEEE Transactions on Vehicular Technology | Year: 2010

We consider the problem of secure communication with multiantenna transmission in fading channels. The transmitter simultaneously transmits an information-bearing signal to the intended receiver and artificial noise to the eavesdroppers. We obtain an analytical closed-form expression for an achievable secrecy rate and use it as the objective function to optimize the transmit power allocation between the information signal and the artificial noise. Our analytical and numerical results show that equal power allocation is a simple yet near-optimal strategy for the case of noncolluding eavesdroppers. When the number of colluding eavesdroppers increases, more power should be used to generate the artificial noise. We also provide an upper bound on the SNR above which the achievable secrecy rate is positive, and show that the bound is tight at low SNR. Furthermore, we consider the impact of imperfect channel state information (CSI) at both the transmitter and the receiver, and find that it is wise to create more artificial noise to confuse the eavesdroppers than to increase the signal strength for the intended receiver if the CSI is not accurately obtained. © 2006 IEEE. Source
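The power-allocation tradeoff can be illustrated with a toy scalar model (not the paper's closed-form expression): a fraction φ of the power carries the signal, the rest is artificial noise assumed to lie in the receiver's null space, so it degrades only the eavesdropper. All gains and the power budget below are arbitrary illustrative values.

```python
import math

def secrecy_rate(phi, P=10.0, g_r=1.0, g_e=0.5, noise=1.0):
    """Toy achievable secrecy rate (bits/s/Hz) under artificial noise.

    phi is the fraction of the power budget P carrying the information
    signal; (1 - phi) * P is radiated as artificial noise, assumed to be
    nulled at the intended receiver and to hit only the eavesdropper.
    """
    snr_r = phi * P * g_r / noise
    snr_e = phi * P * g_e / ((1.0 - phi) * P * g_e + noise)
    return max(0.0, math.log2(1 + snr_r) - math.log2(1 + snr_e))

# Sweep the power split between signal and artificial noise
best_phi = max((k / 100 for k in range(1, 100)), key=secrecy_rate)
```

In this toy sweep the equal split φ = 0.5 comes within a few percent of the best rate, mirroring the paper's finding that equal allocation is near-optimal for noncolluding eavesdroppers.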


Chen S.,South University of Science and Technology of China | Kwok H.-S.,Hong Kong University of Science and Technology
Israel Journal of Chemistry | Year: 2014

Light outcoupling from organic light-emitting diodes (OLEDs) is essential for developing energy-saving displays and efficient lighting sources. Nanocrystallized organic thin films exhibiting scattering features have been considered as effective light extractors for OLEDs. This paper reviews recent advancements in nanocrystallized thin films and their applications in OLEDs. Due to the advantages of easy preparation and OLED compatibility, nanocrystallized organic thin films can integrate with OLEDs as external or internal light extractors easily. Significant light enhancement has been achieved. The fabrication methods and mechanisms of light enhancement are discussed. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Liu F.,Hong Kong Applied Science and Technology Research Institute | Tsui C.-Y.,Hong Kong University of Science and Technology | Zhang Y.J.,Chinese University of Hong Kong
IEEE Transactions on Wireless Communications | Year: 2010

The rapid proliferation of wireless sensor networks has stimulated enormous research efforts that aim to maximize the lifetime of battery-powered sensor nodes and, by extension, the overall network lifetime. Most work in this field can be divided into two equally important threads, namely (i) energy-efficient routing, which balances traffic load across the network according to energy-related metrics, and (ii) sleep scheduling, which reduces the energy cost of idle listening by providing periodic sleep cycles for sensor nodes. To date, these two threads have been pursued separately in the literature, leading to designs that optimize one component assuming the other is pre-determined. Such designs give rise to practical difficulty in determining the appropriate routing and sleep scheduling schemes in the real deployment of sensor networks, as neither component can be optimized without fixing the other. This paper addresses the lack of a joint routing-and-sleep-scheduling scheme in the literature by incorporating the design of the two components into one optimization framework. Notably, joint routing-and-sleep-scheduling is by itself a non-convex optimization problem, which is difficult to solve. We tackle the problem by transforming it into an equivalent Signomial Program (SP) through relaxing the flow conservation constraints. The SP problem is then solved by an iterative Geometric Programming (IGP) method, yielding a near-optimal routing-and-sleep-scheduling scheme that maximizes network lifetime. To the best of our knowledge, this is the first attempt to obtain the optimal joint routing-and-sleep-scheduling strategy for wireless sensor networks. The near-optimal solution provided by this work opens up new possibilities for designing practical and heuristic schemes targeting the same problem, since the performance of any new heuristic can now be easily evaluated against the proposed near-optimal scheme as a benchmark. © 2006 IEEE. Source


Dimitrakopoulos E.G.,Hong Kong University of Science and Technology | DeJong M.J.,University of Cambridge
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2012

In this paper, the dynamic response of the rocking block subjected to base excitation is revisited. The goal is to offer new closed-form solutions and original similarity laws that shed light on the fundamental aspects of the rocking block. The focus is on the transient dynamics of the rocking block under finite-duration excitations. An alternative way to describe the response of the rocking block, informative of the behaviour of rocking structures under excitations of different intensity, is offered. In the process, limitations of standard dimensional analysis, related to the orientations of the involved physical quantities, are revealed. The proposed dimensionless and orientationless groups condense the response and offer a lucid depiction of the rocking phenomenon. When expressed in the appropriate dimensionless-orientationless groups, the rocking response becomes perfectly self-similar for slender blocks (within the small rotations range) and practically self-similar for non-slender blocks (larger rotations). Using this formulation, the nonlinear and non-smooth rocking response to pulse-type ground motion can be directly determined, and need only be scaled by the intensity and frequency of the excitation. © 2012 The Royal Society. Source
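The transient rocking dynamics discussed above can be illustrated with a minimal numerical sketch of Housner's linearised rocking model, the standard starting point for such studies (this is not code from the paper; the slenderness α, frequency parameter p and restitution factor η are arbitrary illustrative values).

```python
import numpy as np

def rock(theta0, alpha=0.25, p=2.0, eta=0.9, dt=1e-4, t_end=5.0):
    """Free rocking of a slender block (linearised Housner model).

    Between impacts: theta'' = p^2 * (theta - alpha * sgn(theta)).
    At each impact (theta crossing zero) the angular velocity is reduced
    by the restitution factor eta, dissipating energy.
    Returns the rotation history normalised by the slenderness alpha.
    """
    theta, omega = theta0, 0.0
    hist = []
    for _ in range(int(t_end / dt)):
        s = 1.0 if theta >= 0.0 else -1.0
        omega += dt * p ** 2 * (theta - s * alpha)   # semi-implicit Euler
        new_theta = theta + dt * omega
        if new_theta * theta < 0.0:                  # pivot switches corner
            omega *= eta
        theta = new_theta
        hist.append(theta / alpha)
    return np.array(hist)

h = rock(theta0=0.2)   # initial rotation below the overturning angle alpha
```

The normalised history decays toward rest through successive impacts; it is exactly this kind of response that the paper's dimensionless-orientationless groups condense across block sizes and excitation intensities.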


Lee K.A.W.,Hong Kong University of Science and Technology
Advances in Experimental Medicine and Biology | Year: 2012

Interactions between Intrinsically Disordered Protein Regions (IDRs) and their targets commonly exhibit localised contacts via target-induced disorder-to-order transitions. Other, more complex IDR-target interactions have been termed "fuzzy" because the IDR does not form a well-defined induced structure. In some remarkable cases of fuzziness, IDR function is apparently sequence independent and conferred by amino acid composition. Such cases have been referred to as "random fuzziness", but the molecular features involved are poorly characterised. The transcriptional activation domain (EAD) of oncogenic Ewing's Sarcoma Fusion Proteins (EFPs) is an ≈280-residue IDR with a biased composition restricted to Ala, Gly, Gln, Pro, Ser, Thr and Tyr. Multiple aromatic side chains (exclusively from Tyr residues) and the particular EAD composition are crucial for molecular recognition, but there appears to be no other major geometrically constrained requirement. Computational analysis of the EAD using PONDR (Molecular Kinetics, Inc., http://www.pondr.com) complements the functional data and shows, accordingly, that the propensity for structural order within the EAD is conferred by Tyr residues. To conclude, molecular recognition by the EAD is extraordinarily malleable and involves multiple aromatic contacts facilitated by a flexible peptide backbone and, most likely, a limited number of weaker contributions from amenable side chains. I propose to refer to this mode of fuzzy recognition as "polyaromatic", noting that it shares some fundamental features with the "polyelectrostatic" (phosphorylation-dependent) interaction of the Sic1 Cdk inhibitor and Cdc4. I will also speculate on more detailed models for molecular recognition by the EAD and their relationship to native (non-oncogenic) EAD function. © 2012 Landes Bioscience and Springer Science+Business Media. Source


Zheng X.,Anhui University of Science and Technology | Yan Y.,Anhui University of Science and Technology | Yan Y.,Hong Kong University of Science and Technology | Di Ventra M.,University of California at San Diego
Physical Review Letters | Year: 2013

We investigate the real-time current response of strongly correlated quantum dot systems under sinusoidal driving voltages. By means of an accurate hierarchical equations of motion approach, we demonstrate the presence of prominent memory effects induced by the Kondo resonance on the real-time current response. These memory effects appear as distinctive hysteresis line shapes and self-crossing features in the dynamic current-voltage characteristics, with concomitant excitation of odd-number overtones. They emerge as a cooperative effect of quantum coherence - due to inductive behavior - and electron correlations - due to the Kondo resonance. We also show the suppression of memory effects and the transition to classical behavior as a function of temperature. All these phenomena can be observed in experiments and may lead to novel quantum memory applications. © 2013 American Physical Society. Source


Lau A.S.M.,Hong Kong University of Science and Technology
Journal of Medical Internet Research | Year: 2011

Background: Web 2.0 provides a platform or a set of tools such as blogs, wikis, really simple syndication (RSS), podcasts, tags, social bookmarks, and social networking software for knowledge sharing, learning, social interaction, and the production of collective intelligence in a virtual environment. Web 2.0 is also becoming increasingly popular in e-learning and e-social communities. Objectives: The objectives were to investigate how Web 2.0 tools can be applied for knowledge sharing, learning, social interaction, and the production of collective intelligence in the nursing domain and to investigate what behavioral perceptions are involved in the adoption of Web 2.0 tools by nurses. Methods: The decomposed technology acceptance model was applied to construct the research model on which the hypotheses were based. A questionnaire was developed based on the model and data from nurses (n = 388) were collected from late January 2009 until April 30, 2009. Pearson's correlation analysis and t tests were used for data analysis. Results: Intention toward using Web 2.0 tools was positively correlated with usage behavior (r = .60, P < .05). Behavioral intention was positively correlated with attitude (r = .72, P < .05), perceived behavioral control (r = .58, P < .05), and subjective norm (r = .45, P < .05). In their decomposed constructs, perceived usefulness (r = .7, P < .05), relative advantage (r = .64, P < .05), and compatibility (r = .60, P < .05) were positively correlated with attitude, but perceived ease of use was not significantly correlated (r = .004, P < .05) with it. Peer (r = .47, P < .05), senior management (r = .24, P < .05), and hospital (r = .45, P < .05) influences had positive correlations with subjective norm. Resource (r = .41, P < .05) and technological (r = .69, P < .05) conditions were positively correlated with perceived behavioral control. 
Conclusions: The identified behavioral perceptions may further health policy makers' understanding of nurses' concerns regarding, and barriers to, the adoption of Web 2.0 tools, and enable them to better plan the strategy of implementing Web 2.0 tools for knowledge sharing, learning, social interaction, and the production of collective intelligence. © Adela S.M. Lau. Source
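The kind of correlation analysis reported above can be reproduced in a few lines; the scores below are synthetic stand-ins (not the survey data), generated so that the two variables correlate at roughly the reported level.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 388                                  # matches the reported sample size
intention = rng.normal(3.5, 0.8, n)      # hypothetical Likert-style scores
usage = 0.6 * intention + rng.normal(0.0, 0.6, n)  # usage tied to intention
r, p = pearsonr(intention, usage)        # r near .6, P well below .05
```

With n = 388, even moderate correlations yield vanishingly small P values, which is why almost every reported coefficient clears the P < .05 threshold.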


Yobas L.,Hong Kong University of Science and Technology
Journal of Micromechanics and Microengineering | Year: 2013

Among the electrophysiology techniques, the voltage clamp and its subsequent scaling to smaller mammalian cells, the so-called patch clamp, led to fundamental discoveries in the last century, revealing the ionic mechanisms and the role of single-ion channels in the generation and propagation of action potentials through excitable membranes (e.g. nerves and muscles). Since then, these techniques have gained a reputation as the gold standard for studying cellular ion channels, owing to their high accuracy and rich information content via direct measurements under a controlled membrane potential. However, their delicate and skill-laden procedure placed a serious constraint on throughput and on their immediate utilization in the discovery of new cures targeting ion channels, until researchers discovered 'lab-on-a-chip' as a viable platform for automating these techniques into a reliable high-throughput screening functional assay on ion channels. This review examines the innovative 'lab-on-a-chip' microtechnologies demonstrated towards this target over a period of slightly more than a decade. The technologies are categorically reviewed according to their considerations for design, fabrication, and microfluidic integration from a performance perspective, with reference to their ability to secure GΩ seals (gigaseals) on cells, the norm broadly accepted among electrophysiologists for quality recordings that reflect ion-channel activity with high fidelity. © 2013 IOP Publishing Ltd. Source


Feng W.-Z.,Hong Kong University of Science and Technology | Nath P.,Northeastern University
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

Analysis of contributions from vectorlike leptonic supermultiplets to the Higgs diphoton decay rate and to the Higgs boson mass is given. Corrections arising from the exchange of the new leptons and their superpartners as well as their mirrors are computed analytically and numerically. We also study the correlation between the enhanced Higgs diphoton rate and the Higgs mass corrections. Specifically, we find two branches in the numerical analysis: on the lower branch the diphoton rate enhancement is flat, while on the upper branch it has a strong correlation with the Higgs mass enhancement. It is seen that a factor of 1.4-1.8 enhancement of the Higgs diphoton rate on the upper branch can be achieved, and a 4-10 GeV positive correction to the Higgs mass can also be obtained simultaneously. The effect of this extra contribution to the Higgs mass is to release the constraint on weak-scale supersymmetry, allowing its scale to be lower than in the theory without extra contributions. The vectorlike supermultiplets also have collider implications which could be testable at the LHC and at the ILC. © 2013 American Physical Society. Source


Huang Y.,Hong Kong Baptist University | Palomar D.P.,Hong Kong University of Science and Technology | Zhang S.,University of Minnesota
IEEE Transactions on Signal Processing | Year: 2013

Consider a unicast downlink beamforming optimization problem with robust signal-to-interference-plus-noise ratio constraints to account for imperfect channel state information at the base station in a multiple-input single-output (MISO) communication system. The convexity of this robust beamforming problem remains unknown. A slightly conservative version of the robust beamforming problem is thus studied herein as a compromise. It is in the form of a semi-infinite second-order cone program (SOCP) and, more importantly, it possesses an equivalent and explicit convex reformulation, due to a linear matrix inequality (LMI) description of the cone of Lorentz-positive maps. Hence, the conservative robust beamforming problem can be efficiently solved by an optimization solver. Additional robust shaping constraints can also be easily handled to control the amount of interference generated on other co-existing users such as in cognitive radio systems. © 1991-2012 IEEE. Source


Wang X.S.,University of California at Santa Barbara | Yue C.P.,Hong Kong University of Science and Technology
IEEE Transactions on Microwave Theory and Techniques | Year: 2014

This paper presents the circuit techniques used to achieve superior linearity and isolation for a single-pole six-throw transmit/receive (T/R) switch designed for GSM/W-CDMA dual-band operation at 0.85-0.9 and 1.8-1.9 GHz. Implemented in a 0.18-μm thick-film silicon-on-insulator (SOI) CMOS process, the switch employs an LC-tuned asymmetric topology for the transmit (Tx) and receive (Rx) branches to handle the high-power GSM transmitter requirement. The proposed design also features a switchable double LC-tank acting as a variable impedance block to relax the tradeoff among linearity, insertion loss (IL), and isolation. Feed-forward capacitors, ac-floating bias techniques, and floating-body SOI devices are utilized to further improve the linearity. The measured P-0.1dB, IL, and Tx-Rx isolation in the lower and upper bands are 37.2-35.6 dBm, 0.43-0.75 dB, and 45-37 dB, respectively. The proposed T/R switch design in SOI CMOS is an important building block toward more compact and lower cost RF front-end modules, which integrate the switch, antenna tuning module, and control logic on the same chip. © 1963-2012 IEEE. Source


Cheung T.H.,Stanford University | Cheung T.H.,Hong Kong University of Science and Technology | Rando T.A.,Stanford University | Rando T.A.,Neurology Service and Rehabilitation Research
Nature Reviews Molecular Cell Biology | Year: 2013

Subsets of mammalian adult stem cells reside in the quiescent state for prolonged periods of time. This state, which is reversible, has long been viewed as dormant and with minimal basal activity. Recent advances in adult stem cell isolation have provided insights into the epigenetic, transcriptional and post-transcriptional control of quiescence and suggest that quiescence is an actively maintained state in which signalling pathways are involved in maintaining a poised state that allows rapid activation. Deciphering the molecular mechanisms regulating adult stem cell quiescence will increase our understanding of tissue regeneration mechanisms and how they are dysregulated in pathological conditions and in ageing. © 2013 Macmillan Publishers Limited. All rights reserved. Source


Li J.,Hong Kong University of Science and Technology
IEEE Transactions on Engineering Management | Year: 2010

A learning perspective was applied to examining when multinational corporations select universities rather than local firms as partners in international R&D alliances. Data were collected on 327 international R&D alliances established over the 1995-2001 period in China, an emerging economy where intellectual property rights protection is still far from adequate. The effects of factors such as the international investors' host-country R&D experience and the ventures' research objectives on the selection of universities or research institutes as local partners for R&D alliances were analyzed. Analysis using logistic regression models suggests that the contribution of local universities and research institutes to such R&D collaborations is likely to be high when foreign investors have had abundant prior R&D experience in the host country and when the alliance has been established primarily for research rather than development purposes. The implications for theory, practice, and policymaking are discussed. © 2009 IEEE. Source


Ding C.,Hong Kong University of Science and Technology | Yang Y.,Southwest Jiaotong University | Tang X.,Southwest Jiaotong University
IEEE Transactions on Information Theory | Year: 2010

In communication systems, frequency hopping spread spectrum and direct sequence spread spectrum are the two main spread-spectrum coding technologies. Frequency hopping sequences are used in FH-CDMA systems. In this paper, an earlier idea for constructing optimal sets of frequency hopping sequences is investigated further. New optimal parameters of sets of frequency hopping sequences are obtained with subcodes of the Reed-Solomon codes. Optimal sets of frequency hopping sequences are constructed with a class of irreducible cyclic codes. As a byproduct, the weight distribution of a subclass of irreducible cyclic codes is determined. © 2006 IEEE. Source
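The figure of merit that such optimal FH sets minimise, the maximum periodic Hamming correlation, is straightforward to compute directly. The two length-3 hop sequences below are toy examples, not constructions from the paper.

```python
def hamming_corr(x, y, t):
    """Periodic Hamming correlation at shift t: the number of time slots
    in which hop sequences x and y land on the same frequency."""
    n = len(x)
    return sum(x[j] == y[(j + t) % n] for j in range(n))

def max_hamming(seqs):
    """Largest out-of-phase autocorrelation or cross-correlation over a
    set of FH sequences - the quantity that optimal sets minimise."""
    n = len(seqs[0])
    best = 0
    for a in range(len(seqs)):
        for b in range(len(seqs)):
            for t in range(n):
                if a == b and t == 0:
                    continue          # skip the trivial in-phase peak
                best = max(best, hamming_corr(seqs[a], seqs[b], t))
    return best

# Two toy hop sequences over frequencies {0, 1, 2}: at most one collision
# (hit) occurs at any relative shift, for any pair of sequences
assert max_hamming([[0, 1, 2], [0, 2, 1]]) == 1
```

A brute-force check like this is only feasible for small parameters, but it is a convenient way to verify a claimed optimal construction against the relevant correlation bound.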


Lau A.K.W.,Hong Kong University of Science and Technology | Tang E.,Hong Kong Polytechnic University | Yam R.C.M.,City University of Hong Kong
Journal of Product Innovation Management | Year: 2010

While the beneficial impacts of supplier and customer integration are generally acknowledged, very few empirical research studies have examined how an organization can achieve better product performance through product innovation enhanced by such integration. This paper thus examines the impact of key supplier and customer integration processes (i.e., information sharing and product codevelopment with suppliers and customers, respectively) on product innovation as well as their impact on product performance. It contributes to the existing literature by asking how such integration activities affect product innovation and performance in both direct and indirect ways. After surveying 251 manufacturers in Hong Kong, this study tested the relationships among information sharing, product codevelopment, product innovativeness, and performance with three control variables (i.e., company size, type of industry, and market certainty). Structural equation modeling with correlation and t-tests was used to test the hypothesized research model. The findings indicate a direct, positive relationship between supplier and customer integration and product performance. In particular, this study verifies that sharing information with suppliers and product codevelopment with customers directly improve product performance. In addition, this study empirically examines the indirect effects of supplier and customer integration processes on product performance, mediated by innovation. This has seldom been attempted in previous research. The empirical findings show that product codevelopment with suppliers improves performance, mediated by innovation. However, the sampled firms could not improve their product innovation by sharing information with their current customers and suppliers, or by codeveloping new products with those customers. 
If the adoption of supplier and customer integration is not cost free, the findings of this study may suggest firms work on particular supplier and customer integration processes (i.e., product codevelopment with suppliers) to improve their product innovation. The study also suggests that companies codevelop new products only with new customers and lead users instead of current ones for product innovation. For managers, this study has demonstrated that both information sharing and product codevelopment affect performance directly and indirectly. Managers should put more emphasis on these key processes, especially when linked with product innovation. Managers should consider involving their suppliers and customers in the early stages of design. Information sharing with suppliers is also important in product development. As suggested by this study, extensive effort on supplier and customer integration should be made to directly augment current product performance and product innovation at the same time. © 2010 Product Development & Management Association. Source


Muppala J.K.,Hong Kong University of Science and Technology
Proceedings - 2011 Workshop on Embedded Systems Education, WESE 2011 | Year: 2011

Does smartphone application development provide an opportunity to explore various aspects of embedded software? This question is the primary motivator behind the ideas explored in this paper. We cannot deny the ubiquitous nature of smartphones. Leveraging this already available "platform" to convey embedded software concepts to Computer Science (CS) students seems an exciting opportunity. Traditionally, CS students have often shied away from the field of embedded systems, owing to their perception of this area as "hardware" oriented, not without reason. We explore the Android platform as a means of conveying embedded software concepts to CS students. Copyright 2010 ACM. Source


Xu K.,Hong Kong University of Science and Technology | Huang J.-C.,National Taiwan Ocean University
Journal of Computational Physics | Year: 2010

With discretized particle velocity space, a multiscale unified gas-kinetic scheme for entire Knudsen number flows is constructed based on the BGK model. The current scheme closely couples the update of the macroscopic conservative variables with the update of the microscopic gas distribution function within a time step. In comparison with many existing kinetic schemes for the Boltzmann equation, the current method has no difficulty obtaining accurate Navier-Stokes (NS) solutions in the continuum flow regime with a time step much larger than the particle collision time. At the same time, the rarefied flow solution, even in the free molecule limit, can be captured accurately. The unified scheme is an extension of the gas-kinetic BGK-NS scheme from the continuum flow to the rarefied regime with the discretization of particle velocity space. The success of the method is due to the unsplit treatment of particle transport and collision in the evaluation of the local solution of the gas distribution function. For methods that use an operator-splitting technique to solve transport and collision separately, it is usually required that the time step be less than the particle collision time. This constraint basically makes such methods useless in the continuum flow regime, especially in high Reynolds number flow simulations. Theoretically, once the physical process of particle transport and collision is modeled statistically by the kinetic Boltzmann equation, the transport and collision become continuous operators in space and time, and their numerical discretization should be done consistently. Due to the multiscale nature of the unified scheme, in the update of the macroscopic flow variables, the corresponding heat flux can be modified according to any realistic Prandtl number. This modification then affects the equilibrium state at the next time level and the update of the microscopic distribution function. Therefore, instead of modifying the collision term of the BGK model, as in ES-BGK and BGK-Shakhov, the unified scheme can achieve the same goal directly on the numerical level. Many numerical tests are used to validate the unified method. © 2010 Elsevier Inc. Source
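For contrast with the unsplit treatment described above, the sketch below is a deliberately naive operator-split discrete-velocity BGK solver in 1D (all grids and parameters are illustrative, and this is not the unified scheme itself): transport and collision are handled as separate sub-steps, which is exactly the structure whose time step is tied to the particle collision time.

```python
import numpy as np

def maxwellian(rho, u, T, v):
    """1D Maxwellian distribution with density rho, velocity u, temperature T."""
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

def bgk_step(f, v, dv, dx, dt, tau):
    """One step of a split discrete-velocity BGK solver on a periodic 1D grid.

    f[j, k] is the distribution at cell j, discrete velocity v[k].
    Transport is first-order upwind; collision relaxes f toward a local
    Maxwellian.  Because the two operators are split, dt must stay below
    the collision time tau - the limitation the unified scheme removes.
    """
    # --- free transport sub-step (upwind, periodic boundaries) ---
    fp = np.where(v > 0, f, 0.0)
    fm = np.where(v < 0, f, 0.0)
    f = f - dt / dx * (v * (fp - np.roll(fp, 1, axis=0))
                       + v * (np.roll(fm, -1, axis=0) - fm))
    # --- collision sub-step: relax toward the local Maxwellian ---
    rho = f.sum(axis=1) * dv
    u = (f * v).sum(axis=1) * dv / rho
    T = (f * (v - u[:, None]) ** 2).sum(axis=1) * dv / rho
    feq = maxwellian(rho[:, None], u[:, None], T[:, None], v)
    return f + dt / tau * (feq - f)

v = np.linspace(-5.0, 5.0, 41)
dv = v[1] - v[0]
f = np.tile(maxwellian(1.0, 0.0, 1.0, v), (32, 1))  # uniform equilibrium
f1 = bgk_step(f, v, dv, dx=0.1, dt=1e-3, tau=1e-2)
```

Started from a spatially uniform Maxwellian, one step leaves the state essentially unchanged, a basic consistency check; the unified scheme instead couples transport and collision inside the cell-interface flux, which is what frees the time step from the collision-time constraint.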


Wang Q.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2010

Frequency hopping (FH) is one of the basic spread coding technologies in spread spectrum communications. FH sequences are needed in FH code-division multiple access (CDMA) systems. For the anti-jamming purpose, FH sequences are required to have a large linear span. A few optimal sets of FH sequences are available in the literature. However, their sequences have very small linear spans. It is known that an optimal set of FH sequences could be transformed to another optimal set of FH sequences with large linear spans by a power permutation, if the power is chosen properly [see C. Ding and J. Yin, IEEE Trans. Inf. Theory, vol. IT-54, pp. 3741-3745, 2008]. The objective of this paper is to investigate this idea of C. Ding and J. Yin further, and determine the linear span of the FH sequences in the optimal sets obtained by applying a power permutation to some existing optimal sets of FH sequences. © 2006 IEEE. Source


Ciucci F.,Hong Kong University of Science and Technology
Electrochimica Acta | Year: 2013

In this article, several computational tools related to parameter identification and optimal experimental design (OED) in electrochemical impedance spectroscopy (EIS) are introduced. Weighted and iteratively reweighted least squares are revisited and coupled to an optimization procedure, which aims at increasing the confidence in the estimated parameters and/or at shortening the experimental time without compromising the accuracy of the estimates. A sequential algorithm allowing real-time implementation of OED is also developed. A fuel cell electrode system model is used to test and validate the developed methods. Source
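The iteratively reweighted least squares (IRLS) idea revisited above can be sketched generically. The toy below is a hypothetical robust line fit (not the article's EIS-specific formulation): each pass re-solves a weighted least-squares problem with weights derived from the current residuals.

```python
import numpy as np

def irls_line(x, y, iters=50, eps=1e-8):
    # Iteratively reweighted least squares for an L1-style robust line
    # fit: minimize sum |y - (a*x + b)| by repeatedly solving weighted
    # LS problems with weights w_i = 1 / max(|r_i|, eps).
    A = np.column_stack([x, np.ones_like(x)])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
        r = y - A @ coef
        w = 1.0 / np.maximum(np.abs(r), eps)
    return coef  # (slope, intercept)
```

Large residuals receive small weights, so a single outlying measurement barely perturbs the estimate, which is the practical motivation for reweighting in parameter identification.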


Tang X.,Southwest Jiaotong University | Ding C.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2010

Sequences with the optimal autocorrelation property are needed in certain communication systems and in cryptography. In this paper, a construction of balanced quaternary sequences with period N ≡ 2 (mod 4) and optimal autocorrelation value, and a construction of almost balanced binary sequences with period N ≡ 0 (mod 4) and optimal autocorrelation value, are presented. Both constructions generalize earlier ones. © 2006 IEEE. Source
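The optimality above is stated in terms of the periodic autocorrelation function. A minimal sketch, using a perfect binary sequence and an m-sequence as assumed test cases (not the paper's constructions):

```python
def periodic_autocorrelation(s, tau):
    # Periodic autocorrelation at shift tau of a (possibly complex)
    # sequence: C(tau) = sum_t s[t] * conj(s[(t + tau) mod N]).
    N = len(s)
    return sum(s[t] * s[(t + tau) % N].conjugate() for t in range(N))
```

Binary sequences are mapped to {+1, -1} (quaternary ones would map to fourth roots of unity); "optimal" means the out-of-phase values |C(tau)|, tau ≠ 0, are as small as the period allows.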


Wang Q.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2010

Binary sequences with good autocorrelation are needed in many applications. A construction of binary sequences with three-level autocorrelation was recently presented. This construction is generic and powerful in the sense that many classes of binary sequences with three-level autocorrelation could be obtained from any difference set with Singer parameters. The objective of this paper is to determine both the linear complexity and the minimal polynomial of two classes of binary sequences, i.e., the class based on the Singer difference set, and the class based on the GMW difference set. © 2006 IEEE. Source


Ding C.,Hong Kong University of Science and Technology | Tang X.,Southwest Jiaotong University
IEEE Transactions on Information Theory | Year: 2010

Binary sequences with low correlation have applications in communication systems and cryptography. Although binary sequences with optimal autocorrelation have been constructed in the literature, no pair of binary sequences with optimal autocorrelation is known to also have the best possible cross-correlation. In this paper, new bounds on the cross-correlation of binary sequences with optimal autocorrelation are derived, and pairs of binary sequences having optimal autocorrelation and meeting some of these bounds are presented. These new bounds are better than the Sarwate bounds on the cross-correlation of binary sequences with optimal autocorrelation. © 2006 IEEE. Source


Wang Q.,Hong Kong University of Science and Technology | Du X.,Northwest Normal University
IEEE Transactions on Information Theory | Year: 2010

Binary sequences with optimal autocorrelation are needed in many applications. Two constructions of binary sequences with optimal autocorrelation of period N ≡ 0 (mod 4) are investigated. The two constructions are powerful and generic in the sense that many classes of binary sequences with optimal autocorrelation could be obtained from binary sequences with ideal autocorrelation. General results on the minimal polynomials of these binary sequences are derived. Based on the results, both the linear complexities and the minimal polynomials are determined. © 2006 IEEE. Source


Leung S.,Hong Kong University of Science and Technology | Qian J.,Michigan State University
Journal of Computational Physics | Year: 2010

We propose the backward phase flow method to implement the Fourier-Bros-Iagolnitzer (FBI)-transform-based Eulerian Gaussian beam method for solving the Schrödinger equation in the semi-classical regime. The idea of Eulerian Gaussian beams was first proposed in [12]. In this paper we address two crucial computational issues of the Eulerian Gaussian beam method: how to carry out long-time beam propagation, and how to compute beam ingredients rapidly in phase space. By virtue of the FBI transform, we address the first issue by introducing a reinitialization strategy into the Eulerian Gaussian beam framework: essentially, we reinitialize beam propagation by applying the FBI transform to wavefields at intermediate time steps when the beams become too wide. To address the second issue, inspired by the original phase flow method, we propose the backward phase flow method, which allows us to compute beam ingredients rapidly. Numerical examples demonstrate the efficiency and accuracy of the proposed algorithms. © 2010 Elsevier Inc. Source


Li X.S.,Hong Kong University of Science and Technology | Dafalias Y.F.,University of California at Davis | Dafalias Y.F.,National Technical University of Athens
Journal of Engineering Mechanics | Year: 2012

An Anisotropic Critical State Theory (ACST) for granular media is presented, which accounts for the role of anisotropic fabric at critical state. It enhances the requirements of critical values for the stress and void ratio of the classical Critical State Theory (CST) with an additional requirement of a critical value for an appropriate measure of fabric anisotropy. A fabric tensor and its evolution toward a critical value, norm-wise and direction-wise, are introduced, motivated by micromechanical and experimental studies. On the basis of a scalar-valued fabric-anisotropy variable relating the evolving fabric tensor to the loading direction, a dilatancy state line is defined in the void ratio-pressure plane, which determines a dilatancy state parameter ζ that characterizes the contracting or dilating trend of the current state. When the fabric-anisotropy variable reaches its critical state value, the dilatancy state line becomes identical to the critical state line, and ζ becomes identical to the well-known state parameter ψ. An immediate corollary is the uniqueness of the critical state line, for which a thermodynamic proof is provided on the basis of the Gibbs condition. Static liquefaction is obtained when ζ = 0, with the stress ratio reaching its critical value but not the void ratio and the fabric. Simulations of anisotropic material response by a triaxial model are used to illustrate the effectiveness of the novel ACST. © 2012 American Society of Civil Engineers. Source


Zhang Q.,Princeton University | Austin R.H.,Princeton University | Austin R.H.,Hong Kong University of Science and Technology
Annual Review of Condensed Matter Physics | Year: 2012

It is a common mistake to view cancer as a single disease with a single possible cure that we simply have not found yet. In reality, cancer takes on many forms that share a common symptom: uncontrolled cell growth and the successful invasion of cancer colonies into remote regions of the body. The key reason why we may never be able to defeat cancer may lie in the extreme heterogeneity of the population of cells in a tumor: there is no one magic bullet. In this review we try to show how the developing field of the physics of biological heterogeneity can help us understand and quantify the emergent heterogeneity that makes cancer such a fundamental puzzle. Copyright © 2012 by Annual Reviews. All rights reserved. Source


Chen A.,Utah State University | Ryu S.,Utah State University | Xu X.,Hong Kong University of Science and Technology | Choi K.,Ajou University
Computers and Operations Research | Year: 2014

The paired combinatorial logit (PCL) model is one of the recent extended logit models adapted to resolve the route-overlapping problem in route choice, while keeping the analytical tractability of the logit choice probability function. However, developing efficient algorithms for solving the PCL model on congested, realistic networks is quite challenging, since the problem involves high-dimensional solution variables as well as a complex objective function. In this paper, we examine the computation and application of the PCL stochastic user equilibrium (SUE) problem under congested and realistic networks. Specifically, we develop an improved path-based partial linearization algorithm for solving the PCL SUE problem, incorporating recent advances in line search strategies to enhance the computational efficiency required to determine a suitable stepsize that guarantees convergence. A real network in the city of Winnipeg is used to examine the computational efficiency of the proposed algorithm and the robustness of various line search strategies. In addition, to assess the practical implications of the PCL SUE model, we investigate how effectively the PCL model handles the effects of congestion, stochasticity, and similarity in comparison with the multinomial logit stochastic traffic equilibrium problem and the deterministic traffic equilibrium problem. © 2013 Elsevier Ltd. Source
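The analytically tractable PCL choice probability mentioned above can be sketched directly in Chu's paired combinatorial logit form; the utilities and similarity indices below are hypothetical illustrations, not the paper's network data.

```python
import math
from itertools import combinations

def pcl_probabilities(V, sigma):
    # Paired combinatorial logit choice probabilities (Chu's form).
    # V: route utilities; sigma[i][j]: similarity index in [0, 1).
    n = len(V)
    denom = 0.0
    pair_term = {}
    for i, j in combinations(range(n), 2):
        s = sigma[i][j]
        yi = math.exp(V[i] / (1 - s))
        yj = math.exp(V[j] / (1 - s))
        pair_term[(i, j)] = (s, yi, yj)
        denom += (1 - s) * (yi + yj) ** (1 - s)
    P = [0.0] * n
    for (i, j), (s, yi, yj) in pair_term.items():
        common = (1 - s) * (yi + yj) ** (-s) / denom
        P[i] += yi * common
        P[j] += yj * common
    return P
```

With all similarity indices set to zero, the pair terms collapse and the PCL probabilities reduce exactly to the multinomial logit, which is the comparison made in the paper.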


DeJong M.J.,University of Cambridge | Dimitrakopoulos E.G.,Hong Kong University of Science and Technology
Earthquake Engineering and Structural Dynamics | Year: 2014

Predicting the rocking response of structures to ground motion is important for the assessment of existing structures, which may be vulnerable to uplift and overturning, as well as for designs that employ rocking as a means of seismic isolation. However, the majority of studies utilize a single rocking block to characterize rocking motion. In this paper, a methodology is proposed to derive equivalence between the single rocking block and various rocking mechanisms, yielding a set of fundamental rocking parameters. Specific structures that have exact dynamic equivalence with a single rocking block are first reviewed. Subsequently, approximate equivalence between single- and multiple-block mechanisms is achieved through local linearization of the relevant equations of motion. The approximation error associated with linearization is quantified for three essential mechanisms, providing a measure of the confidence with which the proposed methodology can be applied. © 2014 John Wiley & Sons, Ltd. Source
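As a hedged numerical sketch of the single-block dynamics referenced above, the toy below integrates Housner's linearized rocking equation for free rocking, with an assumed kinetic-energy restitution factor r applied at each impact (all parameters are illustrative; the paper's equivalence derivations are not reproduced).

```python
import math

def rock_free(theta0, alpha, p, r, dt=1e-4, t_max=10.0):
    # Free rocking of a linearized Housner block:
    #   theta'' = p^2 * (theta - alpha * sign(theta)),
    # where alpha is the block slenderness angle and p the frequency
    # parameter. At each impact (theta crosses 0) the angular velocity
    # is reduced by sqrt(r), r being the kinetic-energy restitution.
    theta, omega, prev_omega, t = theta0, 0.0, 0.0, 0.0
    peaks = []                          # successive rocking amplitudes
    while t < t_max:
        sgn = 1.0 if theta >= 0 else -1.0
        omega += dt * p * p * (theta - alpha * sgn)
        new_theta = theta + dt * omega
        if theta * new_theta < 0:       # impact: pivot switches corners
            omega *= math.sqrt(r)
        if prev_omega * omega < 0:      # turning point: record amplitude
            peaks.append(abs(theta))
        prev_omega, theta = omega, new_theta
        t += dt
    return peaks
```

Starting from rest below the slenderness angle, the block rocks back and forth with monotonically decaying amplitude, the qualitative behavior that equivalent-block parameters aim to reproduce.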


Chan A.L.S.,Hong Kong University of Science and Technology
Building and Environment | Year: 2011

Building energy simulation software is a useful tool for the sophisticated design and evaluation of the thermal performance of buildings. Successful thermal and energy simulation requires hourly weather data such as dry-bulb air temperature, relative humidity, solar radiation, and wind speed. Urban areas nowadays face the problem of the urban heat island, which causes them to have higher air temperatures than surrounding rural regions. Since the weather datasets currently used in building simulation software mainly come from weather stations located in remote and rural areas, the impact of the urban heat island on the thermal and energy performance of buildings may not be effectively reflected. This paper reports an approach to construct a modified typical meteorological weather file that takes the urban heat island effect in the summer season into account. Field measurements were carried out in the summer months and the corresponding urban heat island intensities were determined. With a morphing algorithm, an existing typical meteorological year weather file was modified. An office building and a typical residential flat were modeled with the building energy simulation program EnergyPlus, and simulations were conducted using both the existing and the modified typical meteorological year weather files. It was found that the urban heat island effect caused around a 10% increase in air-conditioning demand in both cases. The implications of this finding and further work are also discussed in this paper. © 2011 Elsevier Ltd. Source
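A minimal sketch of the "shift" step of such a morphing procedure, assuming a simple additive monthly urban-heat-island intensity (the paper's actual algorithm and measured intensities are not reproduced here):

```python
def morph_temperatures(hourly_temp, hourly_month, uhi_intensity):
    # Simplified 'shift' morphing: add the measured monthly urban heat
    # island intensity (deg C) to each hourly dry-bulb temperature.
    # Months absent from uhi_intensity are left unmodified.
    return [t + uhi_intensity.get(m, 0.0)
            for t, m in zip(hourly_temp, hourly_month)]
```

Applied to a typical meteorological year file, only the summer months carrying a measured intensity would be shifted, leaving the rest of the year untouched.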


Hong Y.,Zhejiang University | Soomro M.A.,University of Sindh | Ng C.W.W.,Hong Kong University of Science and Technology
Computers and Geotechnics | Year: 2015

The development of underground transportation systems often involves twin tunnels, which may encounter existing pile groups during construction. Since previous studies have mainly focused on the effects of single tunnelling on single piles, the settlement and load transfer mechanism of a pile group subjected to twin tunnelling are not well investigated or understood. To address these two issues, two three-dimensional centrifuge tests were carried out in this study to simulate side-by-side twin tunnels (excavated one after the other on both sides of the pile group) at two critical locations relative to the pile group, namely next to the pile group (Test TT) and below its toe (Test BB). Moreover, numerical back-analyses of the centrifuge tests were conducted using a hypoplastic model that takes small-strain stiffness into account. Both measured and computed results show that the induced tilting of the pile group in Test TT is significantly larger than that in Test BB, with a maximum percentage difference of 120%. On the other hand, a slightly smaller (about 13%) settlement of the pile group is induced in Test TT compared with Test BB. This is because the pile group in Test TT is only partially located within the major influence zone of tunnelling-induced ground settlement, while the entire pile group in Test BB is bounded by that zone. Two distinct load transfer mechanisms due to twin tunnelling are identified: the load in the pile group in Test TT transfers downwards from the pile shaft to the pile toe, whereas the load in Test BB transfers upwards from the pile toe to the pile shaft. Apart from load transfer along each pile, load redistribution also occurs among piles during twin tunnelling. In both tests, the axial load at the pile head reduces only at the pile closest to the advancing tunnel face, and the reduction is redistributed to the other three piles.
The load redistribution among piles results in a maximum increase of axial force of 10% in Test TT. © 2014 Elsevier Ltd. Source


McKay M.R.,Hong Kong University of Science and Technology | Collings I.B.,CSIRO | Tulino A.M.,Alcatel - Lucent
IEEE Transactions on Information Theory | Year: 2010

This paper investigates the achievable sum rate of multiple-input multiple-output (MIMO) wireless systems employing linear minimum mean-squared error (MMSE) receivers. We present a new analytic framework which exploits an interesting connection between the achievable sum rate with MMSE receivers and the ergodic mutual information achieved with optimal receivers. This simple but powerful result enables the vast prior literature on ergodic MIMO mutual information to be directly applied to the analysis of MMSE receivers. The framework is particularized to various Rayleigh and Rician channel scenarios to yield new exact closed-form expressions for the achievable sum rate, as well as simplified expressions in the asymptotic regimes of high and low signal-to-noise ratios (SNRs). These expressions lead to the discovery of key insights into the performance of MIMO MMSE receivers under practical channel conditions. © 2009 IEEE. Source
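The connection exploited above between the MMSE sum rate and log-det mutual information can be illustrated numerically: for y = Hx + n with per-stream power snr and unit-variance noise, the per-stream MMSE output SINRs satisfy log2(1 + SINR_k) = I(H) − I(H_k), where H_k is the channel with column k removed. A sketch for a fixed channel realization (illustrative parameters):

```python
import numpy as np

def mmse_sinrs(H, snr):
    # Per-stream output SINRs of a linear MMSE receiver:
    #   SINR_k = 1 / [(I + snr * H^H H)^{-1}]_{kk} - 1.
    n_t = H.shape[1]
    G = np.linalg.inv(np.eye(n_t) + snr * H.conj().T @ H)
    return 1.0 / np.real(np.diag(G)) - 1.0

def mutual_information(H, snr):
    # Log-det mutual information of the MIMO channel, in bits.
    n_r = H.shape[0]
    return np.log2(np.real(np.linalg.det(
        np.eye(n_r) + snr * H @ H.conj().T)))
```

This per-realization identity is what lets the paper's framework reuse ergodic mutual-information results to analyze MMSE receivers.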


Chang H.,Tohoku University | Wu H.,Tohoku University | Wu H.,Hong Kong University of Science and Technology
Advanced Functional Materials | Year: 2013

Graphene, a two-dimensional, single-atom-thick carbon crystal arranged in a honeycomb lattice, shows extraordinary electronic, mechanical, thermal, optical, and optoelectronic properties, and has great potential in next-generation electronics, optics, and optoelectronics. Graphene and graphene-based nanomaterials have witnessed very fast development in both fundamental and practical aspects of optics and optoelectronics since 2008. In this Feature Article, the synthesis techniques and main electronic and optical properties of graphene-based nanomaterials are introduced with a comprehensive view. Recent progress of graphene-based nanomaterials in optical and optoelectronic applications is then reviewed, including transparent conductive electrodes, photodetectors and phototransistors, photovoltaics and light-emitting devices, saturable absorbers for ultrafast lasers, and biological and photocatalytic applications. In the final section, perspectives are given and future challenges in optical and optoelectronic applications of graphene-based nanomaterials are addressed. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. Source


Han Y.,Hong Kong University of Science and Technology
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2010

The six-vertex model is mapped to three-dimensional sphere stacks, with different boundary conditions corresponding to different containers. The shape of the container provides a qualitative visualization of the boundary effect. Based on the sphere-stacking picture, we map the phase spaces of six-vertex models to discrete networks. A node in the network represents a state of the system, and an edge between two nodes represents a zero-energy spin flip, which corresponds to adding or removing a sphere. The network analysis shows that the phase spaces of systems with different boundary conditions share some common features. We derive several formulas for the number and sizes of the disconnected phase-space subnetworks under periodic boundary conditions. The sphere stacking provides new challenges in combinatorics and may shed light on some two-dimensional models. © 2010 The American Physical Society. Source


Pang J.-S.,University of Illinois at Urbana - Champaign | Scutari G.,University of Illinois at Urbana - Champaign | Palomar D.P.,Hong Kong University of Science and Technology | Facchinei F.,University of Rome La Sapienza
IEEE Transactions on Signal Processing | Year: 2010

The concept of cognitive radio (CR) has recently received great attention from the research community as a promising paradigm to achieve efficient use of the frequency resource by allowing the coexistence of licensed (primary) and unlicensed (secondary) users in the same bandwidth. In this paper, we propose a novel Nash equilibrium (NE) problem to model concurrent communications of cognitive secondary users who compete against each other to maximize their information rate. The formulation contains constraints on the transmit power (and possibly spectral masks) as well as aggregate interference tolerable at the primary users' receivers. The coupling among the strategies of the players due to the interference constraints presents a new challenge for the analysis of this class of Nash games that cannot be addressed using the game theoretical models proposed in the literature. For this purpose, we need the framework given by the more advanced theory of finite-dimensional variational inequalities (VI). This provides us with all the mathematical tools necessary to analyze the proposed NE problem (e.g., existence and uniqueness of the solution) and to devise alternative distributed algorithms along with their convergence properties. © 2010 IEEE. Source


Hua X.,Hong Kong University of Science and Technology
International Journal of Industrial Organization | Year: 2012

This paper examines the right of first offer, which requires a seller to bargain with the contracted buyer before subsequent buyers arrive. The contract also prevents the seller from selling his unique asset to subsequent buyers at a price below what he offers to the contracted buyer. The right of first offer makes the seller less aggressive in bargaining with the contracted buyer, who is privately informed about his valuation. Such a contract can reduce inter-temporal misallocation, in which a subsequent buyer gets the asset when the contracted buyer has higher valuation. But it also may cause misallocation in which the contracted buyer gets the asset when subsequent buyers have higher valuations. Overall, whether the right of first offer can increase the joint surplus for the seller and the contracted buyer, as well as social welfare, depends on the contracted buyer's renegotiation power and the distribution of the buyers' valuations. This paper also discusses the differences between the right of first offer and the most-favored-customer clause. © 2012 Elsevier B.V. All rights reserved. Source


Glatz J.F.C.,Maastricht University | Renneberg R.,Hong Kong University of Science and Technology
Clinical Lipidology | Year: 2014

Suspected acute coronary syndrome (ACS) represents a substantial healthcare problem and is responsible for a large proportion of emergency department admissions. Better triaging of patients with suspected ACS is needed to facilitate early initiation of appropriate therapy in patients with acute myocardial infarction (AMI) and to exclude low-risk patients who can safely be sent home, thereby limiting healthcare costs. H-FABP has been established as the earliest available plasma marker for myocardial injury. In this review we evaluate the clinical utility of H-FABP for suspected ACS. H-FABP shows added value in addition to cardiac troponin, especially in the early hours after onset of symptoms. Moreover, H-FABP identifies patients at increased risk for future cardiac events. It is concluded that measuring H-FABP along with troponin shortly after onset of symptoms improves risk stratification of patients suspected of having ACS in a cost-effective manner. © 2014 Future Medicine Ltd. Source


Cheng J.C.,Hong Kong University of Science and Technology
Procedia Engineering | Year: 2011

There is increasing demand for the measurement and accounting of the environmental and carbon footprint produced by corporate companies. In construction processes, environmental impacts accumulate along supply chains from raw material extraction, manufacturing, distribution, installation, maintenance, to demolition and disposal. The calculation of the environmental and carbon footprint of a construction project considers not only the emissions from contractors on site, but also those from the participating members along the supply chains. However, construction supply chains are characterized by their high fragmentation. It is not an easy task to measure and collect loosely distributed footprint data among numerous supply chain members. In addition, due to the project-based temporary nature of construction supply chains, it is unlikely for project participants to work together long enough on a project to build enough trust and to share information willingly. Therefore, a flexible, secure, and scalable support system is needed to measure and manage environmental and carbon footprint data in construction supply chains. This paper presents a web service collaborative framework for measuring, monitoring, and integrating environmental and carbon footprint data in construction supply chains. Web services technology is used because its "plug-and-play" capability allows flexible and quick system reconfiguration, which is desirable for communication and collaboration in construction supply chains. In the framework, each process element and footprint calculation is represented and delivered as individual web service units, which can be reused and integrated over standard web service protocols. This paper also presents an illustrative example to demonstrate the implementation of the web service framework. Source


Zhuo H.H.,Sun Yat Sen University | Yang Q.,Hong Kong University of Science and Technology
Artificial Intelligence | Year: 2014

Applying learning techniques to acquire action models is an area of intense research interest. Most previous work in this area has assumed that a significant amount of training data is available in the planning domain of interest. However, it is often difficult to acquire sufficient training data to ensure that the learnt action models are of high quality. In this paper, we explore a novel algorithm framework, called TRAMP, to learn action models with limited training data in a target domain by transferring as much of the available information from other domains (called source domains) as possible to help the learning task, assuming action models in source domains can be transferred to the target domain. TRAMP transfers knowledge from source domains by first building structure mappings between source and target domains, and then exploiting extra knowledge from Web search to bridge and transfer knowledge from the sources. Specifically, TRAMP first encodes the training data with a set of propositions and formulates the transferred knowledge as a set of weighted formulas. It then learns action models for the target domain that best explain the set of propositions and the transferred knowledge. We empirically evaluate TRAMP in different settings to assess its advantages and disadvantages in six planning domains, including four International Planning Competition (IPC) domains and two synthetic domains. © 2014 Published by Elsevier B.V. Source


Zhang C.,Hong Kong University of Science and Technology | Jacobsen H.-A.,Kings College
IEEE Transactions on Software Engineering | Year: 2012

Inspired by our past manual aspect mining experiences, this paper describes a probabilistic random walk model to approximate the process of discovering crosscutting concerns (CCs) in the absence of domain knowledge about the investigated application. The random walks are performed on concept graphs extracted from the program sources to calculate ranking metrics for each of the program elements. We rank all program elements based on these metrics and use a threshold to produce a set of candidates that represent crosscutting concerns. We implemented the algorithm as the Prism CC miner (PCM) and evaluated PCM on Java applications ranging from a small-scale drawing application to a medium-sized middleware application and a large-scale enterprise application server. Our evaluation shows that PCM is able to produce results comparable (95 percent accuracy for the top 125 candidates) to the manual mining effort, and that PCM is significantly more effective than the conventional approach. © 2012 IEEE. Source
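The damped random-walk ranking described above can be sketched as a PageRank-style power iteration on a small directed concept graph. This is a generic illustration; PCM's actual metrics and concept-graph construction are not reproduced here.

```python
import numpy as np

def random_walk_rank(adj, damping=0.85, iters=200):
    # Stationary scores of a damped random walk on a directed graph:
    # power iteration on the column-stochastic transition matrix, with
    # dangling nodes (no out-edges) jumping uniformly to all nodes.
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    P = (adj / np.where(out == 0, 1, out)).T   # column-stochastic
    P[:, (out == 0).ravel()] = 1.0 / n         # dangling-node columns
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = damping * (P @ r) + (1.0 - damping) / n
    return r
```

Elements referenced from many parts of the graph accumulate high scores, and thresholding the resulting ranking yields the candidate set.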


Lin Z.,Hong Kong University of Science and Technology
Accounts of Chemical Research | Year: 2010

Computational and theoretical chemistry provide fundamental insights into the structures, properties, and reactivities of molecules. As a result, theoretical calculations have become indispensable in various fields of chemical research and development. In this Account, we present our research in the area of computational transition metal chemistry, using examples to illustrate how theory impacts our understanding of experimental results and how close collaboration between theoreticians and experimental chemists can be mutually beneficial. We begin by examining the use of computational chemistry to elucidate the details of some unusual chemical bonds. We consider the three-center, two-electron bonding in titanocene σ-borane complexes and the five-center, four-electron bonding in a rhodium-bismuth complex. The bonding in metallabenzene complexes is also examined. In each case, theoretical calculations provide particular insight into the electronic structure of the chemical bonds. We then give an example of how theoretical calculations aided the structural determination of a κ2-N,N chelate ruthenium complex formed upon heating an intermediate benzonitrile-coordinated complex. An initial X-ray diffraction structure proposed on the basis of a reasonable mechanism appeared to fit well, with an apparently acceptable R value of 0.0478. But when DFT calculations were applied, the optimized geometry differed significantly from the experimental data. By combining experimental and theoretical outlooks, we posited a new structure. Remarkably, a re-refining of the X-ray diffraction data based on the new structure resulted in a slightly lower R value of 0.0453. We further examine the use of computational chemistry in providing new insight into C-H bond activation mechanisms and in understanding the reactivity properties of nucleophilic boryl ligands, addressing experimental difficulties with calculations and vice versa. 
Finally, we consider the impact of theoretical insights in three very specific experimental studies of chemical reactions, illustrating how theoretical results prompt further experimental studies: (i) diboration of aldehydes catalyzed by copper(I) boryl complexes, (ii) ruthenium-catalyzed C-H amination of arylazides, and (iii) zinc reduction of a vinylcarbyne complex. The concepts and examples presented here are intended for nonspecialists, particularly experimentalists. Together, they illustrate some of the achievements that are possible with a fruitful union of experiment and theory. © 2010 American Chemical Society. Source


Mollon G.,CNRS Contacts and Structural Mechanics Laboratory | Zhao J.,Hong Kong University of Science and Technology
Computer Methods in Applied Mechanics and Engineering | Year: 2014

The inability to simulate the grain shapes of granular media accurately has been an outstanding issue preventing particle-based methods, such as the discrete element method, from providing meaningful information for relevant scientific and engineering applications. In this study we propose a novel statistical method to generate virtual 3D particles with realistically complex yet controllable shapes, and to pack them effectively for use in discrete-element modelling of granular materials. We combine the theory of random fields for spherical topology with a Fourier-shape-descriptor-based method for particle generation, and develop rigorous solutions to resolve the mathematical difficulties arising from linking the two. The generated particles are then packed within a prescribed container by a cell-filling algorithm based on Constrained Voronoi Tessellation. Two examples demonstrate the excellent control and flexibility that the proposed method offers in reproducing key characteristics such as shape descriptors (aspect ratio, roundness, sphericity, presence of facets, etc.), size distribution, and solid fraction. The study provides a general and robust framework for the effective characterization and packing of granular particles with complex shapes for discrete modelling of granular media. © 2014 Elsevier B.V. Source
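The Fourier-shape-descriptor idea mentioned above can be sketched in 2D (a simplified stand-in for the paper's 3D spherical-topology construction; the descriptor values below are illustrative): each harmonic amplitude perturbs the radius of a base circle.

```python
import numpy as np

def fourier_particle(descriptors, phases, n_points=360, r0=1.0):
    # 2D particle contour from Fourier shape descriptors:
    #   r(theta) = r0 * (1 + sum_n D_n * cos(n*theta + phi_n)),
    # with harmonics starting at n = 2 (n = 0 fixes size, n = 1 would
    # merely translate the contour). Returns (x, y) contour points.
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = np.full(n_points, r0)
    for n, (d, phi) in enumerate(zip(descriptors, phases), start=2):
        r += r0 * d * np.cos(n * theta + phi)
    return r * np.cos(theta), r * np.sin(theta)
```

Low-order descriptors control elongation and triangularity, while higher-order ones add angularity and roughness, which is how such methods tune aspect ratio, roundness, and related shape measures.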


Yan W.M.,University of Hong Kong | Li X.S.,Hong Kong University of Science and Technology
Geotechnique | Year: 2011

This paper presents a thermodynamically consistent constitutive model for natural soils with bonds. In the model, the free energy (the internal energy available to do work) is contributed partly by so-called frozen or locked energy, whose evolution is assumed to be homogeneously related to the irrecoverable deformation. During loading, the bonds existing in the natural soil not only boost the dissipation rate but also liberate a certain amount of historically accumulated locked energy. These effects, however, diminish as loading proceeds and the bonds are destroyed. The novel aspect of the present model is that it accommodates both the Mohr-Coulomb and critical-state failure modes, and the two modes are unified through the evolution law of a thermodynamic force associated with the locked bonding energy. Compared with the classical Cam-clay models, the model contains two additional material constants: one proposed by Collins & Kelly to improve the shape of the yield surface, and the other dedicated to bonding evolution. The calibration procedure for the material parameters is provided. The capability of the model is demonstrated by a series of simulations of a hypothetical bonded soil under various triaxial loading paths, and the model response is also compared with representative test results from the literature. Source


Ding D.,National University of Singapore | Li K.,Institute of Materials Research and Engineering of Singapore | Liu B.,National University of Singapore | Liu B.,Institute of Materials Research and Engineering of Singapore | And 3 more authors.
Accounts of Chemical Research | Year: 2013

Fluorescent bioprobes are powerful tools for analytical sensing and optical imaging, which allow direct visualization of biological analytes at the molecular level and offer useful insights into complex biological structures and processes. The sensing and imaging sensitivity of a bioprobe is determined by the brightness and contrast of its fluorescence before and after analyte binding. Emission from a fluorophore is often quenched at high concentration or in the aggregate state, a phenomenon notoriously known as concentration quenching or aggregation-caused quenching (ACQ). The ACQ effect limits the label-to-analyte ratio and forces researchers to use very dilute solutions of fluorophores. It compels many probes to operate in a fluorescence "turn-off" mode with a narrow scope of practical applications. The unique aggregation-induced emission (AIE) process offers a straightforward solution to the ACQ problem. Typical AIE fluorogens are characterized by their propeller-shaped, rotor-like structures, which undergo low-frequency torsional motions as isolated molecules and emit very weakly in solution. Their aggregates show strong fluorescence, mainly due to the restriction of intramolecular rotations in the aggregate state. This fascinating attribute of AIE fluorogens provides a new platform for the development of fluorescence light-up molecules and photostable nanoaggregates for specific analyte detection and imaging. In this Account, we review our recent AIE work to highlight the utility of the AIE effect in the development of new fluorescent bioprobes, which allow the use of highly concentrated fluorogens for biosensing and imaging. The simple design and fluorescence turn-on feature of the molecular AIE bioprobes offer direct visualization of specific analytes and biological processes in aqueous media with higher sensitivity and better accuracy than traditional fluorescence turn-off probes.
The AIE dot-based bioprobes with different formulations and surface functionalities offer advantages over quantum dots and small-molecule dyes, such as large absorptivity, high luminosity, excellent biocompatibility, freedom from random blinking, and strong resistance to photobleaching. These features enable cancer cell detection, long-term cell tracing, and tumor imaging in a noninvasive, high-contrast manner. Recent research has significantly expanded the scope of biological applications of AIE fluorogens and offers new strategies for fluorescent bioprobe design. We anticipate that future development of AIE bioprobes will combine one- or multiphoton fluorescence with other modalities (e.g., magnetic resonance imaging) or functionalities (e.g., therapy) to fully demonstrate their potential as a new generation of theranostic reagents. In parallel, advances in molecular biology will provide more specific bioreceptors, which will enable the development of next-generation AIE bioprobes with high selectivity and sensitivity for molecular sensing and imaging. © 2013 American Chemical Society. Source


Hong L.J.,Hong Kong University of Science and Technology | Yang Y.,University of California at Irvine | Zhang L.,Dalian University of Technology
Operations Research | Year: 2011

When there is parameter uncertainty in the constraints of a convex optimization problem, it is natural to formulate the problem as a joint chance constrained program (JCCP), which requires that all constraints be satisfied simultaneously with a given large probability. In this paper, we propose to solve the JCCP by a sequence of convex approximations. We show that the solutions of the sequence of approximations converge to a Karush-Kuhn-Tucker (KKT) point of the JCCP under a certain asymptotic regime. Furthermore, we propose to use a gradient-based Monte Carlo method to solve the sequence of convex approximations. © 2011 INFORMS. Source
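What a joint chance constraint requires can be made concrete with a toy Monte Carlo check (the constraint data below are made up for illustration; the paper's contribution is the sequence of convex approximations and the KKT convergence analysis, not this brute-force evaluation):

```python
import numpy as np

rng = np.random.default_rng(1)

def joint_satisfaction_prob(x, n_samples=100_000):
    """Monte Carlo estimate of P(all constraints hold) for a toy JCCP:
    two linear constraints a_i^T x <= xi_i with random right-hand sides.
    A JCCP demands the *joint* event, not each constraint separately."""
    xi = rng.normal(loc=[10.0, 12.0], scale=[1.0, 1.5], size=(n_samples, 2))
    g1 = 2.0 * x[0] + 1.0 * x[1] - xi[:, 0]   # feasible iff g1 <= 0
    g2 = 1.0 * x[0] + 3.0 * x[1] - xi[:, 1]   # feasible iff g2 <= 0
    return float(np.mean((g1 <= 0) & (g2 <= 0)))

p = joint_satisfaction_prob(np.array([2.0, 1.5]))
```

A JCCP would then require `p >= 1 - alpha` for a small `alpha`; the difficulty the paper addresses is that this joint probability is generally neither convex in x nor available in closed form.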


Liu G.,City University of Hong Kong | Hong L.J.,Hong Kong University of Science and Technology
Operations Research | Year: 2011

The Greeks are the derivatives (also known as sensitivities) of option prices with respect to market parameters. They play an important role in financial risk management. Among the many Monte Carlo methods of estimating the Greeks, the classical pathwise method requires only the pathwise information that is directly observable from simulation and is generally easier to implement than many other methods. However, the classical pathwise method is generally not applicable to the Greeks of options with discontinuous payoffs or to second-order Greeks. In this paper, we generalize the classical pathwise method to allow discontinuity in the payoffs. We show how to apply the new pathwise method to the first- and second-order Greeks and propose kernel estimators that require little analytical effort and are very easy to implement. The numerical results show that our estimators work well for practical problems. © 2011 INFORMS. Source
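The failure of the classical pathwise method, and the kernel idea, can be sketched for a cash-or-nothing digital call under Black-Scholes: the payoff 1{S_T > K} has pathwise derivative zero almost everywhere, yet smoothing the discontinuity with a kernel recovers the delta. The bandwidth h and all parameter values below are illustrative, and this is a sketch in the spirit of the paper's estimators rather than their exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def digital_delta_kernel(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                         n_paths=400_000, h=0.5):
    """Kernel-smoothed pathwise delta of a digital call under GBM.
    Idea: d/dS0 E[1{ST>K}] = E[delta(ST - K) * dST/dS0], with the Dirac
    delta replaced by a Gaussian kernel of bandwidth h; dST/dS0 = ST/S0."""
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    kernel = np.exp(-0.5 * ((ST - K) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return np.exp(-r * T) * np.mean(kernel * ST / S0)

def digital_delta_exact(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0):
    """Closed-form benchmark: delta = e^{-rT} * phi(d2) / (S0 * sigma * sqrt(T))."""
    from math import log, sqrt, exp, pi
    d2 = (log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return exp(-r * T) * exp(-0.5 * d2 * d2) / (sqrt(2.0 * pi) * S0 * sigma * sqrt(T))

est = digital_delta_kernel()
exact = digital_delta_exact()
```

Note the classical pathwise estimator would average the derivative of the payoff itself, which is zero for almost every path, so it estimates zero; the kernel version trades a small smoothing bias for a usable estimator.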


Zhang J.,Hong Kong University of Science and Technology
Queueing Systems | Year: 2013

We study many-server queues with abandonment in which customers have general service and patience time distributions. The dynamics of the system are modeled using measure-valued processes, to keep track of the residual service and patience times of each customer. Deterministic fluid models are established to provide a first-order approximation for this model. The fluid model solution, which is proved to uniquely exist, serves as the fluid limit of the many-server queue, as the number of servers becomes large. Based on the fluid model solution, first-order approximations for various performance quantities are proposed. © 2012 Springer Science+Business Media, LLC. Source
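For the special case of exponential service and patience times (M/M/n+M), the fluid model collapses to a one-dimensional ODE that can be integrated directly; this sketch with made-up parameters illustrates only the first-order approximation, not the measure-valued machinery the paper needs for general distributions.

```python
def fluid_path(lam=110.0, n=100, mu=1.0, theta=0.5, T=100.0, dt=0.01):
    """Fluid approximation of an M/M/n+M queue with abandonment
    (exponential special case of general many-server fluid models):
        x'(t) = lam - mu * min(x, n) - theta * max(x - n, 0),
    where x(t) is the fluid content, lam the arrival rate, mu the
    service rate, n the number of servers, theta the abandonment rate."""
    x = 0.0
    for _ in range(int(T / dt)):   # forward Euler integration
        x += dt * (lam - mu * min(x, n) - theta * max(x - n, 0.0))
    return x

x_inf = fluid_path()
```

In the overloaded regime (lam > n * mu) the fluid content settles at the equilibrium x* = n + (lam - n*mu)/theta, here 100 + 10/0.5 = 120, from which first-order approximations of queue length and abandonment fraction follow.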


Lu X.,Fudan University | Ba S.,University of Connecticut | Huang L.,Fudan University | Feng Y.,Hong Kong University of Science and Technology
Information Systems Research | Year: 2013

The value of promotional marketing and word-of-mouth (WOM) is well recognized, but few studies have compared the effects of these two types of information in online settings. This research examines the effect of marketing efforts and online WOM on product sales by measuring the effects of online coupons, sponsored keyword search, and online reviews. It aims to understand the relationship between firms' promotional marketing and WOM in the context of a third-party review platform. Using a three-year panel data set from one of the largest restaurant review websites in China, the study finds that both online promotional marketing and reviews have a significant impact on product sales, which suggests that promotional marketing on third-party review platforms is still an effective marketing tool. This research further explores the interaction effects between WOM and promotional marketing when the two types of information coexist. The results demonstrate a substitute relationship between WOM volume and coupon offerings, but a complementary relationship between WOM volume and keyword advertising. © 2013 Informs. Source


Chang K.-H.,National Tsing Hua University | Hong L.J.,Hong Kong University of Science and Technology | Wan H.,Purdue University
INFORMS Journal on Computing | Year: 2013

Response surface methodology (RSM) is a widely used method for simulation optimization. Its strategy is to explore small subregions of the decision space in succession instead of attempting to explore the entire decision space in a single attempt. This method is especially suitable for complex stochastic systems where little knowledge is available. Although RSM is popular in practice, its current applications in simulation optimization treat simulation experiments the same as real experiments. However, the unique properties of simulation experiments make traditional RSM inappropriate in two important aspects: (1) it is not automated; human involvement is required at each step of the search process; (2) RSM is a heuristic procedure without convergence guarantee; the quality of the final solution cannot be quantified. We propose the stochastic trust-region response-surface method (STRONG) for simulation optimization in an attempt to solve these problems. STRONG combines RSM with the classic trust-region method developed for deterministic optimization to eliminate the need for human intervention and to achieve the desired convergence properties. The numerical study shows that STRONG can outperform the existing methodologies, especially for problems that have grossly noisy response surfaces, and its computational advantage becomes more obvious when the dimension of the problem increases. © 2013 INFORMS. Source
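The flavor of combining a response-surface fit with a trust region can be conveyed by a one-dimensional toy (a simplification, not the STRONG algorithm itself: STRONG's staged designs, hypothesis tests, and convergence machinery are omitted, and every constant below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_f(x, sigma=0.1):
    """Noisy response surface with true minimum at x = 2."""
    return (x - 2.0) ** 2 + sigma * rng.standard_normal()

def trust_region_rsm(x0=-5.0, delta=1.0, iters=40, reps=30):
    """Toy trust-region search on a noisy response: fit a local quadratic
    by regression over design points in the region, step to its minimizer
    within the region, and grow/shrink the region by comparing averages."""
    x = x0
    for _ in range(iters):
        xs = x + delta * np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
        ys = np.array([np.mean([noisy_f(xi) for _ in range(reps)]) for xi in xs])
        c2, c1, c0 = np.polyfit(xs, ys, 2)          # local quadratic model
        if c2 > 1e-8:
            cand = float(np.clip(-c1 / (2.0 * c2), x - delta, x + delta))
        else:
            cand = x - delta * np.sign(c1)          # model not convex: go downhill
        f_x = np.mean([noisy_f(x) for _ in range(reps)])
        f_c = np.mean([noisy_f(cand) for _ in range(reps)])
        if f_c < f_x:                               # crude acceptance/ratio test
            x, delta = cand, min(delta * 1.5, 4.0)
        else:
            delta *= 0.5
    return x

x_final = trust_region_rsm()
```

The automation point in the abstract is visible here: no human picks the next subregion; the region size adapts from the observed (noisy) improvement.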


Zhang X.,Hong Kong University of Science and Technology | Venkatesh V.,University of Arkansas
MIS Quarterly: Management Information Systems | Year: 2013

By distinguishing between employees' online and offline workplace communication networks, this paper incorporates technology into social network theory to understand employees' job performance. Specifically, we conceptualize network ties as direct and indirect ties in both online and offline workplace communication networks, thus resulting in four distinct types of ties. We theorize that employees' ties in online and offline workplace communication networks are complementary resources that interact to influence their job performance. We found support for our model in a field study among 104 employees in a large telecommunication company. The paper concludes with theoretical and practical implications. Source


Venkatesh V.,University of Arkansas | Zhang X.,Hong Kong University of Science and Technology | Sykes T.A.,University of Arkansas
Information Systems Research | Year: 2011

With the strong ongoing push toward investment in and deployment of electronic healthcare (e-healthcare) systems, understanding the factors that drive the use of such systems, and the consequences of their use, is of scientific and practical significance. Elaborate training in new e-healthcare systems is not a luxury that is typically available to healthcare professionals (i.e., doctors, paraprofessionals such as nurses, and administrative personnel) because of the 24 × 7 nature and criticality of operations of healthcare organizations, especially hospitals, making peer interactions and support a key driver of, or barrier to, e-healthcare system use. Against this backdrop, using social networks as a theoretical lens, this paper presents a nomological network related to e-healthcare system use. A longitudinal study of an e-healthcare system implementation, with data gathered from doctors, paraprofessionals, administrative personnel, patients, and usage logs, lent support to the hypotheses that: (1) ingroup and outgroup ties to doctors negatively affect use in all user groups; (2) ingroup and outgroup ties to paraprofessionals and administrative personnel positively affect use in both those groups, but have no effect on doctors' use; and (3) use contributes positively to patient satisfaction, mediated by healthcare quality variables (i.e., technical quality, communication, interpersonal interactions, and time spent). This work contributes to theory and practice related to the success of e-healthcare system use in particular, and information systems in general. © 2011 INFORMS. Source


Zhao J.,Hong Kong University of Science and Technology
International Journal of Solids and Structures | Year: 2011

This paper presents a unified theory for both cylindrical and spherical cavity expansion problems in cohesive-frictional micromorphic media. A phenomenological strain-gradient plasticity model in conjunction with a generalized Mohr-Coulomb criterion is employed to characterize the elasto-plastic behavior of the material. To solve the resultant two-point boundary-value problem (BVP) of fourth-order homogeneous ordinary differential equation (ODE) for the governing equations which is not well-conditioned in certain cases, several numerical methods are developed and are compared in terms of robustness, efficiency and accuracy. Using one of the finite difference methods that shows overall better performance, both cylindrical and spherical cavity expansion problems in micromorphic media are solved. The influences of microstructural properties on the expansion response are clearly demonstrated. Size effect during the cavity expansion is captured. The proposed theory is also applied to a revisit of the classic problem of stress concentration around a cavity in a micromorphic medium subjected to isotropic tension at infinity, for which some conclusions made in early studies are revised. The proposed theory can be useful for the interpretation of indentation tests at small scales. © 2011 Elsevier Ltd. All rights reserved. Source


Chen Y.-J.,Hong Kong University of Science and Technology | Tang C.S.,University of California at Los Angeles
Production and Operations Management | Year: 2015

In developing countries, farmers lack information for making informed production, manufacturing/selling decisions to improve their earnings. To alleviate poverty, various non-governmental organizations (NGOs) and for-profit companies have developed different ways to distribute information about market price, crop advisory and farming technique to farmers. We investigate a fundamental question: will information create economic value for farmers? We construct a stylized model in which farmers face an uncertain market price (demand) and must make production decisions before the market price is realized. Each farmer has an imprecise private signal and an imprecise public signal to estimate the actual market price. By examining the equilibrium outcomes associated with a Cournot competition game, we show that private signals do create value by improving farmers' welfare. However, this value deteriorates as the public signal becomes available (or more precise). In contrast, in the presence of private signals, the public signal does not always create value for the farmers. Nevertheless, both private and public signals will reduce price variation. We also consider two separate extensions that involve non-identical private signal precisions and farmers' risk-aversion, and we find that the same results continue to hold. More importantly, we find that the public signal can reduce welfare inequality when farmers have non-identical private signal precisions. Also, risk-aversion can dampen the value created by private or public information. © 2015 Production and Operations Management Society. Source


Xu S.X.,Tsinghua University | Zhang X.M.,Hong Kong University of Science and Technology
MIS Quarterly: Management Information Systems | Year: 2013

In this paper, we seek to determine whether a typical social media platform, Wikipedia, improves the information environment for investors in the financial market. Our theoretical lens leads us to expect that information aggregation about public companies on Wikipedia may influence how management's voluntary information disclosure reacts to market uncertainty with respect to investors' information about these companies. Our empirical analysis is based on a unique data set collected from financial records, management disclosure records, news article coverage, and a Wikipedia modification history of public companies. On the supply side of information, we find that information aggregation on Wikipedia can moderate the timing of managers' voluntary disclosure of companies' earnings disappointments, or bad news. On the demand side of information, we find that Wikipedia's information aggregation moderates investors' negative reaction to bad news. Taken together, these findings support the view that Wikipedia improves the information environment in the financial market and underscore the value of information aggregation through the use of information technology. Source


Chasnov J.R.,Hong Kong University of Science and Technology
Theoretical Population Biology | Year: 2012

Under haploid selection, a multi-locus, diallelic, two-niche Levene (1953) model is studied. Viability coefficients with symmetrically opposing directional selection in each niche are assumed, with the further simplifications that the most and least favored haplotypes in each niche share no alleles in common, and that the selection coefficients monotonically increase or decrease with the number of alleles shared. This model always admits a fully polymorphic symmetric equilibrium, which may or may not be stable. We show that a stable symmetric equilibrium can become unstable via either a supercritical or a subcritical pitchfork bifurcation. In the supercritical bifurcation, the symmetric equilibrium bifurcates to a pair of stable fully polymorphic asymmetric equilibria; in the subcritical bifurcation, it bifurcates to a pair of unstable fully polymorphic asymmetric equilibria, which then connect either to another pair of stable fully polymorphic asymmetric equilibria through saddle-node bifurcations, or to a pair of monomorphic equilibria through transcritical bifurcations. As many as three fully polymorphic stable equilibria can coexist, and jump bifurcations can occur between these equilibria when model parameters are varied. In our Levene model, increasing recombination can act either to increase or to decrease the genetic diversity of a population. By generating more hybrid offspring from the mating of purebreds, recombination can act to increase genetic diversity provided the symmetric equilibrium remains stable. But by destabilizing the symmetric equilibrium, recombination can ultimately act to decrease genetic diversity. © 2011 Elsevier Inc. Source


Zhang K.,Lawrence Berkeley National Laboratory | Kwok J.T.,Hong Kong University of Science and Technology
IEEE Transactions on Neural Networks | Year: 2010

The finite mixture model is widely used in various statistical learning problems. However, the model obtained may contain a large number of components, making it inefficient in practical applications. In this paper, we propose to simplify the mixture model by minimizing an upper bound of the approximation error between the original and the simplified model, under the L2 distance measure. This is achieved by first grouping similar components together and then performing local fitting through function approximation. The simplified model obtained can then be used as a replacement for the original model to speed up various algorithms involving mixture models during training (e.g., Bayesian filtering, belief propagation) and testing [e.g., kernel density estimation, support vector machine (SVM) testing]. Encouraging results are observed in experiments on density estimation, clustering-based image segmentation, and simplification of SVM decision functions. © 2006 IEEE. Source
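One reason the L2 distance is convenient here is that the integral of a product of two Gaussians has a closed form, so the L2 distance between two Gaussian mixtures is computable exactly from pairwise terms. The sketch below shows that standard identity in 1D; the paper's grouping and local function-approximation steps are not reproduced.

```python
import numpy as np

def gauss_overlap(m1, s1, m2, s2):
    """Closed form for the integral of N(x; m1, s1^2) * N(x; m2, s2^2) dx:
    it equals the value of N(m1 - m2; 0, s1^2 + s2^2)."""
    v = s1**2 + s2**2
    return np.exp(-0.5 * (m1 - m2) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def l2_distance_sq(w, mu, sig, v, nu, tau):
    """Exact squared L2 distance between two 1D Gaussian mixtures,
    expanded as sum_ij w_i w_j G_ij + sum_ij v_i v_j G_ij - 2 sum_ij w_i v_j G_ij."""
    d = 0.0
    for wi, mi, si in zip(w, mu, sig):
        for wj, mj, sj in zip(w, mu, sig):
            d += wi * wj * gauss_overlap(mi, si, mj, sj)
    for vi, ni, ti in zip(v, nu, tau):
        for vj, nj, tj in zip(v, nu, tau):
            d += vi * vj * gauss_overlap(ni, ti, nj, tj)
    for wi, mi, si in zip(w, mu, sig):
        for vj, nj, tj in zip(v, nu, tau):
            d -= 2.0 * wi * vj * gauss_overlap(mi, si, nj, tj)
    return d

# identical mixtures: squared L2 distance is zero
d_self = l2_distance_sq([0.5, 0.5], [0.0, 3.0], [1.0, 1.0],
                        [0.5, 0.5], [0.0, 3.0], [1.0, 1.0])
```

Because every term is analytic, no sampling or numerical integration is needed, which is what makes L2-based grouping of similar components tractable.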


Lian X.,University of Texas-Pan American | Chen L.,Hong Kong University of Science and Technology
Information Sciences | Year: 2013

Due to the existence of uncertain data in a wide spectrum of real applications, uncertain query processing has become increasingly important, and it differs dramatically from handling certain data in a traditional database. In this paper, we formulate and tackle an important query, namely the probabilistic top-k dominating (PTD) query, in the uncertain database. In particular, a PTD query retrieves the k uncertain objects that are expected to dynamically dominate the largest number of uncertain objects. We propose an effective pruning approach to reduce the PTD search space, and present an efficient query procedure to answer PTD queries. Moreover, approximate PTD query processing and the case where the PTD query is issued from an uncertain query object are also discussed. Furthermore, we propose an important query type, the PTD query in arbitrary subspaces (SUB-PTD), which is more challenging, and provide an effective pruning method to facilitate SUB-PTD query processing. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed PTD query processing approaches. © 2012 Published by Elsevier Inc. Source
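The semantics of a PTD query can be illustrated with a brute-force possible-worlds baseline, which is exactly the expensive computation the paper's pruning rules are designed to avoid (the data below are made up, uncertain objects are represented by finite instance sets, and smaller coordinates are taken to dominate):

```python
import numpy as np

rng = np.random.default_rng(4)

def expected_dominance_counts(objects, n_worlds=2000):
    """Monte Carlo baseline for a PTD query: in each sampled 'possible
    world' one instance per uncertain object is drawn, pairwise dominance
    (<= in all dimensions, < in at least one) is counted, and the counts
    are averaged; a top-k query returns the k highest-scoring objects."""
    n = len(objects)
    counts = np.zeros(n)
    for _ in range(n_worlds):
        world = np.array([inst[rng.integers(len(inst))] for inst in objects])
        for i in range(n):
            dom = np.all(world[i] <= world, axis=1) & np.any(world[i] < world, axis=1)
            counts[i] += dom.sum()
    return counts / n_worlds

# three uncertain 2D objects, each with two equally likely instances;
# object 0 dominates the other two in every possible world
objs = [np.array([(0.1, 0.1), (0.2, 0.2)]),
        np.array([(0.5, 0.6), (0.6, 0.5)]),
        np.array([(0.9, 0.8), (0.8, 0.9)])]
scores = expected_dominance_counts(objs)
```

The cost is O(worlds x n^2), which motivates the pruning approach in the paper: bounding an object's possible dominance count lets entire objects be discarded without enumerating worlds.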


Chigrinov V.G.,Hong Kong University of Science and Technology
Crystals | Year: 2013

Photoalignment possesses obvious advantages in comparison with the usual "rubbing" treatment of the substrates of liquid crystal display (LCD) cells. The application of photoalignment and photopatterning nanotechnology to a new generation of photonic and display devices is reviewed. © 2013 by the authors; licensee MDPI, Basel, Switzerland. Source


Ku A.S.-M.,Hong Kong University of Science and Technology
Environment and Planning D: Society and Space | Year: 2012

The paper integrates spatial analysis with dynamic discourse analysis to look at the interplay among discourse, agency, and spatial practices in the social production of space. It examines the dual process of place making and discursive formation with regard to the campaigns over the Star Ferry pier and the Queen's pier in Hong Kong in 2006-07. Drawing on and extending Lefebvre's theory, which asserts the priority of space over language, I argue that social movement presents a case of reappropriation of space that is intended to be read and lived interactively. The two case studies show that the events became vehicles for oppositional ideas and practices that gradually crystallised into a counterdiscourse of people's space in the process of remaking places from below. The dynamic discourse analysis focuses on the contestatory process of multivocal claims and interpretations among the activists, the media, and the government regarding memory, history, living space, and agency. The spatial analysis sheds light on the material embodiment of meanings in places as well as the activists' tactics and actions. The interplay between discourse and spatiality is registered in how one informed or prefigured the other's development, how action-guiding narratives were recounted in spatial terms, and how the activists enacted the agency of the narratives in and through the places. I conclude that the struggle underscores the rise of a new social movement in society. © 2012 Pion Ltd and its Licensors. Source


Sun X.,Societe Generale Corporate and Investment Bank | Tsang D.H.K.,Hong Kong University of Science and Technology
IEEE Transactions on Wireless Communications | Year: 2013

In this paper, by taking both sensing performance and energy efficiency into consideration, the Cooperative Sensing Scheduling (CSS) problem for multi-band Cognitive Radio Networks (CRNs) is investigated under a practical scenario where both Primary User (PU) channels and Secondary Users (SUs) have heterogeneous characteristics. Unlike many existing works that merely claim that the CSS problem is NP-hard and then turn to heuristic methods, we analyze this problem under a solid discrete-convex framework. After formulating the CSS problem as a nonlinear binary programming problem, we adopt a three-step approach to solve it. In the first step, the number of SUs assigned to sense each PU channel is determined with the M/M♮-convex theory. Based on the results obtained in the first step, we then find the SU assignment using the L/L♮-convex theory in the second step. In the last step, the optimal number of SUs participating in sensing is obtained based on the SU assignment obtained in step two. By combining these three steps, a complete and efficient SU assignment scheme is obtained. Numerical results are provided to evaluate the performance of our proposed SU assignment scheme and validate the theoretical analysis. © 2002-2012 IEEE. Source


Ashoorioon A.,Lancaster University | Dimopoulos K.,Lancaster University | Sheikh-Jabbari M.M.,Institute for Research in Fundamental Sciences | Sheikh-Jabbari M.M.,Kyung Hee University | And 2 more authors.
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2014

The BICEP2 experiment has announced a signal for primordial gravity waves with tensor-to-scalar ratio r = 0.2 (+0.07/−0.05) [1]. There are two ways to reconcile this result with the latest Planck experiment [2]. One is to assume that there is a considerable tilt of r, T_r, with a positive sign, T_r = d ln r/d ln k ≳ 0.57 (+0.29/−0.27), corresponding to a blue tilt for the tensor modes of order n_T ≃ 0.53 (+0.29/−0.27), assuming the Planck experiment best-fit value for the tilt of the scalar power spectrum n_s. The other possibility is to assume that there is a negative running in the scalar spectral index, dn_s/d ln k ≃ −0.02, which pushes the upper bound on r from 0.11 up to 0.26 in the Planck analysis assuming the existence of a tensor spectrum. Simple slow-roll models fail to provide such large values of T_r or negative runnings of n_s [1]. In this note we show that a non-Bunch-Davies initial state for perturbations can provide a match between large-field chaotic models (like m²φ²) and the latest Planck [3] and BICEP2 results by accommodating either the blue tilt of r or the large negative running of n_s. © 2014 The Authors. Source


Ho S.Y.,Australian National University | Bodoff D.,Haifa University | Tam K.Y.,Hong Kong University of Science and Technology
Information Systems Research | Year: 2011

Web personalization allows online merchants to customize Web content to serve the needs of individual customers. Using data mining and clickstream analysis techniques, merchants can now adapt website content in real time to capture the current preferences of online customers. Though the ability to offer adaptive content in real time opens up new business opportunities for online merchants, it also raises questions of timing. One question is when to present personalized content to consumers. Consumers prefer early presentation that eases their selection process, whereas adaptive systems can generate better personalized content if they are allowed to collect more of a consumer's clicks over time. A review of personalization research confirms that little work has been done on these timing issues in the context of personalized services. The current study aims to fill that gap. Drawing on consumer search theory, we develop hypotheses about consumer responses to differences in presentation timing and recommendation type, and the interaction between the two. The findings establish that recommendation quality improves over the course of an online session, but the probability of a consumer considering and accepting a given recommendation diminishes over the course of the session. These effects are also shown to interact with consumer expertise, providing insights on the interplay between the different design elements of a personalization strategy. © 2011 INFORMS. Source


Liu J.,University of Hong Kong | Zhang F.-C.,University of Hong Kong | Law K.T.,Hong Kong University of Science and Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2013

Recent observation of zero bias conductance peaks in semiconductor wire/superconductor heterostructures has generated great interest, and there is a hot debate on whether the observation is associated with Majorana fermions (MFs). Here we study the local and crossed Andreev reflections of two normal leads attached to the two ends of a superconductor-semiconductor wire. We show that the MFs induced crossed Andreev reflections have significant effects on the shot noise of the device and strongly enhance the current-current correlations between the two normal leads. The measurements of shot noise and current-current correlations can be used to identify MFs. © 2013 American Physical Society. Source


Ding C.,Hong Kong University of Science and Technology | Liu Y.,Hebei Normal University | Ma C.,Hebei Normal University | Zeng L.,Hebei Normal University
IEEE Transactions on Information Theory | Year: 2011

Cyclic codes with two zeros and their dual codes have been a subject of study for many years. However, their weight distributions are known only for a few cases. In this paper, the weight distributions of the duals of the cyclic codes with two zeros are settled for a few more cases. © 2006 IEEE. Source
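For intuition, the weight distribution of a small cyclic code can be obtained by brute force (the paper determines weight distributions analytically for families of codes over larger fields; this sketch only enumerates the binary [7, 4] Hamming code, which is cyclic with generator polynomial g(x) = 1 + x + x³):

```python
from itertools import product

def poly_mul_gf2(a, b):
    """Multiply binary polynomials given as coefficient tuples (low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def weight_distribution(g, k, n):
    """Brute-force weight distribution of the binary cyclic [n, k] code
    generated by g(x): enumerate all 2^k messages m(x) and tally the
    Hamming weight of the codeword m(x) * g(x)."""
    dist = [0] * (n + 1)
    for m in product([0, 1], repeat=k):
        dist[sum(poly_mul_gf2(m, g))] += 1
    return dist

# [7, 4] cyclic (Hamming) code, g(x) = 1 + x + x^3
dist = weight_distribution((1, 1, 0, 1), k=4, n=7)
```

The resulting weight enumerator is 1 + 7z³ + 7z⁴ + z⁷; the point of closed-form results like those in the paper is that such distributions become computable without enumerating the (exponentially many) codewords.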


Chung K.K.K.,Hong Kong University of Science and Technology | David K.K.,Johns Hopkins University
Nitric Oxide - Biology and Chemistry | Year: 2010

Nitric oxide (NO) is a gaseous signaling molecule which has physiological and pathological roles in the cell. Under normal conditions, NO is produced by nitric oxide synthase (NOS) and can induce physiological responses such as vasodilation. However, over-activation of NOS has been linked to a number of human pathological conditions. For instance, most neurodegenerative disorders are marked by the presence of nitrated protein aggregates. How nitrosative stress leads to neurodegeneration is not clear, but various studies suggest that increased nitrosative stress causes protein nitration which then leads to protein aggregation. Protein aggregates are highly toxic to neurons and can promote neurodegeneration. In addition to inducing protein aggregation, recent studies show that nitrosative stress can also compromise a number of neuroprotective pathways by modifying activities of certain proteins through S-nitrosylation. These findings suggest that increased nitrosative stress can contribute to neurodegeneration through multiple pathways. © 2010 Elsevier Inc. All rights reserved. Source


Meng G.,Hong Kong University of Science and Technology
Journal of Mathematical Physics | Year: 2012

The McIntosh-Cisneros-Zwanziger (MICZ)-Kepler orbits are the non-colliding orbits of the MICZ-Kepler problems (the magnetized versions of the Kepler problem). The oriented MICZ-Kepler orbits can be parametrized by the canonical angular momentum L and the Lenz vector A, with the parameter space consisting of the pairs of 3D vectors (A, L) with L · L > (L · A)². The recent 4D perspective on the Kepler problem yields a new parametrization, with the parameter space consisting of the pairs of Minkowski vectors (a, l) with l · l = −1, a · l = 0, a⁰ > 0. Here, a⁰ is the temporal component of a. This new parametrization of orbits implies that the MICZ-Kepler orbits of different magnetic charges are related to each other by symmetries: SO⁺(1,3) × ℝ⁺ acts transitively on both the set of oriented elliptic MICZ-Kepler orbits and the set of oriented parabolic MICZ-Kepler orbits. This action extends to O⁺(1,3) × ℝ⁺, the structure group of the rank-two Euclidean Jordan algebra whose underlying Lorentz space is the Minkowski space. © 2012 American Institute of Physics. Source


Shimokawa S.,Hong Kong University of Science and Technology
Journal of Integrative Agriculture | Year: 2015

Sustainable meat consumption is critical to achieving a sustainable food system because meat products are among the most energy-intensive, ecologically burdensome, and ethically contentious foods. This paper focuses on the case of China and discusses the difficulties of, and possibilities for, achieving sustainable meat consumption in China by reviewing previous empirical studies and descriptive statistics, particularly in light of consumers' dietary transitions in quantity and quality during China's rapid economic growth. Given the sheer size of China's population and meat demand, sustainable meat consumption in China is also a relevant topic for the global food system. © 2015 Chinese Academy of Agricultural Sciences. Source


Xu J.,George Mason University | Nelson B.L.,Northwestern University | Hong L.J.,Hong Kong University of Science and Technology
INFORMS Journal on Computing | Year: 2013

We propose an adaptive hyperbox algorithm (AHA), which is an instance of a locally convergent, random search algorithm for solving discrete optimization via simulation problems. Compared to the COMPASS algorithm, AHA is more efficient in high-dimensional problems. By analyzing models of the behavior of COMPASS and AHA, we show why COMPASS slows down significantly as dimension increases, whereas AHA is less affected. Both AHA and COMPASS can be used as the local search algorithm within the Industrial Strength COMPASS framework, which consists of a global search phase, a local search phase, and a final cleanup phase. We compare the performance of AHA to COMPASS within the framework of Industrial Strength COMPASS and as stand-alone algorithms. Numerical experiments demonstrate that AHA scales up well in high-dimensional problems and has similar performance to COMPASS in low-dimensional problems. © 2013 INFORMS. Source
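A drastically simplified sketch conveys the shape of such locally convergent random search (this is not the actual AHA construction, which builds the most promising hyperbox from the coordinates of all visited solutions and carries convergence guarantees; the shrink/expand rule, test problem, and all parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

def noisy_obj(x, sigma=0.5):
    """Discrete simulation-optimization test problem: a noisy separable
    quadratic over integer points, with optimum at (7, ..., 7)."""
    return float(np.sum((x - 7) ** 2)) + sigma * rng.standard_normal()

def hyperbox_search(dim=5, lo=0, hi=20, iters=1000, reps=10):
    """Toy random search in the spirit of hyperbox methods: sample an
    integer point inside a box centered at the sample-best solution,
    shrink the box on failure to improve and re-expand on improvement."""
    best = rng.integers(lo, hi + 1, size=dim)
    best_val = np.mean([noisy_obj(best) for _ in range(reps)])
    half = (hi - lo) // 2                      # box half-width per coordinate
    for _ in range(iters):
        cand = np.clip(best + rng.integers(-half, half + 1, size=dim), lo, hi)
        val = np.mean([noisy_obj(cand) for _ in range(reps)])
        if val < best_val:
            best, best_val = cand, val
            half = min(half * 2, (hi - lo) // 2)
        else:
            half = max(half // 2, 1)           # never smaller than the neighborhood
    return best

best = hyperbox_search()
```

The dimension-scaling argument in the abstract is about how fast this sampling region contracts around the best solution: a box shrinks usefully in high dimension, whereas region constructions like COMPASS's can collapse much more slowly as dimension grows.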


Leung K.W.,Hong Kong University of Science and Technology | Wong A.S.,University of Hong Kong
Chinese Medicine | Year: 2010

The therapeutic potential of ginseng has been studied extensively, and ginsenosides, the active components of ginseng, have been shown to modulate multiple physiological activities. This article reviews the structure, systemic transformation and bioavailability of ginsenosides before illustrating how these molecules exert their functions via interactions with steroidal receptors. Their multiple biological actions make ginsenosides important resources for developing new therapeutic modalities. Yet the low bioavailability of ginsenosides is one of the major hurdles that needs to be overcome to advance their use in clinical settings. © 2010 Leung and Wong; licensee BioMed Central Ltd. Source


Meng G.,Hong Kong University of Science and Technology
Journal of Physics A: Mathematical and Theoretical | Year: 2015

In the mid-1970s Tulczyjew discovered an approach to classical mechanics which brings the Hamiltonian formalism and the Lagrangian formalism under a common geometric roof: the dynamics of a particle with configuration space X is determined by a Lagrangian submanifold D of TT∗X (the total tangent space of T∗X), and the description of D by its Hamiltonian H: T∗X → ℝ (resp. its Lagrangian L: TX → ℝ) yields the Hamilton (resp. Euler-Lagrange) equation. It is reported here that Tulczyjew's approach also works for the dynamics of (charged) particles in gauge fields, in which the role of the total cotangent space T∗X is played by Sternberg phase spaces. In particular, it is shown that, for a particle in a gauge field, the equation of motion can be locally presented as the Euler-Lagrange equation for a Lagrangian which is the sum of the ordinary Lagrangian L(q, q̇), the Lorentz term, and an extra new term which vanishes whenever the gauge group is abelian. A charge quantization condition is also derived, generalizing Dirac's charge quantization condition from the gauge group U(1) to any compact connected gauge group. © 2015 IOP Publishing Ltd. Source


Siu B.W.Y.,Hong Kong Polytechnic University | Lo H.K.,Hong Kong University of Science and Technology
Transportmetrica A: Transport Science | Year: 2014

Predominantly, existing dynamic traffic assignment studies presume that travel time is deterministic, subject only to congestion due to capacity limitations. In view of the inevitability of travel-time uncertainty, considerable effort has been devoted to investigating how uncertainty affects travel choices. Extending the bottleneck scheduling model, this article establishes the connection between trip scheduling and punctuality reliability by considering that travellers value earliness and lateness differently according to their different degrees of punctuality reliability. Punctuality reliability refers to the probability of not being late for a scheduled activity; it is heterogeneous among travellers and depends on their degrees of risk aversion. By incorporating the notion of punctuality reliability, we obtain the sensible result that risk-averse travellers (those with higher punctuality reliability) choose to depart from home at earlier times, whereas such a mapping is absent in the original model by Small [1982. The scheduling of consumer activities: work trips. American Economic Review, 72, 467-479]. The proposition is confirmed by our empirical study. The modelling framework is then demonstrated numerically, first on the scheduling problem for a single bottleneck and then for parallel bottlenecks that offer route choices. © 2013 Hong Kong Society for Transportation Studies Limited. Source


Louie R.H.Y.,University of Sydney | McKay M.R.,Hong Kong University of Science and Technology | Collings I.B.,CSIRO
IEEE Transactions on Information Theory | Year: 2011

This paper investigates the performance of open-loop multi-antenna point-to-point links in ad hoc networks with slotted ALOHA medium access control (MAC). We consider spatial multiplexing transmission with linear maximum ratio combining and zero-forcing receivers, as well as orthogonal space-time block-coded transmission. New closed-form expressions are derived for the outage probability, throughput, and transmission capacity. Our results demonstrate that both the best-performing scheme and the optimum number of transmit antennas depend on network parameters such as the node intensity and the operating signal-to-interference-and-noise ratio. We then compare the performance to a network consisting of single-antenna devices and an idealized, fully centrally coordinated MAC. These results show that multi-antenna schemes with a simple decentralized slotted ALOHA MAC can outperform even idealized single-antenna networks in various practical scenarios. © 2006 IEEE. Source


Lea C.-T.,Hong Kong University of Science and Technology
Journal of Lightwave Technology | Year: 2015

The energy-per-bit efficiency has quickly become the ultimate limiting factor in the design of a switching fabric for routers and data center networks. People are now turning to optics for solutions. If switch fabrics can be implemented with optics, many E/O and O/E conversions will be removed and tremendous power savings can be achieved. Arrayed waveguide grating routers (AWGRs) provide the most promising solution in this regard. But AWGRs have one fundamental limitation: poor scalability. While the realistic port count of an AWGR is likely to be less than 50, a switch for a data center network may need to interconnect one thousand racks or more. This paper presents a novel AWGR-based switch architecture which, without using wavelength converters, can expand the switch size from N to N², where N is the number of wavelengths in the AWGR. Each port can transmit up to N wavelengths simultaneously. This makes the total capacity of the switch close to N³ × (the bandwidth of a wavelength channel). A detailed analysis of the performance of the switch is provided in this paper. © 2015 IEEE. Source
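The scalability argument rests on the cyclic wavelength-routing property of an N × N AWGR: the output port reached is a fixed function of the input port and the wavelength index, so N wavelengths give each input non-blocking access to all N outputs. A small sketch of this property under one common sign convention (the actual device convention may differ):

```python
def awgr_output(in_port: int, wavelength: int, n: int) -> int:
    """Cyclic routing rule of an N x N arrayed waveguide grating router:
    a signal entering `in_port` on wavelength index `wavelength` exits on
    a fixed output port (illustrative convention; real devices may use
    a different sign or offset)."""
    return (in_port + wavelength) % n

# With N wavelengths available, every input port reaches every output port,
# which is what lets the architecture scale capacity without converters.
N = 8
for i in range(N):
    assert {awgr_output(i, w, N) for w in range(N)} == set(range(N))
```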


Wang W.-X.,Hong Kong University of Science and Technology
Chinese Science Bulletin | Year: 2013

Over the past decade, quantitative recognition of the significance of dietary exposure in the overall bioaccumulation of metals in aquatic animals has been an area of major progress in metal ecotoxicology. In several major groups of marine animals, such as predators and deposit-feeding animals, diet (food) is the predominant source of metal accumulation. The importance of trophic transfer raises fundamental questions about dietary metal toxicity to aquatic animals and about setting water quality standards that go beyond waterborne metal exposure. Ten years of research on the dietary toxicity of metals in several groups of aquatic animals, including zooplankton and fish, is reviewed. It is suggested that future studies should attempt to incorporate the dosage rate or the dietary influx rate in the design of toxicology experiments to facilitate inter-comparison of the results of different studies. © 2012 The Author(s). Source
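The dissolved-versus-dietary comparison in this literature is typically made with a first-order biokinetic model, in which the steady-state tissue concentration is the sum of the aqueous and dietary influxes divided by the loss rates (efflux plus growth dilution). A sketch of that standard formulation with entirely hypothetical parameter values:

```python
def steady_state_concentration(ku, cw, ae, ir, cf, ke, g):
    """Steady-state tissue metal concentration from a first-order
    biokinetic model: aqueous influx ku*Cw plus dietary influx AE*IR*Cf,
    balanced by the efflux rate ke and growth dilution rate g."""
    influx_water = ku * cw          # uptake from the dissolved phase
    influx_diet = ae * ir * cf      # uptake from ingested food
    css = (influx_water + influx_diet) / (ke + g)
    dietary_fraction = influx_diet / (influx_water + influx_diet)
    return css, dietary_fraction

# Hypothetical parameter values, for illustration only
css, fdiet = steady_state_concentration(
    ku=0.5, cw=0.02,           # uptake rate (L/g/d), dissolved conc. (ug/L)
    ae=0.6, ir=0.2, cf=5.0,    # assimilation eff., ingestion rate (g/g/d), food conc. (ug/g)
    ke=0.02, g=0.01)
```

With these toy numbers the dietary pathway dominates (fdiet ≈ 0.98), mirroring the abstract's point that food is the predominant metal source for many marine animals.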


Hu X.,Fudan University | Chan C.T.,Hong Kong University of Science and Technology | Ho K.-M.,Iowa State University | Zi J.,Fudan University
Physical Review Letters | Year: 2011

Based on analytic derivations and numerical simulations, we show that near a low resonant frequency water waves cannot propagate through a periodic array of resonators (bottom-mounted split tubes), as if the water had a negative effective gravitational acceleration g_e and a positive effective depth h_e. This gives rise to a low-frequency resonant band gap in which water waves can be strongly reflected by the resonator array. For a resonator array with damping, the resonant gap can also dramatically modify the absorption efficiency of water waves. The results provide a mechanism to block water waves and should find applications in ocean wave energy extraction. © 2011 American Physical Society. Source


Wang G.,Hong Kong University of Science and Technology
Soil Dynamics and Earthquake Engineering | Year: 2011

Earthquake ground motion variability is one of the primary sources of uncertainty in the assessment of the seismic performance of civil systems. The paper presents a novel method to select and modify ground motions to achieve specified response spectrum variability. The resulting ground motions capture the median, standard deviation, and correlations of response spectra of an earthquake scenario conditioned on a specified earthquake magnitude, source-to-site distance, fault mechanism, site condition, etc. The proposed method was evaluated through numerical analyses of a 20-story RC frame structure. The example demonstrated the excellent capability of the proposed method in capturing the full distribution of nonlinear structural responses under a specified scenario. In particular, a suite of 30 or 60 records selected using the refined algorithm can lead to statistically stable results similar to those obtained from a much larger set. The proposed algorithm is computationally efficient and shows great potential in the performance-based earthquake design of nonlinear civil systems. © 2010 Elsevier Ltd. Source


Chan S.N.,University of Hong Kong | Thoe W.,University of Hong Kong | Lee J.H.W.,Hong Kong University of Science and Technology
Water Research | Year: 2013

Bacterial level (e.g. Escherichia coli) is generally adopted as the key indicator of beach water quality due to its high correlation with swimming-associated illnesses. A 3D deterministic hydrodynamic model is developed to provide daily water quality forecasting for eight marine beaches in Tsuen Wan, which are only about 8 km from the Harbour Area Treatment Scheme (HATS) outfall discharging 1.4 million m³/d of partially treated sewage. The fate and transport of the HATS effluent and its impact on the E. coli level at nearby beaches are studied. The model features the seamless coupling of near-field jet mixing with the far-field transport and dispersion of wastewater discharge from submarine outfalls, and a spatially and temporally dependent E. coli decay rate formulation specifically developed for sub-tropical Hong Kong waters. The model prediction of beach water quality has been extensively validated against field data both before and after disinfection of the HATS effluent. Compared with daily beach E. coli data during August-November 2011, the model achieves an overall accuracy of 81-91% in forecasting compliance/exceedance of the beach water quality standard. The 3D deterministic model has been most valuable in the interpretation of the complex variation of beach water quality, which depends on tidal level, solar radiation and other hydro-meteorological factors. The model can also be used in optimization of disinfection dosage and in emergency response situations. © 2012 Elsevier Ltd. Source


Wang Q.,Hong Kong University of Science and Technology
Designs, Codes, and Cryptography | Year: 2011

Frequency hopping (FH) sequences are needed in FH code division multiple access (CDMA) systems. Recently some new constructions of optimal sets of FH sequences were presented. For the anti-jamming purpose, FH sequences are required to have a large linear span. The objective of this paper is to determine both the linear spans and the minimal polynomials of the FH sequences in these optimal sets. Furthermore, the linear spans of the transformed FH sequences by applying a power permutation are also investigated. If the power is chosen properly, the linear span could be very large compared to the length of the FH sequences. © 2010 Springer Science+Business Media, LLC. Source
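The linear span of a sequence is the length of the shortest linear feedback shift register (LFSR) that generates it, computable with the Berlekamp-Massey algorithm. A minimal sketch for the binary special case (the paper's FH sequences live over larger alphabets, so this is only the GF(2) analogue):

```python
def berlekamp_massey(bits):
    """Return the linear span (linear complexity) of a binary sequence,
    i.e. the length of the shortest LFSR that generates it."""
    n_len = len(bits)
    c = [0] * (n_len + 1)   # current connection polynomial C(x)
    b = [0] * (n_len + 1)   # previous connection polynomial B(x)
    c[0] = b[0] = 1
    span, m = 0, -1
    for n in range(n_len):
        # Discrepancy between the sequence and the current LFSR's output
        d = bits[n]
        for i in range(1, span + 1):
            d ^= c[i] & bits[n - i]
        if d:
            t = c[:]
            shift = n - m
            for i in range(n_len + 1 - shift):
                c[i + shift] ^= b[i]    # C(x) <- C(x) + x^shift * B(x)
            if 2 * span <= n:
                span, m, b = n + 1 - span, n, t
    return span
```

For a sequence produced by a primitive degree-m LFSR, the computed span is m; large-span sequences resist reconstruction from short observed segments, which is the anti-jamming rationale mentioned above.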


Zhang X.,Hong Kong University of Science and Technology
Expert Systems with Applications | Year: 2011

Public-private partnership (PPP) in infrastructure development is a principal-agent maximization problem that requires a win-win solution for the two partners, the public sector client and the private sector concessionaire. A variety of construction and market risks are involved which, if not properly managed, can significantly affect the economic, financial and social performance of a PPP project. The determination of a suitable concession period is one of the critical issues that have to be carefully examined for effective risk management toward successful PPP project development. This paper introduces an improved concession period determination methodology and develops a web-based concession period analysis system (WCPAS) based on this methodology. Integrating project scheduling tools, financial analysis methods and the Monte Carlo simulation technique, the WCPAS provides a systematic framework and organized modules that automate data input and simulation-based analyses for construction cost, construction period, operation period and concession period. The WCPAS helps public clients assess and quantify construction and market risks in order to determine an appropriate concession period and consequently to minimize the potential social, economic and financial problems. A case study is carried out to illustrate the application and usefulness of the WCPAS. © 2011 Elsevier Ltd. All rights reserved. Source


Zhou Z.,Southwest Jiaotong University | Ding C.,Hong Kong University of Science and Technology
Finite Fields and their Applications | Year: 2014

Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, a class of three-weight cyclic codes over Fp whose duals have two zeros is presented, where p is an odd prime. The weight distribution of this class of cyclic codes is settled. Some of the cyclic codes are optimal. The duals of a subclass of the cyclic codes are also studied and proved to be optimal. © 2013 Elsevier Inc. Source
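For small parameters, a weight distribution of the kind settled in the paper can be checked by brute-force enumeration of the codewords generated by the generator polynomial. A toy sketch over F₂ using the cyclic [7,4] Hamming code with g(x) = 1 + x + x³ (the paper's codes are over F_p for odd p, so this is only a binary analogue):

```python
from itertools import product
from collections import Counter

def cyclic_code_weights(gen, n):
    """Weight distribution of the binary cyclic code of length n generated
    by the polynomial with coefficient list `gen` (lowest degree first)."""
    k = n - (len(gen) - 1)                  # dimension = n - deg(g)
    # Generator matrix rows are the shifts x^i * g(x), i = 0..k-1
    rows = [[0] * i + list(gen) + [0] * (n - len(gen) - i) for i in range(k)]
    dist = Counter()
    for msg in product([0, 1], repeat=k):   # all 2^k messages
        cw = [sum(m * r[j] for m, r in zip(msg, rows)) % 2 for j in range(n)]
        dist[sum(cw)] += 1                  # Hamming weight of the codeword
    return dict(dist)

weights = cyclic_code_weights([1, 1, 0, 1], 7)   # g(x) = 1 + x + x^3
```

This reproduces the classical weight enumerator 1 + 7z³ + 7z⁴ + z⁷ of the [7,4] Hamming code; the paper's three-weight codes would show exactly three nonzero weights in the analogous table.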


Scutari G.,State University of New York at Buffalo | Facchinei F.,University of Rome La Sapienza | Song P.,State University of New York at Buffalo | Palomar D.P.,Hong Kong University of Science and Technology | Pang J.-S.,University of Southern California
IEEE Transactions on Signal Processing | Year: 2014

We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions. © 1991-2012 IEEE. Source


Bejar Haro B.,Ecole Polytechnique Federale de Lausanne | Zazo S.,Technical University of Madrid | Palomar D.P.,Hong Kong University of Science and Technology
IEEE Transactions on Signal Processing | Year: 2014

Energy efficiency is a major design issue in the context of Wireless Sensor Networks (WSN). If the acquired data is to be sent to a far-away base station, collaborative beamforming performed by the sensors may help to distribute the communication load among the nodes and to reduce fast battery depletion. However, collaborative beamforming techniques are far from optimality and in many cases we might be wasting more power than required. We consider the issue of energy efficiency in beamforming applications. Using a convex optimization framework, we propose the design of a virtual beamformer that maximizes the network lifetime while satisfying a pre-specified Quality of Service (QoS) requirement. We derive both centralized and distributed algorithms for the solution of the problem using convex optimization and consensus algorithms. In order to account for other sources of battery depletion different from that of communications beamforming, we consider an additional random energy term in the consumption model. The formulation then switches to a probabilistic design that generalizes the deterministic case. Conditions under which the general problem is convex are also provided. © 2013 IEEE. Source


Chan A.L.S.,Hong Kong University of Science and Technology
Renewable Energy | Year: 2016

Computer simulation plays an important role in investigating the thermal/energy performance of buildings and energy systems. In order to reduce computational time and provide a consistent form of weather data, simulation runs with multi-year weather files are generally avoided; representative weather data is widely adopted instead. For developing typical meteorological year (TMY) weather files, the Sandia method is one of the commonly adopted approaches. During the generation of a TMY, different weighting factors are assigned to key climatic indices. Currently, the values of these weighting factors mainly depend on the researchers' judgement. As the weighting factors express the relative importance of a particular climatic index for the thermal/energy performance of an energy system, computer simulations using different TMYs may lead to different conclusions. It is therefore inappropriate to apply one single TMY to all energy systems. In this study, a novel TMY weather file generator has been developed to link an optimization algorithm with an energy simulation program. Through four application examples (one air-conditioned building and three renewable energy systems), this weather file generator demonstrated its capability to search for optimal or near-optimal combinations of weighting factors for generating an appropriate TMY for computer simulations of different energy systems. © 2015 Elsevier Ltd. Source
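The Sandia-style selection step being tuned here can be sketched as follows: each candidate month is scored by a weighted sum of Finkelstein-Schafer (FS) statistics, one per climatic index, each comparing the month's empirical CDF against the long-term CDF; the month with the lowest score is the "typical" one. The index name and weights below are hypothetical:

```python
def fs_statistic(month_values, longterm_values):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's empirical CDF and the long-term CDF, evaluated at
    the candidate month's data points."""
    lt = sorted(longterm_values)
    n_lt = len(lt)
    def lt_cdf(x):
        return sum(1 for v in lt if v <= x) / n_lt
    sm = sorted(month_values)
    n = len(sm)
    return sum(abs((i + 1) / n - lt_cdf(x)) for i, x in enumerate(sm)) / n

def weighted_score(month_data, longterm_data, weights):
    """Weighted sum of FS statistics over climatic indices; the weights
    are exactly the factors the paper's generator optimizes."""
    return sum(w * fs_statistic(month_data[idx], longterm_data[idx])
               for idx, w in weights.items())
```

Changing the weight dictionary changes which candidate month wins, which is why a single fixed TMY cannot suit every energy system.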


Parks-Leduc L.,James Madison University | Feldman G.,Hong Kong University of Science and Technology | Bardi A.,Royal Holloway, University of London
Personality and Social Psychology Review | Year: 2015

Personality traits and personal values are important psychological characteristics, serving as important predictors of many outcomes. Yet, they are frequently studied separately, leaving the field with a limited understanding of their relationships. We review existing perspectives regarding the nature of the relationships between traits and values and provide a conceptual underpinning for understanding the strength of these relationships. Using 60 studies, we present a meta-analysis of the relationships between the Five-Factor Model (FFM) of personality traits and the Schwartz values, and demonstrate consistent and theoretically meaningful relationships. However, these relationships were not generally large, demonstrating that traits and values are distinct constructs. We find support for our premise that more cognitively based traits are more strongly related to values and more emotionally based traits are less strongly related to values. Findings also suggest that controlling for personal scale-use tendencies in values is advisable. © 2014 by the Society for Personality and Social Psychology, Inc. Source


Ding C.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2012

Cyclic codes are a subclass of linear codes and have wide applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, the two-prime sequence is employed to construct several classes of cyclic codes over GF(q). Lower bounds on the minimum weight of these cyclic codes are developed. Some of the codes obtained are optimal or almost optimal. The p-ranks of the twin-prime difference sets and a class of almost difference sets are computed. © 1963-2012 IEEE. Source


Brown E.S.,University of California at Los Angeles | Chan T.F.,Hong Kong University of Science and Technology | Bresson X.,City University of Hong Kong
International Journal of Computer Vision | Year: 2012

The active contours without edges model of Chan and Vese (IEEE Transactions on Image Processing 10(2):266-277, 2001) is a popular method for computing the segmentation of an image into two phases, based on the piecewise constant Mumford-Shah model. The minimization problem is non-convex even when the optimal region constants are known a priori. In (SIAM Journal of Applied Mathematics 66(5):1632-1648, 2006), Chan, Esedoḡlu, and Nikolova provided a method to compute global minimizers by showing that solutions could be obtained from a convex relaxation. In this paper, we propose a convex relaxation approach to solve the case in which both the segmentation and the optimal constants are unknown for two phases and multiple phases. In other words, we propose a convex relaxation of the popular K-means algorithm. Our approach is based on the vector-valued relaxation technique developed by Goldstein et al. (UCLA CAM Report 09-77, 2009) and Brown et al. (UCLA CAM Report 10-43, 2010). The idea is to consider the optimal constants as functions subject to a constraint on their gradient. Although the proposed relaxation technique is not guaranteed to find exact global minimizers of the original problem, our experiments show that our method computes tight approximations of the optimal solutions. Particularly, we provide numerical examples in which our method finds better solutions than the method proposed by Chan et al. (SIAM Journal of Applied Mathematics 66(5):1632-1648, 2006), whose quality of solutions depends on the choice of the initial condition. © 2011 Springer Science+Business Media, LLC. Source
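The non-convex baseline being relaxed is the K-means objective, classically minimized by Lloyd's alternating scheme. A 1-D sketch of that baseline (not of the paper's convex relaxation), where the cluster centers play the role of the unknown region constants:

```python
def kmeans_1d(values, centers, iters=50):
    """Lloyd's algorithm on scalar data: alternate between assigning each
    value to its nearest center and recomputing each center as the mean
    of its assigned values. This is the non-convex problem the paper's
    convex relaxation addresses."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            j = min(range(len(centers)), key=lambda i: (v - centers[i]) ** 2)
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

Lloyd's iterations only find a local minimum dependent on the initial centers, which is precisely the initialization sensitivity that motivates a convex reformulation.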


Mukhopadhyay A.,Hong Kong University of Science and Technology | Yeung C.W.M.,National University of Singapore
Journal of Marketing Research | Year: 2010

This research studies the effect of consumers' lay theories of self-control on their choices of products for young children. The authors find that people who hold the implicit assumption that self-control is a small resource that can be increased over time ("limited-malleable theorists") are more likely to engage in behaviors that may benefit children's self-control. In contrast, people who believe either that self-control is a large resource ("unlimited theorists") or that it cannot increase over time ("fixed theorists") are less likely to engage in such behaviors. Field experiments conducted with parents demonstrate that limited-malleable theorists take their children less frequently to fast-food restaurants, give their children unhealthful snacks less often, and prefer educational to entertaining television programs for them. Similar patterns are observed when nonparent adults make gift choices for children or while babysitting. The authors obtain these effects with lay theories both measured and manipulated and after they control for demographic and psychological characteristics, including own self-control. These results contribute to the literature on self-control, parenting, and consumer socialization. © 2010, American Marketing Association. Source


Zhou X.,CSIRO | Chen L.,Hong Kong University of Science and Technology
VLDB Journal | Year: 2014

In recent years, microblogs have become an important source for reporting real-world events. A real-world occurrence reported in microblogs is also called a social event. Social events may hold critical materials that describe the situations during a crisis. In real applications such as crisis management and decision making, monitoring critical events over social streams enables watch officers to analyze the overall situation as a composite event and to make the right decision based on detailed contexts such as what is happening, where an event is happening, and who are involved. Although there has been significant research effort on detecting a target event in social networks based on a single source, in crises we often want to analyze composite events contributed by different social users. So far, the problem of integrating ambiguous views from different users has not been well investigated. To address this issue, we propose a novel framework to detect composite social events over streams, which fully exploits the information of social data over multiple dimensions. Specifically, we first propose a graphical model called location-time constrained topic (LTT) to capture the content, time, and location of social messages. Using LTT, a social message is represented as a probability distribution over a set of topics by inference, and the similarity between two messages is measured by the distance between their distributions. Then, the events are identified by conducting efficient similarity joins over social media streams. To accelerate the similarity join, we also propose a variable dimensional extendible hash over social streams. We have conducted extensive experiments to prove the high effectiveness and efficiency of the proposed approach. © 2013 Springer-Verlag Berlin Heidelberg. Source
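The similarity-join step can be sketched generically: represent each message as a topic distribution and join pairs whose distributional distance is below a threshold. The sketch below uses the Jensen-Shannon divergence as the distance; the paper's exact measure and threshold may differ, and a real system would use indexing (such as the paper's extendible hash) rather than comparing all pairs:

```python
from math import log2

def jensen_shannon(p, q):
    """Symmetric, bounded divergence between two discrete topic
    distributions p and q (each summing to 1)."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * log2(a / b) for a, b in zip(x, y) if a > 0)
    return (kl(p, m) + kl(q, m)) / 2

def similarity_join(messages, threshold=0.1):
    """Naive all-pairs join: report index pairs of messages whose topic
    distributions lie within `threshold` of each other."""
    pairs = []
    for i in range(len(messages)):
        for j in range(i + 1, len(messages)):
            if jensen_shannon(messages[i], messages[j]) <= threshold:
                pairs.append((i, j))
    return pairs
```

Messages joined this way are candidates for describing the same social event; the composite event is then assembled from the joined groups.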


Huang T.,Dalian University of Technology | Wang J.,University of Tokyo | Yu W.,Hong Kong University of Science and Technology | He Z.,Dalian University of Technology
Briefings in Bioinformatics | Year: 2012

Assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Due to the existence of degenerate peptides and 'one-hit wonders', it is very difficult to determine which proteins are present in the sample. In this paper, we review existing protein inference methods and classify them according to the source of peptide identifications and the principle of algorithms. It is hoped that the readers will gain a good understanding of the current development in this field after reading this review and come up with new protein inference algorithms. © The Author 2012. Published by Oxford University Press. Source


Ge Z.,Zhejiang University | Song Z.,Zhejiang University | Gao F.,Hong Kong University of Science and Technology
Industrial and Engineering Chemistry Research | Year: 2013

Data-based process monitoring has become a key technology in process industries for safety, quality, and operation efficiency enhancement. This paper provides a timely update review on this topic. First, the natures of different industrial processes are revealed with their data characteristics analyzed. Second, detailed terminologies of the data-based process monitoring method are illustrated. Third, based on each of the main data characteristics that exhibits in the process, a corresponding problem is defined and illustrated, with review conducted with detailed discussions on connection and comparison of different monitoring methods. Finally, the relevant research perspectives and several promising issues are highlighted for future work. © 2013 American Chemical Society. Source


Xiong M.,Hong Kong University of Science and Technology
Finite Fields and their Applications | Year: 2013

Recently, the weight distributions of the duals of the cyclic codes with two zeros have been obtained for several cases in Ma et al. (2011) [10], Ding et al. (2011) [6], Wang et al. (2012) [15], Xiong (2012) [16,17]. In this paper we solve one more special case. The problem of finding the weight distribution is transformed into a problem of evaluating certain character sums over finite fields, which in turn can be solved by using the Jacobi sums directly. © 2012 Elsevier Inc. Source


Meng G.,Hong Kong University of Science and Technology
Journal of Mathematical Physics | Year: 2013

Let R*^(2k+1) = R^(2k+1)\{0} (k ≥ 1) and π: R*^(2k+1) → S^(2k) be the map sending r ∈ R*^(2k+1) to r/|r| ∈ S^(2k). Denote by P → R*^(2k+1) the pullback by π of the canonical principal SO(2k)-bundle SO(2k+1) → S^(2k). Let E♯ → R*^(2k+1) be the associated co-adjoint bundle and let Ẽ♯ → T∗R*^(2k+1) be its pullback under the projection map T∗R*^(2k+1) → R*^(2k+1). The canonical connection on SO(2k+1) → S^(2k) turns Ẽ♯ into a Poisson manifold. The main result here is that the real Lie algebra so(2, 2k+2) can be realized as a Lie subalgebra of the Poisson algebra (C^∞(O♯), {·,·}), where O♯ is a symplectic leaf of Ẽ♯ of a special kind. Consequently, in view of an earlier result of the author, an extension of the classical MICZ-Kepler problems to dimension 2k+1 is obtained. The Hamiltonian, the angular momentum, the Lenz vector, and the equation of motion for this extension are all explicitly worked out. © 2013 AIP Publishing LLC. Source


Dimitrakopoulos E.G.,Hong Kong University of Science and Technology
Nonlinear Dynamics | Year: 2013

Skew bridges with in-deck joints are among the most common types of existing bridges worldwide. Empirical evidence from past earthquakes indicates that such multi-segment skew bridges often rotate in the horizontal plane, increasing the chances of deck unseating. The present paper studies the oblique in-deck impact between successive bridge segments, which triggers this peculiar rotation mechanism. The analysis employs a nonsmooth rigid body approach and utilizes set-valued force laws. A key feature of this approach is the linear complementarity problem (LCP), which encapsulates all physically feasible post-impact states. The LCP yields pertinent closed-form solutions which capture each of these states and clarifies the conditions under which each post-impact state appears. In this context, a rational method to avoid the singularities arising from dependent constraints is proposed. The results confirm theoretically the observed tendency of skew (bridge deck) segments to bind at their obtuse corners and rotate in such a way that the skew angle increases. Further, the study offers equations which describe the contact kinematics between two adjacent skew planar rigid bodies. The same equations can be used to treat successively as many pairs of skew bridge segments as necessary. © 2013 Springer Science+Business Media Dordrecht. Source


Banfield D.K.,Hong Kong University of Science and Technology
Cold Spring Harbor Perspectives in Biology | Year: 2011

The protein composition of the Golgi is intimately linked to its structure and function. As the Golgi serves as the major protein-sorting hub for the secretory pathway, it faces the unique challenge of maintaining its protein composition in the face of constant influx and efflux of transient cargo proteins. Much of our understanding of how proteins are retained in the Golgi has come from studies on glycosylation enzymes, largely because of the compartment specific distributions these proteins display. From these and other studies of Golgi membrane proteins, we now understand that a variety of retention mechanisms are employed, the majority of which involve the dynamic process of iterative rounds of retrograde and anterograde transport. Such mechanisms rely on protein conformation and amino acid-based sorting signals as well as on properties of transmembrane domains and their relationship with the unique lipid composition of the Golgi. © 2011 Cold Spring Harbor Laboratory Press. Source


He Z.,Dalian University of Technology | Yu W.,Hong Kong University of Science and Technology
Computational Biology and Chemistry | Year: 2010

Feature selection techniques have long served as the workhorse in biomarker discovery applications. Surprisingly, the stability of feature selection with respect to sampling variations has long been under-considered; it is only recently that this issue has received more and more attention. In this article, we review existing stable feature selection methods for biomarker discovery using a generic hierarchical framework. We have two objectives: (1) providing an overview of this new yet fast-growing topic for convenient reference; (2) categorizing existing methods under an expandable framework for future research and development. © 2010 Elsevier Ltd. Source
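Stability with respect to sampling variation is commonly quantified by comparing the feature subsets selected on different resamples of the data, for example by their average pairwise Jaccard similarity. A generic sketch of such a measure (not a specific method from the review):

```python
def jaccard(a, b):
    """Jaccard similarity of two feature subsets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def selection_stability(selected_subsets):
    """Average pairwise Jaccard similarity of the feature subsets selected
    on different resamples: 1.0 means perfectly stable selection, values
    near 0 mean the chosen biomarkers change with every resample."""
    pairs = [(i, j) for i in range(len(selected_subsets))
             for j in range(i + 1, len(selected_subsets))]
    return sum(jaccard(selected_subsets[i], selected_subsets[j])
               for i, j in pairs) / len(pairs)
```

In a biomarker study one would run the selector on bootstrap samples and report this score alongside predictive accuracy, since an unstable marker list is hard to interpret biologically.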


Hu C.,Hong Kong University of Science and Technology
MRS Communications | Year: 2015

Given the significance of cement paste in construction materials, the present paper uses nanoindentation to measure and map the mechanical properties of hardened cement pastes. The mechanical properties of the constituent phases were extracted from grid nanoindentation on the cement paste. The results suggested that nanoindentation can be used as a tool to measure and map the mechanical properties of hardened cement pastes, and can identify the phases, including outer product, inner product, calcium hydroxide (or the interface of residual cement clinker), and residual cement clinker. © Materials Research Society 2015. Source


Zhu Y.,Fudan University | Letaief K.B.,Hong Kong University of Science and Technology
IEEE Transactions on Wireless Communications | Year: 2010

Single carrier interleaved frequency division multiple access (SC-IFDMA) has recently been receiving much attention for uplink multiuser access in next-generation mobile systems because of its lower peak-to-average transmit power ratio (PAPR). In this paper, we investigate the effect of carrier frequency offset (CFO) on SC-IFDMA and propose a new low-complexity time domain linear CFO compensation (TD-LCC) scheme. The TD-LCC scheme can be combined with successive interference cancellation (SIC) to further improve the system performance. The combined method will be referred to as TD-CC-SIC. We study the use of user equipment (UE) ordering algorithms in our TD-CC-SIC scheme and propose both optimal and suboptimal ordering algorithms in the MMSE sense. We also analyze both the output SINR and the BER performance of the proposed TD-LCC and TD-CC-SIC schemes. Simulation results, along with theoretical SINR and BER results, show that the proposed TD-LCC and TD-CC-SIC schemes greatly reduce the CFO effect on SC-IFDMA. We also propose a new blind CFO estimation scheme for SC-IFDMA systems for the case when the numbers of subcarrier sets allocated to different UEs are not the same due to their traffic requirements. Compared to conventional blind CFO estimation schemes, it is shown that by using a virtual UE concept, the proposed scheme does not have the CFO ambiguity problem, and in some cases can improve the throughput efficiency since it does not need to increase the length of the cyclic prefix (CP). © 2010 IEEE. Source
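A minimal single-user illustration of the time-domain view: a CFO of eps subcarrier spacings multiplies the received samples by a phase ramp, and time-domain compensation applies the conjugate ramp. The actual TD-LCC scheme must handle several users with different offsets jointly; this sketch only shows the underlying phase model, with all names and values invented for illustration.

```python
import cmath

def apply_cfo(x, eps):
    """Rotate time-domain samples by a carrier frequency offset of
    eps subcarrier spacings over a block of length N = len(x)."""
    N = len(x)
    return [xn * cmath.exp(2j * cmath.pi * eps * n / N)
            for n, xn in enumerate(x)]

def compensate_cfo(r, eps_hat):
    """Time-domain CFO compensation: undo the phase ramp using the
    (estimated) offset eps_hat."""
    N = len(r)
    return [rn * cmath.exp(-2j * cmath.pi * eps_hat * n / N)
            for n, rn in enumerate(r)]
```

Because each UE contributes its own offset, a single de-rotation cannot fit all users simultaneously, which is why the paper combines compensation with successive interference cancellation.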


Ding C.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2015

Because of their efficient encoding and decoding algorithms, cyclic codes - an interesting class of linear codes - are widely used in communication systems, storage devices, and consumer electronics. BCH codes form a special class of cyclic codes, and are usually among the best cyclic codes. A subclass of good BCH codes is the narrow-sense primitive BCH codes. However, the dimension and minimum distance of these codes are not known in general. The main objective of this paper is to study the dimension and minimum distance of a subclass of the narrow-sense primitive BCH codes with design distance δ = (q - ℓ0)q^(m-ℓ1-1) - 1 for certain pairs (ℓ0, ℓ1), where 0 ≤ ℓ0 ≤ q - 2 and 0 ≤ ℓ1 ≤ m - 1. The parameters of other related classes of BCH codes are also investigated, and some open problems are proposed in this paper. © 1963-2012 IEEE. Source
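The design distance of this subclass can be evaluated directly from (q, m, ℓ0, ℓ1). A small sketch, with illustrative parameter values not taken from the paper:

```python
def design_distance(q, m, l0, l1):
    """Design distance delta = (q - l0) * q**(m - l1 - 1) - 1 for the
    subclass of narrow-sense primitive BCH codes (code length q**m - 1),
    valid for 0 <= l0 <= q - 2 and 0 <= l1 <= m - 1."""
    assert 0 <= l0 <= q - 2 and 0 <= l1 <= m - 1
    return (q - l0) * q ** (m - l1 - 1) - 1
```

For example, binary codes (q = 2) force ℓ0 = 0, giving δ = 2^(m-ℓ1) - 1.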


Ma E.,University of California at Riverside | Ma E.,Hong Kong University of Science and Technology
Physical Review Letters | Year: 2015

It is shown that in extensions of the standard model of quarks and leptons where the additive lepton number L is broken by two units, so that the Z2 lepton parity (-1)^L, which is either even or odd, remains exactly conserved, there is the possibility of stable dark matter without additional symmetry. This applies to many existing simple models of Majorana neutrino mass with dark matter, including some radiative models. Several well-known examples are discussed. This new insight leads to the construction of a radiative type II seesaw model of neutrino mass with dark matter where the dominant decay of the doubly charged Higgs boson ξ++ is into W+W+ instead of the expected l_i^+ l_j^+ lepton pairs for the well-known tree-level model. © 2015 American Physical Society. Source


Mollon G.,French National Center for Scientific Research | Zhao J.,Hong Kong University of Science and Technology
Granular Matter | Year: 2013

We present a 2D discrete modelling of sand flow through a hopper using realistic grain shapes. A post-processing method is used to assess the local fluctuations in terms of void ratio, coordination number, velocity magnitude, and mean stress. The characteristics of fluctuations associated with the four considered quantities along the vertical axis of the hopper and across the entire hopper are carefully examined. The flow fluctuations for coordination number, velocity magnitude and mean stress are all found to take the form of radial waves originating from the lower centre of the hopper and propagating in the opposite direction of the granular flow. Quantitative characteristics of these waves (shape, amplitude, frequency, velocity, etc.) are identified. The fluctuations in void ratio, however, do not support the observation of density waves in the granular flow reported in some experiments. The possible reasons for this apparent contradiction are discussed, as well as possible extensions of this work. © 2013 Springer-Verlag Berlin Heidelberg. Source


Shi D.,University of Alberta | Chen T.,University of Alberta | Shi L.,Hong Kong University of Science and Technology
Automatica | Year: 2014

The event-triggered state estimation problem for linear time-invariant systems is considered in the framework of Maximum Likelihood (ML) estimation in this paper. We show that the optimal estimate is parameterized by a special time-varying Riccati equation, and that the computational complexity increases exponentially with the time horizon. For ease of implementation, a one-step event-based ML estimation problem is further formulated and solved, and the solution behaves like a Kalman filter with intermittent observations. For the one-step problem, the calculation of upper and lower bounds on the communication rate from the process side is also briefly analyzed. An application example of sensorless event-based estimation of a DC motor system is presented, and the benefits of the obtained one-step event-based estimator are demonstrated by comparative simulations. © 2013 The Authors. Published by Elsevier Ltd. All rights reserved. Source
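Since the one-step estimator "behaves like a Kalman filter with intermittent observations", a scalar sketch of that structure may help: when no measurement is delivered, only the prediction step runs. This is a generic illustration with made-up parameters, not the paper's event-triggered estimator.

```python
def kalman_intermittent(y_seq, received, a, c, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_{k+1} = a x_k + w, y_k = c x_k + v,
    with Var(w) = q and Var(v) = r.  When received[k] is False the
    measurement update is skipped and only the time update runs,
    mimicking estimation with intermittent observations."""
    x, p = x0, p0
    xs, ps = [], []
    for yk, got in zip(y_seq, received):
        # time update (prediction)
        x, p = a * x, a * a * p + q
        # measurement update only when a measurement arrives
        if got:
            k_gain = p * c / (c * c * p + r)
            x = x + k_gain * (yk - c * x)
            p = (1 - k_gain * c) * p
        xs.append(x)
        ps.append(p)
    return xs, ps
```

With every measurement delivered the error variance settles at its Riccati fixed point; with none delivered it grows by q each step.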


Ma Y.,University of California at Riverside | Liu A.,Hong Kong University of Science and Technology | Hua Y.,University of California at Riverside
IEEE Transactions on Signal Processing | Year: 2014

We present optimization algorithms for source and relay power allocation in a multicarrier relay system with a direct link, where the source is allowed to transmit in both phases of a two-phase relay scheme. We show that allowing the source power to be distributed over both phases yields a significant benefit to the system capacity. Specifically, we consider the joint optimization of source and relay power to minimize a general cost function. The joint optimization problem is non-convex, and the complexity of finding the optimal solution is extremely high. Using the alternating optimization (AO) method, the joint problem is decomposed into a convex source power allocation problem and a non-convex relay power allocation problem. By exploiting the specific structure of the problem, we present efficient algorithms that yield the exact optimal solutions for both the source and the (non-convex) relay power allocation problems. We then show that the overall AO algorithm converges to a stationary point of the joint problem. Moreover, the proposed AO algorithm is asymptotically optimal for large relay transmit power or large source-relay channel gain. Finally, simulations show that the proposed AO algorithm achieves significant gains over various baselines. © 2013 IEEE. Source
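The alternating optimization pattern (fix one block of variables, solve exactly for the other, repeat) can be shown on a toy biconvex quadratic: each block update is a closed-form argmin, so the objective decreases monotonically toward a stationary point. The function below is purely illustrative; the paper's subproblems are the source and relay power allocations, not this toy.

```python
def alternating_minimize(x0=0.0, y0=0.0, iters=200):
    """Alternating optimization on the toy objective
        f(x, y) = (x - y)**2 + 0.1*(x - 1)**2 + 0.1*(y + 1)**2.
    Each block update is the exact minimizer with the other block fixed
    (set the partial derivative to zero), so f is non-increasing and
    (x, y) converges to a stationary point of the joint problem."""
    def f(x, y):
        return (x - y) ** 2 + 0.1 * (x - 1) ** 2 + 0.1 * (y + 1) ** 2
    x, y = x0, y0
    history = [f(x, y)]
    for _ in range(iters):
        x = (10 * y + 1) / 11   # argmin over x with y fixed
        y = (10 * x - 1) / 11   # argmin over y with x fixed
        history.append(f(x, y))
    return x, y, history
```

For this smooth toy the stationary point (1/21, -1/21) is also the global minimizer; in the paper's non-convex relay subproblem only stationarity of the overall iteration is guaranteed.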


Leung S.,Hong Kong University of Science and Technology
Chaos | Year: 2013

We propose a simple Eulerian approach to compute the moderate- to long-time flow map for approximating the Lyapunov exponent of a (periodic or aperiodic) dynamical system. The idea is to generalize a recently proposed backward phase flow method which is specially designed for long-time level set propagation. Unlike the original phase flow method or the backward phase flow method, which are applicable only to autonomous systems, the current approach can also be applied to any time-dependent (periodic or aperiodic) flow. We discuss the stability of the proposed method. Numerical examples are given to demonstrate the effectiveness of the algorithm. © 2013 AIP Publishing LLC. Source
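For intuition on what is being approximated: the largest Lyapunov exponent averages the log of local stretching along the dynamics. The sketch below does this the simple Lagrangian (trajectory-following) way on the logistic map with r = 4, whose exponent is known to be ln 2; the paper's contribution is an Eulerian, flow-map alternative to exactly this kind of trajectory computation.

```python
import math

def lyapunov_logistic(r=4.0, x0=0.2, n=100000, burn=100):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated by averaging log|f'(x)| = log|r (1 - 2x)| along one
    trajectory after a short transient."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        # clamp to avoid log(0) if the orbit ever lands on x = 1/2
        total += math.log(max(abs(r * (1 - 2 * x)), 1e-300))
        x = r * x * (1 - x)
    return total / n
```

A positive value signals chaos; for r = 4 the estimate hovers around ln 2 ≈ 0.693.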


Zhang X.,Soochow University of China | Liu Y.,Soochow University of China | Lee S.-T.,Soochow University of China | Yang S.,Hong Kong University of Science and Technology | Kang Z.,Soochow University of China
Energy and Environmental Science | Year: 2014

The slow photon effect of a photonic crystal (PC) is a promising characteristic for tuning light-matter interactions through material structure design. A TiO2 bi-layer structure photoanode was constructed by fabricating a TiO2 PC layer through a template-assisted sol-gel process on a TiO2 nanorod array (NR) layer. Gold nanoparticles (Au NPs) with an average size of about 10 nm were deposited in situ into the TiO2 bi-layer structure. The extended photoelectrochemical (PEC) water splitting activity in visible light was ascribed to the energetic hot electrons and holes that were generated in the Au NPs through the excitation and decay of surface plasmons. By altering the characteristic pore size of the TiO2 PC layer, the slow photon region at the red edge of the photonic band gap could be purposely tuned to overlap with the strong localized surface plasmon resonance (SPR) region of the Au NPs. The matching slow photon effect of the TiO2 PC (with a characteristic pore size of 250 nm) intensified the SPR responses (centred at 536 nm) of the Au NPs. Consequently, more hot electrons were generated in the Au NPs and injected into the conduction band of TiO2, resulting in improved PEC water splitting efficiency in the visible light region. Under simulated sunlight illumination, the photoconversion efficiency of the well-matched Au/TiO2 photoanode approached 0.71%, which is one of the highest values ever reported in Au/TiO2 PEC systems. The work reported here provides support for designs coupling plasmonic nanostructures with PC-based materials to synergistically enhance PEC water splitting efficiency. This journal is © the Partner Organisations 2014. Source


Cai N.,Hong Kong University of Science and Technology | Kou S.,Columbia University
Operations Research | Year: 2012

We obtain a closed-form solution for the double-Laplace transform of Asian options under the hyper-exponential jump diffusion model. Similar results were available previously only in the special case of the Black-Scholes model (BSM). Even in the case of the BSM, our approach is simpler as we essentially use only Itô's formula and do not need more advanced results such as those of Bessel processes and Lamperti's representation. As a by-product we also show that a well-known recursion relating to Asian options has a unique solution in a probabilistic sense. The double-Laplace transform can be inverted numerically via a two-sided Euler inversion algorithm. Numerical results indicate that our pricing method is fast, stable, and accurate; and it performs well even in the case of low volatilities. © 2012 INFORMS. Source
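As a one-dimensional cousin of the two-sided Euler inversion used in the paper, the Bromwich integral can be discretized into an alternating Fourier series (Abate-Whitt style). The sketch below simply truncates the series instead of applying Euler acceleration, so it needs many terms; it is an illustration of numerical Laplace inversion, not the paper's two-sided algorithm, and the test transform 1/(s+1) is a textbook example.

```python
import math

def invert_laplace(F, t, A=18.4, n_terms=20000):
    """Numerically invert a Laplace transform F(s) at t > 0 using the
    Fourier-series (discretized Bromwich) formula
      f(t) ~ (e**(A/2)/(2t)) * Re F(A/(2t))
             + (e**(A/2)/t) * sum_{k>=1} (-1)**k Re F((A + 2*pi*i*k)/(2t)).
    The discretization error is about e**(-A); the alternating tail is
    simply truncated here (production code would add Euler acceleration)."""
    s0 = A / (2 * t)
    total = 0.5 * F(complex(s0, 0)).real
    sign = -1.0
    for k in range(1, n_terms + 1):
        s = complex(s0, math.pi * k / t)
        total += sign * F(s).real
        sign = -sign
    return math.exp(A / 2) / t * total
```

The large prefactor e^(A/2) means the sum relies on heavy cancellation, which is why A cannot be pushed arbitrarily high in double precision.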


Ding C.,Hong Kong University of Science and Technology
Finite Fields and their Applications | Year: 2013

Cyclic codes are a subclass of linear codes and have many applications in consumer electronics, data transmission technologies, broadcast systems, and computer applications, as they have efficient encoding and decoding algorithms. In this paper, three cyclotomic sequences of order four are employed to construct a number of classes of cyclic codes over GF(q) with prime length. Under certain conditions, lower bounds on the minimum weight are developed. Some of the codes obtained are optimal or almost optimal. In general, the codes constructed in this paper are very good. Some of the cyclic codes obtained in this paper are closely related to almost difference sets and difference sets. © 2013 Elsevier Inc. Source
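The combinatorial ingredient here is the cyclotomic classes of order four: for a prime p ≡ 1 (mod 4), the nonzero residues split into four cosets of the fourth powers. The sketch below computes that partition; the sequence and code constructions themselves need further machinery beyond this.

```python
def primitive_root(p):
    """Smallest primitive root modulo prime p (brute force; fine for small p)."""
    def order(g):
        x, k = g, 1
        while x != 1:
            x = x * g % p
            k += 1
        return k
    return next(g for g in range(2, p) if order(g) == p - 1)

def cyclotomic_classes(p, d=4):
    """Cyclotomic classes C_0, ..., C_{d-1} of order d modulo prime p,
    where C_i = { g**(d*s + i) mod p : s = 0, ..., (p-1)/d - 1 }
    for a primitive root g; requires d | p - 1."""
    assert (p - 1) % d == 0
    g = primitive_root(p)
    f = (p - 1) // d
    return [sorted(pow(g, d * s + i, p) for s in range(f))
            for i in range(d)]
```

For p = 13 the four classes each contain three residues, with C_0 = {1, 3, 9} being the fourth powers.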


Huang K.,Hong Kong Polytechnic University | Lau V.K.N.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2012

This paper addresses the stability and queueing delay of space-division multiple access (SDMA) systems with bursty traffic, where zero-forcing beamforming enables simultaneous transmissions to multiple mobiles. Computing beamforming vectors relies on quantized channel state information (CSI) feedback (limited feedback) from mobiles. Define the stability region for SDMA as the set of multiuser packet-arrival rates for which the steady-state queue lengths are finite. Given perfect feedback of channel-direction information (CDI) and equal power allocation over scheduled queues, the stability region is proved to be a convex polytope having the derived vertices. A similar result is obtained for the case with perfect feedback of CDI and channel-quality information (CQI), where CQI allows scheduling and power control for enlarging the stability region. For any set of arrival rates in the stability region, multiuser queues are shown to be stabilized by the joint queue-and-beamforming control policy that maximizes the departure-rate-weighted sum of queue lengths. The stability region for limited feedback is found to be the perfect-CSI region multiplied by one minus a small factor. The required number of feedback bits per mobile is proved to scale logarithmically with the inverse of the above factor as well as linearly with the number of transmit antennas minus one. The effect of limited feedback on queueing delay is also quantified. CDI quantization errors are shown to multiply average queueing delay by a factor M > 1. For M approaching 1, the number of feedback bits per mobile is proved to be O(-log2(1 - 1/M)). © 1963-2012 IEEE. Source
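The closing scaling law can be made concrete: ignoring constants, bits per mobile grow like (Nt - 1) · log2(1/(1 - 1/M)), blowing up as the tolerated delay-inflation factor M approaches 1. The function below only evaluates that asymptotic scaling; the constants and exact rates in the paper are not reproduced.

```python
import math

def feedback_bits(n_tx, M):
    """Illustrative scaling B ~ (n_tx - 1) * log2(1 / (1 - 1/M)) for
    per-mobile CDI feedback keeping the delay-inflation factor at M.
    Constants omitted; M must exceed 1, and M -> 1 demands more bits."""
    assert M > 1
    return (n_tx - 1) * math.log2(1.0 / (1.0 - 1.0 / M))
```

Halving the tolerated delay inflation (M closer to 1) costs roughly one extra bit per antenna dimension.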


Gao Y.,Brandeis University | Zhao F.,Brandeis University | Wang Q.,Hong Kong University of Science and Technology | Zhang Y.,Brandeis University | Xu B.,Brandeis University
Chemical Society Reviews | Year: 2010

Enzymes, together with the process of self-assembly, constitute necessary components of the foundation of life on the nanometre scale. The exceedingly high efficiency and selectivity exhibited by enzymes for catalyzing biotransformations naturally lead to the exploration of enzyme mimics and the applications of enzymes in industrial biotransformations. While the mimicking of enzymes aims to preserve the essence of enzymes in a simpler system than proteins, industrial biotransformations demand high activity and stability of enzymes. Recent research suggests that small peptide-based nanofibers in the form of molecular hydrogels can provide a general platform to achieve both important goals. This tutorial review will introduce the recent progress of these research activities on small peptide-based nanomaterials for catalysis and hopes to provide a starting point for further explorations that ultimately may lead to practical applications of enzymes and enzyme mimics for addressing important societal problems in energy, environment, and health. © 2010 The Royal Society of Chemistry. Source


Shi L.,Hong Kong University of Science and Technology | Epstein M.,California Institute of Technology | Murray R.M.,California Institute of Technology
IEEE Transactions on Automatic Control | Year: 2010

We consider the problem of state estimation of a discrete time process over a packet-dropping network. Previous work on Kalman filtering with intermittent observations is concerned with the asymptotic behavior of E[Pk], i.e., the expected value of the error covariance, for a given packet arrival rate. We consider a different performance metric, Pr[Pk ≤ M], i.e., the probability that Pk is bounded by a given M. We consider two scenarios in the paper. In the first scenario, when the sensor sends its measurement data to the remote estimator via a packet-dropping network, we derive lower and upper bounds on Pr[Pk ≤ M]. In the second scenario, when the sensor preprocesses the measurement data and sends its local state estimate to the estimator, we show that the previously derived lower and upper bounds are equal to each other, hence we are able to provide a closed-form expression for Pr[Pk ≤ M]. We also recover the results in the literature when using Pr[Pk ≤ M] as a metric for scalar systems. Examples are provided to illustrate the theory developed in the paper. © 2010 IEEE. Source
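The metric Pr[Pk ≤ M] is easy to explore by simulation in the scalar case: iterate the Riccati recursion with Bernoulli packet arrivals and count how often the error covariance stays below M. This Monte-Carlo sketch illustrates the metric itself, not the paper's analytical bounds; all parameter values are made up.

```python
import random

def prob_P_bounded(a, c, q, r, arrival, M, k=50, trials=2000, seed=0):
    """Monte-Carlo estimate of Pr[P_k <= M] for scalar Kalman filtering
    over a packet-dropping link: each step P -> a*a*P + q (prediction);
    with probability `arrival` a measurement arrives and P shrinks via
    the measurement update."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        p = 1.0
        for _ in range(k):
            p = a * a * p + q                   # prediction
            if rng.random() < arrival:          # packet received
                p = p * r / (c * c * p + r)     # measurement update
        if p <= M:
            hits += 1
    return hits / trials
```

For an unstable system (|a| > 1) the estimate visibly degrades as the arrival rate drops, which is exactly the regime the paper's bounds characterize.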


Zhang W.,University of New South Wales | Ben Letaief K.,Hong Kong University of Science and Technology
IEEE Transactions on Signal Processing | Year: 2010

In this paper, we propose a systematic design of full-diversity space-frequency (SF) codes for multiuser MIMO-OFDM systems. With joint maximum likelihood detection, the proposed codes can achieve full diversity over selective-fading multiple access channels. The proposed coding scheme requires neither the cooperation of multiple transmitters nor instantaneous channel side information at the transmitters, only the channel statistics. Moreover, the proposed coding scheme is bandwidth efficient in that all users send their data streams through all OFDM subchannels simultaneously to make full use of the available bandwidth. © 2010 IEEE. Source


Wong J.F.,Hong Kong University of Science and Technology
Design Studies | Year: 2010

One of the major developments in architecture in the past twenty years is the liberation of formal expression and organisation in architecture to reflect the heterogeneous nature of our current cultures and contexts. Antithetical to the hegemony of modernist formalism, this 'free-form architecture' is characterised by a free-flowing expression that seeks to simultaneously reflect and reconcile the inevitability of a diversity of forces influencing any architectural design. Despite its proliferation, there seems to be a lack of rigorous research on the types of factors that these architects consider during their design process; such research would provide a better understanding of the diverse range of design considerations behind the provocative architectural forms. This study adopts the well-established qualitative research methodology of grounded theory to examine the architects' own discourse and establish a hierarchical structure of factors of free-form architecture from this primary source of data. © 2009 Elsevier Ltd. All rights reserved. Source


Sharif N.,Hong Kong University of Science and Technology
Science Technology and Human Values | Year: 2010

Since its introduction in the 1980s, use of the innovation systems (IS) conceptual approach has been growing, particularly on the part of national governments including, recently, the Hong Kong Government. In 2004, the Hong Kong Government set forth a "new strategy" for innovation and technology policy making. Because it marked a significant break from the past (characterized by a laissez-faire Government attitude), it was necessary to convince a wider audience to accept this new strategy, a strategy that included the IS conceptual approach. Adopting a science and technology studies (S&TS) perspective, I show how the IS conceptual approach is being used as a rhetorical resource by the Hong Kong Government in its innovation and technology policy making in an effort to persuade its perceived audience of the efficacy of its new strategy for its policies, policies that are in fact unrelated to the basic precepts of the IS conceptual approach. The case provides a cautionary tale in the ways in which policy makers transform scholarly work and scientific discovery into rhetorical instruments in support of a political agenda. © The Author(s) 2010. Source


Wu P.Y.,ViXS Systems | Tsui S.Y.S.,Fujitsu Limited | Mok P.K.T.,Hong Kong University of Science and Technology
IEEE Journal of Solid-State Circuits | Year: 2010

Monolithic PWM voltage-mode buck converters with a novel Pseudo-Type III (PT3) compensation are presented. The proposed compensation maintains the fast load transient response of the conventional Type III compensator, while the Type III compensator response is synthesized by adding a high-gain low-frequency path (via an error amplifier) and a moderate-gain high-frequency path (via a bandpass filter) at the inputs of the PWM comparator. As such, smaller passive components and low-power active circuits can be used to generate the two zeros required in a Type III compensator. A constant Gm/C biasing technique can also be adopted by PT3 to reduce the process variation of passive components, which is not possible in a conventional Type III design. Two prototype chips were fabricated in a 0.35-μm CMOS process, with the constant Gm/C biasing technique applied to one of the designs. Measurement results show that the converter output settles within 7 μs for a load current step of 500 mA. A peak efficiency of 97% is obtained at 360 mW output power, and a high efficiency of 86% is measured for output power as low as 60 mW. The area and power consumption of the proposed compensator are reduced by more than 75% in both designs, compared to an equivalent conventional Type III compensator. © 2006 IEEE. Source


Zhang J.,Hong Kong University of Science and Technology | Andrews J.G.,University of Texas at Austin
IEEE Journal on Selected Areas in Communications | Year: 2010

Downlink spatial intercell interference cancellation (ICIC) is considered for mitigating other-cell interference using multiple transmit antennas. A principal question we explore is whether it is better to do ICIC or simply standard single-cell beamforming. We explore this question analytically and show that beamforming is preferred for all users when the edge SNR (signal-to-noise ratio) is low (0 dB), and ICIC is preferred when the edge SNR is high (10 dB), for example in an urban setting. At medium SNR, a proposed adaptive strategy, where multiple base stations jointly select transmission strategies based on the user location, outperforms both while requiring a lower feedback rate than the pure ICIC approach. The employed metric is sum rate, which is normally a dubious metric for cellular systems, but surprisingly we show that even with this reward function the adaptive strategy also improves fairness. When the channel information is provided by limited feedback, the impact of the induced quantization error is also investigated. The analysis provides insights on the feedback design, and it is shown that ICIC with well-designed feedback strategies still provides significant throughput gain. © 2006 IEEE. Source


Meng G.,Hong Kong University of Science and Technology
Journal of Mathematical Physics | Year: 2013

For each simple euclidean Jordan algebra V of rank ρ and degree δ, we introduce a family of classical dynamic problems. These dynamical problems all share the characteristic features of the Kepler problem for planetary motions, such as the existence of a Laplace-Runge-Lenz vector and hidden symmetry. After suitable quantizations, a family of quantum dynamic problems, parametrized by the nontrivial Wallach parameter ν, is obtained. Here, ν lies in the Wallach set W(V), which was introduced by N. Wallach to parametrize the set of nontrivial scalar-type unitary lowest weight representations of the conformal group of V. For the quantum dynamic problem labelled by ν, the bound-state spectrum is -1/2/(I + νρ/2)^2, I = 0, 1, ..., and its Hilbert space of bound states gives a new realization for the aforementioned representation labelled by ν. A few results in the literature about these representations become more explicit and more refined. The Lagrangian for a classical Kepler-type dynamic problem introduced here is still of the simple form 1/2 ||ẋ||^2 + 1/r. Here, ẋ is the velocity of a unit-mass particle moving on the space consisting of V's semi-positive elements of a fixed rank, and r is the inner product of x with the identity element of V. © 2013 American Institute of Physics. Source


Wong J.F.,Hong Kong University of Science and Technology
Habitat International | Year: 2010

The vast majority of buildings being constructed in Hong Kong today are massive 40+-storey high-rise residential building towers housing hundreds of families. Immense resources - land, material, time, labour, money, energy - have been invested in their realization. However, almost all of these buildings, including those currently under construction and on the drawing boards, are not designed with adaptability and flexibility as a design intention and will cause major problems in the future: their lack of capacity for re-activation means that their only fate is demolition, thereby consuming even more resources, producing more waste, and causing more disruption to the environment. Unless we change our mind-set in mass housing design, today's designs will inevitably become tomorrow's problems. This paper studies the scenario design requirements and critical dimensions of use-territories in public mass housing in Hong Kong, with a view to extracting useful patterns for use in future designs. Case studies of popular residential layouts currently used in Hong Kong illustrate the kinds of problems the majority of the existing residential building stock will face when the need for renewal and upgrade arises. © 2009 Elsevier Ltd. All rights reserved. Source


Ng M.W.,Old Dominion University | Lo H.K.,Hong Kong University of Science and Technology
Networks and Spatial Economics | Year: 2013

The air quality levels in various regions around the world remain a major public concern. Transportation is known to be a major contributor to reduced air quality levels. Until now, the modeling of the regional impact of transportation on air quality has been based on the assumption of determinism. On the other hand, it is well recognized that transportation systems are subject to both demand and supply uncertainties. In this paper, we relax the assumption of determinism and allow for capacity and link flow uncertainty. We introduce a probability measure - coined the conformity probability - to capture the full probabilistic behavior of vehicular emissions. Moreover, stochastic dependencies are modeled using copulas, generalizing other commonly used dependence modeling techniques in the transportation network modeling arena. In a case study we demonstrate that such a generalization is critical, as the ranking of capacity expansion projects to improve air quality is shown to depend on the hypothesized dependence structure. Finally, we present some preliminary results suggesting that capacity uncertainty is more detrimental to the environment (i.e. leads to lower conformity probabilities) than demand uncertainty. © 2013 Springer Science+Business Media New York. Source
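To see what a copula contributes: a Gaussian copula imposes normal-type dependence while leaving the marginals uniform (and hence re-mappable to any desired marginal distribution). The 2-D sampler below is a generic sketch, unrelated to the paper's specific emissions model; the correlation value in the test is arbitrary.

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) with uniform(0, 1) marginals whose dependence
    is a Gaussian copula with correlation rho: sample correlated normals,
    then push each coordinate through the normal CDF."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        out.append((norm_cdf(z1), norm_cdf(z2)))
    return out
```

Swapping this copula for, say, one with tail dependence changes the joint behaviour while every marginal stays the same, which is the degree of freedom the paper exploits.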


Ding K.,Chinese Academy of Sciences | Ding C.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2015

In this paper, a class of two-weight and three-weight linear codes over GF(p) is constructed, and their application in secret sharing is investigated. Some of the linear codes obtained are optimal in the sense that they meet certain bounds on linear codes. These codes have applications also in authentication codes, association schemes, and strongly regular graphs, in addition to their applications in consumer electronics, communication and data storage systems. © 1963-2012 IEEE. Source


Ng W.,Hong Kong University of Science and Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

The applications of RFID (Radio Frequency Identification) have become more important and diversified in recent years due to the lower cost of RFID tags and smaller tag sizes. One promising area for applying the technology is Supply Chain Management (SCM), in which manufacturers need to analyse product and logistic information in order to get the right quantity of products arriving at the right locations at the right time. In this paper, we present a holistic framework that supports data querying and analysis of raw datasets obtained from different RFID collection points managed by supply chains. First, the framework provides repair mechanisms to preprocess raw RFID data from readers. Second, we present a database model to capture SCM information at various abstraction levels, such as items, time and locations, and then discuss the use of the SQL query language to manipulate RFID databases. Finally, we present a graph data model called a Tag Movement Graph (TMG) to capture the movement information of tagged objects. © 2011 Springer-Verlag. Source


Yang J.,Harbin Institute of Technology | Li Z.,Harbin Institute of Technology | Li Z.,Hong Kong University of Science and Technology
IEEE/ASME Transactions on Mechatronics | Year: 2011

Reduction of contour error is the main control objective in contour-following applications. A common approach to this objective is to design a controller based on the contour error directly. In this case, contour error estimation is a key factor in the contour-following operation. The contour error can be approximated by the linear distance from the actual position to the tangent line or plane at the desired position. This approach suffers from a significant error due to the linear approximation. A novel approach to contour error calculation for an arbitrary smooth path is proposed in this paper. The proposed method is based on coordinate transformation and circular approximation. In this method, the contour error is represented by the coordinates of the actual position with respect to a specific virtual coordinate frame. The method is incorporated in a position-loop-based cross-coupled control structure. An equivalent robust control system is used to establish stability of the closed-loop system. Experimental results demonstrate the efficiency and performance of the proposed contour error estimation algorithm and the motion control strategy. © 2010 IEEE. Source
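The tangent-line approximation discussed in the abstract is easy to state: project the tracking error onto the normal of the tangent line at the desired point. On a circular path the exact contour error is also available in closed form, so the approximation error can be seen directly. This sketch illustrates the first-order approximation being improved upon, not the paper's coordinate-transformation method; all coordinates are invented.

```python
import math

def tangent_contour_error(actual, desired, tangent):
    """First-order contour error in 2D: distance from `actual` to the
    tangent line through `desired` with unit direction `tangent`,
    i.e. the tracking-error component normal to the path."""
    ex, ey = actual[0] - desired[0], actual[1] - desired[1]
    # normal to the tangent direction (tx, ty) is (-ty, tx)
    return abs(ex * (-tangent[1]) + ey * tangent[0])

def circle_contour_error(actual, centre, radius):
    """Exact contour error for a circular path: |distance to centre - R|."""
    d = math.hypot(actual[0] - centre[0], actual[1] - centre[1])
    return abs(d - radius)
```

Near the path the two values nearly coincide; as the tracking error grows, the tangent-line estimate falls increasingly short of the true contour error, which is the gap the circular approximation targets.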


Huang G.,Hong Kong University of Science and Technology
Control Engineering Practice | Year: 2011

A new control strategy is proposed for zone thermal systems to deal with nonlinearities, uncertainties and constraints. The temperature control of VAV zone thermal systems is investigated. The system consists of two constrained processes: a zone temperature process with input-output bi-linearity and uncertainty, and a damper process with gain nonlinearity. Model predictive control is adopted for control design: a bilinear predictive controller is designed for the zone temperature process and a gain-scheduled robust predictive controller for the damper process. Both controllers deal with constraints directly and they operate in a cascaded manner. Case studies are given to show the effectiveness of the proposed strategy. © 2011 Elsevier Ltd. Source


Yan Zhang Y.,South China Normal University | Yan Zhang Y.,Hong Kong University of Science and Technology | An Yin Y.,South China Normal University
Applied Physics Letters | Year: 2011

The characteristics of the nitride-based blue light-emitting diode (LED) with an AlGaN/GaN superlattice (SL) electron-blocking layer (EBL) of gradual Al mole fraction are analyzed numerically and experimentally. The emission spectra, carrier concentrations in the quantum wells, energy band diagrams, electrostatic fields, and internal quantum efficiency are investigated. The results indicate that the LED with an AlGaN/GaN SL EBL of gradual Al mole fraction has better hole injection efficiency, lower electron leakage, and smaller electrostatic fields in its active region than the LED with a conventional rectangular AlGaN EBL or with a normal AlGaN/GaN SL EBL. The results also show that the efficiency droop is markedly improved when the SL EBL of gradual Al mole fraction is used. © 2011 American Institute of Physics. Source


Altman M.S.,Hong Kong University of Science and Technology
Journal of Physics Condensed Matter | Year: 2010

Low energy electron microscopy (LEEM) and spin polarized LEEM (SPLEEM) are two powerful in situ techniques for the study of surfaces, thin films and other surface-supported nanostructures. Their real-time imaging and complementary diffraction capabilities allow the study of structure, morphology, magnetism and dynamic processes with high spatial and temporal resolution. Progress in methods, instrumentation and understanding of novel contrast mechanisms that derive from the wave nature and spin degree of freedom of the electron continue to advance applications of LEEM and SPLEEM in these areas and beyond. We review here the basic imaging principles and recent developments that demonstrate the current capabilities of these techniques and suggest potential future directions. © 2010 IOP Publishing Ltd.


Hu L.,Sun Yat Sen University | You F.,Sun Yat Sen University | Yu T.,Hong Kong University of Science and Technology
Materials and Design | Year: 2013

The dynamic crushing behaviors in the x- and the y-directions of hexagonal honeycombs with various cell-wall angles are explored by both experiments and numerical simulations. Several deformation modes are identified based on the shape of the localization band formed by the cells' collapse. The respective influence of the cell-wall angle, the crushing velocity and the honeycomb's relative density on the honeycomb's mechanical properties is studied. It is shown that these influencing factors affect the honeycomb's x-directional crushing strength by altering the deformation mode of the honeycomb, while both the y-directional crushing strength and the average crushing strength are dominated by the honeycomb's density. With the honeycombs' relative density kept constant, the honeycomb with a cell-wall angle of about 45° exhibits the optimal crushing strength and energy absorption capacity under y-directional crushing, while the average crushing strength in the x- and the y-directions decreases with the cell-wall angle. Significant orthotropic anisotropy is revealed in the honeycombs with cell-wall angles greater than 30°, especially under low-velocity compression. The honeycomb with a cell-wall angle of about 25° possesses transversely isotropic mechanical properties. © 2012 Elsevier Ltd.


Ma E.,University of California at Riverside | Ma E.,Hong Kong University of Science and Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2016

I propose a model of radiative charged-lepton and neutrino masses with A4 symmetry. The soft breaking of A4 to Z3 lepton triality is accomplished by dimension-three terms. The breaking of Z3 by dimension-two terms allows cobimaximal neutrino mixing (θ13≠0, θ23=π/4, δCP=±π/2) to be realized with only very small finite calculable deviations from the residual Z3 lepton triality. This construction solves a long-standing technical problem inherent in renormalizable A4 models since their inception. © 2016 The Author.


Ma E.,University of California at Riverside | Ma E.,Hong Kong University of Science and Technology
Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics | Year: 2016

In all scalar extensions of the standard model of particle interactions, the one Higgs boson responsible for electroweak symmetry breaking always mixes with other neutral scalars at tree level unless a symmetry prevents it. An unexplored important option is that the mixing may be radiative, and thus guaranteed to be small. The first two such examples are discussed. One is based on the soft breaking of the discrete symmetry Z3. The other starts with the non-Abelian discrete symmetry A4 which is then softly broken to Z3, and results in the emergence of an interesting dark-matter candidate together with a light mediator for the dark matter to have its own long-range interaction. © 2016 The Author.


Shimokawa S.,Hong Kong University of Science and Technology
American Journal of Agricultural Economics | Year: 2010

To analyze intrahousehold calorie allocation, we propose a new framework that takes into account asymmetric consumption behavior due to liquidity constraints and loss aversion. We find that intrahousehold calorie allocation responds asymmetrically to expected declines and increases in household food availability in China. Compared with previous studies based on symmetric consumption behavior, our framework provides stronger evidence of gender bias in intrahousehold calorie allocation among children in urban areas and among elderly people in rural areas, and of demographic bias between girls and prime-age adults in both urban and rural areas. Implications for demographic targeting in nutrition programs are discussed. © The Author (2010).


Chen B.,CAS Shanghai Institute of Organic Chemistry | Hou X.-L.,CAS Shanghai Institute of Organic Chemistry | Li Y.-X.,CAS Shanghai Institute of Organic Chemistry | Wu Y.-D.,CAS Shanghai Institute of Organic Chemistry | Wu Y.-D.,Hong Kong University of Science and Technology
Journal of the American Chemical Society | Year: 2011

DFT calculations suggest that the unexpected meta product in the copper-catalyzed arylation of anilide is formed via a Heck-like four-membered-ring transition state involving a CuIII-Ph species. A competitive electrophilic substitution mechanism delivers the ortho product when a methoxy group is present at the meta position of pivanilide. A series of experiments including kinetic studies support the involvement of a CuI catalyst. © 2011 American Chemical Society.


Jia G.,Hong Kong University of Science and Technology
Organometallics | Year: 2013

This personal account summarizes our work on the chemistry of transition-metal-containing metallabenzynes, organometallic compounds derived from formal replacement of a C atom in benzyne by an isolobal transition-metal fragment. Metallabenzynes with osmium and rhenium have been synthesized and well characterized. They have aromatic character on the basis of the criteria of reactivity, geometry, aromatic stabilization energy, and magnetic properties. They can undergo typical reactions of aromatic systems (e.g., electrophilic substitution reactions) and organometallic complexes (e.g., reductive elimination reactions to form carbene complexes). © 2013 American Chemical Society.


Wang M.,CAS Institute of Chemistry | Zhang G.,CAS Institute of Chemistry | Zhang D.,CAS Institute of Chemistry | Zhu D.,CAS Institute of Chemistry | Tang B.Z.,Hong Kong University of Science and Technology
Journal of Materials Chemistry | Year: 2010

New fluorescent sensors have been developed, utilizing the aggregation-induced emission (AIE) attribute of silole and tetraphenylethene luminogens. In this feature article, we briefly summarize recent progress in the development of AIE-based bio/chemosensors for assays of nuclease and AChE activities, screening of inhibitors, and detection of various analytes including charged biopolymers, ionic species, volatile and explosive organic compounds. © 2010 The Royal Society of Chemistry.


Chow T.T.,Hong Kong University of Science and Technology
Applied Energy | Year: 2010

A significant amount of research and development work on the photovoltaic/thermal (PVT) technology has been done since the 1970s. Many innovative systems and products have been put forward and their quality evaluated by academics and professionals. A range of theoretical models has been introduced and their appropriateness validated by experimental data. Important design parameters are identified. Collaborations have been underway amongst institutions or countries, helping to sort out the suitable products and systems with the best marketing potential. This article gives a review of the trend of development of the technology, in particular the advancements in recent years and the future work required. © 2009 Elsevier Ltd.


Ding C.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2012

Cyclic codes are an interesting type of linear code and have applications in communication and storage systems due to their efficient encoding and decoding algorithms. In this paper, three types of generalized cyclotomy of order two are described and three classes of cyclic codes of length n1n2 and dimension (n1n2+1)/2 are presented and analyzed, where n1 and n2 are two distinct primes. Bounds on their minimum odd-like weight are also proved. Some of the codes presented in this paper are among the best cyclic codes. © 2006 IEEE.
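The efficient encoding mentioned here is common to all cyclic codes: systematic encoding reduces to a polynomial remainder over GF(2), and every cyclic shift of a codeword remains a codeword. A hedged illustration with a standard textbook [7,4] code (not one of the paper's length-n1n2 constructions):

```python
def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; polynomials as bit lists, LSB first."""
    r = dividend[:]
    for i in range(len(r) - len(divisor), -1, -1):
        if r[i + len(divisor) - 1]:          # leading term present: subtract shift of divisor
            for j, d in enumerate(divisor):
                r[i + j] ^= d
    return r[:len(divisor) - 1]

def encode(msg, g):
    """Systematic cyclic encoding: codeword = parity || message."""
    shifted = [0] * (len(g) - 1) + msg       # msg(x) * x^(deg g)
    return poly_mod(shifted, g) + msg

g = [1, 1, 0, 1]                             # generator g(x) = 1 + x + x^3 of the [7,4] code
cw = encode([1, 0, 1, 1], g)
shift = cw[-1:] + cw[:-1]                    # one cyclic shift of the codeword
print(cw, poly_mod(shift, g))                # zero remainder: the shift is a codeword too
```

The zero remainder after the shift is exactly the cyclicity property; the same remainder computation is what a linear feedback shift register implements in hardware.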


Lin B.-C.,National University of Tainan | Lea C.-T.,Hong Kong University of Science and Technology
Journal of Lightwave Technology | Year: 2013

Silicon photonic microrings have the capability of handling two wavelengths simultaneously, a capability that does not exist in other types of photonic switching technologies, such as directional couplers or MEMS. This two-wavelength switching capability has not been exploited before. In this paper, we use it to construct a new type of microring-based non-blocking optical interconnect. For a 4 × 4 network, the new architecture needs only four rings. In contrast, the conventional crossbar-based architecture requires 16 rings. For medium-size switches, such as 8 ports or 16 ports, the new architecture also requires significantly fewer rings than conventional crossbar switches of the same sizes. © 2013 IEEE.


Sun Y.,Hong Kong University of Science and Technology
Energy and Buildings | Year: 2015

The rapid growth of building energy consumption has imposed increasing pressure on environmental protection. The net zero energy building (NZEB) is widely considered to be an effective solution. Various macro-parameters in an NZEB have different impacts on system design, yet very few studies have investigated such impacts. Therefore, a systematic sensitivity analysis of macro-parameters in an NZEB has been conducted in this study. Differential sensitivity analysis, a local sensitivity analysis method, is performed on a constructed dynamic simulation platform to study the impact of each macro-parameter on the sizes of key NZEB systems: the heating, ventilation and air-conditioning (HVAC) system, the renewable energy system and the energy storage system. The influence coefficient of each parameter is calculated to quantify its sensitivity impact. Meanwhile, an exhaustive search approach is proposed to minimize the overall initial investment cost of the renewable energy system and storage system. The results are valuable in helping designers improve NZEB system design by carefully selecting more accurate design parameters, especially those identified as having heavy sensitivity impacts. The study also provides a method to optimize the initial investment cost of systems in an NZEB. © 2014 Elsevier B.V. All rights reserved.
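In its usual one-at-a-time form, the influence coefficient used in differential sensitivity analysis is the ratio of the fractional change in an output to the fractional change in an input about a base case. A minimal sketch, with a toy sizing model standing in for the paper's simulation platform (all names and numbers below are assumptions):

```python
def influence_coefficient(model, base, param, delta=0.05):
    """IC = (dOP/OP_base) / (dIP/IP_base), via a one-at-a-time finite difference."""
    op0 = model(base)
    perturbed = dict(base, **{param: base[param] * (1 + delta)})
    return ((model(perturbed) - op0) / op0) / delta

def pv_size_kw(p):
    # Toy sizing model (illustrative only): required PV capacity grows
    # with the plug load and shrinks with the HVAC system's COP.
    return p["plug_load_kw"] * 4.0 / p["cop"]

base = {"plug_load_kw": 10.0, "cop": 3.0}
ic_load = influence_coefficient(pv_size_kw, base, "plug_load_kw")
ic_cop = influence_coefficient(pv_size_kw, base, "cop")
print(round(ic_load, 3), round(ic_cop, 3))
```

A coefficient near +1 means the output scales proportionally with the parameter; a negative coefficient flags a parameter whose underestimation leads to oversizing.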


Wang H.,Hong Kong University of Science and Technology | Chen W.-R.,INSEAD
Research Policy | Year: 2010

This paper extends the resource-based theory of the firm to examine the contingencies that either intensify or reduce the relationship between firm-specific innovation and value appropriation. Based on a large-scale analysis of a sample of US manufacturing firms, we found that greater appropriation of innovation rents is associated with greater firm specificity of innovative knowledge. However, the positive relationship between firm-specific innovations and firm value appropriation tends to weaken when the product or technology market is highly dynamic. Further, under high environmental dynamism, firms should increase the diversity of their knowledge composition in order to mitigate the risk of value erosion associated with firm-specific innovations. © 2009 Elsevier B.V. All rights reserved.


Taleb T.,NEC Europe Ltd. | Letaief K.B.,Hong Kong University of Science and Technology
IEEE Transactions on Wireless Communications | Year: 2010

Cooperative diversity has emerged as a promising technique to facilitate fast handoff mechanisms in mobile ad-hoc environments. The key concept behind a prominent cooperative diversity based protocol, namely, Partner-based Hierarchical Mobile IPv6 (PHMIPv6), is to enable mobile nodes to anticipate handover events by selecting suitable partners to communicate on their behalf with Mobility Anchor Points (MAPs). In the original design of PHMIPv6, mobile hosts choose partners based on their signal strength. Such a naive selection procedure may lead to scenarios where mobile hosts lose communication with the selected partners before the completion of the handoff operations. In addition, PHMIPv6 overlooks security considerations, which can easily leave mobile hosts and/or partner entities vulnerable. As a solution to these two shortcomings of PHMIPv6, this paper first proposes an extended version of PHMIPv6 called Connection Stability Aware PHMIPv6 (CSA-PHMIPv6). In CSA-PHMIPv6, mobile hosts select partners with whom communication can last for a sufficiently long time by employing the Link Expiration Time (LET) parameter. To tackle the security issues, the simple yet effective use of two distinct authentication keys is envisioned. Furthermore, to shorten the communication time between mobile hosts and their corresponding partners, a second handoff management approach called Partner Less Dependable PHMIPv6 (PLD-PHMIPv6) is proposed. © 2010 IEEE.
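The LET criterion can be illustrated with the common free-space mobility formulation: given the relative position and velocity of two nodes and a radio range r, the link expires when their distance first reaches r. This generic formula is a sketch of the idea, not the protocol's exact procedure:

```python
import math

def link_expiration_time(dx, dy, dvx, dvy, r):
    """Time until two nodes at relative position (dx, dy), moving with
    relative velocity (dvx, dvy), are separated by range r."""
    a = dvx ** 2 + dvy ** 2
    if a == 0:
        return math.inf                      # no relative motion: link persists
    b = 2 * (dx * dvx + dy * dvy)
    c = dx ** 2 + dy ** 2 - r ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0.0                           # nodes never come within range
    return (-b + math.sqrt(disc)) / (2 * a)  # first time distance equals r

# Candidate partner 50 m away, closing at 5 m/s and then passing by; range 100 m.
print(round(link_expiration_time(50.0, 0.0, -5.0, 0.0, 100.0), 1))
```

A partner-selection rule would then prefer the candidate whose LET comfortably exceeds the expected handoff duration, rather than the one with the strongest instantaneous signal.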


Lam H.,Hong Kong University of Science and Technology
Molecular and Cellular Proteomics | Year: 2011

Spectral library searching is an emerging approach in peptide identification from tandem mass spectra, a critical step in proteomic data analysis. Conceptually, the premise of this approach is that the tandem MS fragmentation pattern of a peptide under some fixed conditions is a reproducible fingerprint of that peptide, such that unknown spectra acquired under the same conditions can be identified by spectral matching. In actual practice, a spectral library is first meticulously compiled from a large collection of previously observed and identified tandem MS spectra, usually obtained from shotgun proteomics experiments of complex mixtures. A query spectrum is then identified by spectral matching using recently developed spectral search engines. This review discusses the basic principles of the two pillars of this approach: spectral library construction and spectral library searching. An overview of the software tools available for these two tasks, as well as a high-level description of the underlying algorithms, will be given. Finally, several new methods that utilize spectral libraries for peptide identification in ways other than straightforward spectral matching will also be described. © 2011 by The American Society for Biochemistry and Molecular Biology, Inc.


Atzeni I.,Polytechnic University of Catalonia | Ordonez L.G.,Polytechnic University of Catalonia | Scutari G.,State University of New York at Buffalo | Palomar D.P.,Hong Kong University of Science and Technology | Fonollosa J.R.,Polytechnic University of Catalonia
IEEE Transactions on Smart Grid | Year: 2013

Demand-side management, together with the integration of distributed energy generation and storage, are considered increasingly essential elements for implementing the smart grid concept and balancing massive energy production from renewable sources. We focus on a smart grid in which the demand-side comprises traditional users as well as users owning some kind of distributed energy sources and/or energy storage devices. By means of a day-ahead optimization process regulated by an independent central unit, the latter users intend to reduce their monetary energy expense by producing or storing energy rather than just purchasing their energy needs from the grid. In this paper, we formulate the resulting grid optimization problem as a noncooperative game and analyze the existence of optimal strategies. Furthermore, we present a distributed algorithm to be run on the users' smart meters, which provides the optimal production and/or storage strategies, while preserving the privacy of the users and minimizing the required signaling with the central unit. Finally, the proposed day-ahead optimization is tested in a realistic situation. © 2010-2012 IEEE.


Chen Q.,Stanford University | Li D.,Columbia University | Tang C.-K.,Hong Kong University of Science and Technology
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2013

This paper proposes to apply the nonlocal principle to general alpha matting for the simultaneous extraction of multiple image layers; each layer may have disjoint as well as coherent segments typical of foreground mattes in natural image matting. The estimated alphas also satisfy the summation constraint. As in nonlocal matting, our approach does not assume the local color-line model and does not require sophisticated sampling or learning strategies. On the other hand, our matting method generalizes well to any color or feature space in any dimension, any number of alphas and layers at a pixel beyond two, and comes with an arguably simpler implementation, which we have made publicly available. Our matting technique, aptly called KNN matting, capitalizes on the nonlocal principle by using K nearest neighbors (KNN) in matching nonlocal neighborhoods, and contributes a simple and fast algorithm that produces competitive results with sparse user markups. KNN matting has a closed-form solution that can leverage the preconditioned conjugate gradient method to produce an efficient implementation. Experimental evaluation on benchmark datasets indicates that our matting results are comparable to or of higher quality than state-of-the-art methods requiring more involved implementation. In this paper, we take the nonlocal principle beyond alpha estimation and extract overlapping image layers using the same Laplacian framework. Given the alpha value, our closed-form solution can be elegantly generalized to solve the multilayer extraction problem. We perform qualitative and quantitative comparisons to demonstrate the accuracy of the extracted image layers. © 1979-2012 IEEE.
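The closed-form step of a KNN-style matting solve can be sketched in a few lines: build an affinity matrix from K nearest neighbors in feature space, form the graph Laplacian L, and solve a linear system that pins the user-scribbled pixels. This is a toy 1-D re-implementation under assumed affinity and scribble choices, not the authors' published code:

```python
import numpy as np

def knn_matting_1d(features, scribble_mask, scribble_alpha, k=2, lam=100.0):
    """Solve (L + lam*D) alpha = lam*D*v, where D marks scribbled pixels."""
    n = len(features)
    dist = np.abs(features[:, None] - features[None, :])
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]              # K nearest, excluding self
        A[i, nbrs] = 1.0 - dist[i, nbrs] / (dist.max() + 1e-12)
    A = (A + A.T) / 2.0                                  # symmetrize affinities
    L = np.diag(A.sum(1)) - A                            # graph Laplacian
    D = np.diag(scribble_mask.astype(float))
    alpha = np.linalg.solve(L + lam * D, lam * D @ scribble_alpha)
    return np.clip(alpha, 0.0, 1.0)

# Toy 1-D "image": two feature clusters; one scribbled pixel per cluster.
feats = np.array([0.0, 0.05, 0.1, 0.9, 0.95, 1.0])
mask = np.array([1, 0, 0, 0, 0, 1])
vals = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
alpha = knn_matting_1d(feats, mask, vals)
print(alpha.round(2))
```

Because neighborhoods are chosen in feature space rather than spatially, the alpha propagates across each cluster regardless of pixel distance; at real image scale the same system would be sparse and solved with preconditioned conjugate gradients.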


Lin Z.,Hong Kong University of Science and Technology
Building and Environment | Year: 2011

With its reliability and simplicity, effective draft temperature (EDT) has been a popular tool for practicing engineers to evaluate and/or predict the performance of mixing ventilation. In this study, experimental measurements are conducted in a large environmental chamber under stratum ventilation. A computational fluid dynamics (CFD) model, previously validated extensively, is further validated with the experimental data generated in the current study. Simulations are also carried out using the validated model. Data generated by experiment and by simulation are used to formulate an effective draft temperature for stratum ventilation (EDTS). Similar to its counterpart for mixing ventilation, EDTS is found to be reliable and straightforward for evaluating the thermal comfort performance of stratum ventilation, and it has the potential to be applied easily by practicing engineers in the air distribution design of stratum ventilation. © 2011 Elsevier Ltd.


Yang Z.,Tsinghua University | Zhou Z.,Hong Kong University of Science and Technology | Liu Y.,Tsinghua University
ACM Computing Surveys | Year: 2013

The spatial features of emitted wireless signals are the basis of location distinction and determination for wireless indoor localization. Available in mainstream wireless signal measurements, the Received Signal Strength Indicator (RSSI) has been adopted in vast numbers of indoor localization systems. However, it suffers from dramatic performance degradation in complex situations due to multipath fading and temporal dynamics. Breakthrough techniques resort to finer-grained wireless channel measurement than RSSI. Different from RSSI, the PHY layer power feature, channel response, is able to discriminate multipath characteristics, and thus holds the potential for the convergence of accurate and pervasive indoor localization. Channel State Information (CSI, reflecting channel response in 802.11 a/g/n) has attracted many research efforts, and some pioneering works have demonstrated submeter or even centimeter-level accuracy. In this article, we survey this new trend of channel response in localization. The differences between CSI and RSSI are highlighted with respect to network layering, time resolution, frequency resolution, stability, and accessibility. Furthermore, we investigate a large body of recent works and classify them into three categories according to how CSI is used. For each category, we emphasize the basic principles and address future directions of research in this new and largely open area. © 2013 ACM.


Wang W.-X.,Hong Kong University of Science and Technology
Aquatic Toxicology | Year: 2011

The field of aquatic toxicology has been expanding rapidly in recent years. The ecotoxicological study of environmental toxicants encompasses three basic frameworks: environmental behavior/transport, bioavailability/bioaccumulation (exposure), and toxicity at different biological levels. Environmental risk assessments are then based on this knowledge to provide sound advice for environmental management and policies. In this article I will highlight the need to further understand the exposure to toxicants and its direct relationship with toxicological responses at different levels. Exposure considerations generally include the route, species, concentration and duration of exposure, among which the importance of the exposure route has been little considered. A typical aquatic toxicological study simply exposes the organisms to toxicants in the water for a certain period of time under different concentrations. This approach may not be environmentally relevant. Future studies should attempt to understand the toxicology under different exposure regimes. Incorporating exposure will allow aquatic toxicology to be placed in a context of environmental relevance and enhance our understanding of the impacts of toxicants on our living environments. © 2011 Elsevier B.V.


Ma H.,Hong Kong University of Science and Technology
Journal of Porous Materials | Year: 2014

Mercury intrusion porosimetry (MIP) has been widely used to investigate the pore structure of cement-based materials for many years. The purpose of this paper is to present views on how to make MIP results of similar materials from different research institutes compatible and how to use MIP results in modeling. Factors that influence MIP results are analyzed comprehensively considering the characteristics of cement-based materials, and recommendations corresponding to these factors are given. According to these recommendations, when specific tests are unavailable, a mercury surface tension of 480 mN/m and a mercury-solid contact angle of 130° may be used in theoretical calculations of pore size; sampling by either sawing or core-drilling, a minimum sample dimension of no more than 5 mm, and solvent drying are recommended for sample preparation; and a staged operation mode that sets an appropriate equilibrium time is recommended for MIP measurements of cement-based materials. The methods used to determine pore structure parameters from an MIP result, which need to be unified, are discussed. In addition, it is shown that pore structure parameters should sometimes be used carefully in physical models by considering their physical or statistical meanings. © 2013 Springer Science+Business Media New York.
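The recommended surface tension and contact angle enter the Washburn equation, d = -4γ·cosθ / P, which MIP instruments use to convert intrusion pressure into an equivalent cylindrical pore diameter. A quick sketch of that conversion (unit handling is ours; the recommended parameter values are from the text above):

```python
import math

def washburn_diameter_nm(pressure_mpa, gamma=0.480, theta_deg=130.0):
    """Equivalent pore diameter (nm) for an intrusion pressure (MPa),
    with surface tension gamma in N/m and contact angle in degrees."""
    p = pressure_mpa * 1e6                                       # Pa
    d = -4.0 * gamma * math.cos(math.radians(theta_deg)) / p     # m
    return d * 1e9                                               # nm

print(round(washburn_diameter_nm(10.0), 1))   # diameter probed at 10 MPa
```

Because cosθ is negative for a non-wetting angle of 130°, the diameter comes out positive; changing the assumed contact angle by a few degrees shifts every reported pore size, which is exactly why the paper argues for unified parameter choices.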


In a residential building, a balcony on an upper floor can act as an overhang, providing solar shading and reducing the air-conditioning (A/C) electricity consumption of the flat on the floor beneath. However, some residential flats located on the lower levels may receive a substantial self-shading effect from adjacent flats in the same building block, leading to an insignificant shading effect from a balcony. As a substantial amount of energy is consumed and pollutants are generated during the production and disposal of a balcony, it is vital to investigate the energy and environmental performance of residential flats installed with balconies at various floor levels. The objective of this study is to identify an appropriate floor level of a residential building above which balconies should be incorporated. A 21-story residential building was modeled using EnergyPlus. Simulation results indicated that, for a west-facing flat, only the flats located on 15/F to 20/F give acceptable environmental payback periods, ranging from 58.3 years to 40.7 years, i.e. within the lifespan (60 years) of a building. The corresponding annual savings in A/C consumption range from 234.9 MJ (2.60%) to 336.7 MJ (3.57%). The research methodology and findings are presented in this paper. © 2015 Elsevier Ltd.
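The quoted payback periods are consistent with an embodied impact of roughly 13.7 GJ per balcony (annual saving multiplied by payback period). Treating that figure as an assumption inferred from the numbers above, the bookkeeping is simply:

```python
# Environmental payback period = embodied impact / annual A/C saving.
embodied_mj = 13700.0                     # assumed embodied impact per balcony (MJ)
annual_savings_mj = (234.9, 336.7)        # range quoted in the abstract
paybacks = [round(embodied_mj / s, 1) for s in annual_savings_mj]
print(paybacks)                           # years, smallest saving first
```

Reproducing the 58.3- and 40.7-year figures from one embodied-impact value shows why only the upper floors, where the shading saving is largest, pay back within the 60-year building lifespan.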


Junghans D.,Hong Kong University of Science and Technology | Schmidt D.,Leibniz University of Hanover | Zagermann M.,Leibniz University of Hanover
Journal of High Energy Physics | Year: 2014

We study AdS7 vacua of massive type IIA string theory compactified on a 3-sphere with H3 flux and anti-D6-branes. In such backgrounds, the anti-brane backreaction is known to generate a singularity in the H3 energy density, whose interpretation has not been understood so far. We first consider supersymmetric solutions of this setup and give an analytic proof that the flux singularity is resolved there by a polarization of the anti-D6-branes into a D8-brane, which wraps a finite 2-sphere inside of the compact space. To this end, we compute the potential for a spherical probe D8-brane on top of a background with backreacting anti-D6-branes and show that it has a local maximum at zero radius and a local minimum at a finite radius of the 2-sphere. The polarization is triggered by a term in the potential due to the AdS curvature and therefore does not occur in non-compact setups where the 7d external spacetime is Minkowski. We furthermore find numerical evidence for the existence of non-supersymmetric solutions in our setup. This is supported by the observation that the general solution to the equations of motion has a continuous parameter that is suggestive of a modulus and appears to control supersymmetry breaking. Analyzing the polarization potential for the non-supersymmetric solutions, we find that the flux singularities are resolved there by brane polarization as well. © The Authors.


Leung K.N.,Chinese University of Hong Kong | Ng Y.S.,Hong Kong University of Science and Technology
IEEE Transactions on Circuits and Systems I: Regular Papers | Year: 2010

An energy-efficient voltage buffer for a low-dropout regulator (LDO) is presented in this paper. The voltage buffer contains a current-boosting circuit with quick-on and auto-off features so that it can momentarily provide an extra current to charge and discharge the large gate capacitance of the power transistor. The voltage buffer is therefore able to increase the slew rate at the gate of the power transistor, whereas the quiescent current of the LDO remains constantly low in the steady state. Moreover, the proposed current-boosting circuit has a capacitive shunt feedback network to improve the loop bandwidth of the LDO. The proposed voltage buffer is applied to an LDO implemented in a 0.35-μm CMOS technology. The LDO operates at a 2-V supply, and the regulated voltage is 1.8 V. The maximum output current is 100 mA. The measured quiescent current is about 4 μA. The load transient deviation of the regulated voltage is small. The proposed voltage buffer can effectively reduce the transient voltage spikes. © 2010 IEEE.


Chan A.L.S.,Hong Kong University of Science and Technology
Applied Energy | Year: 2012

There are various architectural features of a residential building that can influence its indoor climate and electricity consumption, such as thermal insulation, window size, glazing material, albedo of building façade and orientation. In addition to these architectural features, shading effects (either by external objects or by the building itself) can also affect the thermal performance of a building. External shading effects are mainly caused by nearby trees or buildings, while shading effect imposed by the building itself usually depends on the layout design of the building, i.e. building shape and layout arrangement of the flats on each floor. Some flats in a building may receive a shading effect from adjacent flats located in the same building block. When architects or building designers conduct the layout design of a building, a number of factors such as building regulations, site limitations, scenic view, noise control, natural ventilation and daylight utilization will be considered. The thermal performance of a building is one of the major issues that should be taken into account. The objective of this study is to assess the thermal performance of residential buildings under the effect of adjacent shading in subtropical Hong Kong. A literature survey was carried out to identify typical layout designs of residential buildings from the past two decades. Building energy simulations were conducted for residential building blocks with different layout designs. It is found that adjacent shading effect has a substantial impact on the thermal performance of residential buildings. The findings are reported in this paper. © 2011 Elsevier Ltd.


Zou C.,Nankai University | Tsung F.,Hong Kong University of Science and Technology
Journal of Quality Technology | Year: 2010

Nonparametric or distribution-free charts are useful in statistical process control when there is a lack of or limited knowledge about the underlying process distribution. Most existing approaches in the literature are for monitoring location parameters. They may not be effective with a change of distribution over time in many applications. This paper develops a new distribution-free control chart based on the integration of a powerful nonparametric goodness-of-fit test and the exponentially weighted moving-average (EWMA) control scheme. Benefiting from certain desirable properties of the test and the proposed charting statistic, our proposed control chart is computationally fast, convenient to use, and efficient in detecting potential shifts in location, scale, and shape. Thus, it offers robust protection against variation in various underlying distributions. Numerical studies and a real-data example show that the proposed approaches are quite effective in industrial applications, particularly in start-up and short-run situations.
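The EWMA side of such a scheme follows the standard recursion E_t = (1-w)·E_{t-1} + w·X_t, with an alarm whenever E_t leaves its control limits. A generic sketch of that recursion only (the paper's charting statistic is a goodness-of-fit statistic; here X_t is replaced by a plain numeric stream, and the limit is an assumed constant rather than a calibrated one):

```python
def ewma_chart(xs, w=0.2, target=0.0, limit=0.5):
    """Return the indices at which the EWMA statistic breaches the limit."""
    e, alarms = target, []
    for t, x in enumerate(xs):
        e = (1 - w) * e + w * x          # exponentially weighted moving average
        if abs(e - target) > limit:
            alarms.append(t)
    return alarms

in_control = [0.1, -0.2, 0.05, -0.1, 0.15]
shifted = in_control + [1.0, 1.2, 0.9, 1.1]   # location shift begins at t = 5
print(ewma_chart(in_control), ewma_chart(shifted))
```

The smoothing weight w trades detection speed for noise rejection: small w accumulates evidence slowly but resists false alarms, which is why the statistic signals a couple of samples after the shift rather than immediately.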


Fong K.F.,Hong Kong University of Science and Technology | Lee C.K.,University of Hong Kong
Applied Energy | Year: 2014

Trigeneration, which is able to provide cooling, heating and power, has been advocated as a sustainable solution for building use in urban areas. With its high-temperature operation and maintenance convenience, the solid oxide fuel cell (SOFC) has become a promising prime mover for trigeneration. In this study, two zero grid-electricity design strategies of an SOFC-trigeneration system for high-rise buildings were proposed and evaluated. The first approach, named the full-SOFC strategy, is to design the rated capacity of the SOFC by matching the peak electrical power demand without the need for grid connection. The second, called the partial-SOFC strategy, is to satisfy the peak electrical demand partly by the SOFC and partly by the grid, while still maintaining net zero grid-electricity over a year. In view of the system complexity and the component interaction of SOFC-trigeneration, the environmental and energy performances of different cases were evaluated through year-round dynamic simulation. Compared to the conventional provisions of cooling, heating and power for buildings, the full- and partial-SOFC-trigeneration systems could cut annual carbon emissions by 51.4% and 23.9%, respectively. In terms of year-round electricity demand, the two zero grid-electricity strategies had corresponding savings of 7.1% and 2.8%. As a whole, full-SOFC-trigeneration assures both environmental and energy merits for high-rise buildings in hot and humid climates. © 2013 Elsevier Ltd.


Cao C.,Hong Kong University of Science and Technology | Cheung M.M.S.,University of Sichuan
Construction and Building Materials | Year: 2014

Steel corrosion normally takes a pitting pattern in chloride-contaminated RC structures. This paper examines the localized corrosion rust accumulation process through electrochemical analysis. The coupled micro- and macro-cell corrosion process involved in typical chloride-induced corrosion is numerically simulated using the finite element method (FEM). The modeling results show that the macrocell corrosion rate may decrease drastically while the microcell corrosion rate changes little during the gradual initiation of corrosion around reinforcing steel. The non-uniform rust distribution around the steel-concrete interface is found to be mainly caused by macrocell corrosion circulating between the upper active and lower passive rebar surfaces. © 2013 Elsevier Ltd. All rights reserved. Source


Han D.,South-Central University for Nationalities | Shi L.,Hong Kong University of Science and Technology
Automatica | Year: 2013

We consider the problem of guaranteed cost control (GCC) of affine nonlinear systems in this paper. Firstly, the general affine nonlinear system with the origin as its equilibrium point is represented in a linear-like structure with state-dependent coefficient matrices. Secondly, the partition of unity method is used to approximate the coefficient matrices, as a result of which the original affine nonlinear system is equivalently converted into a linear-like system with modeling error. A GCC law is then synthesized based on the equivalent model in the presence of modeling error under a certain error condition. The control law ensures that the system under control is asymptotically stable and that a given cost function is upper-bounded. A suboptimal GCC law can be obtained by solving an optimization problem in terms of linear matrix inequalities (LMIs), instead of the state-dependent Riccati equation (SDRE) or Hamilton-Jacobi equations that are usually required in solving nonlinear optimal control problems. Finally, a numerical example is provided to illustrate the validity of the proposed method. © 2012 Elsevier Ltd. All rights reserved. Source
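The linear-like, state-dependent-coefficient representation mentioned above can be illustrated on a toy two-state system; both the dynamics f and the factorization A below are hypothetical examples, not taken from the paper.

```python
import numpy as np

def f(x):
    # Hypothetical affine nonlinear dynamics with equilibrium at the origin
    x1, x2 = x
    return np.array([x2, -x1 + x1**2 * x2])

def A(x):
    # One (non-unique) state-dependent coefficient factorization: f(x) = A(x) @ x
    x1, _ = x
    return np.array([[0.0, 1.0],
                     [-1.0, x1 * x1]])
```

Such factorizations are not unique; the paper's partition-of-unity step approximates the coefficient matrices so that LMI machinery can be applied.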


Shiu G.,University of Wisconsin - Madison | Shiu G.,Hong Kong University of Science and Technology | Xu J.,University of Wisconsin - Madison
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2011

We explore the effects of heavy degrees of freedom on the evolution and perturbations of light modes in multifield inflation. We use a simple two-field model as an example to illustrate the subtleties of integrating out massive fields in a time-dependent background. We show that when adiabaticity is violated due to a sharp turn in field space, the roles of the massive and massless fields are interchanged, and furthermore the fields are strongly coupled; thus the system cannot be described by an effective single-field action. Further analysis shows that the sharp turn imparts a non-Bunch-Davies component to each perturbation mode, leading to oscillatory features in the power spectrum and a large, resonantly enhanced bispectrum. © 2011 American Physical Society. Source


Hu C.,Hong Kong University of Science and Technology
Construction and Building Materials | Year: 2014

Given the wide use of fly ash in normal and high-strength concrete, this study investigates the microstructure and mechanical properties of fly ash blended cement pastes. To this end, multiple techniques, including scanning electron microscopy with backscattered electron and energy-dispersive X-ray spectroscopy detectors, thermogravimetric analysis, X-ray diffraction, instrumented nanoindentation and ultrasonic measurements, were applied to fly ash blended cement pastes at different water-to-binder ratios. The mechanical properties of the pastes were reported, and the results showed that the incorporation of fly ash significantly influenced the microstructure and mechanical properties of the calcium-silicate-hydrate gel. © 2014 Elsevier Ltd. All rights reserved. Source


Lin B.-C.,National University of Tainan | Lea C.-T.,Hong Kong University of Science and Technology
Journal of Lightwave Technology | Year: 2012

Recently, a new approach was proposed to tackle the wavelength non-uniformity problem of silicon photonic ring technology. By lowering the Q value of the ring, the likelihood of finding a common operating wavelength can be significantly increased, but lowering Q will increase the crosstalk level in such a network. This crosstalk problem can be tackled with a generalized space dilation technique. Since crosstalk is a central issue of this approach, computing the crosstalk level accurately is critical for a microring-based photonic interconnect. Prior work on crosstalk analysis for interconnects based on directional couplers assumed that the extinction ratios are the same for the two switching states, but this is usually not the case for silicon photonic microrings. In this paper, we develop an analytical model for analyzing the crosstalk level in a microring-based optical interconnection network. The analytical approach presented in the paper can also be used for studying the crosstalk problem in optical networks based on other optical switching technologies. © 2012 IEEE. Source


Betz A.R.,Columbia University | Xu J.,Washington State University | Qiu H.,Hong Kong University of Science and Technology | Attinger D.,Columbia University
Applied Physics Letters | Year: 2010

We demonstrate that smooth and flat surfaces combining hydrophilic and hydrophobic patterns improve pool boiling performance. Compared to a hydrophilic surface with a 7° wetting angle, the measured critical heat flux and heat transfer coefficients of the enhanced surfaces are up to 65% and 100% higher, respectively. Different networks combining hydrophilic and hydrophobic regions are characterized. While all tested networks enhance the heat transfer coefficient, large enhancements of critical heat flux are typically found for hydrophilic networks featuring hydrophobic islands. Hydrophilic networks are indeed shown to prevent the formation of an insulating vapor layer. © 2010 American Institute of Physics. Source


Liu Z.,Nanyang Technological University | Guan Y.L.,Nanyang Technological University | Mow W.H.,Hong Kong University of Science and Technology
IEEE Transactions on Information Theory | Year: 2014

Levenshtein improved the famous Welch bound on aperiodic correlation for binary sequences by utilizing some properties of the weighted mean square aperiodic correlation. Following Levenshtein's idea, a new correlation lower bound for quasi-complementary sequence sets (QCSSs) over the complex roots of unity is proposed in this paper. The derived lower bound is shown to be tighter than the Welch bound for QCSSs when the set size is greater than some value. The conditions for meeting the new bound with equality are also investigated. © 1963-2012 IEEE. Source
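The aperiodic correlation functions that these bounds constrain are straightforward to compute; the sketch below uses `numpy.correlate` on arbitrary real binary sequences (the example data are illustrative, not from the paper).

```python
import numpy as np

def aperiodic_corr(a, b):
    """All aperiodic correlation values C(tau) of sequences a and b.

    For real sequences this is the full sliding inner product; the
    center entry (zero shift) of an autocorrelation is the sequence energy.
    """
    return np.correlate(np.asarray(a), np.asarray(b), mode="full")
```

Bounds such as Welch's and Levenshtein's lower-bound the largest off-peak magnitude among all such auto- and cross-correlation values of a sequence set.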


Junghans D.,Hong Kong University of Science and Technology
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2014

We revisit the effective low-energy dynamics of the volume modulus in warped flux compactifications with anti-D3-branes in order to analyze the prospects for metastable de Sitter vacua and brane inflation along the lines of KKLT/KKLMMT. At the level of the ten-dimensional supergravity solution, antibranes in flux backgrounds with opposite charge are known to source singular terms in the energy densities of the bulk fluxes, which led to a debate on the consistency of such constructions in string theory. A straightforward yet nontrivial check of the singular solution is to verify that its dimensional reduction in the large-volume limit reproduces the four-dimensional low-energy dynamics expected from known results where the antibranes are treated as a probe. Taking into account the antibrane backreaction in the effective scalar potential, we find that both the volume scaling and the coefficient of the antibrane uplift term are in exact agreement with the probe potential if the singular fluxes satisfy a certain near-brane boundary condition. This condition can be tested explicitly and may thus help to decide whether flux singularities should be interpreted as pathological or benign features of flux compactifications with antibranes. Throughout the paper, we also comment on a number of subtleties related to the proper definition of warped effective field theory with antibranes. © 2014 American Physical Society. Source


Ding C.,Hong Kong University of Science and Technology | Helleseth T.,University of Bergen
IEEE Transactions on Information Theory | Year: 2013

Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. Perfect nonlinear monomials were employed to construct optimal ternary cyclic codes with parameters [3^m - 1, 3^m - 1 - 2m, 4] by Carlet, Ding, and Yuan in 2005. In this paper, almost perfect nonlinear monomials, and a number of other monomials over GF(3^m), are used to construct optimal ternary cyclic codes with the same parameters. Nine open problems on such codes are also presented. © 1963-2012 IEEE. Source
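The stated parameters can be checked by simple arithmetic; this helper just evaluates the length n = 3^m - 1 and dimension k = n - 2m with the minimum distance d = 4 quoted above.

```python
def ternary_code_params(m):
    """Parameters (n, k, d) of the optimal ternary cyclic codes above:
    length n = 3**m - 1, dimension k = n - 2*m, minimum distance d = 4."""
    n = 3**m - 1
    return (n, n - 2 * m, 4)
```

For example, m = 3 corresponds to a [26, 20, 4] code.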


Ding C.,Hong Kong University of Science and Technology | Gao Y.,Beihang University | Zhou Z.,Southwest Jiaotong University
IEEE Transactions on Information Theory | Year: 2013

As a subclass of linear codes, cyclic codes have applications in consumer electronics, data storage systems, and communication systems as they have efficient encoding and decoding algorithms. In this paper, five families of three-weight ternary cyclic codes whose duals have two zeros are presented. The weight distributions of the five families of cyclic codes are settled. The duals of two families of the cyclic codes are optimal. © 1963-2012 IEEE. Source


Hu L.L.,Sun Yat Sen University | Yu T.X.,Hong Kong University of Science and Technology
International Journal of Impact Engineering | Year: 2010

Based on the repeatable collapse mechanism of the cell structure under dynamic crushing, an analytical formula for the dynamic crushing strength of regular hexagonal honeycombs is derived in terms of impact velocity and the thickness ratio of the cell walls. It is consistent with the equation obtained from shock wave theory, which regards the cellular material as a continuum and whose key parameter is approximately measured from the "stress-strain" curve of the cellular material. The effect of unequal cell-wall thickness on the honeycomb's dynamic crushing strength is discussed, and the results show that the dynamic crushing strength of a hexagonal honeycomb with some double-thickness walls is about 1.3 times that of a hexagonal honeycomb without double-thickness walls. All of the analytical predictions are compared with numerical simulation results, showing good agreement. © 2009 Elsevier Ltd. All rights reserved. Source
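The continuum shock-wave estimate against which the derived formula is checked has the familiar one-dimensional form sigma_d = sigma_0 + (rho_0 / eps_D) * v^2; the sketch below simply evaluates it, with all numerical values hypothetical rather than taken from the paper.

```python
def dynamic_crush_strength(sigma0, rho0, eps_d, v):
    """One-dimensional shock-model estimate of dynamic crushing strength.

    sigma0 : quasi-static plateau strength (Pa)
    rho0   : initial honeycomb density (kg/m^3)
    eps_d  : densification (locking) strain, the key parameter read off
             the material's "stress-strain" curve
    v      : impact speed (m/s)
    """
    return sigma0 + rho0 * v**2 / eps_d
```

The quadratic dependence on impact speed is what makes the dynamic strength rise so sharply above the quasi-static plateau at high crushing velocities.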


Jeong H.,Samsung | Du S.,Hong Kong University of Science and Technology
Optics Letters | Year: 2010

We theoretically study the stacked optical transients generated from a series of square pulses passing through a cold atomic ensemble. Using hybrid analysis and the fast Fourier transform, we identify the stacked coherent transients [Europhys. Lett. 4, 47 (1987)] as optical precursors. With slow light and electromagnetically induced transparency, we obtain nearly 700% transmitted intensity at the transient spikes, resulting from the interference between the delayed main field and the stacked optical precursors. © 2010 Optical Society of America. Source


Venkatesh V.,University of Arkansas | Thong J.Y.L.,Hong Kong University of Science and Technology | Xu X.,Hong Kong Polytechnic University
MIS Quarterly: Management Information Systems | Year: 2012

This paper extends the unified theory of acceptance and use of technology (UTAUT) to study acceptance and use of technology in a consumer context. Our proposed UTAUT2 incorporates three constructs into UTAUT: hedonic motivation, price value, and habit. Individual differences-namely, age, gender, and experience-are hypothesized to moderate the effects of these constructs on behavioral intention and technology use. Results from a two-stage online survey, with technology use data collected four months after the first survey, of 1,512 mobile Internet consumers supported our model. Compared to UTAUT, the extensions proposed in UTAUT2 produced a substantial improvement in the variance explained in behavioral intention (56 percent to 74 percent) and technology use (40 percent to 52 percent). The theoretical and managerial implications of these results are discussed. Source


Luo Z.,Hong Kong University of Science and Technology | Pinto N.J.,University of Puerto Rico at Humacao | Davila Y.,University of Puerto Rico at Humacao | Charlie Johnson A.T.,University of Pennsylvania
Applied Physics Letters | Year: 2012

The electronic properties of graphene are tunable via doping, making it attractive in low dimensional organic electronics. Common methods of doping graphene, however, adversely affect charge mobility and degrade device performance. We demonstrate a facile shadow mask technique of defining electrodes on graphene grown by chemical vapor deposition (CVD), thereby eliminating the use of detrimental chemicals needed in the corresponding lithographic process. Further, we report on the controlled, effective, and reversible doping of graphene via ultraviolet (UV) irradiation with minimal impact on charge mobility. The change in charge concentration saturates at ∼2 × 10^12 cm^-2 and the quantum yield is ∼10^-5 e/photon upon initial UV exposure. This simple and controlled strategy opens the possibility of doping wafer-size CVD graphene for diverse applications. © 2012 American Institute of Physics. Source


Jiang S.-H.,Wuhan University | Li D.-Q.,Wuhan University | Zhang L.-M.,Hong Kong University of Science and Technology | Zhou C.-B.,Wuhan University
Engineering Geology | Year: 2014

This paper proposes a non-intrusive stochastic finite element method for slope reliability analysis considering spatially variable shear strength parameters. The two-dimensional spatial variation in the shear strength parameters is modeled by cross-correlated non-Gaussian random fields, which are discretized by the Karhunen-Loève expansion. The procedure for the non-intrusive stochastic finite element method is presented, and two illustrative examples are investigated to demonstrate its capacity and validity. The method does not require the user to modify existing deterministic finite element codes, which provides a practical tool for analyzing slope reliability problems that require complex finite element analysis. It can also produce satisfactory results for the low failure risks typical of most practical cases, and it evaluates slope reliability considering spatially variable shear strength parameters much more efficiently than the Latin hypercube sampling (LHS) method. Ignoring the spatial variability of shear strength parameters will result in unconservative estimates of the probability of slope failure if the coefficients of variation of the shear strength parameters exceed a critical value or the factor of slope safety is relatively low. The critical coefficient of variation of shear strength parameters increases with the factor of slope safety. © 2013 Elsevier B.V. Source
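The Karhunen-Loève discretization step can be sketched for a one-dimensional Gaussian field with exponential covariance; this is a simplified stand-in for the paper's two-dimensional cross-correlated non-Gaussian fields, and every parameter below is an illustrative assumption.

```python
import numpy as np

def kl_realization(n_pts=100, length=10.0, corr_len=2.0, n_terms=10, seed=None):
    """Draw one field realization from a truncated discrete KL expansion.

    The covariance matrix C_ij = exp(-|x_i - x_j| / corr_len) is
    eigendecomposed, and the n_terms leading modes are summed with
    independent standard-normal coefficients.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, length, n_pts)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_terms]      # keep the largest modes
    xi = rng.standard_normal(n_terms)
    return vecs[:, idx] @ (np.sqrt(vals[idx]) * xi)
```

In a non-intrusive scheme, each such realization would be passed unchanged to an existing deterministic finite element solver; the truncation level n_terms trades accuracy of the field representation against the number of random inputs.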


Law K.T.,Hong Kong University of Science and Technology | Lee P.A.,Massachusetts Institute of Technology
Physical Review B - Condensed Matter and Materials Physics | Year: 2011

It has been shown previously that the coupling between two Majorana end states in single-channel superconducting quantum wires leads to the fractional Josephson effect. However, in realistic experimental conditions, multiple bands of the wires are occupied and the Majorana end states are accompanied by other fermionic end states. This raises the question concerning the robustness of the fractional Josephson effect in these situations. Here we show that the absence of the avoided energy crossing which gives rise to the fractional Josephson effect is robust, even when the Majorana fermions are coupled with arbitrary strengths to other fermions. Moreover, we calculate the temperature dependence of the fractional Josephson current and show that it is suppressed by thermal excitations to the other fermion bound states. © 2011 American Physical Society. Source