Illinois at Singapore Pte Ltd.

Singapore, Singapore


According to various embodiments, there is provided an electric meter including a sensor circuit configured to provide a plurality of instantaneous magnetic field measurements; a processing circuit configured to generate a time-series of magnetic field vectors, each magnetic field vector of the time-series including the plurality of instantaneous magnetic field measurements; and a total current determination circuit configured to determine a total current, wherein the total current is the sum of the currents of each branch of a plurality of branches of a power distribution network; wherein the processing circuit is further configured to compute a de-mixing matrix based on the determined total current and the time-series of magnetic field vectors, and further configured to linearly transform each magnetic field vector using the de-mixing matrix to determine the current of each branch.
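The abstract does not state how the de-mixing matrix is estimated from the total current alone; the sketch below is only a minimal numerical illustration of the linear de-mixing step, assuming a hypothetical known coupling matrix A between branch currents and sensor readings, with the measured total current used as a consistency check rather than as the estimation input.

```python
import numpy as np

# Minimal sketch (not the patent's estimation procedure): assume each sensor
# reading is a linear mixture of the branch currents, b(t) = A @ i(t), where
# A is a hypothetical (n_sensors x n_branches) coupling matrix.
rng = np.random.default_rng(0)
n_sensors, n_branches, T = 4, 3, 1000

A = rng.normal(size=(n_sensors, n_branches))       # assumed coupling matrix
true_currents = rng.normal(size=(n_branches, T))   # per-branch currents
B = A @ true_currents                               # time-series of magnetic field vectors
total_current = true_currents.sum(axis=0)           # independently measured total current

# De-mixing matrix as the pseudo-inverse of the coupling matrix; each field
# vector is linearly transformed to recover the per-branch currents.
W = np.linalg.pinv(A)
est_currents = W @ B

# Consistency check: recovered branch currents must sum to the measured total.
assert np.allclose(est_currents.sum(axis=0), total_current)
```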


Jones D.L., University of Illinois at Urbana-Champaign | Jones D.L., Illinois at Singapore Pte. Ltd | Johnson E.C., University of Illinois at Urbana-Champaign | Ratnam R., University of Illinois at Urbana-Champaign | Ratnam R., Illinois at Singapore Pte. Ltd
Frontiers in Computational Neuroscience | Year: 2015

A neural code based on sequences of spikes can consume a significant portion of the brain's energy budget. Thus, energy considerations would dictate that spiking activity be kept as low as possible. However, a high spike-rate improves the coding and representation of signals in spike trains, particularly in sensory systems. These are competing demands, and selective pressure has presumably worked to optimize coding by apportioning a minimum number of spikes so as to maximize coding fidelity. The mechanisms by which a neuron generates spikes while maintaining a fidelity criterion are not known. Here, we show that a signal-dependent neural threshold, similar to a dynamic or adapting threshold, optimizes the trade-off between spike generation (encoding) and fidelity (decoding). The threshold mimics a post-synaptic membrane (a low-pass filter) and serves as an internal decoder. Further, it sets the average firing rate (the energy constraint). The decoding process provides an internal copy of the coding error to the spike-generator which emits a spike when the error equals or exceeds a spike threshold. When optimized, the trade-off leads to a deterministic spike firing-rule that generates optimally timed spikes so as to maximize fidelity. The optimal coder is derived in closed-form in the limit of high spike-rates, when the signal can be approximated as a piece-wise constant signal. The predicted spike-times are close to those obtained experimentally in the primary electrosensory afferent neurons of weakly electric fish (Apteronotus leptorhynchus) and pyramidal neurons from the somatosensory cortex of the rat. We suggest that KCNQ/Kv7 channels (underlying the M-current) are good candidates for the decoder. They are widely coupled to metabolic processes and do not inactivate. We conclude that the neural threshold is optimized to generate an energy-efficient and high-fidelity neural code. © 2015 Jones, Johnson and Ratnam.
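A minimal sketch of the error-feedback firing rule described above, assuming an exponential (leaky) post-synaptic filter as the internal decoder and a fixed spike threshold; the parameters and kernel are illustrative placeholders, not the paper's closed-form optimal coder.

```python
import numpy as np

def threshold_coder(signal, dt=1e-3, tau=0.02, theta=0.1):
    """Sketch of an error-feedback spike coder (assumed parameters).

    A low-pass filter with time constant `tau` acts as the internal decoder;
    a spike is emitted whenever the coding error reaches the threshold `theta`.
    """
    estimate = 0.0
    spikes = []
    decay = np.exp(-dt / tau)
    for t, s in enumerate(signal):
        estimate *= decay                 # leaky post-synaptic reconstruction
        error = s - estimate              # internal copy of the coding error
        if error >= theta:                # fire when the error reaches threshold
            spikes.append(t * dt)
            estimate += theta             # each spike adds a fixed kernel amplitude
    return np.array(spikes)

# Example: encode a slowly varying stimulus and inspect the spike times.
t = np.arange(0, 1, 1e-3)
stimulus = 0.5 * (1 + np.sin(2 * np.pi * 2 * t))
print(threshold_coder(stimulus)[:10])
```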


Johnson E.C., University of Illinois at Urbana-Champaign | Jones D.L., University of Illinois at Urbana-Champaign | Jones D.L., Illinois at Singapore Pte. Ltd | Ratnam R., University of Illinois at Urbana-Champaign | Ratnam R., Illinois at Singapore Pte. Ltd
Journal of Computational Neuroscience | Year: 2016

Sensory neurons code information about stimuli in their sequence of action potentials (spikes). Intuitively, the spikes should represent stimuli with high fidelity. However, generating and propagating spikes is a metabolically expensive process. It is therefore likely that neural codes have been selected to balance energy expenditure against encoding error. Our recently proposed optimal, energy-constrained neural coder (Jones et al., Frontiers in Computational Neuroscience, 9:61, 2015) postulates that neurons time spikes to minimize the trade-off between stimulus reconstruction error and expended energy by adjusting the spike threshold using a simple dynamic threshold. Here, we show that this proposed coding scheme is related to existing coding schemes, such as rate and temporal codes. We derive an instantaneous rate coder and show that the spike-rate depends on the signal and its derivative. In the limit of high spike rates the spike train maximizes fidelity given an energy constraint (average spike-rate), and the predicted interspike intervals are identical to those generated by our existing optimal coding neuron. The instantaneous rate coder is shown to closely match the spike-rates recorded from P-type primary afferents in weakly electric fish. In particular, the coder is a predictor of the peristimulus time histogram (PSTH). When tested against in vitro cortical pyramidal neuron recordings driven by DC step inputs, the instantaneous rate coder matches both the average spike-rate and the time-to-first-spike (a simple temporal code). Overall, the instantaneous rate coder relates optimal, energy-constrained encoding to the concepts of rate coding and temporal coding, suggesting a possible unifying principle of neural encoding of sensory signals. © 2016, Springer Science+Business Media New York.
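The abstract does not give the explicit formula linking the rate to the signal and its derivative, so the sketch below only illustrates the generic step of turning an instantaneous rate into deterministic spike times by integrating the rate and firing whenever the accumulated area reaches one, which is one simple way a rate code maps onto spike timing.

```python
import numpy as np

def spikes_from_rate(rate, dt=1e-3):
    """Convert an instantaneous firing rate r(t) (Hz) into deterministic
    spike times by firing each time the integrated rate crosses 1.

    Generic rate-to-timing conversion, not the paper's derived coder.
    """
    accumulated = 0.0
    spike_times = []
    for i, r in enumerate(rate):
        accumulated += r * dt
        if accumulated >= 1.0:
            spike_times.append(i * dt)
            accumulated -= 1.0
    return np.array(spike_times)

# Example: a rate that rises with the stimulus; interspike intervals shrink
# where the rate is high, linking the rate code to spike timing.
t = np.arange(0, 1, 1e-3)
rate = 20 + 40 * np.clip(np.sin(2 * np.pi * t), 0, None)   # hypothetical rate profile
print(spikes_from_rate(rate)[:5])
```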


Jung D., SK Telecom | Zhang Z., Illinois at Singapore Pte. Ltd. | Winslett M., University of Illinois at Urbana-Champaign
Proceedings - International Conference on Data Engineering | Year: 2017

Vibration sensors are becoming an essential part of the Internet of Things (IoT), fueled by quickly evolving technology that improves measurement accuracy and lowers hardware cost. Vibration sensors are physically attached to core equipment in control and manufacturing systems, e.g., motors and tubes, providing key insight into the running status of these devices. Massive readings from vibration sensors, however, pose new technical challenges to the analytical system, due to the non-continuous sampling strategy used to save sensor energy, as well as the difficulty of data interpretation. To maximize the utility and minimize the operational overhead of vibration sensors, we propose a new analytical framework designed specifically for vibration analysis based on its unique characteristics. In particular, our data engine aims to support Remaining Useful Lifetime (RUL) estimation, known as one of the most important problems in cyber-physical system maintenance, to optimize the replacement scheduling of the equipment under monitoring. Our empirical evaluations on real manufacturing sites show that scalable and accurate analysis of the vibration data makes it possible to prolong the average lifetime of the tubes by 1.2x and reduce the replacement cost by 20%. © 2017 IEEE.
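The framework itself is not specified in the abstract; as a rough illustration of the RUL idea, the sketch below uses a generic baseline, assuming the RMS vibration amplitude serves as a degradation indicator whose linear trend is extrapolated to a hypothetical failure threshold.

```python
import numpy as np

def estimate_rul(windows, failure_threshold, dt_hours=1.0):
    """Generic RUL baseline (not the paper's framework): track RMS vibration
    per sampling window, fit a linear trend, and extrapolate to the point
    where the trend crosses a (hypothetical) failure threshold."""
    rms = np.sqrt(np.mean(np.square(windows), axis=1))    # degradation indicator
    t = np.arange(len(rms)) * dt_hours
    slope, intercept = np.polyfit(t, rms, 1)               # linear degradation trend
    if slope <= 0:
        return np.inf                                      # no degradation detected
    t_fail = (failure_threshold - intercept) / slope
    return max(t_fail - t[-1], 0.0)                        # remaining hours

# Example with synthetic, slowly degrading vibration windows.
rng = np.random.default_rng(1)
windows = [rng.normal(scale=0.1 + 0.001 * k, size=1024) for k in range(200)]
print(f"estimated RUL: {estimate_rul(np.array(windows), failure_threshold=0.5):.1f} h")
```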


Feng J., National University of Singapore | Ni B., Illinois at Singapore Pte Ltd. | Tian Q., University of Texas at San Antonio | Yan S., National University of Singapore
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2011

Modern visual classification models generally include a feature pooling step, which aggregates local features over the region of interest into a statistic through a spatial pooling operation. Two commonly used operations are average and max pooling. However, recent theoretical analysis has indicated that neither of these two pooling techniques is optimal in general. We further show in this work that more severe limitations of these two pooling methods stem from the unrecoverable loss of spatial information during the statistical summarization and from the underlying over-simplified assumption about the feature distribution. We aim to address these inherent issues and generalize previous pooling methods as follows. We define a weighted ℓp-norm spatial pooling function tailored to the class-specific spatial distribution of features. Moreover, a sensible prior for the feature spatial correlation is incorporated. Optimizing this pooling function for class separability yields the so-called geometric ℓp-norm pooling (GLP) method. GLP preserves the class-specific spatial/geometric information in the pooled features and significantly boosts the discriminating capability of the resulting features for image classification. Comprehensive evaluations on several image benchmarks demonstrate that the proposed GLP method can boost image classification performance with a single type of feature to outperform or be comparable with the state of the art. © 2011 IEEE.
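The pooling operation at the heart of the method is a weighted ℓp-norm over local feature responses; the sketch below shows that operation with placeholder weights and p (in the paper, both are tied to the learned, class-specific spatial model).

```python
import numpy as np

def weighted_lp_pool(features, weights, p):
    """Weighted lp-norm spatial pooling: ( sum_i w_i * |x_i|^p )^(1/p).

    `features`: local feature responses over spatial locations (n_locations,)
    `weights` : non-negative spatial weights (learned per class in the paper;
                placeholders here), same shape as `features`.
    """
    features = np.abs(np.asarray(features, dtype=float))
    weights = np.asarray(weights, dtype=float)
    return (weights * features ** p).sum() ** (1.0 / p)

x = np.array([0.1, 0.9, 0.3, 0.0])
w = np.ones_like(x) / len(x)
# p = 1 with uniform weights recovers average pooling; large p approaches max pooling.
print(weighted_lp_pool(x, w, p=1))      # ~ mean(|x|)
print(weighted_lp_pool(x, w, p=50))     # ~ max(|x|)
```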


Xu J., Northeastern University China | Zhang Z., Illinois at Singapore Pte. Ltd. | Tung A.K.H., National University of Singapore | Yu G., Northeastern University China
VLDB Journal | Year: 2012

Advances in geographical tracking, multimedia processing, information extraction, and sensor networks have created a deluge of probabilistic data. While similarity search is an important tool to support the manipulation of probabilistic data, it raises new challenges for traditional relational databases. The problem stems from the limited effectiveness of the distance metrics employed by existing database systems. On the other hand, several more sophisticated distance operators have proven their value, offering better distinguishing ability in specific probabilistic domains. In this paper, we discuss the similarity search problem with respect to the Earth Mover's Distance (EMD). EMD is the most successful distance metric for comparing probability distributions, but it is an expensive operator with cubic time complexity. We present a new database indexing approach to answer EMD-based similarity queries, including range queries and k-nearest-neighbor queries, on probabilistic data. Our solution utilizes primal-dual theory from linear programming and employs a group of B+-trees for effective candidate pruning. We also apply our filtering technique to the processing of continuous similarity queries, with particular application to frame copy detection in real-time videos. Extensive experiments show that our proposals dramatically improve the usefulness and scalability of probabilistic data management. © 2011 Springer-Verlag.
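The pruning principle is that any dual-feasible solution of the EMD transportation LP yields a cheap linear lower bound on the true EMD, so candidates whose bound already exceeds the query radius can be discarded without solving the LP. The sketch below illustrates this for one-dimensional histograms using a simple 1-Lipschitz potential; the paper's actual filter indexes such bounds with B+-trees.

```python
import numpy as np
from scipy.stats import wasserstein_distance

positions = np.arange(8, dtype=float)    # histogram bin centres, ground distance |i - j|

def lower_bound(p, q):
    """Dual-feasible lower bound on EMD(p, q): the potential f(x) = x is
    1-Lipschitz, so |sum_i x_i * (p_i - q_i)| <= EMD(p, q) by weak duality."""
    return abs(np.dot(positions, p - q))

def range_query(query, candidates, radius):
    """Prune with the cheap bound, then verify survivors with the exact EMD."""
    results = []
    for c in candidates:
        if lower_bound(query, c) > radius:                               # filter step
            continue
        emd = wasserstein_distance(positions, positions, query, c)       # exact refinement
        if emd <= radius:
            results.append(c)
    return results

rng = np.random.default_rng(2)
cands = rng.dirichlet(np.ones(8), size=100)      # synthetic probabilistic records
q = rng.dirichlet(np.ones(8))
print(len(range_query(q, cands, radius=0.5)))
```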


Chen B., Illinois at Singapore Pte. Ltd | Zhou Z., National University of Singapore | Yu H., National University of Singapore
Proceedings of the Annual International Conference on Mobile Computing and Networking, MOBICOM | Year: 2013

Counting the number of RFID tags, or RFID counting, is needed by a wide array of important wireless applications. Motivated by its paramount practical importance, researchers have developed an impressive arsenal of techniques to improve the performance of RFID counting (i.e., to reduce the time needed to do the counting). This paper aims to gain deeper and more fundamental insights into this subject to facilitate future research on this topic. As our central thesis, we find that the overlooked key design aspect for RFID counting protocols to achieve near-optimal performance is a conceptual separation of a protocol into two phases. The first phase uses small overhead to obtain a rough estimate, and the second phase uses the rough estimate to further achieve an accuracy target. Our thesis also indicates that other performance-enhancing techniques or ideas proposed in the literature are only of secondary importance. Guided by our central thesis, we design near-optimal protocols that are more efficient than existing ones and simultaneously simpler than most of them. © 2013 by the Association for Computing Machinery, Inc.
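A hedged simulation of the two-phase structure highlighted above (not any specific published protocol): a cheap first phase produces a rough, constant-factor estimate via the empty-slot estimator of framed slotted ALOHA, and a second phase sizes a larger frame from that estimate to refine it.

```python
import numpy as np

rng = np.random.default_rng(3)

def framed_aloha_estimate(n_tags, frame_size):
    """One ALOHA frame: each tag picks a random slot; estimate the tag count
    from the fraction of empty slots (E[empty fraction] ~ exp(-n/L))."""
    slots = np.bincount(rng.integers(0, frame_size, size=n_tags), minlength=frame_size)
    empty_fraction = max(np.mean(slots == 0), 1.0 / frame_size)   # avoid log(0)
    return -frame_size * np.log(empty_fraction)

def two_phase_count(n_tags):
    """Sketch of the two-phase structure: phase 1 doubles the frame size until
    the frame is no longer badly overloaded, giving a rough estimate; phase 2
    spends one larger frame, sized from that estimate, to refine it."""
    frame = 16
    rough = framed_aloha_estimate(n_tags, frame)
    while rough > 2 * frame:                    # frame overloaded -> estimate unreliable
        frame *= 2
        rough = framed_aloha_estimate(n_tags, frame)
    refined = framed_aloha_estimate(n_tags, int(4 * rough) + 1)   # phase 2
    return rough, refined

print(two_phase_count(5000))
```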


Zhang J., Nanyang Technological University | Zhang Z., Illinois at Singapore Pte. Ltd. | Xiao X., Nanyang Technological University | Yang Y., Illinois at Singapore Pte. Ltd. | And 2 more authors.
Proceedings of the VLDB Endowment | Year: 2012

ε-differential privacy is the state-of-the-art model for releasing sensitive information while protecting privacy. Numerous methods have been proposed to enforce ε-differential privacy in various analytical tasks, e.g., regression analysis. Existing solutions for regression analysis, however, are either limited to non-standard types of regression or unable to produce accurate regression results. Motivated by this, we propose the Functional Mechanism, a differentially private method designed for a large class of optimization-based analyses. The main idea is to enforce ε-differential privacy by perturbing the objective function of the optimization problem, rather than its results. As case studies, we apply the functional mechanism to two of the most widely used regression models, namely, linear regression and logistic regression. Both theoretical analysis and thorough experimental evaluations show that the functional mechanism is highly effective and efficient, and that it significantly outperforms existing solutions. © 2012 VLDB Endowment.
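A sketch of the objective-perturbation idea for ordinary least squares, assuming features and responses rescaled to [-1, 1] and using a conservative sensitivity bound derived under that assumption; the exact constants and noise calibration should be taken from the paper rather than from this illustration.

```python
import numpy as np

def functional_mechanism_linreg(X, y, epsilon, rng=None):
    """Sketch of objective perturbation for least squares (check constants
    against the paper before relying on this).

    The loss sum_i (y_i - x_i^T w)^2 = w^T (X^T X) w - 2 (X^T y)^T w + const,
    so it suffices to perturb the coefficient blocks X^T X and X^T y.
    Assumes every feature and response has been rescaled to [-1, 1]; under
    that assumption one record contributes at most (d + 1)^2 to the L1 norm
    of the coefficients, giving sensitivity Delta = 2 * (d + 1)^2.
    """
    rng = rng or np.random.default_rng()
    n, d = X.shape
    delta = 2.0 * (d + 1) ** 2
    scale = delta / epsilon

    quad = X.T @ X + rng.laplace(scale=scale, size=(d, d))   # degree-2 coefficients
    quad = (quad + quad.T) / 2                                # keep it symmetric
    lin = X.T @ y + rng.laplace(scale=scale, size=d)          # degree-1 coefficients

    # Minimise the perturbed objective w^T quad w - 2 lin^T w (lightly
    # regularised so the noisy quadratic term stays well-conditioned).
    return np.linalg.solve(quad + 1e-3 * n * np.eye(d), lin)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(5000, 3))
y = np.clip(X @ np.array([0.5, -0.3, 0.2]) + 0.05 * rng.normal(size=5000), -1, 1)
print(functional_mechanism_linreg(X, y, epsilon=1.0, rng=rng))
```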


Fu T.Z.J., Illinois at Singapore Pte Ltd | Song Q., Chinese University of Hong Kong | Chiu D.M., Chinese University of Hong Kong
Scientometrics | Year: 2014

By means of their academic publications, authors form a social network. Instead of sharing casual thoughts and photos (as in Facebook), authors select co-authors and reference papers written by other authors. Thanks to various efforts (such as Microsoft Academic Search and DBLP), the data necessary for analyzing the academic social network is becoming more available on the Internet. What types of information and queries would be useful for users to discover, beyond the search queries already available from services such as Google Scholar? In this paper, we explore this question by defining a variety of ranking metrics on different entities—authors, publication venues, and institutions. We go beyond traditional metrics such as paper counts, citations, and h-index. Specifically, we define metrics such as influence, connections, and exposure for authors. An author gains influence by receiving more citations, especially citations from influential authors. An author increases his or her connections by co-authoring with other authors, especially with authors who themselves have high connections. An author receives exposure by publishing in selective venues where publications have received high citations in the past, and the selectivity of these venues also depends on the influence of the authors who publish there. We discuss the computational aspects of these metrics, and the similarity between different metrics. With additional information on author-institution relationships, we are able to study institution rankings based on the corresponding authors' rankings for each type of metric as well as for different domains. We demonstrate these ideas with a web site (http://pubstat.org) built from millions of publications and authors. © 2014, Akadémiai Kiadó, Budapest, Hungary.
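The influence metric, as described, has the structure of a PageRank-style fixed point on the author citation graph: citations from influential authors count for more. The sketch below is a schematic interpretation with a hypothetical citation matrix, not the exact formula behind pubstat.org.

```python
import numpy as np

def author_influence(citations, damping=0.85, iters=100):
    """PageRank-style sketch of the 'influence' idea: an author's influence
    grows with citations received, weighted by the influence of the citing
    authors. `citations[i, j]` = number of times author i cites author j.
    (Schematic interpretation, not pubstat.org's exact definition.)"""
    citations = np.asarray(citations, dtype=float)
    n = citations.shape[0]
    out = citations.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                          # authors who cite nobody
    transition = citations / out                  # row-normalised citing behaviour
    influence = np.full(n, 1.0 / n)
    for _ in range(iters):
        influence = (1 - damping) / n + damping * influence @ transition
    return influence / influence.sum()

# Toy example: author 2 is cited by both others, so it ends up most influential.
cites = np.array([[0, 0, 3],
                  [1, 0, 2],
                  [0, 1, 0]])
print(author_influence(cites))
```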


On B.-W., Illinois at Singapore Pte Ltd | Lee I., Troy University | Lee D., Pennsylvania State University
Knowledge and Information Systems | Year: 2012

When non-unique values are used as identifiers of entities, confusion can occur due to homonyms. In particular, when (part of) the "names" of entities are used as their identifiers, the problem is often referred to as the name disambiguation problem, where the goal is to sort out the entities conflated by name homonyms (e.g., if only the last name is used as the identifier, one cannot distinguish "Masao Obama" from "Norio Obama"). In this paper, we study in particular the scalability of the name disambiguation problem, which arises when (1) a small number of entities have large contents or (2) a large number of entities become indistinguishable due to homonyms. First, we carefully examine two state-of-the-art solutions to the name disambiguation problem and point out their limitations with respect to scalability. Then, we propose two scalable graph partitioning algorithms, multi-level graph partitioning and multi-level graph partitioning and merging, to solve the large-scale name disambiguation problem. Our claim is validated empirically: our proposal shows orders-of-magnitude improvement in performance while maintaining equivalent or reasonable accuracy compared to competing solutions. © 2011 Springer-Verlag London Limited.
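A compact illustration of the multi-level idea on a record-similarity graph for one ambiguous name, assuming similarity edge weights are available: one level of heavy-edge matching coarsens the graph, the much smaller coarse graph is partitioned (here with modularity clustering as a stand-in), and the assignment is projected back to the original records. The actual algorithms repeat the coarsening over several levels and add refinement/merging steps.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def heavy_edge_coarsen(G):
    """One level of heavy-edge matching: merge each unmatched node with its
    heaviest unmatched neighbour; returns the coarse graph and a node -> supernode map."""
    mapping, matched = {}, set()
    for u in G.nodes:
        if u in matched:
            continue
        nbrs = [v for v in G[u] if v not in matched and v != u]
        v = max(nbrs, key=lambda v: G[u][v].get("weight", 1.0)) if nbrs else None
        group = (u,) if v is None else (u, v)
        for node in group:
            mapping[node] = group
            matched.add(node)
    coarse = nx.Graph()
    coarse.add_nodes_from(set(mapping.values()))
    for a, b, data in G.edges(data=True):
        ca, cb = mapping[a], mapping[b]
        if ca == cb:
            continue                                    # edge collapsed inside a supernode
        w = data.get("weight", 1.0)
        prev = coarse.get_edge_data(ca, cb, {"weight": 0})["weight"]
        coarse.add_edge(ca, cb, weight=prev + w)        # accumulate merged edge weights
    return coarse, mapping

def disambiguate(G):
    """Partition the coarse graph and project the clusters back to the records."""
    coarse, mapping = heavy_edge_coarsen(G)
    clusters = greedy_modularity_communities(coarse, weight="weight")
    label = {sn: k for k, comm in enumerate(clusters) for sn in comm}
    return {record: label[mapping[record]] for record in G.nodes}

# Toy record-similarity graph for one ambiguous name: two clear groups of records.
G = nx.Graph()
G.add_weighted_edges_from([("a1", "a2", 3), ("a3", "a4", 3), ("a1", "a3", 1), ("a2", "a4", 1),
                           ("b1", "b2", 3), ("b3", "b4", 3), ("b1", "b3", 1), ("b2", "b4", 1),
                           ("a4", "b1", 0.1)])
print(disambiguate(G))
```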
