Nagoya-shi, Japan

The Toyota Technological Institute is a university located in Nagoya, Japan. Founded in 1981 with a large endowment from the Toyota Motor Corporation, it originally accepted only students with some industrial work experience. TTI has a School of Engineering, a Master's Program, and a Doctoral Program. The programs consist of three areas of coursework: Mechanical Systems Engineering, Electronics & Information Science, and Materials Science & Engineering. In 2003, Toyota also opened the Toyota Technological Institute at Chicago, jointly with the University of Chicago; this campus is mainly for Ph.D. students studying Machine Learning, Algorithms & Complexity, Computer Vision, Speech Technologies, and Computational Biology. Although TTI is a young and small university, its reputation has grown rapidly: TSU ranked TTI as the 5th-best Japanese university in 2010 and 4th in 2011, and in that ranking TTI had the best employment rate among all Japanese universities. In 2012, TTI was ranked 1st in Asia in average number of publications per faculty member by the QS World University Rankings. (Source: Wikipedia.)


Chuzhoy J., Toyota Technological Institute
Proceedings of the Annual ACM Symposium on Theory of Computing | Year: 2012

Given an undirected graph G = (V, E) with edge capacities c_e ≥ 1 for e ∈ E and a subset T of k vertices called terminals, we say that a graph H is a quality-q cut sparsifier for G iff T ⊆ V(H), and for any partition (A, B) of T, the values of the minimum cuts separating A and B in graphs G and H are within a factor q of each other. We say that H is a quality-q flow sparsifier for G iff T ⊆ V(H), and for any set D of demands over the terminals, the values of the minimum edge congestion incurred by fractionally routing the demands in D in graphs G and H are within a factor q of each other. So far vertex sparsifiers have been studied in a restricted setting where the sparsifier H is not allowed to contain any non-terminal vertices, that is, V(H) = T. For this setting, efficient algorithms are known for constructing quality-O(log k / log log k) cut and flow vertex sparsifiers, as well as a lower bound of Ω(√(log k)) on the quality of any flow or cut sparsifier. We study flow and cut sparsifiers in the more general setting where Steiner vertices are allowed, that is, we no longer require that V(H) = T. We show algorithms to construct constant-quality cut sparsifiers of size O(C^3) in time poly(n) · 2^C, and constant-quality flow sparsifiers of size C^{O(log log C)} in time n^{O(log C)} · 2^C, where C is the total capacity of the edges incident on the terminals. © 2012 ACM.
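The cut-sparsifier definition above can be checked directly, if exponentially in k. The following sketch is a hypothetical illustration (not from the paper); it assumes undirected networkx graphs with a 'capacity' attribute on every edge, enumerates all bipartitions (A, B) of the terminals, contracts each side into a super-source and super-sink, and reports the worst ratio between the min-cut values in G and H.

    # Sketch: brute-force the cut-sparsifier quality q of H w.r.t. G.
    # Only viable for small k; assumes connected graphs with capacities >= 1.
    from itertools import combinations
    import networkx as nx

    def min_terminal_cut(G, A, B):
        """Min cut separating terminal sets A and B: attach A to a super-source
        and B to a super-sink with infinite-capacity edges, then run min cut."""
        H = G.copy()
        for side, name in ((A, "_src"), (B, "_snk")):
            H.add_node(name)
            for t in side:
                H.add_edge(t, name, capacity=float("inf"))
        return nx.minimum_cut_value(H, "_src", "_snk", capacity="capacity")

    def cut_sparsifier_quality(G, H, terminals):
        """Max ratio, over all bipartitions (A, B) of the terminals, between
        the corresponding min-cut values in G and H."""
        q = 1.0
        T = list(terminals)
        for r in range(1, len(T) // 2 + 1):
            for A in combinations(T, r):
                B = [t for t in T if t not in A]
                cg = min_terminal_cut(G, A, B)
                ch = min_terminal_cut(H, A, B)
                q = max(q, cg / ch, ch / cg)
        return q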


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: CDS&E-MSS | Award Amount: 109.92K | Year: 2015

Across science, engineering, medicine and business we face a deluge of data coming from sensors, from simulations, or from the activities of myriads of individuals on the Internet. Furthermore, the data sets we collect are frequently highly inter-correlated, reflecting information about the same or similar/related entities in the world, or echoing semantically important repetitions/symmetries or hierarchical structures common to both man-made and natural objects. This project will assist scientists and engineers working with correlated data sets in getting the most information and value out of their data. Key to the approach is the idea of joint data analysis, the notion that each piece of data is best understood not in isolation but in the context provided by its peers and partners in a collection of related data sets, using the web of relationships referred to above. The key aim is to complement the social networks of scientists and engineers as they exist today with parallel networks that interlink the data they base their work on, using domain-specific semantic links and aiming at mechanisms that allow algorithmic transport of information between data used by scientists working in the same domain. The resulting system amplifies scientific insights by allowing an observation of one scientist on one piece of data to automatically be transported to other relevant data sets and aggregated and also enables the automated discovery of shared structures or common abstractions that can inform multiple data sets.

In order to accomplish this joint analysis, this project interconnects data sets into networks along which information can be transported and aggregated. These data set links are based on efficient matching algorithms using domain-specific features. In the associated setting, these matchings or maps are used not to estimate distances or similarities but to build operators that can transport information between different data sets. The research team will exploit a functional analytic framework that allows for encoding of information as functions over the data and leads to linear operators for mapping, enabling the use of many powerful tools from linear algebra and optimization. Using inspiration from homological algebra, this team will join multiple related data sets into networks connected through these operators in a way that allows information transport, correction, and aggregation, with the ultimate goal of using the wisdom of the collection to provide as much information as possible for specific data sets to specific scientists.
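A minimal sketch of this operator-based viewpoint (an illustration under assumed setup, not the project's code): if functions on two related data sets are expressed as coefficient vectors in fixed bases, a functional map is a linear operator C fit by least squares from pairs of corresponding functions; any new function can then be transported between the data sets by applying C to its coefficients.

    # Sketch: estimate a linear functional map C between two data sets
    # (hypothetical setup for illustration only).
    import numpy as np

    def fit_functional_map(F_src, F_tgt):
        """F_src: (k_src, n) coefficients of n corresponding functions on the
        source; F_tgt: (k_tgt, n) coefficients of the same functions on the
        target. Solves min_C ||C F_src - F_tgt||_F^2 by least squares."""
        C, *_ = np.linalg.lstsq(F_src.T, F_tgt.T, rcond=None)
        return C.T  # maps source coefficients to target coefficients

    # A new function with source coefficients a transports to C @ a.
    rng = np.random.default_rng(0)
    F_src = rng.standard_normal((20, 50))
    C_true = rng.standard_normal((20, 20))
    F_tgt = C_true @ F_src
    C = fit_functional_map(F_src, F_tgt)
    print(np.allclose(C, C_true))  # True: noiseless case recovers the operator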


Grant
Agency: NSF | Branch: Continuing grant | Program: | Phase: ALGORITHMIC FOUNDATIONS | Award Amount: 228.00K | Year: 2016

Computers play a central role in how we work, play, and communicate with each other in today's world. It is a truism that computers have grown steadily more powerful over the years, but equally important (if not more so) than the amount of sheer computing power available is how efficiently we are able to harness that power. Finding an efficient strategy to solve a given problem (in the language of computer science, an efficient algorithm) can often spell the difference between success and failure. (As an illustrative analogy, consider the task of assembling a large jigsaw puzzle. A poor choice of strategy such as a brute-force approach of trying each pair of pieces against each other may be infeasibly slow, while a cleverer approach such as grouping pieces by their color can be radically more efficient and lead to a feasible solution.) But in order to fully understand the abilities of efficient algorithms, it is crucial to also understand their limits: what is it that efficient algorithms *cannot* do? The field of computational complexity, which is the subject of the PIs' project, seeks to mathematically prove that certain computational problems do not admit *any* efficient algorithm no matter how long and hard we try to develop one. Such results can have both practical value (by guiding algorithm development away from dead ends) and deep theoretical significance, as they play a profound role in shaping our fundamental understanding of the phenomenon of computation.

The 1980s witnessed exciting progress on a range of Boolean circuit models (a mathematical abstraction of the digital circuits that modern computers are built from) in computational complexity; researchers succeeded in proving many lower bounds establishing that various computational problems have no efficient algorithms in these models. However, further progress slowed significantly after the 1980s. Many of the landmark results obtained in this era were based on the method of random restrictions, which roughly speaking uses probabilistic arguments to show that Boolean circuits can be dramatically simplified by making certain random substitutions of constant values for input variables. In this project the PIs will intensively investigate an extension of the method of random restrictions which they call the method of random projections. Rather than simply substituting constant values for input variables, the random projection method additionally identifies groups of variables, projecting them all to the same new variable so that they must all take the same value. While the underlying idea is simple, it turns out that this identification of variables helps maintain structure that is extremely useful for proving lower bounds. In recent work the PIs have successfully used this new method of random projections to solve several decades-old problems in Boolean circuit lower bounds and related areas (which in some cases had notoriously resisted progress since the 1980s or 1990s). As the main intellectual goals of the project, the PIs will continue to develop and apply the method of random projections to attack important open problems in Boolean circuit complexity.
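To make the two operations concrete, here is a toy sketch (an assumption-laden illustration, not the PIs' construction): a random restriction fixes each variable independently to a random constant with probability 1 - p, while a random projection additionally identifies the surviving variables within prescribed blocks, forcing them to take the same value.

    # Toy sketch of random restrictions vs. random projections on x_0..x_{n-1}.
    # (Illustration only; actual lower-bound proofs use carefully tuned
    # distributions over restrictions and projections.)
    import random

    def random_restriction(n, p):
        """Each variable survives with probability p, else is fixed to a random
        bit. Returns a dict: i -> 0, 1, or the symbol 'x_i' (still free)."""
        return {i: f"x_{i}" if random.random() < p else random.randint(0, 1)
                for i in range(n)}

    def random_projection(n, p, blocks):
        """Like a restriction, but all surviving variables in a block are
        identified: each is projected to the block's shared new variable."""
        rho = random_restriction(n, p)
        for b, block in enumerate(blocks):
            for i in block:
                if isinstance(rho[i], str):  # variable i survived
                    rho[i] = f"y_{b}"        # identify it within the block
        return rho

    random.seed(1)
    blocks = [range(0, 4), range(4, 8)]
    print(random_restriction(8, 0.5))
    print(random_projection(8, 0.5, blocks))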

In addition to the technical goals described above, other central goals of the project are to educate, communicate, and inspire. The PIs will train graduate students through research collaboration, disseminate research results through seminar talks, survey articles and other publications, and continue ongoing outreach activities aimed at increasing interest in and awareness of theoretical computer science topics in a broader population, including presentations at elementary schools.


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: Big Data Science &Engineering | Award Amount: 394.52K | Year: 2015

Unsupervised learning of useful features, or representations, is one of the most basic challenges of machine learning. Unsupervised representation learning techniques capitalize on unlabeled data which is often cheap and abundant and sometimes virtually unlimited. The goal of these ubiquitous techniques is to learn a representation that reveals intrinsic low-dimensional structure in data, disentangles underlying factors of variation by incorporating universal AI priors such as smoothness and sparsity, and is useful across multiple tasks and domains.

This project aims to develop new theory and methods for representation learning that can easily scale to large datasets. In particular, this project is concerned with methods for large-scale unsupervised feature learning, including Principal Component Analysis (PCA) and Partial Least Squares (PLS). To capitalize on massive amounts of unlabeled data, this project will develop appropriate computational approaches and study them in the "data-laden" regime. Therefore, instead of viewing representation learning as a dimensionality reduction technique and focusing on an empirical objective on finite data, these methods are studied with the goal of optimizing a population objective based on samples. This view suggests using stochastic approximation approaches, such as Stochastic Gradient Descent (SGD) and Stochastic Mirror Descent, that are incremental in nature and process each new sample with a computationally cheap update. Furthermore, this view enables a rigorous analysis of the benefits of stochastic approximation algorithms over traditional finite-data methods. The project aims to develop stochastic approximation approaches to PCA, PLS, and related problems and extensions, including deep and sparse variants, and to analyze these problems in the data-laden regime.
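As one concrete instance of this stochastic-approximation viewpoint, the sketch below uses Oja's rule, a classic textbook method shown here purely for illustration (it is not claimed to be the project's algorithm): each fresh sample from the population triggers a computationally cheap update of a top-principal-component estimate.

    # Sketch: streaming PCA for the top component via Oja's rule.
    import numpy as np

    def oja_top_component(sample_stream, dim, lr=0.01, steps=20000):
        """Each step consumes one fresh sample x and performs the cheap update
        w <- normalize(w + lr * (x . w) * x), approximating the top
        eigenvector of the population covariance."""
        rng = np.random.default_rng(0)
        w = rng.standard_normal(dim)
        w /= np.linalg.norm(w)
        for _ in range(steps):
            x = sample_stream()
            w += lr * (x @ w) * x
            w /= np.linalg.norm(w)
        return w

    # Population: a Gaussian whose top principal direction is e_0.
    rng = np.random.default_rng(1)
    scales = np.array([3.0, 1.0, 0.5, 0.2])
    draw = lambda: rng.standard_normal(4) * scales
    w = oja_top_component(draw, dim=4)
    print(np.abs(w))  # first coordinate dominates: close to +/- e_0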


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: ROBUST INTELLIGENCE | Award Amount: 854.13K | Year: 2014

Sign languages are the primary means of communication for millions of Deaf people in the world, including about 350,000-500,000 American Sign Language (ASL) users in the US. While the hearing population has benefited from advances in speech technologies such as speech recognition and spoken web search, much less progress has been made for sign language interfaces. Advances depend on improved technology for analyzing sign language from video. In addition, the linguistics of sign language is less well-understood than that of spoken language. This project addresses both of these needs, with an interdisciplinary approach that will contribute to research in linguistics, language processing, computer vision, and machine learning. Applications of the work include better access to ASL social media video archives, interactive recognition and search applications for Deaf individuals, and ASL-English interpretation assistance.

This project focuses on handshape in ASL, in particular on one constrained but very practical component: fingerspelling, or the spelling out of a word as a sequence of handshapes and trajectories between them. Fingerspelling comprises up to 35% of ASL, depending on the context, and includes 72% of ASL handshapes, making it an excellent testing ground. The project addresses gaps in existing work by focusing on handshape in various conditions, including fast, highly coarticulated signing. The main project activities include development of (1) robust automatic detection and recognition of fingerspelled words using new handshape models, including segmental and multi-segmental graphical models of ASL phonological features; (2) techniques for generalizing across signers, styles, and recording conditions; (3) improved phonetics and phonology of handshape, in particular contributing to an articulatory phonology of sign; and (4) publicly released multi-speaker, multi-style fingerspelling data and associated semi-automatic annotation.
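The project's recognizers are segmental and multi-segmental graphical models; purely as a simplified illustration of decoding a letter sequence from per-frame handshape scores (not the project's actual model), the sketch below runs standard Viterbi decoding under a letter-transition prior.

    # Simplified sketch: decode a fingerspelled letter sequence from per-frame
    # handshape scores with Viterbi (illustration only; the project's
    # segmental models with phonological features are much richer).
    import numpy as np

    def viterbi(log_probs, log_trans):
        """log_probs: (T, L) per-frame log-scores over L letters;
        log_trans: (L, L) letter-transition log-probabilities.
        Returns the highest-scoring letter label per frame."""
        T, L = log_probs.shape
        score = log_probs[0].copy()
        back = np.zeros((T, L), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + log_trans   # cand[prev, next]
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + log_probs[t]
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    rng = np.random.default_rng(0)
    logp = rng.standard_normal((40, 26))        # 40 frames, 26 letters
    trans = np.full((26, 26), -np.log(26))      # uniform transition prior
    print(viterbi(logp, trans))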


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: ALGORITHMIC FOUNDATIONS | Award Amount: 299.99K | Year: 2016

High-throughput experimental techniques have been producing a large amount of protein-protein interaction (PPI) data. Comparative analysis (e.g., alignment) of PPI networks greatly benefits the understanding of evolutionary relationship among species, helps identify functional modules and provides information for protein function annotations. The research goal of this proposal is to study optimization methods that can align PPI networks much more accurately than existing methods. This proposal will apply several elegant and powerful optimization techniques to understand the mathematical structure of the problem and develop efficient alignment algorithms. This proposal will also develop software implementing the proposed algorithms.

The proposed algorithms will be implemented as both a standalone program and Cytoscape plugin so that they can be easily used by biologists. The resultant software and plugin shall benefit a broad range of biological and biomedical applications, such as protein functional annotation, understanding of disease processes, design of novel diagnostics and drugs, and precision medicine. The research results will be disseminated to the optimization, computer vision/graphics and biology communities through a variety of venues. The source code will be released so that it can be useful to other network analysis researchers who want to adapt the code for their own research projects and to other optimization method researchers who want to work on biological network analysis. This project will train a few PhD students and summer interns, who will receive training in the intersection of optimization techniques, network biology and programming. Undergraduate and underrepresented students will be recruited through our summer intern program, CRA-W and collaborators. The research results will be integrated into course materials and used in an Illinois online bioinformatics program that has trained many underrepresented students.

This proposal will study a novel convex optimization algorithm for the alignment of two or more PPI networks. This convex method distinguishes itself from the widely used seed-and-extend and progressive alignment strategies in that it aligns all the input networks and proteins simultaneously, whereas those methods build an alignment greedily. A greedy strategy may introduce alignment errors at an early stage that cannot be fixed later; the convex method avoids this. Due to its simultaneous alignment strategy, the convex method should detect many more proteins that are functionally conserved across all input PPI networks than existing methods and produce more accurate pairwise alignments of multiple networks. This proposal will also study methods to speed up the proposed convex alignment method by exploiting special topological properties of PPI networks and low-rank representations of proteins. Finally, this proposal will implement the proposed algorithms as a standalone software package and Cytoscape plugin to greatly facilitate the application of comparative network analysis to biological and biomedical discovery.
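As a small illustration of the general idea of convex relaxation for network alignment (a generic sketch under simplifying assumptions, not the proposal's algorithm), one can relax the set of permutation matrices to doubly stochastic matrices and minimize a quadratic alignment objective with Frank-Wolfe steps, each of which solves a linear assignment problem.

    # Sketch: a classic convex relaxation of pairwise network alignment.
    # Permutations are relaxed to doubly stochastic matrices (the Birkhoff
    # polytope); Frank-Wolfe keeps iterates feasible. Illustration only.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def align(A1, A2, iters=200):
        """A1, A2: adjacency matrices of two equal-size networks.
        Minimizes ||A1 P - P A2||_F^2 over doubly stochastic P, then rounds."""
        n = A1.shape[0]
        P = np.full((n, n), 1.0 / n)               # barycenter of the polytope
        for k in range(iters):
            R = A1 @ P - P @ A2
            G = 2 * (A1.T @ R - R @ A2.T)          # gradient of the objective
            rows, cols = linear_sum_assignment(G)  # best permutation vertex
            S = np.zeros_like(P)
            S[rows, cols] = 1.0
            P += 2.0 / (k + 2) * (S - P)           # Frank-Wolfe step
        rows, cols = linear_sum_assignment(-P)     # round to a permutation
        return cols                                # node i of A1 -> cols[i] of A2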


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: ALGORITHMIC FOUNDATIONS | Award Amount: 449.72K | Year: 2016

Graphs are basic combinatorial objects that are widely used in computer science, engineering, and beyond. Many problems, both theoretical and practical, can be abstracted through graphs, and graphs are also used to represent data. Several basic optimization problems on graphs naturally arise in many different contexts, and it is important to build and expand a toolkit of algorithmic ideas and techniques for addressing such problems. This project will focus on two broad classes of graph optimization problems: graph routing and graph sparsification. Graph routing problems are used as abstractions in many applications, for example, when designing VLSI chips, in multi-robot motion planning, and routing traffic across optical networks. Graph sparsification can be a useful tool in analyzing large and complex networks, including social ones, and in designing faster algorithms for a variety of graph problems. Graph routing and sparsification problems were studied in several different areas, such as approximation algorithms, fixed parameter tractability, and graph theory. In addition to designing better algorithms for such problems, another goal of the project is to increase the flow of ideas and collaboration between these different communities, by studying problems lying in the intersection of these areas. The project will involve graduate and possibly undergraduate students from TTIC and the University of Chicago, as well as students from other institutions who participate in TTIC's summer internship program. The PI will also participate in activities aimed at increasing the participation of women in theoretical computer science.

Typically, in a graph routing problem, one is given a graph G and a collection of pairs of vertices, called demand pairs, that need to be connected to each other. The goal is to connect as many of the pairs as possible via paths, usually subject to some restrictions on the maximum load on graph vertices or edges. These are fundamental problems that have been studied extensively by both the theoretical computer science and graph theory communities. Unfortunately, there are still wide gaps in our understanding of some of the most basic problems in this area, and this project aims to make progress on them. In graph sparsification problems, one is given a graph G and a small set T of its vertices called terminals. The goal is to represent the graph G by a much smaller graph H (called a sparsifier) that approximately preserves the properties of G with respect to T. The specific types of properties one would like to preserve give rise to several types of sparsifiers (for example, we may want to preserve cuts, flows or integral routings between the terminals). Problems in this area naturally connect to approximation algorithms (where sparsifiers can be used to obtain improved approximation factors for various problems), fixed-parameter tractability (where they give a black-box way to design faster algorithms), and to graph theory (many sparsification problems can be cast in graph theoretic terms and have been studied by the graph theory community). There are still many open problems in this area, and this project will attempt to improve the state of the art on several of them.
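For intuition about the routing problems described above, here is a toy greedy heuristic (purely illustrative; the algorithms and hardness results the project studies are far stronger): route each demand pair along a shortest path among edges with remaining capacity, skipping pairs that can no longer be connected.

    # Toy sketch of greedy routing with unit edge capacities (illustration
    # only; not a good approximation algorithm for these problems).
    import networkx as nx

    def greedy_route(G, demand_pairs):
        """Try to connect each (s, t) pair along a shortest path, using each
        edge at most once; returns the pairs successfully routed."""
        H = G.copy()
        routed = []
        for s, t in demand_pairs:
            try:
                path = nx.shortest_path(H, s, t)
            except nx.NetworkXNoPath:
                continue
            H.remove_edges_from(zip(path, path[1:]))  # consume the used edges
            routed.append((s, t, path))
        return routed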


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: ROBUST INTELLIGENCE | Award Amount: 194.61K | Year: 2016

Vision is a valuable sensing modality because it is versatile. It lets humans navigate through unfamiliar environments, discover assets, grasp and manipulate tools, react to projectiles, track targets through clutter, interpret body language, and recognize familiar objects and people. This versatility stems from low-level visual processes that somehow produce, from ambiguous retinal measurements, useful intermediate representations of depth, surface orientation, motion, and other intrinsic scene properties. This project establishes a mathematical and computational foundation for similar low-level processing in machines. The key challenge it addresses is how to usefully encode and exploit the fact that, visually, the world exhibits substantial intrinsic structure. By advancing understanding of low-level vision in machines, this project makes progress toward computer vision systems that can rival human vision in accuracy, reliability, speed, and power efficiency.

This research revisits low-level vision, and develops a comprehensive framework that possesses a common abstraction for information from different optical cues; the ability to encode scene structure across large regions and at multiple scales; implementation as parallel and distributed processing; and large-scale end-to-end learnability. The project approaches low-level vision as a structured prediction task, with ambiguous local predictions from many overlapping receptive fields being combined to produce a consistent global scene map that spans the visual field. The structured prediction models are different from those used for categorical tasks such as semantic segmentation, because they are specifically designed to accommodate the distinctive requirements and properties of low-level vision: continuous-valued output spaces; ambiguities that may form equiprobable manifolds; extreme scale variations; and global scene maps with higher-order piecewise smoothness. By strengthening the computational foundations of low-level vision, this project strives to enable many kinds of vision systems that are more efficient and more versatile, and it strives to have impacts across the breadth of computer vision.
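To make the "overlapping receptive fields" idea concrete, here is a bare-bones sketch (an assumed setup for illustration, not the project's framework): each local window predicts a patch of a scene map such as depth, and a global map is formed by averaging the predictions wherever windows overlap.

    # Bare-bones sketch: fuse overlapping per-window predictions into one
    # global scene map by averaging (illustration only; the structured
    # prediction models above reconcile local ambiguity far more carefully).
    import numpy as np

    def fuse_patches(shape, patches):
        """patches: list of ((row, col), 2-D array) local predictions placed
        at top-left (row, col). Overlapping regions are averaged."""
        acc = np.zeros(shape)
        cnt = np.zeros(shape)
        for (r, c), p in patches:
            h, w = p.shape
            acc[r:r+h, c:c+w] += p
            cnt[r:r+h, c:c+w] += 1
        return acc / np.maximum(cnt, 1)  # avoid dividing by zero if uncovered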


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: ADVANCES IN BIO INFORMATICS | Award Amount: 557.03K | Year: 2016

Proteins play fundamental roles in all biological processes. A complete description of protein structures and functions is a fundamental step towards understanding biological life and has many applications. Millions of protein sequences are available, but the majority of them have no experimentally solved structures or functions. This project aims to greatly improve RaptorX, a fully automated web server for computational prediction of protein structure and function, with the goal of delivering a long-term sustainable web portal that facilitates transformative research in biology. This web portal shall benefit a broad range of biological and biomedical applications, such as genome annotation, understanding of disease processes, drug design, precision medicine, and even biomaterial and bioenergy development. The results will be disseminated to the broader community through a variety of venues: web servers, standalone software, publications, and talks. Since late 2011, RaptorX has served more than 25,000 worldwide users, including middle- and high-school students, and the standalone programs have been downloaded by more than 1,500 users. Once this project is fulfilled, RaptorX will contribute much more to the broader community. Students involved in this project will receive training at the intersection of computer science, molecular biology, biophysics, and biochemistry. Undergraduate and underrepresented students will be recruited through summer intern programs and collaborators. The research results will be integrated into course materials and used in the Illinois online bioinformatics program.

The RaptorX web server was originally developed for template-based protein modeling only. This project will transform RaptorX by first developing a few novel and powerful deep learning (e.g., Deep Conditional Convolutional Neural Fields) and structure learning (e.g., group graphical lasso) methods to significantly improve the accuracy of protein structure and function prediction, and then conducting an efficient implementation. The resulting RaptorX will be able to perform much more accurate prediction of protein secondary and tertiary structure, solvent accessibility, disordered regions, and the quality of a theoretical protein 3D model (in the absence of native structures). This project will also expand the RaptorX server to perform contact prediction and contact-assisted protein folding for proteins without good templates. The RaptorX web server is available at http://raptorx.uchicago.edu, from which users can also download the standalone programs.
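As a toy illustration of the structure-learning ingredient (sparse inverse-covariance estimation is one standard route to residue-residue contact prediction; the group graphical lasso and deep models used here are considerably more elaborate), the sketch below scores contacts from a penalized precision matrix estimated over alignment columns.

    # Toy sketch: contact-map scores from a sparse precision matrix, in the
    # spirit of graphical-lasso-based contact prediction (illustration only;
    # the crude one-value-per-column MSA encoding is an assumption).
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    def contact_scores(encoded_msa, alpha=0.05):
        """encoded_msa: (n_sequences, n_columns) numeric encoding of a
        multiple sequence alignment. Large off-diagonal entries of the
        estimated precision matrix suggest co-evolving column pairs,
        which is taken as evidence of spatial contact."""
        model = GraphicalLasso(alpha=alpha).fit(encoded_msa)
        prec = np.abs(model.precision_)
        np.fill_diagonal(prec, 0.0)
        return prec  # higher score = stronger predicted contact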


Grant
Agency: NSF | Branch: Standard Grant | Program: | Phase: National Robotics Initiative | Award Amount: 332.73K | Year: 2016

Most existing autonomous systems reason over flat, task-dependent models of the world that do not scale to large, complex environments. This lack of scalability and generalizability is a significant barrier to the widespread adoption of robots for common tasks. This research will advance the state-of-the-art in robot perception, natural language understanding, and learning to develop new models and algorithms that significantly improve the scalability and efficiency of mapping and motion planning in large, complex environments. These contributions will impact the next generation of autonomous systems that interact with humans in many domains, including manufacturing, healthcare, and exploration. Outcomes will include the release of open source software and data, workshops, K-12 STEM outreach efforts, and undergraduate and graduate education in the unique, multidisciplinary fields of perception, natural language understanding, and motion planning.

As robots perform a wider variety of tasks within increasingly complex environments, their ability to learn and reason over expressive models of their environment becomes critical. The goal of this research is to develop models and algorithms for learning adaptive, hierarchical environment representations that afford efficient planning for mobility tasks. These representations will take the form of probabilistic models that capture the rich spatial-semantic properties of the robot's environment and are factorable to enable scalable inference. This research will develop algorithms that learn and adapt these representations by fusing knowledge conveyed through human-provided natural language utterances with information extracted from the robot's multimodal sensor streams. This research will also develop algorithms that reason over the complexity of these models in the context of the inferred task, thereby identifying simplifications that enable more efficient robot motion planning.
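One way to see why hierarchical representations help planning (a generic coarse-to-fine sketch under assumed structure, not this project's models): plan first over a small graph of semantic regions, then expand only the fine-grained cells of the regions on the coarse route.

    # Generic coarse-to-fine planning sketch over a hierarchical map
    # (assumed structure for illustration; not this project's representation).
    import networkx as nx

    def hierarchical_plan(region_graph, fine_graphs, start, goal):
        """region_graph: adjacency of semantic regions; fine_graphs: region ->
        fine-grained subgraph (sharing boundary nodes with its neighbors);
        start, goal: (region, node) pairs. Plans a coarse region route, then
        a fine path restricted to the regions on that route."""
        coarse = nx.shortest_path(region_graph, start[0], goal[0])
        fine = nx.compose_all([fine_graphs[r] for r in coarse])
        return coarse, nx.shortest_path(fine, start[1], goal[1])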
