News Article | May 16, 2017
Site: www.eurekalert.org

Smart homes need smart batteries. Current systems draw more power than they need, which can shorten the life of batteries and the devices they power. Future batteries may get an intelligence boost, though. A collaborative research team based in Beijing, China, has proposed a novel programming solution to optimize power consumption in batteries. The scientists, from the Institute of Automation of the Chinese Academy of Sciences and the School of Automation and Electrical Engineering at the University of Science and Technology Beijing, published their results in IEEE/CAA Journal of Automatica Sinica (JAS), a joint publication of the IEEE and the Chinese Association of Automation.

"In smart home energy management systems, the intelligent optimal control of [the] battery is a key technology for saving power consumption," Prof. Qinglai Wei, of the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, wrote in the paper.

To develop a system in which batteries can learn and optimize their power consumption, Wei and his team turned to adaptive dynamic programming. This method breaks down one big problem - how best to use batteries in smart home systems - into smaller problems. The answer to each small problem builds into the answer to the big problem, and, as circumstances change, the system can examine all the small answers to see if and how the big answer adapts. Wei and his team are the first to use this method while also considering the physical charging and discharging constraints of the battery.

The algorithm learns which inputs, such as the demand for power from a device, lead to which outputs, such as providing power. By continually questioning the link between input and output, the algorithm learns more about the best times to charge and to discharge to limit the power consumed from the grid.
To extend battery life, every iteration of learning is constrained by the understanding that the battery can only charge and discharge to certain limits. Anything more, and the battery could experience excessive wear. "The battery [makes] decisions to meet the demand of the home load according to the real-time electricity rate," Wei wrote, noting that the objective of optimal control is to find the ideal balance among the battery states (charging, discharging, and idle) within the battery's constraints, while still minimizing the power needed from the grid.

To further extend the lifetime of batteries in smart home systems, Wei and his team will next examine how the damage caused by frequently switching between charging and discharging modes can be avoided.

Fulltext of the paper is available: http://ieeexplore.

IEEE/CAA Journal of Automatica Sinica (JAS) is a joint publication of the Institute of Electrical and Electronics Engineers, Inc. (IEEE) and the Chinese Association of Automation. JAS publishes papers on original theoretical and experimental research and development in all areas of automation. The coverage of JAS includes but is not limited to: automatic control; artificial intelligence and intelligent control; systems theory and engineering; pattern recognition and intelligent systems; automation engineering and applications; information processing and information systems; network-based automation; robotics; computer-aided technologies for automation systems; sensing and measurement; and navigation, guidance, and control. To learn more about JAS, please visit: http://ieeexplore.
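The scheduling idea behind the paper can be illustrated with ordinary dynamic programming. The sketch below is a toy, not the authors' ADP algorithm: it assumes a known price profile, a small integer state of charge, and hard charge/discharge rate limits standing in for the battery's physical constraints.

```python
# Toy dynamic-programming sketch of constrained battery scheduling
# (illustrative only; the paper's self-learning ADP method is more
# sophisticated and does not assume known future prices).

def plan(prices, capacity=4, max_rate=1, demand=1):
    """Minimize grid cost over time; grid power = demand + battery action.
    action > 0 charges the battery, action < 0 discharges it."""
    INF = float("inf")
    # cost[s] = best cost-to-go from state of charge s at the current step.
    cost = [0.0] * (capacity + 1)
    for t in range(len(prices) - 1, -1, -1):
        new = [INF] * (capacity + 1)
        for s in range(capacity + 1):
            # Enumerate actions within the physical rate and capacity limits.
            for a in range(-max_rate, max_rate + 1):
                if 0 <= s + a <= capacity and demand + a >= 0:
                    c = prices[t] * (demand + a) + cost[s + a]
                    new[s] = min(new[s], c)
        cost = new
    return cost[0]  # start from an empty battery

# Cheap early, expensive late: charging early then discharging pays off.
total = plan(prices=[1, 1, 5, 5])
```

With prices [1, 1, 5, 5] the planner charges during the cheap hours and discharges during the expensive ones, cutting the grid cost well below the no-battery cost of 12. The paper's ADP method addresses the harder case where the system must learn such a policy online rather than plan against known prices.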


News Article | May 17, 2017
Site: phys.org

"The battery [makes] decisions to meet the demand of the home load according to the real-time electricity rate," Wei wrote, noting that the objective of optimal control is to find the ideal balance for each battery state (charging, discharging, and idle) within the battery's constraints, while still minimizing the power needed from the grid. To further extend the lifetime of batteries in smart home systems, Wei and his team will next examine how the damage caused by frequently switching between charging and discharging modes may be avoided. Explore further: 'Virtual batteries' could lead to cheaper, cleaner power More information: Qinglai Wei et al, Optimal constrained self-learning battery sequential management in microgrid via adaptive dynamic programming, IEEE/CAA Journal of Automatica Sinica (2017). DOI: 10.1109/JAS.2016.7510262


Lu H.,CAS Institute of Automation | Yang Y.,CAS Institute of Automation | Gan R.,CAS Institute of Automation | Zhang N.,Chinese Association of Automation
Proceedings of 2012 9th IEEE International Conference on Networking, Sensing and Control, ICNSC 2012 | Year: 2012

Micro-blogs, as a new social medium, differ markedly from other social media in information-update frequency, organizational structure, and user connections, and they have remarkable power of convergence and penetration. On this basis, this paper proposes the Micro-Blog Public Opinion Index (MBPOI), composed of five sub-indexes - the Quantity Index (QI), Intensity Index (II), Relation Index (RI), Polarity Index (PI), and Confidence Index (CI) - to measure and evaluate, along multiple dimensions, the public topics and issues discussed on micro-blogs. Taking the "ABB automatic world 2011" activity as an example, an MBPOI prototype system is verified. The results show that the five-sub-index MBPOI quantifies the influence of topics and issues across multiple dimensions and levels, and provides effective micro-blog analysis reports for monitoring and tracking the activity. © 2012 IEEE.
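The abstract names the five sub-indexes but not the formula that combines them; the sketch below simply assumes a weighted sum of normalized sub-indexes to show the shape of such a composite index. The equal weights are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of combining the MBPOI sub-indexes. The paper's
# exact aggregation is not reproduced here; a weighted sum over
# normalized sub-indexes is assumed for illustration.

def mbpoi(qi, ii, ri, pi, ci, weights=(0.2, 0.2, 0.2, 0.2, 0.2)):
    """Combine the five sub-indexes (each normalized to [0, 1])
    into one opinion index via an assumed weighted sum."""
    subs = (qi, ii, ri, pi, ci)
    if not all(0.0 <= s <= 1.0 for s in subs):
        raise ValueError("sub-indexes must be normalized to [0, 1]")
    return sum(w * s for w, s in zip(weights, subs))

# Example: a topic with high quantity and confidence but mixed polarity.
score = mbpoi(qi=0.8, ii=0.6, ri=0.5, pi=0.7, ci=0.9)
```

A weighted sum keeps the composite index on the same [0, 1] scale as its inputs, so scores for different topics remain directly comparable.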


Lu H.,CAS Institute of Automation | Wang F.-Y.,CAS Institute of Automation | Liu D.-R.,CAS Institute of Automation | Zhang N.,Chinese Association of Automation | Zhao X.-L.,Chinese Association of Automation
Zidonghua Xuebao/Acta Automatica Sinica | Year: 2014

Nowadays, automation science and technology, built on automatic control and information processing, has become an essential impetus to productive forces and human life, so a comprehensive understanding of the latest research progress in this discipline is of significant reference value to scholars and research institutions. In this paper, the automation science and technology discipline is divided into five research fields: control theory and engineering, pattern recognition and intelligent systems, measurement technology and automatic equipment, navigation and guidance, and systems engineering. Each field is depicted by analyzing and mapping data from 46,242 academic articles published in 88 journals during 2011-2013. The results show that research interests differ between China and abroad, and that Chinese institutions and ethnic Chinese scholars have played an important role in promoting the development of automation science and technology. Copyright © 2014 Acta Automatica Sinica. All rights reserved.


Artificial intelligence (AI) is learning - from the real world. Nine months ago, a computer beat one of the world's best players at one of the world's oldest games, Go. That was the start of a new era, the era of a new IT: intelligent technology, according to Fei-Yue Wang, a professor at the Chinese Academy of Sciences.

"This victory stunned many in the AI field and beyond," wrote Wang in an editorial published in IEEE/CAA Journal of Automatica Sinica (JAS), based on a speech he gave at the 30th anniversary of the Institute of Artificial Intelligence and Robotics at Xi'an Jiaotong University in Xi'an, China. "It marked the beginning of a new era in AI... parallel intelligence."

Defined as the interaction between actual reality and virtual reality, parallel intelligence flips traditional AI. Rather than big, universal laws directing small amounts of data, small, complex laws guide huge data - a jump from Newton to Merton, as Professor Wang puts it. AlphaGo, the computer victorious against Go player Lee Sedol, played more than 30 million games against itself - more than a single person could play in a century-long life. And the computer learned from every game.

"[Sedol] was not defeated by a computer program, but by all the humans standing behind the program, combined with the significant cyber-physical information inside it," Wang wrote. "This also verifies the belief of many AI experts that intelligence must emerge from the process of computing and interacting."

Input X, output Y is not as simple as it once was. Physical space and cyber space are no longer enough; machines must also make room for social space. According to Wang, AI is lingering in a phase of hybrid intelligence where humans, information, and machines are equally integral to the process of progress. The problem lies in learning how to model AI on shifting terms of possibility.
X does not always cause Y in such complicated systems, where uncertainty, diversity, and complexity typically prevail. To move forward, a new framework is needed to model the next step of parallel intelligence. Wang proposes the ACP approach to generate big data from small data, then reduce big data to specific laws: software (Artificial systems) learns from millions of scenarios (Computational experiments) to make the best decisions while interacting (in Parallel) with real-world physical systems. AlphaGo had learned, from 30 million games, how to make the best decisions when faced with the physical being of Sedol. It paid off.

"AI is not 'artificial' anymore," Wang wrote. "Ultimately, it becomes the 'real' intelligence that can be embodied into machines, artifacts, and our societies."

Fulltext of the paper is available: http://ieeexplore.


News Article | October 27, 2016
Site: www.eurekalert.org

Machines make our lives easier in many ways. Whether it's a smart thermostat that learns when to turn the heat on or automatic brakes, machines traffic in the language of classical calculus. Classical calculus is good enough to capture the basic features of biological or mechanical systems and even human behavior, but it paints a grainy picture.

To provide a richer description of these systems, experts are turning to fractional calculus. Mathematical models built from this more exotic but more general form of calculus come pre-installed with a way of accounting for past events. This feature allows them to mimic the memory-like effects observed in real systems such as the stock market, communications networks, and, of course, the human brain.

But before fractional models are uploaded into our devices, researchers have to be sure not only that they're complex enough to reflect real processes, but also that they're not so complex that they render our devices unstable and therefore useless. Mathematically speaking, they have to ensure that a system that deviates from its rest state - room temperature, for instance, in the case of a thermostat - can be controlled back to that state within a reasonable amount of time.

To address this problem, a team of mathematicians asked whether such control could be achieved for equations called fractional stochastic differential inclusions. These equations describe some of the most unpredictable and noisy systems found in the real world, such as financial markets and quantum systems. The team proved the existence of solutions for two forms of these equations: convex and nonconvex. In math, convex cases are typically easier to solve when looking for the best way to control a system; nonconvex cases are trickier. The ability to prove controllability in both cases is therefore a major advantage of this method. The mathematicians tested their technique numerically on a spring-like model.
Although seemingly simple, the model was built from the same type of equations the team had studied, making its behavior highly unpredictable - but not uncontrollable. The team was able to show that the spring could theoretically be brought back to rest from any position it might adopt.

Mathematical tools such as this will likely find increasing application as fractional models become more widespread and complex. Scientists and engineers would do well to add them to their toolkits to ensure that their designs for new devices are both highly adaptable and controllable.

Fulltext of the paper is available: http://ieeexplore.
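The "memory" that distinguishes fractional from classical calculus can be made concrete with the Grünwald-Letnikov definition of the fractional derivative: the value at each time step is a weighted sum over the entire history of the signal, not just its immediate neighborhood. A minimal numerical sketch (the paper itself concerns controllability proofs, not this discretization):

```python
# Grünwald-Letnikov approximation of an order-alpha derivative.
# The key feature: the result at step i sums over ALL samples f[0..i],
# which is the "memory" effect fractional models carry by construction.

def gl_fractional_derivative(f, alpha, h):
    """Approximate the order-alpha derivative of samples f (spacing h)."""
    n = len(f)
    # Weights w_k = (-1)^k * C(alpha, k), built with a stable recursion.
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - alpha) / k)
    out = []
    for i in range(n):
        acc = sum(w[k] * f[i - k] for k in range(i + 1))
        out.append(acc / h ** alpha)
    return out
```

For alpha = 1 the weights collapse to the ordinary backward difference (f[i] - f[i-1]) / h, while for a fractional alpha every past sample keeps a nonzero weight - the memory effect described above.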


News Article | December 22, 2016
Site: www.eurekalert.org

If you've ever searched for ways to curb your car's gas-guzzling appetite, you've probably heard that running on cruise control can help reduce your trips to the pump. How? Cars, it turns out, are much better than people at following what control systems experts call a setpoint - in this case, a set speed across different terrain. But they could be even better. Calling upon a branch of mathematics known as fractional calculus, a team of researchers has developed a new setpoint-tracking strategy that can improve the response time and stability of automated systems - and not just those found in your car.

One popular method for tracking setpoints is to use what's known as a setpoint filter. A setpoint filter helps solve the problem of under- or overshooting a far-away target. Blast furnaces, for example, have to go from room temperature to precisely thirteen hundred degrees to infuse iron with carbon to make steel. Some temperature controllers may overshoot as they quickly try to reach that temperature. Adding a setpoint filter smooths the path to make sure the furnace reaches the target temperature without going over. The problem, of course, is that it takes longer to get there.

That's where fractional calculus comes in. Compared with classical (or integer-order) calculus, which forms the mathematical basis of most control systems, fractional calculus is better equipped to handle the time-dependent effects observed in real-world processes. These include the memory-like behavior of electrical circuits and chemical reactions in batteries. By recasting the design of a setpoint filter as a fractional calculus problem, the researchers created a filter that could not only suppress overshooting but also minimize the response time of a virtual controller. A side-by-side comparison showed that their fractional filter outperformed an integer-order filter, tracking the complex path of a given setpoint more closely.
One drawback of this fractional design is that it is difficult to incorporate into existing automated systems, unlike integer-order filters, which are generally plug-and-play. But as the world of automation becomes increasingly complex, fractional filters may ultimately set the new standard for controlling everything from robotics and self-driving cars to medical devices.

Fulltext of the paper is available.
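The trade-off described above is easy to see with the simplest integer-order setpoint filter: a first-order lag applied to the reference before it reaches the controller. This sketch is illustrative only; the paper's contribution is a fractional-order generalization of such a filter, which is not reproduced here.

```python
# Toy first-order (integer-order) setpoint filter. Feeding the
# controller this smoothed reference instead of the raw step is what
# suppresses overshoot - at the cost of a slower approach, which is
# the trade-off the fractional design improves on.

def setpoint_filter(ref, tau, dt):
    """Discrete first-order lag: y' = (ref - y) / tau."""
    y = ref[0]
    out = []
    for r in ref:
        y += dt * (r - y) / tau
        out.append(y)
    return out

step = [0.0] + [1.0] * 99          # raw setpoint jumps from 0 to 1
smooth = setpoint_filter(step, tau=0.5, dt=0.05)
# The filtered command rises gradually toward 1 with no overshoot.
```

Because the filtered command never exceeds the target, a controller tracking it cannot be driven past the setpoint by the reference itself; it only gets there more slowly, which is exactly the response-time penalty the fractional filter is designed to shrink.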


News Article | December 22, 2016
Site: www.eurekalert.org

Roads are paved with obstacles that can interfere with our driving. They can be as easy to avoid or adjust to as far-away debris, or as hard to anticipate as strong gusts of wind. As self-driving cars and other autonomous vehicles become a reality, how can researchers make sure these systems remain in control under highly uncertain conditions? A team of automation experts may have found a way. Using a branch of mathematics called fractional calculus, the researchers created tools called disturbance observers that make on-the-fly calculations to put a disturbed system back on track.

Disturbance observers are not new to the world of automation. For decades, these algorithms have played an important role in controlling railways, robots, and hard drives. That's because, unlike other algorithms that aim to minimize interference, disturbance observers rely only on the signals that go into and come out of a system; they know nothing about the interfering signal itself.

What is new is how automation algorithms have begun to perceive the world around them. Engineering processes previously described using Newtonian physics and calculus are being recast in the light of so-called fractional calculus. This more general form of calculus is better equipped to model the real processes that affect how automated systems operate, such as battery discharge and the memory-like behavior of electrical circuits.

Using fractional calculus, the team of researchers created a suite of observers that could accurately estimate disturbances of varying complexity. When tested on a model of a gas turbine, two observers clearly outperformed the rest. And when combined, the pair operated well under the harshest conditions, keeping close track of highly fluctuating disturbance signals. Disturbance monitoring, however, is only half the battle. Once the signal associated with a disturbance is carefully measured, it has to be eliminated.
Future studies will be dedicated to figuring out how disturbance observers can be coupled with other control elements to make machines operate even more smoothly.

Fulltext of the paper is available.
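The core trick the article describes - inferring the disturbance from a system's inputs and outputs alone - can be sketched for a toy scalar plant x' = u + d. This is a plain integer-order observer for illustration; the paper's observers are fractional-order and considerably more capable.

```python
# Toy integer-order disturbance observer for the scalar plant x' = u + d.
# The observer uses only the measured rate of change and the applied
# input - it has no model of the disturbance signal itself.

def simulate(steps, dt, gain, disturbance):
    x, d_hat = 0.0, 0.0
    history = []
    for k in range(steps):
        d = disturbance(k * dt)
        u = -d_hat              # control: cancel the estimated disturbance
        x_dot = u + d           # true plant dynamics
        # Observer: any mismatch between the measured x' and the
        # commanded u must be the disturbance; track it with a
        # first-order update.
        d_hat += dt * gain * (x_dot - u - d_hat)
        x += dt * x_dot
        history.append(d_hat)
    return history

est = simulate(steps=2000, dt=0.01, gain=5.0, disturbance=lambda t: 1.0)
# The estimate d_hat converges toward the constant disturbance d = 1.
```

Here the estimate converges and the control u = -d_hat cancels the disturbance; the fractional-order versions described in the article extend this idea to the highly fluctuating disturbance signals mentioned above.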


News Article | August 26, 2016
Site: phys.org

Go: a game of complexity and a symbol for the unity of contradiction. Credit: Chinese Association of Automation

On March 15, 2016, Lee Sedol, an 18-time world champion of the ancient Chinese board game Go, was defeated by AlphaGo, a computer program. The event is one of the most historic in the field of artificial intelligence since Deep Blue bested chess Grandmaster Garry Kasparov in the late 1990s. The difference is that AlphaGo may represent an even bigger turning point in AI research. As outlined in a recently published paper, AlphaGo and programs like it possess the computational architecture to handle complex problems that lie well beyond the game table.

Invented over 2,500 years ago in China, Go is a game in which two players battle for territory on a gridded board by strategically laying black or white stones. While the rules that govern play are simple, Go is vastly more complex than chess. In chess, the total number of possible games is on the order of 10^100; for Go, it is 10^700. That level of complexity is much too high for the computational tricks that made Deep Blue a chess master, and it is exactly what makes Go so attractive to AI researchers: a program that could learn to play Go well would, in some ways, approach the complexity of human intelligence.

Perhaps surprisingly, the team that developed AlphaGo, Google DeepMind, did not create any new concepts or methods of artificial intelligence. Instead, the secret to AlphaGo's success is how it integrates and implements recent data-driven AI approaches, especially deep learning. This branch of AI deals with learning to recognize highly abstract patterns in unlabeled data sets, mainly by using computational networks that mirror how the brain processes information.
According to the authors, this kind of neural network approach can be considered a specific example of a more general technique called ACP, short for "artificial systems," "computational experiments," and "parallel execution." ACP effectively reduces the game space AlphaGo must search to decide on a move. Instead of wading through all possible moves, AlphaGo is trained to recognize game patterns by continuously playing games against itself and examining its game-play history. In effect, AlphaGo gets a feel for what Go players call "the shape of a game."

Developing this kind of intuition is what the authors believe can also advance the management of complex engineering, economic, and social problems. The idea is that any decision problem that can be solved by a human being can also be solved by an AlphaGo-like program. This proposal, which the authors advance as the AlphaGo thesis, is a decision-oriented version of the Church-Turing thesis, which states that a simple computer called a Turing machine can compute all functions computable by humans.

AlphaGo's recent triumph therefore holds a lot of promise for the field of artificial intelligence. Although advances in deep learning that extend beyond the game of Go will likely take decades more research, AlphaGo is a good start.


News Article | August 31, 2016
Site: phys.org

The world's oldest board game still has a few moves to play. Go, a game of strategy and instinct considered more difficult to master than chess, was created in roughly the same era as the written word. The game is uniquely human - at least, it used to be. Last year, a computer program called AlphaGo defeated an internationally ranked professional player. The computer's win signaled a significant evolution of information technology (IT) and artificial intelligence (AI), according to Fei-Yue Wang, a professor at the Chinese Academy of Sciences. As a result, IT is no longer "information technology" - the new IT is intelligent technology.

In a recent editorial published in the IEEE/CAA Journal of Automatica Sinica, Wang argues that core principles of automation and AI must be reconsidered as the world navigates an IT paradigm shift. "AlphaGo is not only a milestone in the quest for AI, but also an indication that IT now has entered a new era," said Wang, who is also the vice president and secretary general of the Chinese Association of Automation.

Wang sketches the progress of robotic and neural machine-human interaction in a timeline of five "control" eras. Automation evolved from the pure mechanics of ancient water clocks and steam engines to the eventual development of electric circuits and transfer functions that gave way to power grids. Digital computers and microprocessors signaled the third shift and paved the way for the fourth - the internet and the World Wide Web. In the first four control eras, physical and mental realities were approximated as accurately as possible and adjusted through the use of dual control theory. A machine with a set of conditions and a goal could succeed or fail; as the machine acts, it also investigates to learn which actions may produce better future outcomes.

Between the physical and mental spaces, there is another reality in need of dual control. Augmented reality, or artificial reality, bridges the gap between actuality and imagination.
Pokémon GO is a prime example, as people navigate the physical world to find fictional creatures with only experience as a guide. The parameters and goals shift with each new exposure. "In Control 5.0... only association revealed by data or experience is available, and causality is a luxury that is no longer attainable with limited resources for uncertainty, diversity, and complexity," Wang said. Recognition of all three worlds, and of the dual learning roles of each, will according to Wang be essential in the fifth era of intelligent technology.
