Chen P., Information Engineering, Lanzhou, China
Proceedings - International Conference on Natural Computation | Year: 2013

In this paper, an improved genetic algorithm for the Traveling Salesman Problem (TSP) is proposed on the basis of the original genetic algorithm. First, population diversity is preserved by amending the calculation of individual fitness. Second, the mutation operator is improved by combining shift mutation and insertion mutation. Before crossover, the operator checks whether degradation would occur. Finally, experimental results confirm that the above improvements are significantly effective for solving the TSP. © 2013 IEEE.
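The abstract names the combined shift/insertion mutation but gives no pseudocode. A minimal illustrative sketch of such a combined operator for permutation-encoded tours (not the authors' exact implementation; the function names and the 50/50 choice are assumptions) might look like:

```python
import random

def shift_mutation(tour):
    """Move a random contiguous segment of the tour to a new position."""
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    segment = t[i:j + 1]
    rest = t[:i] + t[j + 1:]
    k = random.randrange(len(rest) + 1)
    return rest[:k] + segment + rest[k:]

def insertion_mutation(tour):
    """Remove one city and reinsert it at a random position."""
    t = tour[:]
    city = t.pop(random.randrange(len(t)))
    t.insert(random.randrange(len(t) + 1), city)
    return t

def combined_mutation(tour, p_shift=0.5):
    """Apply shift or insertion mutation, chosen at random."""
    if random.random() < p_shift:
        return shift_mutation(tour)
    return insertion_mutation(tour)
```

Both operators preserve the set of visited cities, so every mutant remains a valid tour; a degradation check of the kind the abstract mentions would compare offspring fitness against the parents before accepting a crossover.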

Almosallam I.A., King Abdulaziz City for Science and Technology | Lindsay S.N., Information Engineering | Jarvis M.J., Oxford Astrophysics | Roberts S.J., University of the Western Cape
Monthly Notices of the Royal Astronomical Society | Year: 2016

Accurate photometric redshifts are a lynchpin for many future experiments to pin down the cosmological model and for studies of galaxy evolution. In this study, a novel sparse regression framework for photometric redshift estimation is presented. A synthetic data set simulating the Euclid survey and real data from SDSS DR12 are used to train and test the proposed models. We show that approaches which include careful data preparation and model design offer a significant improvement over several competing machine learning algorithms. Standard implementations of most regression algorithms minimize the sum of squared errors as the objective function. For redshift inference, this induces a bias in the posterior mean of the output distribution, which can be problematic. In this paper, we directly minimize the target metric Δz = (zs - zp)/(1 + zs) and address the bias problem via a distribution-based weighting scheme incorporated as part of the optimization objective. The results are compared with other machine learning algorithms in the field, such as artificial neural networks (ANN), Gaussian processes (GPs) and sparse GPs. The proposed framework reaches a mean absolute Δz = 0.0026(1 + zs) over the redshift range 0 ≤ zs ≤ 2 on the simulated data, and Δz = 0.0178(1 + zs) over the entire redshift range on the SDSS DR12 survey, outperforming the standard ANNz used in the literature. We also investigate how the relative size of the training sample affects photometric redshift accuracy, finding that a training sample larger than 30 per cent of the total sample size provides little additional constraint on the photometric redshifts, and note that our GP formalism strongly outperforms ANNz in the sparse-data regime for the simulated data set. © 2015 The Authors.
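The target metric Δz = (zs - zp)/(1 + zs) is easy to compute directly; the sketch below is an illustrative stand-in for how per-object weights could enter a mean-absolute-Δz objective, not the paper's exact optimization procedure:

```python
import numpy as np

def delta_z(z_spec, z_phot):
    """Normalized photometric-redshift residual: (z_s - z_p) / (1 + z_s)."""
    z_spec = np.asarray(z_spec, dtype=float)
    z_phot = np.asarray(z_phot, dtype=float)
    return (z_spec - z_phot) / (1.0 + z_spec)

def weighted_mean_abs_dz(z_spec, z_phot, weights=None):
    """Mean absolute normalized residual; the optional per-object weights
    stand in for a distribution-based weighting scheme."""
    r = np.abs(delta_z(z_spec, z_phot))
    if weights is None:
        return r.mean()
    w = np.asarray(weights, dtype=float)
    return np.sum(w * r) / np.sum(w)
```

Dividing by (1 + zs) is what makes the reported accuracies, e.g. 0.0026(1 + zs), comparable across the whole redshift range.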

News Article
Site: www.scientificcomputing.com

With enough computing effort, most contemporary security systems can be broken. But a research team at the University of Sydney has made a major breakthrough in generating single photons (light particles) as carriers of quantum information in security systems. The collaboration, involving physicists at the Centre for Ultrahigh bandwidth Devices for Optical Systems (CUDOS), an ARC Centre of Excellence headquartered in the School of Physics, and electrical engineers from the School of Electrical and Information Engineering, has been published in Nature Communications.

The team's work resolves a key issue holding back the development of key exchange that can only be broken by violating the laws of physics. Photons are generated in pairs, and detecting one heralds the existence of the other. This allows scientists to manage the timing of photon events so that they always arrive when expected.

Lead author Dr. Chunle Xiong, from the School of Physics, said: "Quantum communication and computing are the next generation technologies poised to change the world."

"Among a number of quantum systems, optical systems offer particularly easy access to quantum effects. Over the past few decades, many building blocks for optical quantum information processing have developed quickly," Xiong said. "Implementing optical quantum technologies has now come down to one fundamental challenge: having indistinguishable single photons on-demand. This research has demonstrated that the odds of being able to generate a single photon can be doubled by using a relatively simple technique, and this technique can be scaled up to ultimately generate single photons with 100 percent probability."

CUDOS director and co-author of the paper, Professor Ben Eggleton, said the interdisciplinary research was set to revolutionize our ability to exchange data securely, along with advancing quantum computing, which can search large databases exponentially faster.
"The ability to generate single photons, which form the backbone of technology used in laptops and the Internet, will drive the development of local secure communications systems: for safeguarding defense and intelligence networks, the financial security of corporations and governments, and bolstering personal electronic privacy, like shopping online," Professor Eggleton said. "Our demonstration leverages the CUDOS photonic chip that we have been developing over the last decade, which means this new technology is also compact and can be manufactured with existing infrastructure."

Co-author and Professor of Computer Systems Philip Leong, who developed the high-speed electronics crucial for the advance, said he was particularly excited by the prospect of further exploring the marriage of photonics and electronics to develop new architectures for quantum problems. "This advance addresses the fundamental problem of single photon generation and promises to revolutionize research in the area," Professor Leong said.

The group, which is now exploring advanced designs and expects real-world applications within three to five years, has involved research with the University of Melbourne, CUDOS nodes at Macquarie University and the Australian National University, and an international collaboration with Guangdong University of Technology, China.
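The article does not detail the technique, but multiplexing schemes for heralded photon sources improve the chance of a successful heralding event roughly as 1 - (1 - p)^n for n independent chances per clock cycle; for small p, two chances roughly double the odds, consistent with the claimed doubling. A small sketch of this assumed model (not the team's actual scheme):

```python
def multiplexed_success_probability(p_single, n_modes):
    """Probability that at least one of n independent heralded-source
    attempts fires in a clock cycle: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_single) ** n_modes

# With a modest per-attempt heralding probability p = 0.05:
p1 = multiplexed_success_probability(0.05, 1)  # one attempt:  0.05
p2 = multiplexed_success_probability(0.05, 2)  # two attempts: 0.0975, ~double
```

The same expression shows why the approach scales: as n grows, the success probability approaches 1, matching the article's "ultimately ... 100 percent probability" claim.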

News Article
Site: www.scientificcomputing.com

Artificial intelligence must be kept under human control or we may become defenseless against its capabilities, warn two University of Sydney machine-learning experts.

Professor Dong Xu, Chair in Computer Engineering at the University of Sydney's School of Electrical and Information Engineering, says the defeat of the world champion Go player has raised fresh concerns about the future role of artificial intelligence (AI) devices. The Professor, whose research interests include computer vision, machine learning and multimedia content analysis, says the question now is how much we should control AI's ability to self-learn.

"Scientists and technology investors have been enthusiastic about AI for several years, but the triumph of the supercomputer has finally made the public conscious of its capabilities. This marks a significant breakthrough in the technology world," Professor Xu says. "Supercomputers are more powerful than the human mind. Competitive games such as Go or chess are actually all about rules, so they are easy for a computer. Once a computer grasps them, it will become very good at playing the games."

Professor Xu says: "The problem is that computers like AlphaGo aren't good at overall strategy, but they are good at partial ones because they search better within a smaller area. This explains why AI will often lag behind in the beginning but catch up later.

"A human player can be affected by emotions, such as pressure or happiness, but a computer will not.

"It's said that a person is able to memorize 1,000 games in a year, but a computer can memorize tens or hundreds of thousands in the same period. And a supercomputer can always improve: if it loses one game, it will analyze it and do better next time.

"If a supercomputer could totally imitate the human brain, and have human emotions, such as being angry or sad, it would be even more dangerous."
Currently, AI is well suited to labor-intensive industries, where machines can act as human substitutes to serve the public interest: they can clean, work as agricultural robots in the fields, or probe deep underground.

"Another challenge is that AI needs a more intelligent environment. For instance, self-driving automobiles often can't recognize a red light, so if the traffic lights could send a signal to the cars and the cars could sense it, that would solve the problem. Singapore is making an effort to build an area with roads that are friendly, or responsive, to self-driving vehicles."

Professor Xu believes it is crucial for companies such as Google and Facebook to set up "moral and ethics committees" to ensure scientific research won't head in the wrong direction and create machines that act maliciously.

Dr. Michael Harre, a senior lecturer in complex systems who spent several years studying the AI behind the ancient Chinese board game, says: "Go is probably the most complicated game that is commonly played today. Even when compared to chess, which has a very large number of possible patterns, Go has more possible patterns than there are atoms in the universe.

"The technology has developed to a point that it can now outsmart a human in both simple and complex tasks. This is a concern, because artificial intelligence technology may reach a point in a few years where it is feasible that it could be adapted to areas of defense where a human may no longer be needed in the control loop: truly autonomous AI."
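Dr. Harre's atoms-in-the-universe comparison can be checked with a quick order-of-magnitude calculation: each of the 361 points on a Go board is empty, black, or white, giving 3^361 configurations as a crude upper bound (ignoring legality), against the commonly cited ~10^80 atoms in the observable universe:

```python
import math

# Crude upper bound on Go board configurations: 3 states per point, 361 points.
go_states_log10 = 361 * math.log10(3)  # roughly 172, i.e. ~10^172 configurations
atoms_in_universe_log10 = 80           # common order-of-magnitude estimate

print(go_states_log10 > atoms_in_universe_log10)  # True
```

Even after discounting illegal positions, the count stays far above 10^80, which is why exhaustive search is hopeless and AlphaGo instead relies on learned evaluation and selective search.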

Abstract: Our current understanding of how the brain works is poor. Electrical signals travel around the brain and throughout the body, and the electrical properties of biological tissues are studied using electrophysiology. For acquiring large-amplitude, high-quality neuronal signals, intracellular recording is a powerful methodology compared to extracellular recording for measuring the voltage or current across cell membranes. Nanowire- and nanotube-based devices have been developed for intracellular recording applications, demonstrating the advantages of high spatial resolution and high sensitivity. However, the length of these nanowire/nanotube electrode devices is currently limited to less than 10 µm because of process issues that arise during the fabrication of high-aspect-ratio nanoscale devices longer than 10 µm. Thus, conventional nanodevices are not applicable to neurons/cells within thick biological tissues, including brain slices and the brain in vivo.

A research team in the Department of Electrical and Electronic Information Engineering and the Electronics-Inspired Interdisciplinary Research Institute (EIIRIS) at Toyohashi University of Technology has developed three-dimensional microneedle-based nanoscale-tipped electrodes (NTEs) that are longer than 100 µm. The needle length exceeds that of conventional nanowire/nanotube-based intracellular devices, expanding the range of nanodevice applications in intracellular recording, such as deep tissue penetration. Additionally, the team performed intracellular recordings using muscle cells.

"A technological challenge in electrophysiology is intracellular recording within a thick biological tissue. For example, a needle length of more than 40 µm is necessary for brain slice experiments. However, it is almost impossible to penetrate tissue with nanoscale-diameter, high-aspect-ratio needles, because such long, hair-like nanostructures have insufficient stiffness. By contrast, our NTE, a 120-µm-long cone-shaped electrode, has sufficient stiffness to punch into tissues and cells," explains first author and PhD candidate Yoshihiro Kubota.

The leader of the research team, Associate Professor Takeshi Kawano, said: "Although we demonstrated preliminary results for our NTE device, the batch fabrication of such intracellular electrodes, with needle lengths of more than 100 µm, should lead to an advance in device technologies. This will eventually lead to the realization of multisite, depth-intracellular recordings in biological tissues, including brain slices and the brain in vivo, which are beyond the capability of conventional intracellular devices."

As noted by the research team, the NTE has the potential to be used in cells deep within biological tissue, including brain slices and the brain in vivo, thus accelerating our understanding of the brain.

Funding agency: This work was supported by Grants-in-Aid for Scientific Research (S) (No. 20226010), (A) (No. 25249047), for Young Scientist (A) (No. 26709024), (B) (No. 22760251), and the PRESTO Program from JST. Yoshihiro Kubota was supported by the Leading Graduate School Program R03 of MEXT. Rika Numano was also supported by a Grant-in-Aid for Scientific Research (C) (No. 24590350), the Asahi Glass Foundation and the Takeda Science Foundation.

About Toyohashi University of Technology

Figure caption: A 120-µm-tall "nanotower" electrode punching a cell membrane. Silicon growth technology and three-dimensional nano/microfabrication techniques realize such high-aspect-ratio intracellular electrodes. COPYRIGHT (C) TOYOHASHI UNIVERSITY OF TECHNOLOGY. ALL RIGHTS RESERVED.
