Håstad J., Royal Institute of Technology
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

We study the problem where we are given a system of polynomial equations defined by multivariate polynomials over GF[2] of fixed constant degree d > 1 and the aim is to satisfy the maximal number of equations. A random assignment approximates this problem within a factor 2^{-d} and we prove that for any ε > 0, it is NP-hard to obtain a ratio 2^{-d} + ε. When considering instances that are perfectly satisfiable we give a probabilistic polynomial time algorithm that, with high probability, satisfies a fraction 2^{1-d} - 2^{1-2d} and we prove that it is NP-hard to do better by an arbitrarily small constant. The hardness results are proved in the form of inapproximability results of Max-CSPs where the predicate in question has the desired form and we give some immediate results on approximation resistance of some predicates. © 2011 Springer-Verlag.
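To illustrate the random-assignment baseline from the abstract (not the paper's hardness machinery), here is a small Python sketch: it builds a random system of degree-≤d equations over GF[2] and measures the fraction a uniformly random assignment satisfies. The instance generator and all names are invented for the example; the 2^{-d} figure is the worst-case guarantee, so random instances typically do much better.

```python
import random

random.seed(1)
n, d, m = 10, 2, 300  # variables, degree bound, number of equations

def rand_poly():
    # a random polynomial of degree <= d over GF(2): a few monomials plus a constant
    monomials = [tuple(random.sample(range(n), random.randint(1, d)))
                 for _ in range(random.randint(1, 5))]
    constant = random.randint(0, 1)
    return monomials, constant

def satisfies(poly, x):
    # the equation is p(x) = 0; evaluate p over GF(2) with XOR/AND arithmetic
    monomials, constant = poly
    value = constant
    for mono in monomials:
        term = 1
        for var in mono:
            term &= x[var]
        value ^= term
    return value == 0

equations = [rand_poly() for _ in range(m)]

# average satisfied fraction over many uniformly random assignments
trials, total = 400, 0.0
for _ in range(trials):
    x = [random.randint(0, 1) for _ in range(n)]
    total += sum(satisfies(p, x) for p in equations) / m
avg = total / trials
print(round(avg, 2))
```

On typical random instances the average hovers near 1/2, comfortably above the worst-case 2^{-d} = 1/4 bound for d = 2.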



News Article
Site: www.materialstoday.com

I am very pleased to announce that two new Editors joined Polymer Testing on 1 October 2015: Professor Ulf W. Gedde from the Royal Institute of Technology (KTH) in Stockholm, Sweden as Editor for Plastics and Dr Matthias Jaunich from the Federal Institute for Materials Research and Testing (BAM) in Berlin, Germany as Associate Editor for Rubber and Plastics. Please read my Publisher's Note for an introduction to Professor Gedde and Dr Jaunich.


(Left) Illustration and (right) color-coded microscope image of a nanowire (green) integrated in a photonic waveguide (gray on left, purple on right). In the illustration, the photons emitted from the nanowire are depicted as red spheres. Insets show a light-emitting nanowire, which in the microscope image is attached to the tip of a nanomanipulator. Credit: Zadeh, et al. ©2016 American Chemical Society

(Phys.org)—One promising approach for scalable quantum computing is to use an all-optical architecture, in which the qubits are represented by photons and manipulated by mirrors and beam splitters. So far, researchers have demonstrated this method, called Linear Optical Quantum Computing, on a very small scale by performing operations using just a few photons. In an attempt to scale up this method to larger numbers of photons, researchers in a new study have developed a way to fully integrate single-photon sources inside optical circuits, creating integrated quantum circuits that may allow for scalable optical quantum computation. The researchers, Iman Esmaeil Zadeh, Ali W. Elshaari, and coauthors, have published a paper on the integrated quantum circuits in a recent issue of Nano Letters.

As the researchers explain, one of the biggest challenges facing the realization of an efficient Linear Optical Quantum Computing system is integrating several components that are usually incompatible with each other onto a single platform. These components include a single-photon source such as quantum dots; routing devices such as waveguides; devices for manipulating photons such as cavities, filters, and quantum gates; and single-photon detectors.

In the new study, the researchers have experimentally demonstrated a method for embedding single-photon-generating quantum dots inside nanowires that, in turn, are encapsulated in a waveguide. To do this with the high precision required, they used a "nanomanipulator" consisting of a tungsten tip to transfer and align the components. Once inside the waveguide, single photons could be selected and routed to different parts of the optical circuit, where logical operations can eventually be performed.

"We proposed and demonstrated a hybrid solution for integrated quantum optics that exploits the advantages of high-quality single-photon sources with well-developed silicon-based photonics," Zadeh, at Delft University of Technology in The Netherlands, told Phys.org. "Additionally, this method, unlike previous works, is fully deterministic, i.e., only quantum sources with the selected properties are integrated in photonic circuits.

"The proposed approach can serve as an infrastructure for implementing scalable integrated quantum optical circuits, which has potential for many quantum technologies. Furthermore, this platform provides new tools to physicists for studying strong light-matter interaction at nanoscales and cavity QED [quantum electrodynamics]."

One of the most important performance metrics for Linear Optical Quantum Computing is the coupling efficiency between the single-photon source and photonic channel. A low efficiency indicates photon loss, which reduces the computer's reliability. The set-up here achieves a coupling efficiency of about 24% (which is already considered good), and the researchers estimate that optimizing the waveguide design and material could improve this to 92%.

In addition to improving the coupling efficiency, in the future the researchers also plan to demonstrate on-chip entanglement, as well as increase the complexity of the photonic circuits and single-photon detectors. "Ultimately, the goal is to realize a fully integrated quantum network on-chip," said Elshaari, at Delft University of Technology and the Royal Institute of Technology (KTH) in Stockholm. "At this moment there are a lot of opportunities, and the field is not well explored, but on-chip tuning of sources and generation of indistinguishable photons are among the challenges to be overcome."
More information: Iman Esmaeil Zadeh, et al. "Deterministic Integration of Single Photon Sources in Silicon Based Photonic Circuits." Nano Letters. DOI: 10.1021/acs.nanolett.5b04709
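To see why the jump from 24% to 92% coupling efficiency matters so much for scaling, a quick back-of-the-envelope sketch: with independent loss at each source-to-waveguide interface, the chance that every photon in a multi-photon experiment survives falls off exponentially with photon count. The 10-photon circuit size below is an invented example; only the 24% and 92% figures come from the article.

```python
def all_photons_survive(efficiency, n_photons):
    # probability that all n photons couple successfully,
    # assuming independent loss per photon
    return efficiency ** n_photons

n = 10  # hypothetical number of photons in a circuit
p_now = all_photons_survive(0.24, n)  # demonstrated efficiency
p_opt = all_photons_survive(0.92, n)  # projected optimized efficiency
print(f"24% coupling: {p_now:.2e}   92% coupling: {p_opt:.2%}")
```

At 24% per photon, fewer than one in a million 10-photon trials would survive intact, while at 92% more than 40% would, which is why per-component loss is the headline metric for this architecture.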


News Article | December 8, 2015
Site: motherboard.vice.com

It was 1968 when Philip K. Dick posited the question, “Do androids dream of electric sheep?” While we still can’t answer that with unequivocal certainty, we took one step closer this July as Google unveiled its DeepDream project. DeepDream uses a convolutional neural network to augment imagery through a series of algorithms. In the most basic terms, it’s an artificial intelligence program that views images in order to categorise them for Google—a computer trying to bring order to the chaos of the mundane.

But it’s sexier than it sounds, because in searching for patterns these computational visions create a kaleidoscopic pareidolia: ordinary images transformed by vivid colours, phantasmagoric pagodas, flora and fauna, hundreds of terrifying eyes and, err… lots of abstract dog snouts. The effect feels like a closer representation of the visions and hallucinations Dick experienced in his own life than any he wrote about in his fiction.

On the left, a plain image of the sky. On the right, a DeepDream "overinterpretation" of the same image. Image: Google Research

But, beyond altering your Facebook profile picture into something vaguely freaky, what does the future hold for this technology? The first step in answering that conundrum comes from an unlikely source: UK pop trio and BBC “Sound of 2015” winners Years & Years. Or rather, the band’s most recent music video (a remix of their best-selling single “Desire”), which was the brainchild of US-based director Brian Harrison and marks the first commercially released project to incorporate DeepDream.

Harrison has long been concerned with heightened states of perception, and his work is strongly influenced by the writings of author, intellectual and renowned psychonaut Terence McKenna and his colleagues Ralph Abraham and Rupert Sheldrake (to whom the film is dedicated). “Most of my writing, directing and creative energies are focused on the mysteries of consciousness and the psyche,” he explained.
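The mechanism behind DeepDream's "overinterpretation" is gradient ascent on the input image: whatever pattern a network layer responds to gets amplified. A toy NumPy-only sketch of that idea follows, with a single hand-written edge filter standing in for a trained CNN layer; all sizes, names, and parameters are invented for illustration, not Google's implementation.

```python
import numpy as np

def conv2d(img, k):
    # 'valid' 2-D cross-correlation: the forward pass of one linear "layer"
    H, W = img.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def dream_step(img, k, lr=0.01):
    # gradient ascent on the INPUT to maximize sum(activation**2),
    # the core trick behind DeepDream's pattern amplification
    act = conv2d(img, k)
    grad = np.zeros_like(img)
    kh, kw = k.shape
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            grad[i:i + kh, j:j + kw] += 2 * act[i, j] * k
    return img + lr * grad / (np.abs(grad).mean() + 1e-8)

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
edge = np.array([[1.0, 0.0, -1.0]] * 3)  # a vertical-edge "feature" filter

before = np.sum(conv2d(img, edge) ** 2)
for _ in range(20):
    img = dream_step(img, edge)
after = np.sum(conv2d(img, edge) ** 2)
print(before < after)  # prints True: ascent amplifies the filter's response
```

In the real system the "filter response" being maximized is the activation of a deep layer in a trained network, which is why the amplified patterns look like eyes and dog snouts rather than simple edges.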
The idea of blending DeepDream with film came to Harrison after watching a clip from Fear and Loathing In Las Vegas online, which had been uploaded by Roelof Pieters, a data science consultant and PhD candidate in Deep Learning at the Royal Institute of Technology in Stockholm, and had been augmented with DeepDream to hallucinogenic effect. It left Harrison feeling compelled to utilize the technology, and not simply because of the rather unique cinematic potential. “When I saw the Fear and Loathing clip I was excited, not only by its visual implications, but by the idea of machines being birthed into consciousness through dreaming… and that their consciousness seemed completely psychedelic,” he said.

Pieters created the video with an open-source DeepDream animation tool, which he built with collaborative partner Samim Winiger, whose work also explores the intersection between creativity and machine learning. The pair, drawn together by a mutual appreciation for “experimentation, generative systems and ethical computing,” met serendipitously online, and immediately saw the value in DeepDream’s future. “Its release was a seminal moment,” said Winiger. “It brought creative AI and generative tech into the consciousness of the general public.” “And besides the fun imagery, visualizing what a deep neural network learns is important for research,” he continued. “Because it helps us develop a better understanding of machine learning processes and build better models.”

Harrison contacted the duo not long after the release of DeepDream, and they immediately saw potential for collaboration on “Desire.” “Pop culture is an important tool to drive interest and understanding of machine learning processes,” explained Winiger. “And DeepDream combines machine learning and pop culture in a way not seen since fractals were popularized in the 80s.” If you thought it was as easy as simply overlaying DeepDream on moving images, though, think again.
It was a laborious process, involving months of work and four days of shooting in locations across California. “We went through thousands of ideas and initial inputs before we even began the process,” Harrison said. Meanwhile, Winiger and Pieters were coding editing software which allowed Harrison’s hours of real-time footage to be pulled together and amalgamated with DeepDream. They were aided by some still-under-wraps tech called “DeepUI,” a tool designed by Winiger and Pieters to edit the seemingly random dreamscape of DeepDream.

For now, the pair remain tight-lipped on its wider ramifications. “The DeepUI was developed in the context of this project,” said Winiger. “And while it will be released as open-source soon, it’s ongoing, so we’re not discussing it in detail publicly.” What they will say is that advances in creative AI are moving forward at breakneck speed: “New approaches and technologies come out practically every week,” explained Winiger. “We see these approaches as an emerging revolution for creativity, which allows machines and humans to collaborate as equal partners.”

Where they take it next will be under the guise of Artificial Experience, an agency designed to bridge the technological and creative industries, which is currently in its preparation stages but promises big things for 2016. “While Artificial Intelligence gets the headlines, Artificial Experience (as a medium that patterns your thinking) is often invisible,” said Winiger. “AE is a distributed team, focused on artificial experience design that will bring applied machine learning to creative industries.”

Whether the wild and weird imagery of DeepDream continues to captivate us, or ends up as tomorrow’s digital equivalent of tie-dye, the concepts behind machine learning and their influence on the creative arts continue to grow apace. And as a filmmaker, Harrison remains excited for the future possibilities.
“As engineers and researchers continue to move this tech forward, you’ll start to see it more and more in all forms of the visual arts,” he said. “It just amazes the mind to think that you are watching the embryonic stage of true machine consciousness: of man and machine interfacing in art.”
