Berlin, Germany

BERLIN, Germany, Nov. 09, 2016 (GLOBE NEWSWIRE) -- InterDigital (NASDAQ:IDCC), Fraunhofer Heinrich Hertz Institute HHI, and Core Network Dynamics (CND), three partners from the H2020 5G-PPP 5G-Crosshaul consortium, today announced the successful result of an extended, real-world deployment of an integrated fronthaul/backhaul network delivering 5G throughput and latency. The test, a first of its kind, sets the stage for a cost-effective, highly flexible 5G network architecture.

The results of the integrated millimeter wave (mmW) fronthaul/backhaul 5G Berlin Testbed were announced at the IEEE 5G Berlin Summit on November 2 and will be further presented at the 5G-PPP Global 5G Event, taking place today and tomorrow in Rome. The 5G-Crosshaul test was carried out over more than a month at the Fraunhofer Heinrich Hertz Institute in Berlin and delivered more than 1.2 Gbps of throughput at less than one millisecond of latency. Beyond the speed, the test’s integrated fronthaul/backhaul provides a working model for future 5G networks that will combine 4G architecture with a 5G fronthaul-based network edge. With this deployment, 5G radio network solutions can be implemented on commodity servers or even in the cloud – a major innovation that throws open the doors for new operator models.

The 5G Berlin Testbed is a 5G field trial of InterDigital’s EdgeLink™ 60 GHz solution, multiplexing both backhaul and CND’s Cloud-RAN next-generation fronthaul solution over an integrated mmW mesh transport network. The system is installed outdoors and has been running under real environmental conditions from the end of September through November. The trial has included both natural and induced link-failure events to test network resiliency.

“Millimeter wave technology will be a decisive cornerstone in bringing 5G forward to enhanced mobile broadband, harvesting new spectrum opportunities well above 6 GHz, ultra-dense deployments, and energy-efficient multi-gigabit transmission,” explains Dr. Thomas Haustein, Head of the Department for Wireless Communication and Networks at Fraunhofer HHI.

“The 5G Berlin Testbed will provide valuable information that can be used to help advance the evolving 5G standards and specifications. We are already adapting OpenEPC to support critical 5G requirements. These include a distributed core network, plus architectures to support C-RAN and the cloudification of the radio access network,” said Carsten Brinkschulte, CEO, Core Network Dynamics.

“Many companies have demonstrated systems that they qualify as ‘5G’ because of speed or latency characteristics, but this extended outdoor trial is the first example of a network edge architecture, tested in real-world conditions, that will be key to eventual 5G deployment,” said Alan Carlton, Vice President, InterDigital Europe. “Crosshaul’s major innovation may set the stage for a world where our definitions of what constitutes a network operator or infrastructure equipment are radically changed.”

5G-Crosshaul is an international project with 21 members aimed at developing integrated fronthaul and backhaul system solutions to support flexibility and unified management for 5G network architectures. To learn more about the project, visit http://5g-crosshaul.eu/.

InterDigital develops mobile technologies that are at the core of devices, networks, and services worldwide. We solve many of the industry's most critical and complex technical challenges, inventing solutions for more efficient broadband networks and a richer multimedia experience years ahead of market deployment. InterDigital has licenses and strategic relationships with many of the world's leading wireless companies. Founded in 1972, InterDigital is listed on NASDAQ and is included in the S&P MidCap 400® index. InterDigital is a registered trademark of InterDigital, Inc. EdgeLink is a trademark of InterDigital, Inc.
Innovations for the digital society of the future are the focus of research and development work at the Fraunhofer Heinrich Hertz Institute HHI. Fraunhofer HHI is a world leader in the development of mobile and optical communication networks and systems, as well as in the processing and coding of video signals. Together with international partners from research and industry, Fraunhofer HHI covers the whole spectrum of digital infrastructure – from fundamental research to the development of prototypes and solutions. www.hhi.fraunhofer.de

About Core Network Dynamics

Headquartered in Berlin, Core Network Dynamics develops and markets OpenEPC, a complete mobile network infrastructure in software. Target markets include: carriers designing next-generation mobile networks using SDN/NFV; first-responder and public-safety organizations requiring a secure private LTE network compatible with off-the-shelf smartphones; companies operating in remote areas where mobile coverage is patchy or non-existent; and operators evaluating advanced Mobile Edge Computing (MEC) concepts to implement distributed mobile networks for IoT applications. www.corenetdynamics.com


BARCELONA, Spain, Feb. 27, 2017 (GLOBE NEWSWIRE) -- MOBILE WORLD CONGRESS 2017 -- The biggest trade event in the mobile industry is abuzz with 5G news, and InterDigital, Inc. (NASDAQ:IDCC) has been invited to present its technology at Mobile World Congress’ headline demo event, a mainstage feature of live, interactive demonstrations by industry R&D leaders. ‘5G Impact’ will showcase network technology, innovative services, and life-changing applications through live, interactive demonstrations, followed by a panel discussion on the impact of 5G and enhanced senses.

InterDigital will demonstrate demanding low-latency traffic in a remote-surgery game application running over 5G-Crosshaul technology with the EdgeLink™ 60 GHz platform, a transport technology that aims to solve the architectural challenges of 5G. The demos by InterDigital and others will be followed by a panel discussion led by Jennifer Pigg Clark, Vice President, Network Research, 451 Research.

5G-Crosshaul is an international project with 21 members aimed at developing integrated fronthaul and backhaul system solutions to support flexibility and unified management for 5G network architectures. In November 2016, InterDigital, Fraunhofer Heinrich Hertz Institute HHI, and Core Network Dynamics (CND), three partners from the H2020 5G-PPP 5G-Crosshaul consortium, announced the successful result of an extended, real-world deployment of the system, a first of its kind. To learn more about the project, visit http://5g-crosshaul.eu/.

The conference session will take place on Thursday, March 2 from 11:30 a.m. to 1:00 p.m. CET in Hall 4, Auditorium 5. For more information on the Mobile World Congress conference agenda, please visit https://www.mobileworldcongress.com/start-here/agenda. Attendees of Mobile World Congress can see the Crosshaul demo, and other 5G and IoT demos, at InterDigital’s pavilion in Hall 7, Stand 7C61.

InterDigital develops mobile technologies that are at the core of devices, networks, and services worldwide. We solve many of the industry's most critical and complex technical challenges, inventing solutions for more efficient broadband networks and a richer multimedia experience years ahead of market deployment. InterDigital has licenses and strategic relationships with many of the world's leading wireless companies. Founded in 1972, InterDigital is listed on NASDAQ and is included in the S&P MidCap 400® index. InterDigital is a registered trademark of InterDigital, Inc. EdgeLink is a trademark of InterDigital, Inc.



News Article | September 7, 2016
Site: phys.org

Scientists at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, HHI in Berlin have developed a method by which the realistic image of a person can be transmitted into a virtual world; just like in science fiction movies, the image appears full sized and three-dimensional. The image can be viewed from different directions, and the viewer can even walk around it – just like in the movies.

Until now, this was not possible; even virtual reality (VR) still has its limits. People can be represented by artificial three-dimensional models (so-called avatars) that the viewer sees through VR goggles, but these artificial figures have neither a lifelike appearance nor natural movement. Another option is to play the video image of a person, in frontal view, in the VR goggles. However, the viewer cannot walk around this image, so the whole scene looks artificial as one moves through the virtual world: the person always turns his or her two-dimensional front to the viewer.

In contrast, the HHI researchers have perfected the three-dimensional impression. To do so, they have developed a camera system that films the person. The core of this system is a stereo camera: just as people do with their two eyes, the camera records the person with two lenses. This stereoscopic vision allows distances to be estimated well, because the two lenses look at an object from slightly different angles; the result is a three-dimensional impression.

Recording a person in detail from all directions takes more than one camera. "We are currently using more than 20 stereo cameras to map a human," says Oliver Schreer, Head of the Research Group "Immersive Media & Communication" at HHI. Each camera only captures a part of the person; the challenge is to merge the individual camera images so that a realistic overall picture is produced. The system includes more than just the camera technology.
The researchers have developed algorithms that quickly extract depth information from the stereoscopic camera images – a prerequisite for calculating the 3D form of the captured person. The computer calculates a virtual model of the human, which is then transferred into the virtual scene. The cameras capture the surface shape in great detail, so that even small wrinkles, for example on the person's clothes, can be shown, giving the model a natural and realistic appearance.

"In developing these algorithms, special care has been taken to ensure they work efficiently and fast, so the movements of dialogue partners can very quickly be converted into a dynamic model," Schreer says, since this is the only way the movements will look natural. The images from a single camera pair can be processed in real time; fusing the 3D information from the various camera images takes a few seconds. Even so, the illusion is already seamless: the system transmits the three-dimensional dynamic model of a person rapidly into virtual reality. A person can move freely in a dedicated capture area, and the virtual image portrays every gesture and movement realistically.

"Our goal is that in the future a realistic image copy of a human is able to directly interact with the virtual world – for example, to let it grab virtual objects," says Schreer. In the future, the new camera system is planned to be used in other application areas too. For example, the researchers are working on a virtual video-conferencing application, and the system could also be used for infotainment: instead of a passive, frontal viewing experience, a television viewer could be directly involved in a movie scene by means of VR goggles. The viewer would not only see a three-dimensional image of the scene but could virtually walk around inside it and, for example, become part of the adventures of his or her science fiction heroes.
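The depth extraction the researchers describe rests on the textbook relation for a rectified stereo pair: a point's distance Z follows from the camera's focal length f, the baseline B between the two lenses, and the disparity d (the horizontal shift of the same point between the left and right image) as Z = f·B/d. A minimal sketch of that relation – the camera parameters below are illustrative, not those of the HHI rig:

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d.
# focal_px: focal length in pixels; baseline_m: distance between the two
# lenses in meters; disparity_px: pixel shift of the same scene point
# between the left and right image. Values here are illustrative only.

def depth_from_disparity(disparity_px: float,
                         focal_px: float = 1400.0,
                         baseline_m: float = 0.10) -> float:
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A point 2 m away projects with disparity f * B / Z = 1400 * 0.10 / 2
# = 70 px, so inverting at 70 px recovers a depth of about 2 m:
print(depth_from_disparity(70.0))
```

Nearby points shift more between the two views than distant ones, which is why disparity falls off as 1/Z and why the baseline between the lenses bounds the useful depth range of each stereo pair.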
"We can also imagine installing the camera system at different locations in small studios," says Schreer. "Film producers could use it to transfer the movement of actors into scenes more easily than ever before." Until now, that has been very costly. In general, an actor's movements are recorded using motion tracking: the face and body of the actor are marked with small dots, the computer tracks the movement of these points, and the motion is transferred to a computer-generated artificial image of the actor – for example, an action star jumping from skyscraper to skyscraper. With individual marker points, however, motion tracking can detect movements – and especially fine facial expressions – only very inaccurately, or only with very high technical effort. That means a lot of post-processing for the computer-graphics artists until the scene looks realistic. "With our camera system, however, our goal is to capture and represent a person and their movement in much greater detail in the future," says Schreer.

The researchers are currently improving their camera system and the accompanying analysis software. Whether they will make these available as a service or license them to production companies is not yet decided.


News Article | February 23, 2017
Site: www.scientificcomputing.com

Sorting photos on the computer used to be a tedious job. Today, you simply click on face recognition and instantly get a selection of photos of your daughter or son. Computers have become very good at analyzing large volumes of data and searching for certain structures, such as faces in images. This is made possible by neural networks, which have developed into an established and sophisticated IT analysis method (see box, "How neural networks function").

The problem: not even researchers currently know exactly how neural networks function step by step, or why they reach one result rather than another. Neural networks are, in a sense, black boxes – computer programs that people feed values into and that reliably return results. If you want to teach a neural network to recognize cats, for instance, you instruct the system by feeding it thousands of cat pictures. Just like a small child that slowly learns to distinguish cats from dogs, the neural network learns automatically.

"In many cases, though, researchers are less interested in the result and far more interested in what the neural network actually does – how it reaches decisions," says Dr. Wojciech Samek, head of the Machine Learning Group at Fraunhofer Heinrich Hertz Institute HHI in Berlin. So Samek and his team, in collaboration with colleagues from TU Berlin, developed a method that makes it possible to watch a neural network think.

This is important, for instance, in detecting diseases. We already have the capability today to feed patients' genetic data into computers – or neural networks – which then analyze the probability of a patient having a certain genetic disorder. "But it would be much more interesting to know precisely which characteristics the program bases its decisions on," says Samek. It could be certain genetic defects the patient has – and these, in turn, could be a possible target for a cancer treatment tailored to the individual patient.
The researchers' method allows them to watch neural networks work in reverse: they step through the program backwards, starting from the result. "We can see exactly where a certain group of neurons made a certain decision, and how strongly this decision impacted the result," says Samek.

The researchers have already demonstrated – multiple times, and impressively – that the method works. For instance, they compared two programs that are publicly available on the Internet and that are both capable of recognizing horses in images. The result was surprising: the first program actually recognized the horses' bodies. The second one, however, focused on the copyright symbols on the photos, which pointed to forums for horse lovers or riding and breeding associations – enabling the program to achieve a high success rate even though it had never learned what horses look like. "So you can see how important it is to understand exactly how such a network functions," says Samek.

This knowledge is also of particular interest to industry. "It is conceivable, for instance, that the operating data of a complex production plant could be analyzed to deduce which parameters impact product quality or cause it to fluctuate," he says. The invention is also interesting for many other applications that involve the neural analysis of large or complex data volumes. "In another experiment, we were able to show which parameters a network uses to decide whether a face appears young or old." According to Samek, banks have even been using neural networks for a long time to analyze customers' creditworthiness. To do this, large volumes of customer data are collected and evaluated by a neural network. "If we knew how the network reaches its decision, we could reduce the data volume right from the start by selecting the relevant parameters," he says. This would certainly be in the customers' interests, too.
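The backwards pass described above can be sketched in the spirit of layer-wise relevance propagation, the explanation technique published by the HHI/TU Berlin group: the output score for one class is redistributed layer by layer onto the neurons that produced it, in proportion to each neuron's contribution, until every input pixel or feature carries a relevance value. The toy network and weights below are illustrative (random, biases omitted), and only the simple epsilon redistribution rule is shown:

```python
import numpy as np

# Toy two-layer ReLU network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (3 features) -> hidden (4 units)
W2 = rng.normal(size=(2, 4))   # hidden (4 units)   -> output (2 classes)

def forward(x):
    h = np.maximum(0.0, W1 @ x)    # hidden activations (biases omitted)
    return h, W2 @ h               # hidden layer and output scores

def lrp_epsilon(x, target, eps=1e-12):
    """Redistribute the target output score back onto the inputs.

    Epsilon rule: each neuron passes its relevance downwards in
    proportion to the contribution z = w * a, with a tiny eps
    stabilising near-zero denominators.
    """
    h, y = forward(x)
    # Output layer: all relevance sits on the chosen class score.
    z2 = W2[target] * h                        # contributions to y[target]
    R_h = z2 * (y[target] / (z2.sum() + eps * np.sign(z2.sum())))
    # Hidden layer: redistribute each hidden relevance over the inputs.
    z1 = W1 * x                                # z1[k, i] = W1[k, i] * x[i]
    denom = z1.sum(axis=1) + eps * np.sign(z1.sum(axis=1))
    R_x = (z1 * (R_h / denom)[:, None]).sum(axis=0)
    return R_x, y

x = np.array([1.0, -0.5, 2.0])
R_x, y = lrp_epsilon(x, target=0)
# Relevance is (approximately) conserved: the per-input relevances sum
# back to the output score being explained.
print(np.allclose(R_x.sum(), y[0]))            # True
```

Reading off which inputs carry the most relevance is exactly the kind of check that exposed the second horse classifier: for it, the relevance would have concentrated on the copyright symbol rather than on the horse.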
At the CeBIT trade fair in Hannover from March 20 to 24, 2017, Samek’s team of researchers will demonstrate how they use their software to analyze the black boxes of neural networks – and how these networks can deduce a person’s age or sex from their face, or recognize animals.




Fujitsu Laboratories and the Fraunhofer Heinrich Hertz Institute HHI today announced the development of a new method for simultaneously converting the wavelengths of wavelength-division-multiplexed (WDM) signals – a capability required at optical relay nodes in future WDM optical networks – and have successfully tested the method with high-bandwidth signal transmission in the range of 1 Tbps.
