Monash, Australia

Lui W.L., Monash Vision Group
Conference proceedings: ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society | Year: 2012

Implanted visual prostheses provide bionic vision with very low spatial and intensity resolution compared with healthy human vision. Vision processing converts camera video to low-resolution imagery for bionic vision with the aim of preserving salient features such as edges. Transformative Reality extends and improves upon traditional vision processing in three ways. Firstly, a combination of visual and non-visual sensors is used to provide multi-modal data of a person's surroundings. This enables the sensing of features that are difficult to sense with a camera alone. Secondly, robotic sensing algorithms construct models of the world in real time. This enables the detection of complex features such as navigable empty ground or people. Thirdly, models are visually rendered so that visually complex entities such as people can be effectively represented in low resolution. Preliminary simulated prosthetic vision trials, in which a head-mounted display constrains a subject's vision to 25×25 binary phosphenes, suggest that Transformative Reality provides functional bionic vision for tasks such as indoor navigation, object manipulation and people detection in scenes where traditional processing is unusable.
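As context for the trials described above, the sketch below shows one plausible way to reduce a camera frame to a 25×25 binary phosphene pattern. The block-averaging and fixed threshold are illustrative assumptions, not the group's actual vision-processing pipeline.

```python
# Minimal sketch: grayscale frame -> 25x25 binary phosphene pattern.
# Block averaging and the fixed threshold are assumptions for
# illustration, not the pipeline described in the paper.
import numpy as np

def to_phosphenes(frame, grid=25, threshold=0.5):
    """Downsample a 2D float frame in [0, 1] to a grid x grid
    binary phosphene pattern."""
    h, w = frame.shape
    # Centre-crop to a square whose side is divisible by the grid size.
    side = min(h, w) // grid * grid
    y0, x0 = (h - side) // 2, (w - side) // 2
    crop = frame[y0:y0 + side, x0:x0 + side]
    # Average each block to get one intensity per phosphene, then binarise.
    block = side // grid
    pooled = crop.reshape(grid, block, grid, block).mean(axis=(1, 3))
    return pooled > threshold

# Example: a synthetic 480x640 frame with a bright right half;
# the output grid preserves the vertical edge.
frame = np.zeros((480, 640))
frame[:, 320:] = 1.0
print(to_phosphenes(frame).astype(int))
```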

Zhou K., Monash Vision Group | Zhou K., Monash University | Thouas G.A., University of Melbourne | Bernard C.C., Monash University | And 4 more authors.
ACS Applied Materials and Interfaces | Year: 2012

Electroactive scaffolds that are passively conductive and able to transmit applied electrical stimuli are of increasing importance for neural tissue engineering. Here, we report a process of rendering both 2D and 3D polymer scaffolds electrically conducting while also enhancing neuron attachment. Graphene-heparin/poly-l-lysine polyelectrolytes were assembled via layer-by-layer (LbL) deposition onto 2D surfaces and 3D electrospun nanofibers. The LbL coating technique employed in this work enables the electro- and biofunctionalization of complex 3D scaffold structures. LbL assembly was characterized by a steady mass increase during the in situ deposition process in 2D, with regular step changes in hydrophobicity. Uniform coverage of the graphene/polyelectrolyte coatings was also achieved on nanofibers, with hydrodynamic flow and post-thermal annealing playing an important role in controlling the sheet resistance of 2D surfaces and nanofibers. Cell culture experiments showed that both 2D and 3D graphene polyelectrolyte multilayers (PEMs) supported neuron cell adhesion and neurite outgrowth, with no appreciable cell death. This electroactive scaffold modification may therefore assist in neuronal regeneration, creating functional and biocompatible polymer scaffolds for electrical entrainment or biosensing applications. © 2012 American Chemical Society.

Li W.H., Monash Vision Group | Tang T.J.J., Monash Vision Group | Lui W.L.D., Monash Vision Group
2013 IEEE International Conference on Image Processing, ICIP 2013 - Proceedings | Year: 2013

Currently, most implanted visual prosthetic systems generate vision by translating sensor data from a head-worn camera into electrical stimulation of the human visual system. Unfortunately, the resulting bionic vision has low spatial resolution and limited dynamic range. This dramatically reduces the usefulness of bionic vision in many real-world scenarios. Historically, this problem has been treated as an immutable pathology. Recently, image processing has been proposed as a potential remedy to improve the usability of bionic vision. We explore another alternative: combining multiple sensing modalities and robotic sensing algorithms. This paper gives a top-level summary of ongoing research exploring this alternative. © 2013 IEEE.

Lui W.L.D., Monash University | Lui W.L.D., Monash Vision Group | Tang T.J.J., Monash University | Tang T.J.J., Monash Vision Group | And 3 more authors.
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2012

This paper presents a six-degree-of-freedom egomotion estimation method using Iterative Closest Point (ICP) for low-cost, low-accuracy range cameras such as the Microsoft Kinect. Instead of Euclidean coordinates, the method uses inverse depth coordinates, which better conform to the error characteristics of the raw sensor data. Novel inverse depth formulations of point-to-point and point-to-plane error metrics are derived as part of our implementation. The implemented system runs in real time at an average of 28 frames per second (fps) on a standard computer. Extensive experiments were performed to evaluate different combinations of error metrics and parameters. Results show that our system is accurate and robust across a variety of motion trajectories. The point-to-plane error metric was found to be the best at coping with large inter-frame motion while remaining accurate and maintaining real-time performance. © 2012 IEEE.
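For orientation, the sketch below backprojects a depth map to 3D points and solves one point-to-point ICP alignment step in ordinary Euclidean coordinates using the standard Kabsch/SVD solution. The paper's actual contribution, recasting these error metrics in inverse depth coordinates (u, v, 1/z) to match the Kinect's noise characteristics, is not reproduced here; the intrinsics and helper names are assumptions for illustration.

```python
# Sketch of the standard Euclidean point-to-point ICP building blocks,
# for context only; the paper derives inverse-depth variants instead.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Depth map (metres, 0 = invalid) -> N x 3 Euclidean points,
    assuming a pinhole camera with known intrinsics."""
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    return np.column_stack(((u - cx) * z / fx, (v - cy) * z / fy, z))

def icp_step(src, dst):
    """Best rigid transform (R, t) aligning corresponding point sets
    src -> dst, minimising the point-to-point error (Kabsch/SVD)."""
    ms, md = src.mean(axis=0), dst.mean(axis=0)
    H = (src - ms).T @ (dst - md)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = md - R @ ms
    return R, t
```

In a full ICP loop, correspondences would be re-estimated (for example by nearest-neighbour search) and `icp_step` repeated until the motion update falls below a tolerance.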
