Image Metrics Ltd.

Manchester, United Kingdom


Kendrick C., Manchester Metropolitan University | Tan K., Manchester Metropolitan University | Williams T., Image Metrics Ltd | Yap M.H., Manchester Metropolitan University
Proceedings - 12th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2017 - 1st International Workshop on Adaptive Shot Learning for Gesture Understanding and Production, ASL4GUP 2017, Biometrics in the Wild, Bwild 2017, Heterogeneous Face Recognition, HFR 2017, Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation, DCER and HPE 2017 and 3rd Facial Expression Recognition and Analysis Challenge, FERA 2017 | Year: 2017

Annotation of data is fundamental to training any modern facial tracking system. Machine learning methods, including deep learning, require large amounts of pre-annotated data to produce the results found in many state-of-the-art systems. In 2D (images), annotation relies on the texture information of the face to locate the features; in 3D (models), the task becomes more complex, and conventional 2D approaches are ineffective because facial landmarks are difficult to identify accurately without texture information. There has been little research into the accuracy of methods for annotating 3D facial models. This paper proposes a method for annotating 3D models that exploits texture information by aligning the model to a 2D image, and compares its accuracy and throughput against the conventional method of direct 3D annotation. For evaluation, 16 non-expert volunteers were recruited and instructed to annotate three models using both the proposed method and the conventional method. The resultant annotations were compared with ground-truth data generated by an experienced annotator. The results demonstrate a significant improvement in throughput for the proposed annotation method over the conventional approach, with no significant difference in accuracy. The evaluation also shows that the conventional method fails to identify all the facial landmarks reliably. The proposed method will be made freely available online. © 2017 IEEE.
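The geometric core of such an approach, mapping a landmark clicked on an aligned 2D image back onto the 3D surface, can be sketched as a projection problem. The fragment below is an illustrative assumption of how that mapping might work, not the paper's implementation; the pinhole camera parameters, toy mesh, and function names are invented for the example.

```python
import numpy as np

# Hypothetical illustration of texture-guided 3D annotation: once the 3D model
# is aligned to a 2D photograph, a landmark clicked on the image can be mapped
# back to the model by finding the mesh vertex whose projection lies closest
# to the clicked pixel. Not the paper's code; camera parameters are assumed.

def project(vertices, K, R, t):
    """Project Nx3 model-space vertices to pixel coordinates (pinhole camera)."""
    cam = vertices @ R.T + t          # model -> camera coordinates
    uvw = cam @ K.T                   # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> Nx2 pixels

def annotate_3d(click_xy, vertices, K, R, t):
    """Return index and position of the 3D vertex nearest a 2D landmark click."""
    px = project(vertices, K, R, t)
    i = np.argmin(np.linalg.norm(px - np.asarray(click_xy), axis=1))
    return i, vertices[i]

# Toy usage: a 3-vertex "mesh", identity rotation, camera 2 units back.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
verts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
idx, point = annotate_3d((320, 240), verts, K, R, t)
print(idx, point)   # vertex 0 projects to the image centre
```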


Trademark
Faceware Technologies Inc. and Image Metrics Inc. | Date: 2012-10-23

Computer software for facial animation for games, films, and other entertainment industry use.


Balamoody S., University of Manchester | Balamoody S., University of Liverpool | Williams T.G., University of Manchester | Williams T.G., Image Metrics Ltd. | And 11 more authors.
Skeletal Radiology | Year: 2013

Objective: The transverse relaxation time (T2) in MR imaging has been identified as a potential biomarker of hyaline cartilage pathology. This study investigates whether MR assessments of T2 are comparable between 3-T scanners from three different vendors. Design: Twelve subjects with symptoms of knee osteoarthritis and one or more risk factors had their knee scanned on each of the three vendors' scanners, located at three sites in the UK. MR data acquisition was based on the United States National Institutes of Health Osteoarthritis Initiative protocol. Measures of cartilage T2 and R2 (the inverse of T2) were computed for precision-error assessment. Intrascanner reproducibility was also assessed with a phantom (all three scanners) and a cohort of five subjects (one scanner only). Results: Whole-Organ Magnetic Resonance Imaging Score (WORMS) semiquantitative cartilage grades ranged from minimal to advanced degradation. Intrascanner R2 root-mean-square coefficients of variation (RMSCOV) were low, within the range 2.6 to 6.3% for femoral and tibial regions. For one scanner pair, mean T2 differences ranged from -1.2 to 2.8 ms, with no significant difference observed for the medial tibia and patella regions (at the p < 0.05 level). T2 values from the third scanner were systematically lower, producing interscanner mean T2 differences within the range 5.4 to 10.0 ms. Conclusion: Significant interscanner cartilage T2 differences were found and should be accounted for before data from scanners of different vendors are compared. © 2012 ISS.
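For context, cartilage T2 is conventionally estimated per voxel by fitting the mono-exponential decay S(TE) = S0 exp(-TE/T2) to multi-echo signals, with R2 = 1/T2, and repeat-scan precision is summarized by a root-mean-square coefficient of variation. The sketch below illustrates both calculations on synthetic numbers; it assumes the standard mono-exponential model and is not the study's pipeline.

```python
import numpy as np

# Hedged sketch of T2 mapping and precision assessment, assuming the
# mono-exponential signal model S(TE) = S0 * exp(-TE / T2). A log-linear
# least-squares fit recovers T2 per voxel; R2 = 1/T2. Not the study's code.

def fit_t2(echo_times_ms, signals):
    """Fit T2 (ms) from multi-echo magnitudes via log-linear least squares."""
    slope, _ = np.polyfit(echo_times_ms, np.log(signals), 1)
    return -1.0 / slope  # slope of log-signal vs TE is -1/T2

def rms_cov(repeat_measurements):
    """Root-mean-square coefficient of variation (%) over repeated scans;
    one row per subject/region, one column per repeat."""
    m = np.asarray(repeat_measurements, dtype=float)
    cov = m.std(axis=1, ddof=1) / m.mean(axis=1)
    return 100.0 * np.sqrt(np.mean(cov ** 2))

# Toy usage: simulate a voxel with T2 = 35 ms sampled at 7 echo times.
te = np.arange(10, 80, 10)               # echo times in ms
signal = 1000.0 * np.exp(-te / 35.0)
print(round(fit_t2(te, signal), 1))      # ~35.0 ms

# R2 repeat measurements (s^-1) for three regions, two scans each.
print(round(rms_cov([[28.0, 29.1], [31.5, 30.8], [25.2, 26.0]]), 2))
```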


Patent
Image Metrics Ltd | Date: 2011-08-22

Computer-implemented methods and computer program products for automatically transferring expressions between rigs with consistent joint structure, and for automatically transferring skin weights between different skin meshes based on joint positioning. A method is provided for transferring an expression from a plurality of source rigs to a target rig, where each rig characterizes an animated character and is itself characterized by a set of joints and a skin mesh having a plurality of vertices, each vertex carrying a matrix of weights that relates the vertex's response to the movement of associated joints. A set of offsets is calculated between the joint positions of a goal expression of each source rig and those of the source rig's neutral expression. A scaling transformation is then applied to produce a scaled set of offsets, which are added to the neutral expression of the target rig. Methods are also provided for transferring a set of skin weights between the source rigs and the target rig.
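The transfer step described above reduces to simple arithmetic on joint positions: subtract the source rig's neutral pose from its goal pose, scale the resulting offsets, and add them to the target rig's neutral pose. The sketch below illustrates that arithmetic; the function name and the scalar-scale convention are assumptions for the example, not the patent's claims.

```python
import numpy as np

# Illustrative sketch of offset-based expression transfer. Rigs are reduced
# to arrays of joint positions (J x 3) sharing a consistent joint ordering.
# The choice of scale factor is an assumption made for this demo.

def transfer_expression(src_neutral, src_goal, tgt_neutral, scale=1.0):
    """Apply the source rig's goal-expression joint offsets, scaled, to the
    target rig's neutral pose. `scale` may be a scalar or a per-axis array."""
    offsets = src_goal - src_neutral      # joint deltas defining the expression
    return tgt_neutral + scale * offsets  # scaled deltas onto the target rig

# Toy usage: a 2-joint rig whose target is twice the source's size, so the
# offsets are scaled by 2 to keep the expression proportionate.
src_neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
src_smile   = np.array([[0.0, 0.1, 0.0], [1.0, 0.2, 0.0]])
tgt_neutral = src_neutral * 2.0
print(transfer_expression(src_neutral, src_smile, tgt_neutral, scale=2.0))
# [[0.  0.2 0. ]
#  [2.  0.4 0. ]]
```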


Patent
Image Metrics Inc. | Date: 2012-08-30

Systems and methods are disclosed for performing voice personalization of video content. The personalized media content may include a composition of a background scene having a character, head model data representing an individualized three-dimensional (3D) head model of a user, audio data simulating the user's voice, and a viseme track containing instructions for causing the individualized 3D head model to lip-sync the words contained in the audio data. The audio data simulating the user's voice can be generated using a voice transformation process. In certain examples, the audio data is based on a text input or selection made by the user (e.g., via a telephone or computer), or on a textual dialogue of a background character.
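A viseme track of the kind described is, in essence, a timed sequence of mouth-shape cues derived from the audio. The sketch below shows one plausible representation driving per-frame blendshape weights; the cue format, viseme labels, and cross-fade are illustrative assumptions, not the patent's specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of a viseme track: timed mouth-shape cues that drive a
# 3D head model's blendshapes during playback. Labels, timings, and the
# linear cross-fade are assumptions made for this example.

@dataclass
class VisemeCue:
    start: float   # seconds into the audio track
    end: float
    viseme: str    # mouth-shape label, e.g. "AH", "M", "F"

def blend_weights(track, t):
    """Return {viseme: weight} at time t, ramping linearly into each cue."""
    weights = {}
    for cue in track:
        if cue.start <= t < cue.end:
            fade = min(1.0, (t - cue.start) / 0.05)  # 50 ms attack ramp
            weights[cue.viseme] = fade
    return weights

# Toy usage: "ma" -> a bilabial closure followed by an open-mouth shape.
track = [VisemeCue(0.00, 0.10, "M"), VisemeCue(0.10, 0.35, "AH")]
print(blend_weights(track, 0.02))   # {'M': 0.4}  ramping in
print(blend_weights(track, 0.20))   # {'AH': 1.0} fully open
```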
