Paris, France


PARIS & NEW YORK--(BUSINESS WIRE)--A2iA (@A2iA), a world-leading developer of artificial intelligence and machine-learning based text recognition, information extraction and intelligent document classification toolkits, today announced the availability of a2ia TextReader™ V5.0. A software toolkit, a2ia TextReader enables full lines of printed and handwritten text to be transcribed without prior segmentation into characters or words. With this new version, global enterprises and business processing organizations can address additional languages, including Simplified and Traditional Chinese, as well as Russian, with support for Cyrillic characters. Currently supported Western languages – English, French, Spanish, Portuguese, German and Italian – also see an increase in performance, boasting on average 14% higher accuracy rates for cursive handwriting.

“A2iA is committed to addressing global market demands, including the growing need to process mixed workflows in multiple languages,” said Jean-Louis Fages, A2iA President and Chairman of the Board. “a2ia TextReader’s simple plug-and-play features enable organizations to gain access to all data quickly and with the highest levels of accuracy.”

Award-winning with research and development at its core, A2iA, Artificial Intelligence and Image Analysis (www.a2ia.com), is a science and R&D driven software company with deep roots in artificial intelligence, machine learning and neural networks. With simple, easy-to-use and intuitive toolkits, A2iA delivers add-on features to speed automation, simplify customer engagement and quickly capture all types of printed and handwritten data from documents – whether captured by a desktop scanner or mobile device. By enhancing solutions from systems integrators and independent software vendors, A2iA allows complex and cursive data from all types of documents to become part of a structured database, making it searchable and reportable, with the same level of flexibility as printed or digital data. For more information, visit www.a2ia.com or call +1 917-237-0390 within the Americas, or +33 1 44 42 00 80 within EMEA, India or Asia.


New Version of a2ia TextReader Boasts Higher Accuracy Rates for Cursive Handwriting for Western Languages

New Version of a2ia TextReader™ Supports Chinese and Cyrillic Characters, and Boasts Improved Read Rates for Western Languages

New York, NY, March 01, 2017 --( PR.com )-- A2iA (@A2iA), a world-leading developer of artificial intelligence and machine-learning based text recognition, information extraction and intelligent document classification toolkits, today announced the availability of a2ia TextReader™ V5.0. A software toolkit, a2ia TextReader enables full lines of printed and handwritten text to be transcribed without prior segmentation into characters or words. With this new version, global enterprises and business processing organizations can address additional languages, including Simplified and Traditional Chinese, as well as Russian, with support for Cyrillic characters. Currently supported Western languages – English, French, Spanish, Portuguese, German and Italian – also see an increase in performance, boasting on average 14% higher accuracy rates for cursive handwriting.

a2ia TextReader V5.0 Boasts:
RNN-based software toolkit (SDK). No customization required.
Full text transcription from machine printed, hand printed, and cursive handwritten documents that contain alpha and numeric data.
Transcription results can be applied to existing software solutions, including third-party classification and/or extraction engines.
Support for documents written in: English, French, Spanish, Portuguese, German, Italian, Arabic, Chinese (Simplified and Traditional), Russian.
14% improved accuracy rates on cursive handwriting for Western languages.

“A2iA is committed to addressing global market demands, including the growing need to process mixed workflows in multiple languages,” said Jean-Louis Fages, A2iA President and Chairman of the Board. “a2ia TextReader’s simple plug-and-play features enable organizations to gain access to all data quickly and with the highest levels of accuracy.”

About A2iA
Award-winning with research and development at its core, A2iA, Artificial Intelligence and Image Analysis (www.a2ia.com), is a science and R&D driven software company with deep roots in artificial intelligence, machine learning and neural networks. With simple, easy-to-use and intuitive toolkits, A2iA delivers add-on features to speed automation, simplify customer engagement and quickly capture all types of printed and handwritten data from documents – whether captured by a desktop scanner or mobile device. By enhancing solutions from systems integrators and independent software vendors, A2iA allows complex and cursive data from all types of documents to become part of a structured database, making it searchable and reportable, with the same level of flexibility as printed or digital data. For more information, visit www.a2ia.com or call +1 917-237-0390 within the Americas, or +33 1 44 42 00 80 within EMEA, India or Asia.

Media Inquiries:
A2iA Communications
Marketing@a2ia.com
Americas: +1 917.237.0390
EMEA, India, APAC: +33 (0)1 44 42 00 80


News Article | February 27, 2017
Site: www.businesswire.com

PARIS & NEW YORK--(BUSINESS WIRE)--A2iA (@A2iA), a world-leading developer of artificial intelligence and machine-learning based text recognition, information extraction and intelligent document classification toolkits, today announced the release of a2ia TextReader™ V5.0. The a2ia TextReader software toolkit recognizes printed and handwritten text and transcribes it into digital form, with no need to segment the text into characters or words beforehand. This new version lets multinational enterprises and business organizations handle additional languages, including Simplified and Traditional Chinese, and, thanks to its support for Cyrillic characters, Russian. The Western languages currently supported – English, French, Spanish, Portuguese, German and Italian – see improved recognition performance, with accuracy on cursive handwriting up by 14% on average.

Advantages of a2ia TextReader V5.0:
A software toolkit (SDK) based on recurrent neural networks (RNN). No customization required.
Full-text transcription of machine-printed, hand-printed, and cursive handwritten documents containing alphabetic and numeric content.
Transcription results can be applied to existing software solutions, including third-party classification and/or extraction engines.
Support for documents written in the following languages: English, French, Spanish, Portuguese, German, Italian, Arabic, Chinese (Simplified and Traditional), and Russian.
14% higher recognition accuracy for Western-language cursive handwriting.

A2iA President and Chairman of the Board Jean-Louis Fages (Je


Bianne-Bernard A.-L., A2iA SA | Bianne-Bernard A.-L., Telecom ParisTech | Menasri F., A2iA SA | Al-Hajj Mohamad R., Telecom ParisTech | And 4 more authors.
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2011

This study aims at building an efficient word recognition system resulting from the combination of three handwriting recognizers. The main component of this combined system is an HMM-based recognizer which considers dynamic and contextual information for a better modeling of writing units. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. Such clustering enables us to reduce the total number of models and Gaussian densities by a factor of 10. We then apply this modeling to the recognition of handwritten words. Experiments are conducted on three publicly available databases based on Latin or Arabic languages: Rimes, IAM, and OpenHaRT. The results obtained show that contextual information embedded with dynamic modeling significantly improves recognition. © 2011 IEEE.
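
To make the state-tying idea concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation): each context-dependent character unit contributes pooled feature statistics for one HMM state, and a greedy tree splits the pool with whichever expert question (global or precise) most reduces the within-cluster spread; all units that land in the same leaf share one tied model. The question set, statistics and stopping thresholds below are invented for illustration.

import numpy as np

# Hypothetical context-dependent unit: centre character with left/right neighbours,
# carrying pooled feature statistics (mean vector, frame count) for one HMM state.
ASCENDERS = set("bdfhklt")
DESCENDERS = set("gjpqy")

# "Global" questions produce coarse splits; "precise" questions isolate one neighbour.
QUESTIONS = [
    ("left_is_ascender",   lambda u: u["left"] in ASCENDERS),
    ("right_is_descender", lambda u: u["right"] in DESCENDERS),
    ("left_is_vowel",      lambda u: u["left"] in "aeiou"),
] + [(f"left_is_{c}", (lambda c: lambda u: u["left"] == c)(c)) for c in "aeo"]

def sse(units):
    """Spread of the pooled state means around their weighted centroid."""
    means = np.array([u["mean"] for u in units])
    counts = np.array([u["count"] for u in units], dtype=float)
    centroid = (means * counts[:, None]).sum(0) / counts.sum()
    return float((counts[:, None] * (means - centroid) ** 2).sum())

def build_tree(units, min_count=50, min_gain=1e-3):
    """Greedy state-tying: split by the question with the largest spread reduction."""
    total = sum(u["count"] for u in units)
    best = None
    for name, q in QUESTIONS:
        yes = [u for u in units if q(u)]
        no = [u for u in units if not q(u)]
        if not yes or not no:
            continue
        gain = sse(units) - sse(yes) - sse(no)
        if best is None or gain > best[0]:
            best = (gain, name, yes, no)
    if best is None or best[0] < min_gain or total < 2 * min_count:
        return {"leaf": units}  # every unit in this node shares one tied state
    gain, name, yes, no = best
    return {"question": name,
            "yes": build_tree(yes, min_count, min_gain),
            "no": build_tree(no, min_count, min_gain)}

In such a sketch, global questions like left_is_ascender tend to be chosen near the root and yield the larger clusters described above, while precise single-character questions refine the tree further down.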


Messina R., A2iA S.A. | Kermorvant C., A2iA S.A.
Proceedings - 11th IAPR International Workshop on Document Analysis Systems, DAS 2014 | Year: 2014

Hybrid statistical grammars at both the word and character levels can be used to perform open-vocabulary recognition. This is usually done by allowing a special unknown-word symbol in the word-level grammar and dynamically replacing it by a (long) character-level n-gram, as the full transducer does not fit in the memory of most current computers. We present a modification of a finite-state-transducer (FST) n-gram that enables the creation of a static transducer, i.e. when it is not possible to perform on-demand composition. By combining paths in the 'LG' transducer (the composition of lexicon and n-gram), making it over-generative with respect to the n-grams observed in the corpus, it is possible to reduce the number of actual occurrences of the character-level grammar, so that the resulting transducer fits in the memory of practical machines. We evaluate this model for handwriting recognition using the RIMES and IAM databases. We study its effect on the vocabulary size and show that this model is competitive with state-of-the-art solutions. © 2014 IEEE.
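
The open-vocabulary idea itself, a word-level grammar whose unknown-word symbol is expanded by a character-level grammar, can be sketched without any transducer machinery. The toy scorer below is an illustrative assumption (plain Python bigrams rather than the paper's FST construction); it shows how an out-of-lexicon word is scored by spelling it out with the character model.

import math
from collections import defaultdict

class Bigram:
    """Tiny add-one-smoothed bigram model over token sequences."""
    def __init__(self, sequences):
        self.uni, self.bi, self.vocab = defaultdict(int), defaultdict(int), set()
        for seq in sequences:
            prev = "<s>"
            for tok in seq:
                self.vocab.add(tok)
                self.uni[prev] += 1
                self.bi[(prev, tok)] += 1
                prev = tok

    def logp(self, prev, tok):
        v = len(self.vocab) + 1
        return math.log((self.bi[(prev, tok)] + 1) / (self.uni[prev] + v))

class HybridLM:
    """Word-level bigram whose <unk> token is expanded by a character-level bigram."""
    def __init__(self, sentences, lexicon):
        self.lexicon = set(lexicon)
        mapped = [[w if w in self.lexicon else "<unk>" for w in s] for s in sentences]
        self.word_lm = Bigram(mapped)
        self.char_lm = Bigram([list(w) + ["</w>"] for s in sentences for w in s])

    def score(self, sentence):
        logp, prev = 0.0, "<s>"
        for w in sentence:
            tok = w if w in self.lexicon else "<unk>"
            logp += self.word_lm.logp(prev, tok)
            if tok == "<unk>":
                # spell the out-of-vocabulary word with the character model
                cprev = "<s>"
                for c in list(w) + ["</w>"]:
                    logp += self.char_lm.logp(cprev, c)
                    cprev = c
            prev = tok
        return logp

# Example: "dog" is outside the lexicon and is scored character by character.
lm = HybridLM([["the", "cat", "sat"], ["the", "cat", "ran"]], lexicon={"the", "cat", "sat", "ran"})
print(lm.score(["the", "dog", "sat"]))

In the paper this replacement is compiled into a single static transducer; the sketch only mirrors the scoring behaviour, not the memory-saving construction.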


The systems and methods of the present disclosure use a mobile device equipped with a camera to capture and preprocess images of objects including financial documents, financial cards, and identification cards, and to recognize information in the images of the objects. The methods include detecting quadrangles in images of an object in an image data stream generated by the camera, capturing a first image, transforming the first image, binarizing the transformed image, recognizing information in the binarized image, and determining the validity of the recognized information. The method also includes communicating with a server of a financial institution or other organization to determine the validity of the recognized information. The mobile device may include a camera, a display to display an image data stream and captured images, a memory to store a configuration file including parameters for the preprocessing and recognition functions, captured images, and software, and a communication unit to communicate with a server of the financial institution or other organization.
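
A rough sketch of the capture, rectification and binarization steps is given below, using OpenCV in Python. The library choice, thresholds and output size are assumptions for illustration, not the implementation claimed in the patent; the binarized output would then be handed to a recognition engine.

import cv2
import numpy as np

def order_corners(quad):
    """Order corners as top-left, top-right, bottom-right, bottom-left."""
    s = quad.sum(axis=1)
    d = np.diff(quad, axis=1).ravel()
    return np.float32([quad[np.argmin(s)], quad[np.argmin(d)],
                       quad[np.argmax(s)], quad[np.argmax(d)]])

def find_document_quad(frame):
    """Return the largest 4-point contour in the frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best, best_area = None, 0.0
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        area = cv2.contourArea(approx)
        if len(approx) == 4 and area > best_area:
            best, best_area = approx.reshape(4, 2).astype(np.float32), area
    return best

def rectify_and_binarize(frame, quad, out_w=1000, out_h=630):
    """Warp the detected quadrangle to a rectangle and binarize it for recognition."""
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(order_corners(quad), dst)
    warped = cv2.warpPerspective(frame, M, (out_w, out_h))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 10)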


The systems and methods of the present disclosure enable a user to use a mobile device to automatically capture a high resolution image of a rectangular object. The methods include capturing a low resolution image of the rectangular object and detecting edges of the rectangular object in the low resolution image, where the edges form a quadrangle, calculating a coordinate of each corner of the quadrangle, calculating an average coordinate of each corner of the quadrangle in a most recent predetermined number of low resolution images, calculating a dispersion of each corner of the quadrangle in the most recent predetermined number of low resolution images from a corresponding coordinate of each calculated average coordinate, determining whether the dispersion of each corner of the quadrangle is less than a predetermined value, capturing a high resolution image of the rectangular object when it is determined that the dispersion of each corner of the quadrangle is less than the predetermined value, and geometrically transforming the quadrangle of the rectangular object in the high resolution image into a rectangle.
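
The corner-stability trigger can be sketched as follows; the window length of 8 frames and the 3-pixel dispersion threshold are assumed values, not figures from the patent.

import numpy as np
from collections import deque

class StableQuadTrigger:
    """Track quadrangle corners over the last N low-resolution frames and signal
    when every corner's dispersion around its running average falls below a threshold."""
    def __init__(self, window=8, max_dispersion=3.0):
        self.history = deque(maxlen=window)   # each entry: (4, 2) array of corner coords
        self.max_dispersion = max_dispersion

    def update(self, corners):
        """corners: (4, 2) array from edge detection on the latest low-res frame.
        Returns True when a high-resolution capture should be triggered."""
        self.history.append(np.asarray(corners, dtype=float))
        if len(self.history) < self.history.maxlen:
            return False
        stack = np.stack(self.history)                    # (N, 4, 2)
        mean = stack.mean(axis=0)                         # average coordinate per corner
        dispersion = np.linalg.norm(stack - mean, axis=2).max(axis=0)  # worst offset per corner
        return bool((dispersion < self.max_dispersion).all())

When update() returns True, the application would switch the camera to full resolution, capture the still image, and apply a perspective transform such as the one sketched for the previous patent.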


Patent
A2iA S.A. | Date: 2011-01-26

A system, method, and computer program product for processing of objects are disclosed. A processor coupled to a graphical user interface is configured to display an object. The processor receives input from a user concerning the object, wherein the input relates to at least a partial location of the object, such as a mouse position close to the object, a line approximately covering the vertical or horizontal extent of the object, or a box approximately covering the object. The processor provides the input to a keying module, which processes the received input and passes it to a recognition engine. The recognition engine is in communication with the keying module. Based on the received input, the recognition engine returns exact information about the input to the keying module, such as an exact location, a recognition result, and a confidence score qualifying the reliability of the recognition result. The keying module generates enhanced information about the object based on the information received from the recognition engine and predetermined information concerning the object.
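
A minimal sketch of that interaction, with invented names and a hypothetical engine interface (not the patented implementation), might look like this in Python:

from dataclasses import dataclass
from typing import Tuple

@dataclass
class UserHint:
    kind: str                    # "point", "line", or "box"
    geometry: Tuple[float, ...]  # point (x, y), line (x0, y0, x1, y1), or box (x, y, w, h)

@dataclass
class RecognitionResult:
    bbox: Tuple[float, float, float, float]   # exact location found by the engine
    text: str
    confidence: float

class KeyingModule:
    """Combines a coarse user hint with engine output and prior field knowledge."""
    def __init__(self, engine, field_schema):
        self.engine = engine                 # hypothetical object exposing recognize(image, hint)
        self.field_schema = field_schema     # predetermined info, e.g. {"amount": {"min_confidence": 0.9}}

    def key_field(self, image, field_name, hint):
        result = self.engine.recognize(image, hint)
        expected = self.field_schema.get(field_name, {})
        return {
            "field": field_name,
            "location": result.bbox,
            "value": result.text,
            "confidence": result.confidence,
            "needs_review": result.confidence < expected.get("min_confidence", 0.9),
        }

The predetermined information (here a per-field schema) is what lets the keying module turn a raw recognition result into the enhanced, field-level information described above.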
