Burlington, MA, United States

Nuance Communications is an American multinational computer software technology corporation, headquartered in Burlington, Massachusetts, United States, a suburb of Boston, that provides speech and imaging applications. Current business products focus on server and embedded speech recognition, telephone call steering systems, automated telephone directory services, medical transcription software and systems, optical character recognition software, and desktop imaging software. The company also maintains a small division that develops software and systems for military and government agencies. In October 2011, unconfirmed reports suggested that its servers power Apple's iPhone 4S Siri voice recognition application.

As of 2008, the company is the result of organic growth, mergers, and acquisitions. ScanSoft and Nuance merged in October 2005; before the merger, the two companies competed in the commercial large-scale speech application business. Although officially termed a "merger," the transaction was a de facto acquisition of Nuance by ScanSoft, though the combined company changed its name to Nuance afterward. Before 1999, ScanSoft was known as Visioneer, a hardware and software scanner company. In 1999, Visioneer bought ScanSoft, a Xerox spin-off, and adopted ScanSoft as the company name. The original ScanSoft had its roots in Kurzweil Computer Products, a software company that developed the first omni-font optical character recognition system. (Source: Wikipedia)

Nuance Communications | Date: 2015-01-15

Differential dynamic content delivery includes: providing a session document for a presentation, where the session document includes a session grammar and a session structured document; selecting from the session structured document a classified structural element according to the user classifications of a user participating in the presentation; presenting the selected structural element to the user; streaming presentation speech to the user, including individual speech from at least one user participating in the presentation; converting the presentation speech to text; detecting whether the presentation speech contains simultaneous individual speech from two or more users; and displaying the text if it does.
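The overlap-detection step above can be sketched as a check over diarized speech segments. The `SpeechSegment` type and the pairwise time-overlap test are illustrative assumptions, not the patented method:

```python
from dataclasses import dataclass
from itertools import combinations
from typing import List

@dataclass
class SpeechSegment:
    speaker_id: str
    start: float  # seconds from the start of the presentation
    end: float

def has_simultaneous_speech(segments: List[SpeechSegment]) -> bool:
    """True if segments from two different speakers overlap in time."""
    return any(
        a.speaker_id != b.speaker_id and a.start < b.end and b.start < a.end
        for a, b in combinations(segments, 2)
    )

segments = [
    SpeechSegment("alice", 0.0, 4.0),
    SpeechSegment("bob", 3.5, 6.0),  # overlaps alice's turn
]
print(has_simultaneous_speech(segments))  # True
```

When this check fires, the system would fall back to displaying the converted text rather than relying on the (now overlapping) audio.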

Nuance Communications | Date: 2015-05-01

Systems and methods for intelligent language models that can be used across multiple devices are provided. Some embodiments provide for a client-server system for integrating change events from each device running a local language processing system into a master language model. The change events can be integrated, not only into the master model, but also into each of the other local language models. As a result, some embodiments enable restoration to new devices as well as synchronization of usage across multiple devices. In addition, real-time messaging can be used on selected messages to ensure that high priority change events are updated quickly across all active devices. Using a subscription model driven by a server infrastructure, utilization logic on the client side can also drive selective language model updates.
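A minimal sketch of integrating per-device change events into a master model, assuming change events are word-count deltas tagged with per-device sequence numbers (both are assumptions for illustration, not Nuance's actual protocol):

```python
from collections import Counter

class MasterLanguageModel:
    """Toy master model: merges word-count change events from devices."""

    def __init__(self):
        self.counts = Counter()
        self.applied = {}  # device_id -> last sequence number applied

    def integrate(self, device_id: str, seq: int, delta: dict) -> bool:
        """Apply one change event; ignore duplicates/out-of-order events."""
        if self.applied.get(device_id, -1) >= seq:
            return False
        self.counts.update(delta)
        self.applied[device_id] = seq
        return True

master = MasterLanguageModel()
master.integrate("phone", 0, {"hello": 3, "world": 1})
master.integrate("tablet", 0, {"hello": 1})
print(master.counts["hello"])  # 4
```

The same merged counts could then be pushed back out to each device's local model, which is what enables restoring a new device from the master state.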

Automated user-machine interaction is gaining traction in many applications and services. However, implementing and offering smart automated user-machine interaction services still presents technical challenges. According to at least one example embodiment, a dialogue manager is configured to handle multiple dialogue applications independently of the language and the input or output modalities used. The dialogue manager employs a generic semantic representation of user-input data. At each step of a dialogue, the dialogue manager determines whether the user-input data indicates a new request or a refinement request, based on the generic semantic representation and at least one of: a maintained state of the dialogue, general knowledge data representing one or more concepts, and data representing the history of the dialogue. The dialogue manager then responds to the determined user request with multi-facet output data sent to a client dialogue application, indicating the action(s) to be performed.
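The new-request versus refinement decision can be sketched with a slot-overlap heuristic over a generic semantic frame. The frame shape and the intent-comparison rule here are hypothetical, not the logic described in the embodiment:

```python
def classify_user_input(semantics: dict, dialogue_state: dict) -> str:
    """
    Decide whether a generic semantic frame refines the ongoing request
    or starts a new one, using the maintained dialogue state.
    (Illustrative heuristic only.)
    """
    if not dialogue_state:
        return "new"
    # Input that keeps (or omits) the current intent and only fills or
    # changes slots is treated as a refinement; a different intent
    # starts a new request.
    if semantics.get("intent") in (None, dialogue_state.get("intent")):
        return "refinement"
    return "new"

state = {"intent": "find_restaurant", "cuisine": "thai"}
print(classify_user_input({"price": "cheap"}, state))       # refinement
print(classify_user_input({"intent": "play_music"}, state))  # new
```

A real dialogue manager would also weigh the general knowledge data and dialogue history mentioned in the abstract, not just the current state.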

Nuance Communications | Date: 2015-06-25

Techniques disclosed herein include systems and methods for open-domain voice-enabled searching that is speaker sensitive. Techniques include using speech information, speaker information, and information associated with a spoken query to enhance open voice search results. This includes integrating a textual index with a voice index to support the entire search cycle. Given a voice query, the system can execute two matching processes simultaneously. This can include a text matching process based on the output of speech recognition, as well as a voice matching process based on characteristics of a caller or user voicing a query. Characteristics of the caller can include output of voice feature extraction and metadata about the call. The system clusters callers according to these characteristics. The system can use specific voice and text clusters to modify speech recognition results, as well as modifying search results.
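Merging the two matching processes can be sketched as a weighted blend of a text-match score (from the speech recognition output) and a voice-cluster affinity score. The score dictionaries and the weighting are illustrative assumptions, not the disclosed system:

```python
def rank_results(docs, text_scores, voice_scores, alpha=0.7):
    """
    Rank documents by blending a text-match score (from ASR output)
    with a voice-cluster affinity score for the caller.
    alpha weights text vs. voice evidence (illustrative value).
    """
    def score(doc):
        return (alpha * text_scores.get(doc, 0.0)
                + (1 - alpha) * voice_scores.get(doc, 0.0))
    return sorted(docs, key=score, reverse=True)

docs = ["pizza_menu", "weather", "sports"]
text = {"pizza_menu": 0.9, "weather": 0.2}    # text matcher output
voice = {"sports": 0.8, "pizza_menu": 0.3}    # caller-cluster affinity
print(rank_results(docs, text, voice))
```

Because the two matchers are independent, they can run simultaneously, as the abstract describes, with only the final ranking step combining their outputs.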

Nuance Communications | Date: 2015-01-14

An assignment device (
