Patent
Educational Testing Service | Date: 2015-03-26

Systems and methods described herein automate imposture detection in, e.g., test settings based on voice samples. Based on user instructions, a processing system may determine at least one set of appointments, each having voice samples and a voice print, and a comparison plan for comparing the appointments. The comparison plan defines a plurality of appointment pairs. For each appointment pair, the system compares the associated first and second appointments by, e.g., comparing the first appointment's voice samples to the second appointment's voice print and generating corresponding raw scores, which may be used to compute a composite score. If the composite score satisfies a predetermined threshold condition for fraud, the system may determine whether flagging/holding criteria are satisfied by the raw scores. If the criteria are satisfied, a flag or hold notice may be associated with the appointment pair to trigger an appropriate system/human response (e.g., withholding the appointments' test results).
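
The comparison flow can be illustrated with a minimal sketch. The Appointment fields, the all-pairs comparison plan, the similarity function score_sample_against_print, the mean-based composite score, and the flag rule below are all assumptions made for illustration; the abstract leaves these choices open.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import Callable, List

@dataclass
class Appointment:
    appointment_id: str
    voice_samples: List[str]   # e.g., paths to recorded audio files
    voice_print: str           # e.g., an enrolled speaker model/identifier

def compare_pair(first: Appointment, second: Appointment,
                 score_sample_against_print: Callable[[str, str], float],
                 fraud_threshold: float,
                 flag_rule: Callable[[List[float]], bool]) -> str:
    # Raw scores: each of the first appointment's samples scored against
    # the second appointment's voice print.
    raw_scores = [score_sample_against_print(sample, second.voice_print)
                  for sample in first.voice_samples]
    # Composite score: a simple mean is one possible aggregation.
    composite = sum(raw_scores) / len(raw_scores)
    if composite >= fraud_threshold and flag_rule(raw_scores):
        return "flag"   # would trigger a system/human response
    return "no_action"

if __name__ == "__main__":
    appointments = [
        Appointment("A1", ["a1_s1.wav", "a1_s2.wav"], "a1_print"),
        Appointment("A2", ["a2_s1.wav"], "a2_print"),
    ]
    dummy_score = lambda sample, voice_print: 0.9      # placeholder similarity
    for a, b in combinations(appointments, 2):         # all-pairs comparison plan
        print(a.appointment_id, b.appointment_id,
              compare_pair(a, b, dummy_score,
                           fraud_threshold=0.8,
                           flag_rule=lambda rs: min(rs) > 0.7))
```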


Patent
Educational Testing Service | Date: 2015-06-01

Systems and methods are provided for identifying one or more target words of a corpus that have a lexical relationship to a plurality of provided cue words. The cue words and statistical lexical information derived from a corpus of documents are analyzed to determine candidate words that have a lexical association with the cue words. The statistical information includes numerical values indicative of probabilities of word pairs appearing together as adjacent words in a well-formed text or appearing together within a paragraph of a well-formed text. For each candidate word, a statistical association score between the candidate word and each of the cue words is determined. An aggregate score for each of the candidate words is determined based on the statistical association scores. One or more of the candidate words are selected to be the one or more target words based on the aggregate scores.
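
As a rough illustration of the selection procedure, the sketch below uses pointwise mutual information over toy co-occurrence counts as the statistical association score and sums the per-cue scores into the aggregate; both choices, and all names and counts shown, are assumptions rather than details taken from the patent.

```python
import math
from collections import Counter
from typing import Iterable, List, Tuple

def association_score(word: str, cue: str, pair_counts: Counter,
                      word_counts: Counter, total_pairs: int) -> float:
    # Pointwise mutual information as one plausible association measure.
    joint = pair_counts[tuple(sorted((word, cue)))] / total_pairs
    if joint == 0:
        return float("-inf")
    total_words = sum(word_counts.values())
    p_word = word_counts[word] / total_words
    p_cue = word_counts[cue] / total_words
    return math.log(joint / (p_word * p_cue))

def target_words(cues: List[str], vocabulary: Iterable[str],
                 pair_counts: Counter, word_counts: Counter,
                 total_pairs: int, k: int = 3) -> List[str]:
    scored: List[Tuple[str, float]] = []
    for candidate in vocabulary:
        if candidate in cues:
            continue
        # Aggregate score: sum of the per-cue association scores.
        total = sum(association_score(candidate, cue, pair_counts,
                                      word_counts, total_pairs) for cue in cues)
        scored.append((candidate, total))
    scored.sort(key=lambda item: item[1], reverse=True)
    return [word for word, _ in scored[:k]]

# Toy counts, not derived from any real corpus.
word_counts = Counter({"ocean": 5, "wave": 4, "tide": 3, "car": 6})
pair_counts = Counter({("ocean", "wave"): 3, ("ocean", "tide"): 2,
                       ("tide", "wave"): 2, ("car", "ocean"): 1})
print(target_words(["ocean", "wave"], word_counts, pair_counts, word_counts,
                   total_pairs=8, k=2))   # -> ['tide', 'car']
```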


Systems and methods are provided for a computer-implemented method of providing a score that measures an essay's usage of source material provided in at least one written text and an audio recording. Using one or more data processors, a determination is made of a list of n-grams present in a received essay. For each of a plurality of present n-grams, an n-gram weight is determined, where the n-gram weight is based on a number of appearances of that n-gram in the at least one written text and a number of appearances of that n-gram in the audio recording, and an n-gram sub-metric is determined based on the presence of the n-gram in the essay and the n-gram weight. A source usage metric is determined based on the n-gram sub-metrics for the plurality of present n-grams, and a scoring model is used to generate a score for the essay based on the source usage metric.
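
A minimal sketch of the n-gram weighting follows, assuming the audio recording is represented by a transcript, that the weight is the simple sum of appearances in the two sources, and that the metric is the mean of the sub-metrics; the actual weighting scheme and the downstream scoring model are not specified in the abstract.

```python
from typing import List, Set, Tuple

def ngrams(tokens: List[str], n: int) -> Set[Tuple[str, ...]]:
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def count_occurrences(tokens: List[str], gram: Tuple[str, ...]) -> int:
    n = len(gram)
    return sum(1 for i in range(len(tokens) - n + 1)
               if tuple(tokens[i:i + n]) == gram)

def source_usage_metric(essay: str, written_text: str,
                        audio_transcript: str, n: int = 2) -> float:
    essay_grams = ngrams(essay.lower().split(), n)
    text_tokens = written_text.lower().split()
    audio_tokens = audio_transcript.lower().split()
    sub_metrics = []
    for gram in essay_grams:
        # Weight from the n-gram's appearances in each source; the n-gram's
        # presence in the essay (1) times its weight gives the sub-metric.
        weight = (count_occurrences(text_tokens, gram)
                  + count_occurrences(audio_tokens, gram))
        sub_metrics.append(1 * weight)
    return sum(sub_metrics) / max(len(sub_metrics), 1)

print(source_usage_metric(
    essay="the reef is shrinking because ocean temperatures rise",
    written_text="scientists report that the reef is shrinking rapidly",
    audio_transcript="ocean temperatures rise every decade the reef suffers"))
```

In the described pipeline, this metric would be one input to a trained scoring model rather than the final essay score itself.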


Systems and methods are provided for generating an intelligibility score for speech of a non-native speaker. Words in a speech recording are identified using an automated speech recognizer, where the automated speech recognizer provides a string of words identified in the speech recording, and where the automated speech recognizer further provides an acoustic model likelihood score for each word in the string of words. For a particular word in the string of words, a context metric value is determined based upon a usage of the particular word within the string of words. An acoustic score for the particular word is determined based on the acoustic model likelihood score for the particular word from the automated speech recognizer. An intelligibility score is determined for the particular word based on the acoustic score for the particular word and the context metric value for the particular word.
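
The per-word combination can be sketched as follows. The use of a bigram language-model log-probability as the context metric and a fixed weighted sum of the acoustic and context terms are illustrative assumptions; the toy language model and ASR output are invented for the example.

```python
from typing import Callable, List, Tuple

def intelligibility_scores(asr_output: List[Tuple[str, float]],
                           bigram_logprob: Callable[[str, str], float],
                           acoustic_weight: float = 0.5) -> List[float]:
    # asr_output: (word, acoustic log-likelihood) pairs from an ASR system.
    words = [word for word, _ in asr_output]
    scores = []
    for i, (_, acoustic_ll) in enumerate(asr_output):
        # Context metric: how plausible the word is given its neighbours,
        # approximated here by a bigram language-model log-probability.
        context = bigram_logprob(words[i - 1], words[i]) if i > 0 else 0.0
        scores.append(acoustic_weight * acoustic_ll
                      + (1 - acoustic_weight) * context)
    return scores

# Toy language model and ASR output for illustration only.
toy_lm = {("the", "weather"): -1.2, ("weather", "is"): -0.8}
lm = lambda prev, cur: toy_lm.get((prev, cur), -5.0)
asr = [("the", -2.0), ("weather", -1.5), ("is", -3.5)]
print(intelligibility_scores(asr, lm))
```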


Patent
Educational Testing Service | Date: 2015-03-24

Systems and methods described herein utilize supervised machine learning to generate a model for scoring interview responses. The system may access a training response, which in one embodiment is an audiovisual recording of a person responding to an interview question. The training response may have an assigned human-determined score. The system may extract at least one delivery feature and at least one content feature from the audiovisual recording of the training response, and use the extracted features and the human-determined score to train a response scoring model for scoring interview responses. The response scoring model may be configured based on the training to automatically assign scores to audiovisual recordings of interview responses. The scores for interview responses may be used by interviewers to assess candidates.
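
A condensed sketch of the training step follows, using linear regression from scikit-learn and two placeholder feature extractors; the actual delivery and content features and the model family are not stated in the abstract, so everything below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def extract_delivery_features(recording: dict) -> list:
    # Placeholder delivery features; real ones might include speaking rate,
    # pause patterns, or prosodic measures from the audio track.
    return [recording["duration_sec"], recording["pause_count"]]

def extract_content_features(recording: dict) -> list:
    # Placeholder content feature; real ones might measure topical relevance
    # of an ASR transcript of the response.
    return [len(recording["transcript"].split())]

def train_response_scoring_model(recordings: list, human_scores: list):
    # Supervised training: extracted features -> human-assigned scores.
    X = np.array([extract_delivery_features(r) + extract_content_features(r)
                  for r in recordings])
    y = np.array(human_scores)
    return LinearRegression().fit(X, y)

training_responses = [
    {"duration_sec": 95, "pause_count": 3, "transcript": "I led a small team that ..."},
    {"duration_sec": 60, "pause_count": 9, "transcript": "um I am not sure ..."},
]
model = train_response_scoring_model(training_responses, human_scores=[4.0, 2.0])
new_response = {"duration_sec": 80, "pause_count": 5, "transcript": "In my last role ..."}
features = extract_delivery_features(new_response) + extract_content_features(new_response)
print(model.predict(np.array([features])))
```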
