Educational Testing Service ETS

Red Bank, NJ, United States

Chen L.,Educational Testing Service ETS | Feng G.,Educational Testing Service ETS | Leong C.W.,Educational Testing Service ETS | Lehman B.,Educational Testing Service ETS | And 4 more authors.
ICMI 2016 - Proceedings of the 18th ACM International Conference on Multimodal Interaction | Year: 2016

As the popularity of video-based job interviews rises, so does the need for automated tools to evaluate interview performance. Real world hiring decisions are based on assessments of knowledge and skills as well as holistic judgments of person-job fit. While previous research on automated scoring of interview videos shows promise, it lacks coverage of monologue-style responses to structured interview (SI) questions and content-focused interview rating. We report the development of a standardized video interview protocol as well as human rating rubrics focusing on verbal content, personality, and holistic judgment. A novel feature extraction method using visual words automatically learned from video analysis outputs and the Doc2Vec paradigm is proposed. Our promising experimental results suggest that this novel method provides effective representations for the automated scoring of interview videos. © 2016 ACM.
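The visual-word step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the centroids (in practice learned offline, e.g. by k-means over video analysis outputs) and the two-dimensional feature vectors are hypothetical, and the resulting word sequence is what a Doc2Vec-style model would then embed.

```python
import math

# Hypothetical centroids learned offline (e.g., by k-means) from video
# analysis outputs such as head-pose or facial-expression vectors.
CENTROIDS = {"w0": (0.0, 0.0), "w1": (1.0, 1.0), "w2": (0.0, 1.0)}

def nearest_word(vec):
    """Map one frame-level feature vector to its closest visual word."""
    return min(CENTROIDS, key=lambda w: math.dist(vec, CENTROIDS[w]))

def video_to_visual_words(frames):
    """Quantize a sequence of frame vectors into a visual-word 'document',
    ready to be fed to a Doc2Vec-style embedding model."""
    return [nearest_word(f) for f in frames]

doc = video_to_visual_words([(0.1, 0.1), (0.9, 1.2), (0.1, 0.8)])
print(doc)  # ['w0', 'w1', 'w2']
```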


Bechard S.,Inclusive Educational Assessment | Abell R.,ROSE Consulting | Barton K.,CTB McGraw Hill | Burling K.,Pearson Assessments | And 10 more authors.
Journal of Technology, Learning, and Assessment | Year: 2010

This article represents one outcome from the Invitational Research Symposium on Technology-Enabled and Universally Designed Assessments, which examined technology-enabled assessments (TEA) and universal design (UD) as they relate to students with disabilities (SWD). It was developed to stimulate research into TEAs designed to better understand the pathways to achievement for the full range of the student population through enhanced measurement capabilities offered by TEA. This paper presents important questions in four critical areas that need to be addressed by research efforts to enhance the measurement of cognition for students with disabilities: (a) better measurement of achievement for students with unique cognitive pathways to learning, (b) how interactive-dynamic assessments can assist investigations into learning progressions, (c) improvement of the validity of assessments for students previously in the margins, and (d) the potential consequences of TEA for students with disabilities. The current efforts for educational reform provide a unique window for action, and test designers are encouraged to take advantage of new opportunities to use TEA in ways that were not possible with paper-and-pencil tests. Symposium participants describe how technology-enabled assessments have the potential to provide more diagnostic information about students from various assessment sources about progress toward learning targets, generate better information to guide instruction and identify areas of focus for professional development, and create assessments that are more inclusive and measure achievement with improved validity for all students, especially students with disabilities. © 2010 by the Journal of Technology, Learning, and Assessment.


Almond R.G.,Educational Testing Service ETS
International Journal of Approximate Reasoning | Year: 2010

For a number of situations, a Bayesian network can be split into a core network consisting of a set of latent variables describing the status of a system, and a set of fragments relating the status variables to observable evidence that could be collected about the system state. This situation arises frequently in educational testing, where the status variables represent the student proficiency and the evidence models (graph fragments linking competency variables to observable outcomes) relate to assessment tasks that can be used to assess that proficiency. The traditional approach to knowledge engineering in this situation would be to maintain a library of fragments, where the graphical structure is specified using a graphical editor and then the probabilities are entered using a separate spreadsheet for each node. If many evidence model fragments employ the same design pattern, a lot of repetitive data entry is required. As the parameter values that determine the strength of the evidence can be buried on interior screens of an interface, it can be difficult for a design team to get an impression of the total evidence provided by a collection of evidence models for the system variables, and to identify holes in the data collection scheme. A Q-matrix - an incidence matrix whose rows represent observable outcomes from assessment tasks and whose columns represent competency variables - provides the graphical structure of the evidence models. The Q-matrix can be augmented to provide details of relationship strengths and provide a high level overview of the kind of evidence available. The relationships among the status variables can be represented with an inverse covariance matrix; this is particularly useful in models from the social sciences as often the domain experts' knowledge about the system states comes from factor analyses and similar procedures that naturally produce covariance matrices.
The representation of the model using matrices means that the bulk of the specification work can be done using a desktop spreadsheet program and does not require specialized software, facilitating collaboration with external experts. The design idea is illustrated with some examples from prior assessment design projects. © 2009 Elsevier Inc. All rights reserved.
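The Q-matrix idea can be made concrete with a small sketch. The tasks, competencies, and entries below are invented for illustration: each row is an observable outcome, each column a latent competency, and each nonzero cell corresponds to one edge in the evidence-model graph fragments.

```python
# Hypothetical Q-matrix: rows are observable outcomes from assessment
# tasks, columns are latent competency variables. A 1 means the outcome
# provides evidence about that competency.
competencies = ["Reading", "Writing"]
q_matrix = {
    "Task1.Score": [1, 0],
    "Task2.Score": [1, 1],
    "Task3.Score": [0, 1],
}

def evidence_edges(q, skills):
    """Recover the graphical structure of the evidence-model fragments:
    one edge per nonzero Q-matrix entry."""
    return [(obs, skills[j]) for obs, row in q.items()
            for j, v in enumerate(row) if v]

print(evidence_edges(q_matrix, competencies))
# [('Task1.Score', 'Reading'), ('Task2.Score', 'Reading'),
#  ('Task2.Score', 'Writing'), ('Task3.Score', 'Writing')]
```

Scanning the columns of such a matrix is what gives the design team the high-level overview the abstract mentions: a competency whose column is all zeros is a hole in the data collection scheme.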


Chen L.,Educational Testing Service ETS | Leong C.W.,Educational Testing Service ETS | Feng G.,Educational Testing Service ETS | Lee C.M.,Educational Testing Service ETS | Somasundaran S.,Educational Testing Service ETS
2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015 | Year: 2015

Public speaking, an important type of oral communication, is critical to success in both learning and career development. However, there is a lack of tools to efficiently and economically evaluate presenters' verbal and nonverbal behaviors. The recent advancements in automated scoring and multimodal sensing technologies may address this issue. We report a study on the development of an automated scoring model for public speaking performance using multimodal cues. A multimodal presentation corpus containing 14 subjects' 56 presentations has been recorded using a Microsoft Kinect depth camera. Task design, rubric development, and human rating were conducted according to standards in educational assessment. A rich set of multimodal features has been extracted from head poses, eye gazes, facial expressions, motion traces, speech signal, and transcripts. The model building experiment shows that jointly using both lexical/speech and visual features achieves more accurate scoring, which suggests the feasibility of using multimodal technologies in the assessment of public speaking skills. © 2015 IEEE.
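The joint use of lexical/speech and visual features typically comes down to fitting a regression model over the combined feature vector. A minimal sketch of such a fused linear scorer follows; the feature names, weights, and 0-4 rubric scale are illustrative assumptions, not values from the study.

```python
# Hypothetical fusion of speech and visual features into one holistic
# score via a pre-trained linear model; the weights are illustrative.
WEIGHTS = {"speaking_rate": 0.4, "pitch_var": 0.2, "head_stability": 0.3}
BIAS = 1.0

def predict_score(features):
    """Linear scoring model over multimodal features (0-4 rubric scale).
    Missing features default to 0.0."""
    return BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

score = predict_score({"speaking_rate": 2.0, "pitch_var": 1.0,
                       "head_stability": 1.5})
print(score)  # 2.45
```

In practice such weights would be fit against the human ratings (e.g. by least squares), and model quality reported as the correlation between predicted and human scores.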


Zeidner M.,Haifa University | Matthews G.,University of Cincinnati | Roberts R.D.,Educational Testing Service ETS
Applied Psychology: Health and Well-Being | Year: 2012

This paper reviews the claimed pivotal role of emotional intelligence (EI) in well-being and health. Specifically, we examine the utility of EI in predicting health and well-being and point to future research issues that the field might profitably explore. EI is predictive of various indicators of well-being, as well as both physical and psychological health, but existing research has methodological limitations including over-reliance on self-report measures, and neglect of overlap between EI and personality measures. Interventions focusing on emotional perception, understanding and expression, and emotion regulation, seem potentially important for improving health and well-being, but research on EI has not yet made a major contribution to therapeutic practice. Future research, using a finer-grained approach to measurement of both predictors and criteria might most usefully focus on intra- and inter-personal processes that may mediate effects of EI on health. A video abstract of this article can be viewed at. © 2011 The Authors. Applied Psychology: Health and Well-Being © 2011 The International Association of Applied Psychology.


Xie S.,Educational Testing Service ETS | Evanini K.,Educational Testing Service ETS | Zechner K.,Educational Testing Service ETS
NAACL HLT 2012 - 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference | Year: 2012

Most previous research on automated speech scoring has focused on restricted, predictable speech. For automated scoring of unrestricted spontaneous speech, speech proficiency has been evaluated primarily on aspects of pronunciation, fluency, vocabulary and language usage but not on aspects of content and topicality. In this paper, we explore features representing the accuracy of the content of a spoken response. Content features are generated using three similarity measures, including a lexical matching method (Vector Space Model) and two semantic similarity measures (Latent Semantic Analysis and Pointwise Mutual Information). All of the features exhibit moderately high correlations with human proficiency scores on human speech transcriptions. The correlations decrease somewhat due to recognition errors when evaluated on the output of an automatic speech recognition system; however, the additional use of word confidence scores can achieve correlations at a similar level as for human transcriptions. © 2012 Association for Computational Linguistics.
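Of the three similarity measures, the lexical Vector Space Model is the simplest to illustrate: the response transcript and a topic reference text are each turned into a bag-of-words vector, and their cosine similarity serves as a content feature. The sketch below uses naive whitespace tokenization and no weighting, which is an assumption for brevity (real systems typically apply tf-idf and stopword handling).

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Vector Space Model content feature: bag-of-words cosine similarity
    between a spoken-response transcript and topic reference material."""
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("the cat sat", "the cat ran"))  # 0.666...
```

The semantic measures (LSA, PMI) replace the raw word-count vectors with vectors that credit related but non-identical words, which is why they can help when responses paraphrase the reference material.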


Chen L.,Educational Testing Service ETS | Feng G.,Educational Testing Service ETS | Joe J.,Educational Testing Service ETS | Leong C.W.,Educational Testing Service ETS | And 2 more authors.
ICMI 2014 - Proceedings of the 2014 International Conference on Multimodal Interaction | Year: 2014

Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of 17 speakers with 4 speaking tasks was collected using audio, video, and 3D motion capturing devices. A scoring model based on basic features in the speech content, speech delivery, and hand, body, and head movements significantly predicts human rating, suggesting the feasibility of using multimodal technologies in the assessment of public speaking skills. Copyright 2014 ACM.
