Pacific Metrics Corporation

Monterey, CA, United States


Choi S.W., Pacific Metrics Corporation | van der Linden W.J., Pacific Metrics Corporation
Quality of Life Research | Year: 2017

Purpose: Most computerized adaptive testing (CAT) applications in patient-reported outcomes (PRO) measurement to date are reliability-centric, with a primary objective of maximizing measurement efficiency. A key concern, and a potential threat to validity, is that, when left unconstrained, individual CAT administrations could have items with systematically different attributes, e.g., sub-domain coverage. This paper aims to provide a solution to the problem from an optimal test design framework using the shadow-test approach to CAT. Methods: Following the approach, a case study was conducted using the PROMIS® (Patient-Reported Outcomes Measurement Information System) fatigue item bank with both empirical and simulated response data. CAT administrations without and with the enforcement of content and item pool usage constraints were compared. Results: The unconstrained CAT exhibited a high degree of variation in items selected from different substrata of the item bank. In contrast, the shadow-test approach delivered CAT administrations conforming to all specifications with a minimal loss in measurement efficiency. Conclusions: The optimal test design and shadow-test approach to CAT provide a flexible framework for solving complex test-assembly problems, with better control of domain coverage than the conventional use of CAT in PRO measurement allows. Applications in a wide array of PRO domains are expected to lead to more controlled and balanced use of CAT in the field. © 2017 Springer International Publishing AG
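The shadow-test approach described in the abstract solves a full mixed-integer test-assembly model before each item is administered; the sketch below is only a toy illustration of what enforcing a content constraint during adaptive selection means. It assumes a hypothetical 2PL item pool in which each item carries a discrimination `a`, difficulty `b`, and a `domain` label with a per-domain quota (all names are illustrative, not from the paper):

```python
import math

def item_information(theta, a, b):
    # Fisher information for a 2PL item at ability level theta
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item(theta, items, counts, quotas, used):
    """Greedy constrained selection: among unused items whose sub-domain
    quota is not yet exhausted, pick the one with maximum information.
    items:  list of dicts with keys 'a', 'b', 'domain'
    counts: items already administered per domain
    quotas: maximum items allowed per domain
    used:   set of item indices already administered"""
    best, best_info = None, -1.0
    for idx, it in enumerate(items):
        if idx in used:
            continue
        if counts.get(it["domain"], 0) >= quotas[it["domain"]]:
            continue  # content constraint blocks this item
        info = item_information(theta, it["a"], it["b"])
        if info > best_info:
            best, best_info = idx, info
    return best
```

An unconstrained CAT is the same loop without the quota check, which is how it can drift toward a systematically unbalanced sub-domain mix.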


PubMed | University of Notre Dame and Pacific Metrics Corporation
Journal: Behavior research methods | Year: 2016

In this article, we propose a simplified version of the maximum information per time unit method (MIT; Fan, Wang, Chang, & Douglas, Journal of Educational and Behavioral Statistics 37: 655-670, 2012), or MIT-S, for computerized adaptive testing. Unlike the original MIT method, the proposed MIT-S method does not require fitting a response time model to the individual-level response time data. It is also computationally efficient. The performance of the MIT-S method was compared against that of the maximum information (MI) method in terms of measurement precision, testing time saving, and item pool usage under various item response theory (IRT) models. The results indicated that when the underlying IRT model is the two- or three-parameter logistic model, the MIT-S method maintains measurement precision and saves testing time. It performs similarly to the MI method in exposure control; both result in highly skewed item exposure distributions, due to heavy reliance on the highly discriminating items. If the underlying model is the one-parameter logistic (1PL) model, the MIT-S method maintains the measurement precision and saves a considerable amount of testing time. However, its heavy reliance on time-saving items leads to a highly skewed item exposure distribution. This weakness can be ameliorated by using randomesque exposure control, which successfully balances the item pool usage. Overall, the MIT-S method with randomesque exposure control is recommended for achieving better testing efficiency while maintaining measurement precision and balanced item pool usage when the underlying IRT model is 1PL.
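As the abstract describes it, MIT-S drops the fitted response time model and can instead rely on simple item-level time summaries. A minimal sketch of that idea, assuming each item carries an observed mean response time `mean_time` (a hypothetical field name), is to select the item maximizing information per expected second:

```python
import math

def item_information(theta, a, b):
    # Fisher information for a 2PL item at ability level theta
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def mit_s_select(theta, items, used):
    """Pick the unused item with the highest information-per-time ratio,
    using each item's observed mean response time rather than a fitted
    response time model (the simplification the abstract describes)."""
    best, best_ratio = None, -1.0
    for idx, it in enumerate(items):
        if idx in used:
            continue
        ratio = item_information(theta, it["a"], it["b"]) / it["mean_time"]
        if ratio > best_ratio:
            best, best_ratio = idx, ratio
    return best
```

Dividing by time is also what produces the skewed exposure the abstract warns about: quick, discriminating items dominate unless an exposure control such as randomesque selection is layered on top.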


Bassiri D., ACT Inc. | Matthew Schulz E., Pacific Metrics Corporation
Journal of Applied Measurement | Year: 2011

In this study, the Rasch rating scale model (Andrich, 1978) was applied to college grades of four freshman cohorts from a large public university. After editing, the data represented approximately 34,000 students, 1,700 courses and 119 departments. The rating scale model analysis yielded measures of student achievement and course difficulty. Indices of the difficulty of academic departments were derived through secondary analyses of course difficulty measures. Differences between rating scale model measures and simple grade averages were examined for students, courses, and academic departments. The differences were provocative and suggest that the rating scale model could be a useful tool in addressing a variety of issues that concern college administrators.
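For readers unfamiliar with the Andrich rating scale model the abstract applies, the category probabilities can be computed directly from a student measure, a course difficulty, and a shared set of thresholds. The sketch below is a generic implementation of the standard model, not code from the study:

```python
import math

def rsm_probs(theta, delta, taus):
    """Andrich (1978) rating scale model category probabilities.
    theta: student achievement measure
    delta: course difficulty measure
    taus:  shared threshold parameters tau_1..tau_m
    P(X = k) is proportional to exp(sum over j <= k of theta - delta - tau_j),
    with category 0 given a cumulative logit of zero."""
    logits = [0.0]
    cum = 0.0
    for tau in taus:
        cum += theta - delta - tau
        logits.append(cum)
    expd = [math.exp(v) for v in logits]
    total = sum(expd)
    return [v / total for v in expd]
```

In the grades application, the categories would correspond to grade levels, so a higher `theta` relative to `delta` pushes probability mass toward higher grades.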


Matthew Schulz E., Pacific Metrics Corporation | Mitzel H.C., Pacific Metrics Corporation
Journal of Applied Measurement | Year: 2011

This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and for the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented. Copyright © 2011.


Patent
Pacific Metrics Corporation | Date: 2013-11-26

A system, method, and computer-readable medium for detecting plagiarism in a set of constructed responses by accessing and pre-processing the set of constructed responses to facilitate the pairing and comparing of the constructed responses. The similarity value generated from the comparison of a pair of constructed responses serves as an indicator of possible plagiarism.
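The patent abstract describes pairing constructed responses and computing a similarity value as the plagiarism indicator, without specifying the measure. One common way to realize that pattern (purely an illustrative choice, not the patented method) is Jaccard similarity over token n-grams, compared across all pairs:

```python
from itertools import combinations

def ngrams(text, n=3):
    # pre-processing step: lowercase, tokenize, build the set of token n-grams
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b):
    # similarity of two n-gram sets; 0.0 when both are empty
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_pairs(responses, n=3, threshold=0.5):
    """Compare every pair of constructed responses; pairs whose similarity
    value meets the threshold are flagged as possible plagiarism."""
    grams = [ngrams(r, n) for r in responses]
    flagged = []
    for i, j in combinations(range(len(responses)), 2):
        sim = jaccard(grams[i], grams[j])
        if sim >= threshold:
            flagged.append((i, j, sim))
    return flagged
```

The all-pairs loop is quadratic in the number of responses, which is why the abstract emphasizes pre-processing to facilitate the pairing and comparison.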


Patent
Pacific Metrics Corporation | Date: 2014-10-30

A computerized system for scoring constructed responses to one or more prompts. The system receives a plurality of constructed responses in an electronic font-based format and separates the plurality of constructed responses into a first group of constructed responses that are scorable by the system and a second group of constructed responses that are not scorable by the system. The constructed responses in the first group are assigned scores based on predetermined rules, and the scores are sent to a score database. In a preferred embodiment, the first group includes constructed responses that do not answer the prompt and constructed responses that match pre-scored responses. The second group of constructed responses is sent by the system to a hand-scorer for manual scoring.
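The routing logic in the abstract (machine-score what matches a rule, hand off the rest) can be sketched as a simple splitter. This is a minimal illustration of the claimed flow, with invented names (`prescored`, `answers_prompt`) standing in for the patent's predetermined rules:

```python
def route_responses(responses, prescored, answers_prompt):
    """Split constructed responses into a machine-scored group and a
    hand-scoring queue, mirroring the two groups in the abstract.
    responses:      iterable of response strings
    prescored:      dict mapping known response text -> score
    answers_prompt: predicate returning False for non-responsive answers"""
    machine, hand = [], []
    for r in responses:
        if r in prescored:
            machine.append((r, prescored[r]))  # matches a pre-scored response
        elif not answers_prompt(r):
            machine.append((r, 0))             # does not answer the prompt: rule-scored
        else:
            hand.append(r)                     # not machine-scorable: route to hand-scorer
    return machine, hand
```

In the patented system the machine-assigned scores would then be written to the score database, while the second group goes to a human scorer.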


Trademark
Pacific Metrics Corporation | Date: 2014-01-17

Computer software which provides a web-based application to create and administer educational tests via mobile, handheld or traditional desktop computers. Providing an online non-downloadable web-based application to create and administer educational tests for grades K-12 and above, intended to measure student learning across multiple disciplines including but not limited to: mathematics, English, science, history and social studies for tests which include both in-classroom formative assessments and higher stakes summative assessments.


Trademark
Pacific Metrics Corporation | Date: 2012-10-30

Computer software designed to score students' test answers and responses.


Trademark
Pacific Metrics Corporation | Date: 2014-06-04

Software system for grading student essays.
