
Beaudoin M.E.,Strategic Analysis Enterprises, Inc. | Schmorrow D.D.,Office of the Secretary of Defense
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

This paper provides a summary of the presentations given in the Operational Neuroscience session at Augmented Cognition International 2011, held as part of Human Computer Interaction International 2011 in Orlando, Florida, in July 2011. © 2011 Springer-Verlag.


O'Donnel J.E.,Johns Hopkins University | George A.S.,Evidence Based Research, Inc. | Wynn D.M.,Evidence Based Research, Inc. | Brett S.W.,Evidence Based Research, Inc. | And 3 more authors.
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

This is another in a sequence of papers reporting on the development of innovative methods and tools for estimating demand requirements for network supply capabilities. An extension of the demand estimation methodology, this paper focuses on the steps required to assess the adequacy of performance of candidate networks by means of an integrated tool. The steps include mapping units in a scenario to units in the associated database to determine their aggregate demand, developing an appropriate logical network with computational constraints dictated by the scenario, and calculating inter-unit demand of the units in the logical network. Because of the complexity of the end-to-end process, assuring repeatability while facilitating rapid exploration of issues is a challenge. Earlier tools implementing this process were fragmented and prone to error, requiring significant analyst effort to accomplish even the smallest changes. To address these limitations, the process has been implemented in an easy-to-use, integrated tool that allows complete flexibility in manipulating data and promotes rapid but repeatable analyses of tailored scenarios.
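To make the mapping-and-aggregation step concrete, the following is a minimal sketch, assuming an invented unit-type demand table, scenario mapping, and link pairing rule; the paper's actual database schema, logical-network construction, and demand calculations are not reproduced here.

# Illustrative sketch only: invented unit names, demand figures, and a simple
# pairing rule stand in for the integrated tool's database and demand model.

# Hypothetical database: offered traffic per unit type, in kilobits per second.
unit_type_demand_kbps = {"infantry_bn": 512, "brigade_hq": 4096, "uav_det": 2048}

# Scenario units mapped to database unit types.
scenario_units = {"1-12 IN": "infantry_bn", "2 BCT HQ": "brigade_hq", "UAV-1": "uav_det"}

# Logical network: which units must exchange traffic in this scenario.
logical_links = [("1-12 IN", "2 BCT HQ"), ("UAV-1", "2 BCT HQ")]

def inter_unit_demand(units, links, demand_table):
    """Aggregate offered demand on each logical link, here taken as the smaller
    of the two endpoints' offered traffic (an assumed, simplistic rule)."""
    return {(a, b): min(demand_table[units[a]], demand_table[units[b]])
            for a, b in links}

print(inter_unit_demand(scenario_units, logical_links, unit_type_demand_kbps))
# {('1-12 IN', '2 BCT HQ'): 512, ('UAV-1', '2 BCT HQ'): 2048}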


Miles D.,Office of the Secretary of Defense
Military Engineer | Year: 2010

The U.S. Army Corps of Engineers (USACE) is working in partnership with unit-level engineers and contractors to provide basic comforts while also focusing heavily on longer-term projects considered critical to the troops' ultimate success. Among the long-term projects to be supported by USACE are permanent facilities for the Afghan national security forces. Meanwhile, USACE is also overseeing multiple infrastructure and development projects that promote economic development and stability. USACE is building the Ring Road highway in Afghanistan, which, when completed, will connect Afghanistan's major cities and villages, opening more Afghan markets and enabling Afghan manufacturers and entrepreneurs to increase their trade with Uzbekistan, Tajikistan and other neighbors. With the American President having sent 20,000 Marines to the country, USACE is also supporting them with electricity, potable water and waste facilities, as well as headquarters buildings and aircraft facilities.


Zais M.,Office of the Secretary of Defense | Zhang D.,University of Colorado at Boulder
International Journal of Production Research | Year: 2015

Personnel retention is one of the most significant challenges faced by the US Army. Central to the problem is understanding the incentives of the stay-or-leave decision for military personnel. Using three years of data from the US Department of Defense, we construct and estimate a Markov chain model of military personnel. Unlike traditional classification approaches, such as logistic regression models, the Markov chain model allows us to describe military personnel dynamics over time and answer a number of managerially relevant questions. Building on the Markov chain model, we construct a finite-horizon stochastic dynamic programming model to study the monetary incentives of stay-or-leave decisions. The dynamic programming model computes the expected pay-off of staying versus leaving at different stages of the career of military personnel, depending on employment opportunities in the civilian sector. We show that the stay-or-leave decisions from the dynamic programming model possess surprisingly strong predictive power, without requiring personal characteristics that are typically employed in classification approaches. Furthermore, the results of the dynamic programming model can be used as an input in classification methods and lead to more accurate predictions. Overall, our work presents an interesting alternative to classification methods and paves the way for further investigations on personnel retention incentives. © 2015 This work was authored as part of the Contributor's official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law.
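As a concrete illustration of the stay-or-leave value comparison, the sketch below implements a toy finite-horizon dynamic program in Python. The state (completed years of service), pay figures, discount factor, and lump-sum pension stand-in are all assumptions made for illustration, not the paper's data, and the stochastic Markov transitions of the actual model are collapsed here to deterministic continuation in service.

# Toy backward-induction sketch (illustrative parameters only, not the paper's
# model or data): compare the discounted value of staying one more year in
# service against leaving now for a civilian job.

def stay_or_leave(horizon=20, military_pay=60_000, civilian_pay=55_000,
                  pension_at=20, pension_value=400_000, discount=0.95):
    """Return the optimal decision at each completed year of service t."""
    V = [0.0] * (horizon + 2)      # V[t]: value of being in service at year t
    decisions = {}
    for t in range(horizon, -1, -1):
        # Value of leaving now: civilian earnings for the remaining horizon,
        # plus a lump-sum stand-in for retirement benefits if vested.
        leave = civilian_pay * sum(discount ** k for k in range(horizon - t + 1))
        if t >= pension_at:
            leave += pension_value
        # Value of staying one more year, then behaving optimally thereafter.
        stay = military_pay + discount * V[t + 1]
        V[t] = max(stay, leave)
        decisions[t] = "stay" if stay >= leave else "leave"
    return decisions

print(stay_or_leave())

With these made-up figures, backward induction finds staying optimal up to the 20-year vesting point and leaving thereafter; the paper's model addresses the same comparison using actual personnel data and civilian-sector employment opportunities.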


Long E.A.,Office of the Secretary of Defense | Long E.A.,Scientific Research Corporation | Gullo L.,Raytheon Co. | Nikora A.P.,Jet Propulsion Laboratory
Proceedings - 23rd IEEE International Symposium on Software Reliability Engineering Workshops, ISSREW 2012 | Year: 2012

An increase in the number of software programs failing suitability and effectiveness requirements during DoD Initial Operational Test and Evaluation (IOT&E) has resulted in a mandate from the Office of the Secretary of Defense (OSD), Director, Operational Test & Evaluation (DOT&E) to manage software reliability growth within a program's development and test lifecycles. The intent is to allow Program Managers (PMs) to better determine whether the current system state is sufficient for a product release, or whether the release date should be deferred because existing software issues require resolution and corrective action. The purpose of this paper is to develop a methodology for applying software reliability growth models (SRGMs) to track and predict software reliability growth by applying categorizations of software usage in DoD systems. DOT&E defines three types of software usage in DoD systems: (1) hybrid systems containing a combination of software, hardware, and human interfaces, in which critical functionality is a combination of hardware and software subsystems, e.g., complicated ground combat vehicles, aircraft, and ships; (2) net-centric systems consisting of both hardware and software, in which the critical functionality is software centric and the hardware is highly reliable and/or redundant, e.g., the C4ISR concept of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance; and (3) space systems, principally satellites. As defined in IEEE/AIAA P1633, Recommended Practice on Software Reliability, a software reliability model specifies the general form of the dependence of the failure process on the principal factors that affect it: fault introduction, fault removal and the operational environment. Reliability assessments in test and operational phases may follow the same Failure Reporting, Analysis and Corrective Action System (FRACAS) procedure, but there are a few differences. During the test phase, software faults are fixed when the corresponding software failures are detected, depending on the severity of the failure effects. As a result of design improvements, reliability growth (e.g., decreasing failures over time) should be observed. However, in the operational phase, correcting a software fault may involve delays in providing the new software release or software patch to customers' sites, so the realization of reliability growth in the customer application may not be immediate. Software reliability modeling is done to: (1) estimate the execution time during test required to meet a specified reliability objective, and (2) identify the expected reliability of the software when the product is released. The three general classes of software reliability prediction models we consider are: (1) exponential non-homogeneous Poisson process (NHPP) models, (2) non-exponential "standard" NHPP models, and (3) Bayesian models. The NHPP-based models are more commonly used because of their simplicity, convenience, and tractability. In spite of their simplicity, they often provide estimates having a good fit to the actual failure data. The other two classes of model are used less often, usually after experimentation has shown that the NHPP-based model predictions do not fit the actual data well. The non-exponential NHPP models, for example, assume that the earlier discovered faults have a greater impact on reducing the failure intensity than those encountered later.
They also assume that there is no upper bound on the total number of failures. These two attributes make these models more likely to produce accurate estimates in environments in which the software is undergoing change in addition to defect repair. Other types of software reliability models, such as the Schneidewind and Shick-Wolverton models, which are NHPP-based, and the Littlewood-Verrall model, a Bayesian model, will also be assessed and compared in the paper. We also examine the following important data collection issues:
• Assuring that failures are counted consistently for each component of the system and for each testing phase. Usually this includes a stipulation that failures are only counted once during testing, even if they occur more than once, to maintain consistency with the assumption that the underlying faults are repaired.
• Assuring that all failures are recorded and properly categorized. The authors' experience indicates that failures are not always recorded. In addition, if software failures are tracked in a problem reporting system that also tracks other types of failures (e.g., hardware, procedural), software failures may be incorrectly classified if the problem closure process does not ensure the failures are correctly labeled.
• Assuring that the dates and times at which the failures were observed are correctly recorded. This is particularly important if the SRGMs being used are based on times between failures.
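As a hedged example of the exponential NHPP class discussed above, the sketch below fits the Goel-Okumoto model, with mean value function m(t) = a(1 - exp(-b t)), to a set of made-up cumulative failure times by maximum likelihood; the failure data, starting values, and SciPy-based optimizer choice are illustrative assumptions, not the paper's data or tooling.

# Illustrative only: fit the exponential NHPP (Goel-Okumoto) SRGM to invented
# cumulative failure times by maximizing the NHPP log-likelihood.

import numpy as np
from scipy.optimize import minimize

failure_times = np.array([5., 12., 20., 31., 45., 62., 84., 110., 150., 200.])
T = 220.0  # total test exposure time (hypothetical)

def neg_log_likelihood(params):
    """Goel-Okumoto: intensity lambda(t) = a*b*exp(-b*t), m(t) = a*(1 - exp(-b*t))."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(np.sum(np.log(a * b) - b * failure_times) - a * (1 - np.exp(-b * T)))

fit = minimize(neg_log_likelihood, x0=[20.0, 0.01], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"expected total faults a = {a_hat:.1f}, fault detection rate b = {b_hat:.4f}")
print(f"predicted faults remaining at time T: {a_hat * np.exp(-b_hat * T):.1f}")

Non-exponential NHPP variants can be fitted in the same way by substituting their intensity functions; Bayesian models such as Littlewood-Verrall require a different inference approach.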
