Canberra, Australia

The Australian Defence Force Academy (ADFA) is a tri-service military academy that provides military and tertiary academic education for junior officers of the Australian Defence Force in the Royal Australian Navy, Australian Army and Royal Australian Air Force. Tertiary education is provided by the University of New South Wales Canberra campus, which is the awarding body for ADFA qualifications. Apart from educating future leaders of the Australian Defence Force, the UNSW campus also provides postgraduate programs and short courses to both Department of Defence personnel and the general public.

The stated purpose of ADFA is "to serve Australia by providing the Australian Defence Force with tertiary graduates who have the attributes, intellect and skills required of an officer."

ADFA is located in the suburb of Campbell, Canberra, Australian Capital Territory, near the Australian Government district of Russell. It is situated next to Mount Pleasant, which gives some parts of ADFA a view over the rest of Canberra. ADFA is also adjacent to the Australian Army's military academy, the Royal Military College, Duntroon.

The junior officers who attend the Australian Defence Force Academy hold the rank of Midshipman in the Royal Australian Navy and Officer Cadet in the Australian Army and the Royal Australian Air Force. The Academy is currently commanded by Air Commodore Alan Clements. (Wikipedia)

Petersen I.R., Australian Defence Force Academy
Systems and Control Letters | Year: 2012

This paper presents a method for approximating a class of complex transfer function matrices corresponding to physically realizable complex linear quantum systems. The class of linear quantum systems under consideration includes interconnections of passive optical components such as cavities, beam-splitters, phase-shifters and interferometers. This approximation method builds on a previous result for cascade realization and gives good approximations at low frequencies. © 2011 Elsevier B.V. All rights reserved.

This paper presents a new approach to constructive output feedback robust nonlinear guaranteed cost controller design. The approach involves a class of controllers that include copies of the slope-bounded nonlinearities occurring in the plant. Dynamic multipliers are introduced to exploit these repeated nonlinearities. The linear part of the controller is synthesized using minimax LQG control theory. © 2010 Elsevier Ltd. All rights reserved.

Petersen I.R., Australian Defence Force Academy
IEEE Transactions on Automatic Control | Year: 2013

This technical note considers the use of singular perturbation approximations for a class of linear quantum systems arising in the area of linear quantum optics. The technical note presents results on the physical realizability properties of the approximate system arising from singular perturbation model reduction. © 2012 IEEE.
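For readers unfamiliar with the underlying technique, classical singular perturbation model reduction replaces the fast dynamics of a state-space model with their quasi-steady-state solution; the reduced model then matches the full model exactly at DC. The sketch below illustrates the standard construction only; the matrices and the slow/fast partition are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def singular_perturbation_reduce(A11, A12, A21, A22, B1, B2, C1, C2, D):
    """Classical singular perturbation model reduction.

    Full model (slow state x1, fast state x2, small eps > 0):
        dx1/dt       = A11 x1 + A12 x2 + B1 u
        eps * dx2/dt = A21 x1 + A22 x2 + B2 u
    Setting eps = 0 gives x2 = -A22^{-1}(A21 x1 + B2 u), which is
    substituted back to eliminate the fast state.
    """
    A22inv = np.linalg.inv(A22)
    Ar = A11 - A12 @ A22inv @ A21
    Br = B1 - A12 @ A22inv @ B2
    Cr = C1 - C2 @ A22inv @ A21
    Dr = D - C2 @ A22inv @ B2
    return Ar, Br, Cr, Dr

# Toy 2-state example: one slow state, one fast state.
A11, A12 = np.array([[-1.0]]), np.array([[1.0]])
A21, A22 = np.array([[0.5]]), np.array([[-2.0]])
B1, B2 = np.array([[1.0]]), np.array([[0.0]])
C1, C2 = np.array([[1.0]]), np.array([[0.0]])
D = np.array([[0.0]])

Ar, Br, Cr, Dr = singular_perturbation_reduce(A11, A12, A21, A22,
                                              B1, B2, C1, C2, D)

# The DC gains of the full and reduced models agree, a standard
# property of singular perturbation approximations.
A_full = np.block([[A11, A12], [A21, A22]])
B_full, C_full = np.vstack([B1, B2]), np.hstack([C1, C2])
dc_full = (D - C_full @ np.linalg.inv(A_full) @ B_full).item()
dc_red = (Dr - Cr @ np.linalg.inv(Ar) @ Br).item()
print(dc_full, dc_red)  # both approximately 1.3333 (= 4/3)
```

The physical-realizability question studied in the technical note is precisely whether a reduction of this kind, applied to a quantum system, still corresponds to a legitimate quantum system; the sketch above shows only the classical reduction step.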

Merrick K.E., Australian Defence Force Academy
IEEE Transactions on Autonomous Mental Development | Year: 2012

Incorporating intrinsic motivation with reinforcement learning can permit agents to independently choose which skills they will develop, or to change their focus of attention to learn different skills at different times. This implies an autonomous developmental process for skills in which a skill-acquisition goal is first identified, then a skill is learned to solve the goal. The learned skill may then be stored, reused, temporarily ignored or even permanently erased. This paper formalizes the developmental process for skills by proposing a goal-lifecycle using the option framework for motivated reinforcement learning agents. The paper shows how the goal-lifecycle can be used as a basis for designing motivational state-spaces that permit agents to reason introspectively and autonomously about when to learn skills to solve goals, when to activate skills, when to suspend activation of skills or when to delete skills. An algorithm is presented that simultaneously learns: 1) an introspective policy mapping motivational states to decisions that change the agent's motivational state, and 2) multiple option policies mapping sensed states and actions to achieve various domain-specific goals. Two variations of agents using this model are compared to motivated reinforcement learning agents without introspection for controlling non-player characters in a computer game scenario. Results show that agents using introspection can focus their attention on learning more complex skills than agents without introspection. In addition, they can learn these skills more effectively. © 2009-2011 IEEE.
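The goal-lifecycle idea can be pictured as a small state machine attached to each goal: a goal is identified, a skill is learned for it, and the skill is then activated, suspended or deleted depending on how well it performs. The toy sketch below is an illustrative assumption only; the state names, thresholds and competence update are not the paper's exact formalism:

```python
# Toy sketch of a goal lifecycle for a motivated RL agent.
# States, thresholds and the competence update are illustrative only.
IDENTIFIED, LEARNING, ACTIVE, SUSPENDED, DELETED = range(5)

class Goal:
    def __init__(self, name):
        self.name = name
        self.state = IDENTIFIED
        self.competence = 0.0  # running estimate of option-policy success

    def step(self, reward_signal):
        """One introspective lifecycle transition per learning step."""
        if self.state == IDENTIFIED:
            self.state = LEARNING        # commit to acquiring the skill
            return
        # Exponential moving average of recent success (0..1).
        self.competence = 0.9 * self.competence + 0.1 * reward_signal
        if self.state == LEARNING and self.competence > 0.8:
            self.state = ACTIVE          # skill learned: store and reuse
        elif self.state == ACTIVE and self.competence < 0.2:
            self.state = SUSPENDED       # temporarily ignore the skill
        elif self.state == SUSPENDED and self.competence < 0.05:
            self.state = DELETED         # permanently erase the skill

g = Goal("reach-resource")
for _ in range(25):
    g.step(1.0)          # consistently successful practice
print(g.state == ACTIVE)  # True
```

In the paper's terms, the introspective policy decides when to take these lifecycle transitions, while separate option policies learn how to achieve each goal; the sketch collapses the former into fixed thresholds for brevity.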

Harris K.R., Australian Defence Force Academy
Journal of Chemical Physics | Year: 2010

Previously [K. R. Harris, J. Chem. Phys. 131, 054503 (2009)] it was shown that both real and model liquids fit the fractional form of the Stokes-Einstein relation [fractional Stokes-Einstein (FSE)] over extended ranges of temperature and density. For example, the self-diffusion coefficient and viscosity of the Lennard-Jones fluid fit the relation (D/T) = (1/η)^t with t = (0.921 ± 0.003), and a range of molecular and ionic liquids for which high pressure data are available behave similarly, with t values between 0.79 and 1. At atmospheric pressure, normal and heavy water were also found to fit FSE from 238 to 363 K and from 242 to 328 K, respectively, but with distinct transitions in the supercooled region at about 258 and 265 K, respectively, from t = 0.94 (high temperature) to 0.67 (low temperature). Here the recent self-diffusion data of Yoshida [J. Chem. Phys. 129, 214501 (2008)] for the saturation line are used to extend the high temperature fit to FSE to 623 K for both isotopomers. The FSE transition temperature in bulk water can be contrasted with much lower values reported in the literature for confined water. © 2010 American Institute of Physics.
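The fractional Stokes-Einstein relation (D/T) = C(1/η)^t is linear in log-log coordinates, so the exponent t can be recovered by ordinary least squares on ln(D/T) versus ln(1/η). A minimal sketch with synthetic, noise-free data (the viscosity range and constants below are illustrative, not the paper's measurements):

```python
import numpy as np

# Fractional Stokes-Einstein: (D/T) = C * (1/eta)^t, equivalently
# ln(D/T) = t * ln(1/eta) + ln(C), a straight line in log-log form.
t_true, C = 0.92, 1e-9
eta = np.logspace(-4.0, -2.0, 50)        # synthetic viscosities, Pa s
D_over_T = C * (1.0 / eta) ** t_true     # synthetic, noise-free data

# Slope of the least-squares line is the FSE exponent t.
t_fit, lnC_fit = np.polyfit(np.log(1.0 / eta), np.log(D_over_T), 1)
print(round(t_fit, 3))  # -> 0.92
```

With real data, a break in the fitted slope between high- and low-temperature subsets of the data is what signals an FSE transition of the kind the paper reports for supercooled water.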
