Energid Technologies is an engineering firm providing robotics, machine vision, and remote control software. Its headquarters are in Cambridge, Massachusetts, with regional offices in New York, New York; Pittsburgh, Pennsylvania; Tucson, Arizona; Austin, Texas; and Chicago, Illinois, and an international office in Bangalore, India. Energid Technologies develops tools for robotic applications in the aerospace, agriculture, transportation, defense, and medical industries. Energid's Actin and Selectin products provide advanced robotics technology in the form of extendable software toolkits. Energid has applied its software to control robots for seafloor oil exploration, nuclear reactor inspection, and citrus harvesting.
Agency: Department of Defense | Branch: Army | Program: SBIR | Phase: Phase I | Award Amount: 150.00K | Year: 2014
Energid Technologies proposes a software architecture and supporting implementation focused on training robots. Currently, robot training for complex tasks often involves writing software, which is impractical in the field. To address this, Energid will leverage its commercial Actin robot control and simulation software to create a new open interface that will allow researchers and developers to share existing software and add algorithms that make robot training effective and broadly accepted. The new training software will include physics-based simulation for use both within control algorithms and for validating training procedures. It will apply a set of control primitives, including positioning and orienting of components and joints, which is both practical and provably complete for robot control. Built around this will be an algorithm development framework for scripting application of the control primitives in complex and powerful ways, and for integrating advanced sensor and planning algorithms for grasping and manipulation. In this work, Energid will leverage its more than 10 years of corporate experience in developing robot software to create an open system with the potential for transforming the use of robots.
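The scripting framework described above can be illustrated with a minimal sketch: a taught behavior represented as an ordered sequence of control primitives (positioning and orienting of components). All class and method names here are invented for illustration and do not reflect Energid's actual Actin API.

```python
# Hypothetical sketch: a taught robot behavior as a sequence of control
# primitives. Names are illustrative, not Energid's Actin API.
from typing import List, Tuple

class Primitive:
    """Base class for a control primitive acting on a named robot component."""
    def __init__(self, component: str):
        self.component = component

class PositionComponent(Primitive):
    """Primitive that moves a component to a Cartesian target position."""
    def __init__(self, component: str, target: Tuple[float, float, float]):
        super().__init__(component)
        self.target = target

class OrientComponent(Primitive):
    """Primitive that orients a component to roll/pitch/yaw angles (radians)."""
    def __init__(self, component: str, rpy: Tuple[float, float, float]):
        super().__init__(component)
        self.rpy = rpy

class TrainingScript:
    """A taught behavior is an ordered, composable sequence of primitives."""
    def __init__(self):
        self.steps: List[Primitive] = []

    def add(self, step: Primitive) -> "TrainingScript":
        self.steps.append(step)
        return self

    def describe(self) -> List[str]:
        return [f"{type(s).__name__}({s.component})" for s in self.steps]

# Compose a simple "reach and align" behavior without writing control code.
script = (TrainingScript()
          .add(PositionComponent("gripper", (0.4, 0.0, 0.2)))
          .add(OrientComponent("gripper", (0.0, 1.57, 0.0))))
print(script.describe())  # ['PositionComponent(gripper)', 'OrientComponent(gripper)']
```

Because each step is a plain object rather than hand-written control code, a sequence like this could be recorded, validated in simulation, and replayed, which is the essence of the training workflow the abstract describes.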
Agency: National Aeronautics and Space Administration | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 125.00K | Year: 2015
Energid Technologies proposes a game-theory-inspired simulation tool for testing and validating robotic lunar and planetary missions. It applies Monte Carlo simulation within a multi-objective optimization environment tailored to the needs of NASA. Stochastic optimization is combined with randomized simulation to maximize multiple statistical measures of performance and calculate the parameters giving the extreme scenarios. The tool works with continuous parameters, such as mass and terrain properties, and with discrete parameters, such as lighting selection, gearing selection, and navigation parameters. It includes accurate modeling of sensors and terrain interaction using calculations performed on Graphical Processing Units (GPUs). The technique proposed is computationally expensive but highly parallelizable, and the approach includes a design for distributing the computational burden over multiple computers, GPUs, clusters, and cloud configurations. The proposed combination of fast algorithms and game-theory-inspired statistical optimization will provide a powerful tool for NASA's use in planning missions.
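The core loop described above, randomized simulation nested inside a stochastic search over mixed continuous and discrete parameters, can be sketched as follows. The toy simulation model and all parameter names are invented for illustration; a real run would call the physics-based, GPU-accelerated simulator.

```python
# Sketch of stochastic optimization over Monte Carlo simulation with mixed
# continuous/discrete parameters. The "simulate" model is a toy stand-in.
import random

def simulate(mass, friction, gearing, seed):
    """Toy stand-in for one randomized physics trial: returns a score."""
    rng = random.Random(seed)
    noise = rng.gauss(0.0, 0.05)
    return friction / mass * {"low": 0.8, "high": 1.2}[gearing] + noise

def monte_carlo_score(params, trials=200):
    """Estimate a statistical measure of performance (here the worst
    observed trial) by Monte Carlo over randomized simulation seeds."""
    scores = [simulate(params["mass"], params["friction"], params["gearing"], s)
              for s in range(trials)]
    return min(scores)  # pessimistic measure: seek the extreme scenario

def random_search(iterations=300, seed=1):
    """Stochastic outer search: sample parameters, keep the best worst-case."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        params = {
            "mass": rng.uniform(50.0, 200.0),        # continuous
            "friction": rng.uniform(0.2, 1.0),       # continuous
            "gearing": rng.choice(["low", "high"]),  # discrete
        }
        score = monte_carlo_score(params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

score, params = random_search()
print(round(score, 3), params["gearing"])
```

Each inner Monte Carlo batch is independent of the others, which is what makes the approach, though expensive, straightforward to distribute across machines, GPUs, or cloud nodes as the abstract notes.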
Agency: National Aeronautics and Space Administration | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 124.99K | Year: 2015
Energid Technologies proposes a lightweight kinematically redundant robot arm and software toolkit to extend the capabilities of Assistive Free-Flyers (AFFs). The arm will leverage Energid's existing design, actuated with modified smart servos, to reduce cost and weight and improve performance. A unique feature of the arm is a universal end-of-arm tooling interface that provides extensibility through standard mechanical, data, and power connectors. This interface will allow the arm to host a variety of sensors and tools for broad application. Energid's Actin software toolkit will be extended for control and simulation of AFFs with attached manipulators. Control techniques will be developed to enable novel modes of acrobatic maneuvering and whole-body manipulation. This control and simulation capability will apply to one or multiple arms attached to an AFF. The new control modes will enable use of the arms in perching and momentum-conserving acrobatics, where the arm and thrusters are simultaneously actuated to reach a goal. The Actin simulation engine, which supports articulated dynamics, contact dynamics, and sensors, will be extended to include aerodynamics and thruster models for AFF design validation, mission planning, and testing. The new arm and toolkit will seamlessly transition between simulating terrestrial mobile robots and fielded AFFs in microgravity.
Agency: Department of Defense | Branch: Office of the Secretary of Defense | Program: SBIR | Phase: Phase II | Award Amount: 1000.00K | Year: 2015
It is not likely that, in the near future, an autonomous system can be programmed with all the information it requires to perform every mission or every variation of every contingency. One way to address this issue is to provide a user-friendly means to teach or program a robot to add to its autonomous capability. This proposal focuses on the physical aspects of robot performance: manipulation, mobility, and the combination of the two, whole-body manipulation.1 Optimally, one would teach a robot as one would teach a human how to perform a physical task. Since this would be an initial entry into intuitive, user-friendly methods for teaching a robot to perform complex physical acts, it is advisable to start with a virtual representation or simulation that lays the foundation for the interface and programming syntax that would describe the physical actions of the robot.2 Programming a robot to use a new tool provides a good example where teaching will prove beneficial. There are issues with teaching the proper grasp of the tool and the proper pose that would allow the robot to use the tool effectively and apply forces that would not damage the object(s) the tool is acting upon, the tool, or the robot. Thus, before the issues of the human interface are addressed, the basic issues of control for a dynamic multi-body system must be solved in a manner that can be represented in a form that can be programmed into a robot.2,3 Focus should be placed first on the development of a software architecture that encompasses and integrates task and motion planning for whole-body manipulation. The software should have the ability to program a robot to perform physical tasks through a virtual environment capable of modeling dynamics and physical contact/interaction, without the need for a user such as a Soldier to write a C-level program.
However, the programming constructs or subroutines created by this architecture to operate the robot must be accessible and usable in common programming languages such as C/C++ and Python as open-architecture libraries and subroutines. The newly learned behavior must be of a structure that the robot can incorporate into its existing control programming and implement autonomously. It is expected that, in order to implement the virtual environment model of the robot, which would include descriptions of its physical configurations, actuators, and sensors, highly trained and educated personnel may be needed. The virtual model should be of a fidelity that allows control for a dynamic multi-body system to be developed. However, once the detailed model is implemented in the virtual environment, the software should provide an interface that allows a user such as a Soldier to interact with and program a robot through the virtual environment to perform a physical task such as grasping, or repetitive tasks such as sweeping for mines and IEDs and trenching for wires. A secondary focus should be to develop an open software architecture that can be built upon and evolve through time, allowing DoD, universities, and private industry to collaborate as a software development community on this problem. Open-architecture efforts such as ROS should serve as an example. There should be interfaces for well-established speech recognition and vision libraries such as OpenCV. It is expected that the architecture will allow for future advancements to be added. For example, there are different methodologies for teaching how to recognize a physical object.4 The open software architecture should have the proper interface to allow object and feature recognition packages/algorithms to be added and updated.
Agency: Department of Defense | Branch: Office of the Secretary of Defense | Program: SBIR | Phase: Phase II | Award Amount: 999.93K | Year: 2015
Recent advancements in sensor simulation tools have improved our ability to model radar, lidar, camera, and GPS with software/hardware in the loop. Of course, our ability to model the physics of heavy trucks is quite mature as well. To address the challenge of developing the STE, we will provide our autonomy algorithms as Government Furnished Equipment (GFE). The focus of this topic is: 1) to build an environment that mirrors actual test data to provide a departure point for Monte Carlo simulations; 2) to research the failure modes of autonomy algorithms within the capabilities of current sensor models; and 3) to simulate the corner cases that would exercise these failure modes. This topic is not focused on improving physics-based simulation of heavy trucks or building better sensor models. Neither do we seek to develop new algorithms for autonomous behavior, but rather to leverage existing GFE autonomy algorithms to study the open research question of how we can test these algorithms in simulation, and certify that they are safe to the fullest extent possible within current simulation environments.
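The third step above, Monte Carlo sampling of scenario parameters to surface corner cases that trip an autonomy failure mode, can be sketched as follows. The autonomy model is a stub with a failure region built in by construction; a real run would drive the GFE autonomy algorithms through the sensor and vehicle simulation.

```python
# Sketch of Monte Carlo corner-case search for autonomy failure modes.
# The autonomy stack is a toy stub; parameter names are invented.
import random

def autonomy_stub(visibility, obstacle_speed):
    """Stand-in for the GFE autonomy stack: returns True when the planner
    succeeds. Fails (by construction) in low-visibility, fast-obstacle
    scenarios -- the corner case the search should find."""
    return not (visibility < 0.3 and obstacle_speed > 15.0)

def find_corner_cases(trials=5000, seed=7):
    """Sample randomized scenarios and collect those that cause failure."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        scenario = {
            "visibility": rng.uniform(0.0, 1.0),       # fraction of nominal
            "obstacle_speed": rng.uniform(0.0, 25.0),  # m/s
        }
        if not autonomy_stub(**scenario):
            failures.append(scenario)
    return failures

failures = find_corner_cases()
print(len(failures), "failing scenarios found")
```

The collected failing scenarios delimit the failure region empirically; anchoring the scenario distributions to actual test data, as step 1 describes, is what makes the resulting failure statistics meaningful for certification arguments.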