
Postdoc Opportunities for the Intelligent Systems Division

Gaithersburg, Maryland

U.S. citizenship is required.

Computer Vision for Intelligent Mobile Systems 50.82.31.B1589

Shneier, Michael O. (301) 975-3421
michael.shneier@nist.gov

This project focuses on developing perception systems that enable intelligent mobile robots to navigate in environments ranging from urban buildings and manufacturing facilities to outdoor, off-road missions. Our research goal is to use vision and other sensors to support autonomous path planning, obstacle negotiation, road following, and tracking of moving objects. We use a mixture of Ladar, stereo, and color vision linked with the vehicle's Inertial Navigation System and Global Positioning System sensors. The goal is to interpret the world around the vehicle in real time and to use the information to build a model suitable for planning and controlling future motions. We use the model to predict what should be seen in future images, and we compute windows of attention over interesting or important features. This facilitates tracking and focuses resources on the most critical regions. Multiple features are used to improve confidence in the detection and classification of objects in the images, and feedback between the model and the sensors dynamically updates both the model confidences and the sensors' focus of attention.
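
As a minimal sketch of this model-driven focus-of-attention loop (the class fields and helper functions here are illustrative assumptions, not NIST code):

    # Sketch: predict each modeled object's image location, search only a
    # window around it, and feed the result back into the model confidence.
    # project_to_image() and detector() are assumed helpers.

    def update_attention(model_entries, image, detector, gain=0.2):
        for entry in model_entries:
            u, v = project_to_image(entry.position)   # predicted pixel location
            r = entry.search_radius                   # grows with uncertainty
            window = image[max(0, v - r):v + r, max(0, u - r):u + r]

            found, score = detector(window, entry.object_class)
            if found:
                # Agreement between prediction and sensing raises confidence
                # and tightens the window for the next frame.
                entry.confidence += gain * (score - entry.confidence)
                entry.search_radius = max(8, int(r * 0.9))
            else:
                # A miss lowers confidence and widens the search region.
                entry.confidence *= 1.0 - gain
                entry.search_radius = int(r * 1.25)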

Equipment for this project includes an HMMWV instrumented for autonomous driving, area- and line-scan Ladars, cameras, a pan-tilt head, on-board real-time computing capabilities, and workstations for algorithm development.

Integration of Multiple Sensors for Intelligent Systems 50.82.31.B1590

Shneier, Michael O. (301) 975-3421
michael.shneier@nist.gov

Most sensing approaches perform poorly in unstructured or unknown environments because of inadequacies in gathering and processing sensor data. By using multiple sensors with different capabilities, we hope to capitalize on the strengths of each sensor to improve performance. We also wish to develop predictions based on previously sensed data to focus each sensor's attention on interesting regions, further enhancing performance. The application area is autonomous mobile robots for both indoor and outdoor use. Research issues include multisensor fusion, multisensor cooperation, prediction, and integration into intelligent control and planning systems.

Equipment for this project includes an HMMWV instrumented for autonomous driving, area- and line-scan Ladars, cameras, a pan-tilt head, on-board real-time computing capabilities, and workstations for algorithm development.

Computer Control Technology for Intelligent Systems 50.82.31.B1591

Albus, James S. (301) 975-3418
albus@cme.nist.gov

Hierarchically structured, sensory-interactive, real-time control systems enable robots to operate effectively in the partially unconstrained environments of manufacturing and construction.

This research is based on a theoretical model containing three parallel, interconnected hierarchies of computing modules: (1) a control hierarchy in which high-level commands are decomposed into low-level actions through computations at the subordinate levels of the hierarchy; (2) a sensory processing hierarchy in which processing modules at each level extract from the sensory data stream the information necessary for control decisions at that level; and (3) a world-model hierarchy that generates expectations of the sensory data that should be observed at each level of the processing hierarchy, at specific times, based on the actions being performed. A minimal code skeleton of one such level appears below.
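
The following skeleton sketches the three-module structure of one level; all names are illustrative assumptions, not NIST's implementation:

    # One level of the three parallel hierarchies: sensory processing,
    # world modeling, and control decomposition, with subcommands flowing
    # down to a subordinate level. extract_features(), predict_observation(),
    # and decompose() are assumed helpers.

    class LevelModule:
        def __init__(self, subordinate=None):
            self.subordinate = subordinate        # next level down, or None

        def sensory_processing(self, data):
            # Extract only the information needed for control at this level.
            return extract_features(data)

        def world_model(self, command):
            # Predict the sensory data this level should observe, given
            # the action it has committed to.
            return predict_observation(command)

        def control(self, command, data):
            observed = self.sensory_processing(data)
            expected = self.world_model(command)
            # Discrepancies between expected and observed drive replanning;
            # the resulting subcommands go to the subordinate level.
            for sub in decompose(command, observed, expected):
                if self.subordinate:
                    self.subordinate.control(sub, data)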

Research topics include architectures and programming methods, control theory, task-decomposition techniques, knowledge-representation techniques, and many aspects of pattern recognition and sensory-information processing.

Self-Calibration of Intelligent Robot Systems 50.82.31.B1594

Shneier, Michael O. (301) 975-3421
michael.shneier@nist.gov

A major problem faced by a robot system is the establishment and maintenance of calibration among its sensory and motor systems. Wear, part replacement, and re-assembly all necessitate recalibration. Even small angular alignment errors can cause large sensing errors. Technical challenges involve calibrating the alignment of egomotion sensors with actuation systems, cross-calibrating multiple visual sensors (e.g., video and range sensors), aligning visual and egomotion and position sensors on a vehicle, and calibrating cameras, probes, and scales on a coordinate measuring machine. Applications include automated site modeling, autonomous vehicle control, and inspection of machined parts.
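
One standard building block for several of these cross-calibration tasks is estimating the rigid transform between matched 3D points observed by two sensors. The sketch below uses the well-known Kabsch/Umeyama least-squares solution; it is offered as an illustration, not necessarily the method used in this project:

    import numpy as np

    def rigid_align(src, dst):
        """Least-squares R, t such that dst ~ R @ src + t, from matched
        (N, 3) point sets seen by the two sensors being cross-calibrated."""
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)                 # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                            # proper rotation, det = +1
        t = dc - R @ sc
        return R, t

The recovered rotation directly exposes the small angular alignment errors that, as noted above, can produce large sensing errors at range.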

Available resources include an autonomous vehicle equipped with video cameras, range cameras, and inertial and GPS sensors, and a coordinate measuring machine augmented with a pan-tilt-zoom camera and state-of-the-art probes.

Sensors for Mobile Robot Navigation 50.82.31.B4023

Albus, James S. (301) 975-3418
albus@cme.nist.gov

Most research in mobile robot navigation has focused on sensor data processing, sensor fusion, and path planning. Relatively little work has gone into the physics and engineering of the sensors themselves. Opportunities exist to improve the speed and safety of mobile robots in both indoor and outdoor environments through the use of new sensor technology. Modalities of interest include lidars, radars, microwave sensors including MIR, and low-cost imaging sensors. Other modalities may be proposed.

Measures of Performance for Mobile Intelligent Systems 50.82.31.B1596

Hong, Tsai-Hong (301) 975-3444
hongt@cme.nist.gov

The Intelligent Systems Division is developing advanced performance metrics and measurement technology for evaluating autonomous intelligent systems and sensors. Practical and commercial applications of autonomous vehicles and other robots will depend on safety and performance standards that ensure the systems are safe, reliable, and dependable. Applications under investigation include advanced intelligent automation for manufacturing, unmanned ground vehicles primarily for military applications, and in-vehicle crash avoidance systems for private vehicles. These studies involve the development and evaluation of vision and other sensor systems for accurate measurement of the dynamic performance of intelligent systems. This is a much-needed avenue of research with broad applications. Candidates should have a strong background and demonstrated research experience in image analysis, computer vision, or a robotics subfield, and expertise in a relevant subfield such as motion analysis, visual servoing, or autonomous vehicles. The NIST Intelligent Systems Division is a long-standing, successful research unit with substantial research facilities, including computer systems for real-time vision processing, vision-based measurement and estimation algorithms and software for road and obstacle detection and tracking, advanced sensor technology, a state-of-the-art dynamic laser tracker facility, and autonomous testbed vehicles and robots.

Sensor Fusion for Intelligent Systems 50.82.31.B3907

Hong, Tsai-Hong (301) 975-3444
hongt@cme.nist.gov

Our research focuses on the representation and understanding of the environment based on fusion of information from different sensors, including cameras, FLIRs, flash Ladar, and scanning Ladar. Many types of human intelligence (e.g., recognition, interpretation, reasoning, planning, problem solving, and language understanding) require information from multiple sources to be integrated into an internal world model. Our research goal is to effectively represent the fused information and to select features that capture elements of human intelligence. We also would like to conduct research to improve measurement-based decision-making, for example by reducing the data from sensors and evaluating the effectiveness of our understanding of the environment based on the fused information (as compared to human performance).
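
A common minimal scheme for fusing independent sensor evidence about the environment is a log-odds occupancy grid; the sketch below is offered as an illustration (not necessarily our representation) of how evidence from different sensors simply adds in log-odds space:

    import numpy as np

    def fuse_log_odds(grid, cells, p_hit):
        """Fold one sensor's evidence into a log-odds occupancy grid.
        cells: indices of the observed cells; p_hit: that sensor's
        probability that an observed cell is occupied."""
        grid[cells] += np.log(p_hit / (1.0 - p_hit))
        return grid

    # grid = np.zeros((200, 200))                      # log-odds 0 = unknown
    # grid = fuse_log_odds(grid, ladar_cells, 0.90)    # confident range returns
    # grid = fuse_log_odds(grid, camera_cells, 0.70)   # weaker appearance cue
    # p_occupied = 1.0 / (1.0 + np.exp(-grid))         # back to probability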

Candidates should have a strong background and demonstrated research experience in computer vision, image analysis, and active vision, and expertise in a relevant subfield such as robotics, visual servoing, or autonomous robots. The NIST Intelligent Systems Division is a long-standing, successful research unit with substantial research facilities, including computer systems for real-time vision processing, vision-based measurement and estimation algorithms and software for road and obstacle detection and tracking, advanced sensor technology, a state-of-the-art dynamic laser tracker facility, and autonomous testbed vehicles and robots.

Development of Metrology Tools, Sensors, Models, Calibration Techniques, and Controllers for Meso-Micro-Nano-Robots and Meso-Micro-Nano-Devices 50.82.31.B4024

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

A new emerging field in engineering is that of micro-nano-robots and micro-nano-devices (http://www.isd.mel.nist.gov/meso_micro/). Potential applications include the manufacture of optical communication devices, CD and DVD drives, flat panel displays, medical devices, scientific instruments, spacecraft components, deformable controlled-shape structures, electronics and computer components, and consumer products. Building very small systems requires a better understanding of the underlying science, as well as investigating and measuring the physical phenomena that contribute to or impede the realization of micro-nano-dynamical systems.

Research opportunities exist in the development of metrology tools, sensors, models, calibration techniques, and controllers for meso-micro-nano-robots and meso-micro-nano-devices.

Development of Intelligent Material Delivery and Handling Tools for the Assembly of Meso-Micro-Nano Devices 50.82.31.B3908

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

A new emerging field in engineering is that of meso-micro-nano-robots and meso-micro-nano-devices, with dimensions in the range of a few nanometers to a few mm (http://www.isd.mel.nist.gov/meso_micro/). There is a need for the development of intelligent and flexible material delivery (part feeder) and material handling (gripper) tools for these devices. These tools could be equipped with sensors (e.g., touch, force) and design features that allow them to handle the gravity, electrostatic, van der Waals, and surface tension forces that control the behavior of objects of meso-micro-nano size. Conventional design techniques, MEMS techniques, or a combination of the two could be used for the development of these tools.

Develop Theoretical Principles for Meso-Micro-Nano Devices Assembly and Design for Assembly 50.82.31.B3909

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

A new emerging field in engineering is that of meso-micro-nano-robots and meso-micro-nano-devices, with dimensions in the range of a few nanometers to a few mm (http://www.isd.mel.nist.gov/meso_micro/). Principles for assembly and design-for-assembly exist for the macro world; similar principles are necessary for the meso-micro-nano world. Objects with dimensions in the range of a few nanometers to a few mm lie in a transition range where forces of gravity and inertia can become insignificant compared with electrostatic, van der Waals, and surface tension forces; a simple scaling comparison appears below. New part grasping, mating, and bonding techniques have to be developed. This work could involve theoretical modeling and experimental testing.
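
The transition can be seen from scaling alone: weight shrinks with the cube of an object's size, while capillary (surface tension) force shrinks only linearly, so adhesion dominates gravity at small scales. The sketch below uses illustrative round numbers for a water-wetted steel sphere:

    import numpy as np

    rho, gamma, g = 7800.0, 0.072, 9.81   # steel density, water surface tension, gravity

    for r in (1e-2, 1e-3, 1e-4, 1e-5):    # sphere radius: 10 mm down to 10 um
        weight = rho * g * (4.0 / 3.0) * np.pi * r**3     # scales as r^3
        capillary = 2.0 * np.pi * r * gamma               # scales as r
        print(f"r = {r:.0e} m   weight/capillary = {weight / capillary:.1e}")

    # At r = 10 um the weight is roughly 10^-4 of the capillary force, so a
    # gripper must defeat adhesion, not gravity, to release a part.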

Develop Packaging Technologies for Meso-Micro-Nano Devices 50.82.31.B3910

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

A new emerging field in engineering is that of meso-micro-nano-robots and meso-micro-nano-devices, with dimensions in the range of a few nanometers to a few mm (http://www.isd.mel.nist.gov/meso_micro/). The reliable packaging of these devices, and the transmission of power and data (such as commands and sensor feedback signals) to their parts, is a serious problem. The objective of this work would be to develop scale-up interfaces to the macro world, with remote and wireless power delivery, command and feedback signals, interfaces, and communication protocols (http://www.isd.mel.nist.gov/meso_micro/Manu_14_30Mar06_Top_Down_Manu-Final.pdf). This work could involve theoretical modeling and experimental testing.

Develop Stereo Microscopic Vision Technology for Meso-Micro Devices 50.82.31.B3911

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

A new emerging field in engineering is that of meso-micro-nano-robots and meso-micro-nano-devices, with dimensions in the range of a few nanometers to a few mm (http://www.isd.mel.nist.gov/meso_micro/). Stereo microscopic vision sensors are very important for the inspection, manipulation, machining, and assembly of these devices. Easy-to-use calibration techniques need to be developed, along with intelligent algorithms for the identification of very small, highly specular parts at various depths of field and magnifications. This work could involve theoretical modeling and experimental testing.

Laser Beam Micro Arrays for Nano-Manufacturing Technology 50.82.31.B5199

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

The new emerging field of nanotechnology and nano-manufacturing is giving scientists and engineers new ways of building and measuring things at the nanoscale (dimensions smaller than 1 micrometer). Building precise structures at that scale can be accomplished by self-replication or by moving nano-particles until they assume the desired position and orientation. Neither of those two operations is easy to start or to control; here we concern ourselves with the trapping and manipulation of nano-particles. Well-focused collimated laser beams can trap and manipulate nano-particles in a non-contact, non-intrusive way. For this technique to build economically competitive products, it must be able to manipulate a large number of these particles in an easily automated fashion. We are experimenting with the use of laser beam micro arrays, generated by arrays of micro-mirrors or micro-positioners. The direction of each laser beam can be controlled independently of the others, multiplying the effectiveness and controllability of our fabrication equipment. The objectives of this work could be to explore the following: (1) mathematical models and calibration algorithms for micro-mirror array or micro-positioner array scanners; (2) control interfaces to dozens of these micro-scanners that can be expanded to hundreds or thousands of devices; (3) mathematical models that can describe the interaction of multiple beams with nano-particles; and (4) expansion of this technology to arrays of ultrasound or microwave beams.
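
As a small worked example related to item (1), the steering geometry of a single micro-mirror follows from the law of reflection, under which the reflected beam deflects by twice the mirror tilt. The sketch below assumes a nominally normal-incidence, small-signal setup and is illustrative, not a calibrated model:

    import numpy as np

    def mirror_tilts(target_x, target_y, standoff_z):
        """Small-signal tilt commands (radians, about two axes) that steer
        a normally incident beam to (target_x, target_y) on a work plane
        a distance standoff_z from the mirror."""
        tilt_x = 0.5 * np.arctan2(target_x, standoff_z)  # reflection doubles the tilt
        tilt_y = 0.5 * np.arctan2(target_y, standoff_z)
        return tilt_x, tilt_y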

Distributed Aperture and Micro Array Scanners Sensor 50.82.31.B6232

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

One of the most important problems of mobile robots is their inability to develop accurate 3D terrain maps in real time while they are moving. We have proposed a new sensor design concept for the collection of terrain data, which we call "distributed aperture and micro array scanners" (http://www.isd.mel.nist.gov/meso_micro/). The concept proposes the use of distributed MEMS arrays of micro scanners with narrow and wide scanning angles, distributed on the outer surface of the vehicle. Arrays of sensor receivers will receive the laser micro-scanner beams (beamlets) after they are reflected by terrain objects, and synthesize an accurate 3D image of the terrain. This concept has the potential to increase the depth accuracy of the terrain mapping sensor and to decrease its cost and size. The objectives of this work could be to explore the following subjects: (1) mathematical modeling and simulation of this sensor concept; (2) controller design and fabrication of experimental prototypes; (3) development of calibration and performance testing techniques; and (4) design optimization based on various performance goals.
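
As a toy illustration of the synthesis step, each beamlet return can be turned into a terrain point from its emitter pose and time of flight. This is a sketch under simple direct time-of-flight assumptions, not the proposed sensor's actual processing:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def beamlet_to_point(origin, direction, round_trip_s):
        """Map one beamlet return to a 3D terrain point.
        origin: emitter position on the vehicle skin; direction: unit
        vector of the steered beamlet; round_trip_s: measured round-trip
        time of flight in seconds."""
        rng = 0.5 * C * round_trip_s                 # one-way range
        return np.asarray(origin) + rng * np.asarray(direction)

    # Accumulating points from many beamlets across the distributed
    # apertures yields the synthesized 3D terrain image.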

Environment Mapping 50.82.31.B5200

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

Hazardous environmental conditions, like smoke, dust, and chemicals, may obstruct the vision, hearing, and touch senses of emergency response personnel. This can diminish their effectiveness and put their lives at risk. There is a need for small portable or wearable devices that can replace the information provided by these senses, one of the most important being a three-dimensional visualization of the immediate surrounding environment. We are developing small-scale simulators of hazardous environmental conditions, which will be used to measure the performance of various environment-mapping technologies. The objectives of this work could be to explore the following: (1) effective emergency personnel interfaces (head-up display, audio, skin touch); (2) effective environment-mapping workspace, sensor update speed, accuracy, and resolution; (3) development of advanced detection technologies; (4) the use of meso-micro-nanotechnologies to improve performance and to reduce the size and cost of these sensors; and (5) development of performance and calibration tests and standard artifacts.

Macro-Meso-Micro Robots for Extremely Hazardous Environments

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

There are occasions when there is a need to operate in environments where high temperature, ionizing radiation, or other extremely hazardous conditions make the operation of conventional electronic circuits and sensors problematic. One possible solution is to move the functions of these electronic circuits to a safe location away from the hazardous environment, and to transmit power and communication signals to a hazard-resistant device inside the hazardous environment through arrays of power beams, such as microwave or laser beams. Intelligent modulation of these beams will carry command signals to the hazard-resistant device, and the device will beam back sensory feedback signals. We are experimenting with a small-scale simulator of extremely hazardous environment conditions that can test the performance of various micro-robot technologies. The objectives of this work could be to explore the following: (1) intelligent interfaces and communication techniques for the remote transfer of power, command, and sensory feedback signals; (2) development of environment-mapping sensors that are hazard resistant and do not require electronic components; (3) development of remediation tools that are hazard resistant and do not require electronic components; (4) development of propulsion and direction-control tools that are hazard resistant and do not require electronic components; and (5) development of mathematical models and calibration techniques that will allow the off-line planning and control of complex operations.

Computer Assisted Orthopaedic Surgery Metrology and Calibration Needs 50.82.31.B6233

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

Approximately one million arthroplasty (joint reconstruction) operations are performed per year throughout the world. Sources indicate that 8.8% of revision hip surgeries can be attributed to malpositioning of the implant (http://www.isd.mel.nist.gov/medical_devices/), comprising dislocation (5.8%) and technical error (3.0%). A revision orthopaedic surgery is significantly more risky and painful than the original operation. Properly performed computer-assisted orthopaedic surgeries (CAOS) reduce the malpositioning rate and thus have better success potential. We have developed a new generation of artifacts that resemble the shape of human bone joints. The great advantage of this type of artifact is that it allows the surgeon to practice the surgical moves of the operation without shedding a drop of blood, measure the performance of the CAOS system, and then proceed with the operation. The objectives of this work could be to explore the following: (1) design and fabrication of best-practice recommended artifacts; (2) collaboration with CAOS medical groups for the testing of the artifacts; (3) development of innovative dimensional metrology calibration techniques for CAOS artifacts; and (4) expansion of this work to other human joints, like the knee and shoulder.

Next Generation Robot Safety

Dagalakis, Nicholas (301) 975-5845
dagalakis@cme.nist.gov

The Next Generation Robot (NGR) is envisioned as a machine incorporating inherently safe design and benign operating features, which enable and promote lean manufacturing (http://www.isd.mel.nist.gov/next_generation_robots/index.html). The current state of robot technology has not changed significantly in the last ten years, and there is an increasing number of applications that could benefit from collaborative human-robot interaction. One serious impediment to technological progress is the potential for robots to cause serious injury when they come into close proximity to humans. To address this problem today, a significant portion of valuable manufacturing floor space is restricted from human access, and the cost of protective equipment is approaching the cost of the robots themselves. The objectives of this work could be to explore the development of metrology technology and MEMS micro-scale safety sensors that will facilitate the safe operation of Next Generation Robots.

Three-Dimensional CAD-Based Inspection Using Multiple Sensors 50.82.31.B4025

Hong, Tsai-Hong (301) 975-3444
hongt@cme.nist.gov

The Next Generation Inspection System (NGIS) is a multi-year effort to develop a Coordinate Measuring Machine (CMM) testbed for testing and developing advanced sensing and control technology. NGIS uses research from many diverse fields including robot control, sensor processing, tracking, object recognition, pose determination, and metrology.

Automation of visual inspection of machined parts is difficult. Although special-purpose machine vision systems have been developed for some inspection operations, very few general-purpose visual inspection systems exist. Robust inspection systems must be able to segment and recognize part features and to determine their pose. One possible approach is to create an inspection system for any part for which a CAD model exists. The use of CAD models to guide vision processing for pose localization will allow more flexible fixturing in the workplace and might lead to more automated inspection.

Available resources include a CMM; charge-coupled device, range, FLIR, polarization, image-intensified, and multispectral cameras; and two- and three-dimensional probes.

Integrating Prior Knowledge with Real-Time Sensing for Autonomous Navigation 50.82.31.B4515

Shneier, Michael O. (301) 975-3421
michael.shneier@nist.gov

A robotic vehicle that is required to traverse unknown terrain must be able to build a model of the local environment suitable for avoiding obstacles and forbidden areas. Currently, most vehicles, including the NIST HMMWV, rely on local sensing to construct their models, updating the model continuously in real time as they move. While this allows near-term planning, it does not take advantage of prior knowledge from maps or previous experience, and may result in poor driving behavior and non-optimal routes. Establishing a two-way link between the sensory processing subsystem and the world modeling and knowledge representation system would have a number of benefits. The sensory system would be able to use information already in the world model (from maps, previous traversals of the terrain, and prior expectations) to reduce ambiguities, bound and name regions and objects, and predict what will be seen in the future. The world model would use sensory data to expand its knowledge, correct prior assumptions, and maintain correspondence between the internal representation and the real world. Overcoming the problems of building and maintaining these links would greatly enhance the capabilities of autonomous vehicles.
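
As a minimal illustration of one direction of that link, a planner's per-cell traversal cost could blend the map prior with live sensing, weighted by how thoroughly the sensors have actually observed each cell. This is a hypothetical sketch, not the NIST world model:

    import numpy as np

    def fused_cost(prior_cost, sensed_cost, coverage):
        """Blend a priori map cost with real-time sensed cost per cell.
        coverage in [0, 1]: how thoroughly live sensing has observed the
        cell (0 = fall back on the prior, 1 = trust the sensors).
        All arguments are arrays over the same grid of cells."""
        return coverage * sensed_cost + (1.0 - coverage) * prior_cost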

Available equipment for this project includes an HMMWV instrumented for autonomous driving, area- and line-scan Ladars, cameras, a pan-tilt head, on-board real-time computing capabilities, and workstations for algorithm development.

Learning in Computer Vision 50.82.31.B4784

Shneier, Michael O. (301) 975-3421
michael.shneier@nist.gov

Machine learning has the potential to help develop robust and flexible computer vision systems that will improve performance in a number of areas. These include automating the acquisition of visual models, adapting task parameters and representations, transforming signals to symbols, and developing windows of attention over target objects. We are interested in research to determine which machine learning approaches are best suited to the computer vision domain, and how information should be represented to best take advantage of learning. We are also interested in methods of automatically transferring experience in one domain to another. Other issues include evaluating the performance of learning systems and the quality of the learned knowledge, how to deal with noisy and inconsistent data, how to prevent overlearning, and how to develop unsupervised learning systems that can extract novel features or objects from large sequences of images. Applications include part inspection and perception to enable navigation in autonomous vehicles.

Real/Virtual Environment for Autonomous Mobile Robots 50.82.31.B5897

Balakirsky, Stephen B. (301) 975-4791
stephen.balakirsky@nist.gov

Recent advances in simulation engines have allowed the construction of real/virtual environments for robotic platform research, development, and implementation. Through the addition of external data channels, it is possible to have real robotic platforms interact with the simulation: the real robots are able to "perceive" the events, entities, and environment of the simulation, and the simulation is able to accept the real robotic entity as a simulation participant. The Intelligent Systems Division of the National Institute of Standards and Technology is interested in expanding the existing capabilities of its real/virtual environment. This includes simulation work in modeling additional sensors, ground and air platforms, and events, as well as real-vehicle work in developing planning algorithms that take advantage of this additional information and tools that allow for the evaluation of the real systems. Research issues include, but are not limited to: modeling the uncertainty associated with data (for example, from sensors) in both the real and the virtual world; methods of fusing and/or reconciling the real and virtual information; and dealing with different resolution qualities in the models.
