Current Areas of Research

 


 

Computational Sensor Networks

 

[Figure: Leadership Algorithm Result on 250 Nodes]

 

AFOSR: Bayesian Computational Sensor Networks

 

The major specific objectives of this work are to:

 

1.     Develop Bayesian Computational Sensor Networks (BCSN) that detect and identify structural damage.  We will quantify physical phenomena and sensor models; e.g., develop piezoelectric and other computational models to reconstruct physical phenomena and characterize uncertainties due to environmental factors (see the sketch below).  Note that we are modeling the physics of the signal and the sensor; if other mechanics models are needed, we will use existing ones.

2.     Develop an active feedback methodology using a model-based sampling regime (rates, locations, and types of data) realized with embedded sensors and active sensor placement. This will allow on-line sensor model validation and the use of on-demand complementary sensors.

3.     Develop a rigorous model-based systematic treatment of the following uncertainty models: (1) stochastic uncertainties of system states, (2) unknown model parameters, (3) dynamic parameters of sensor nodes, and (4) material damage assessments (viewed as source input parameters).

4.     Perform validation experiments on metal and composite structures.

 

These address three major research issues in DDDAS: (1) quantify sensing capability, (2) develop new qualitative modeling approaches, and (3) develop adequate experimental methods.
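To make objectives 1 and 3 concrete, the following is a minimal Python sketch of a scalar Bayesian (Kalman-style) update that fuses a prior estimate of a damage-related state with one noisy sensor reading. The function name, variables, and noise values are illustrative assumptions, not the project's actual BCSN formulation.

# Minimal illustrative sketch (assumed names and values, not the BCSN formulation):
# fuse a prior estimate of a damage-related state with one noisy sensor reading.

def bayesian_update(prior_mean, prior_var, measurement, sensor_var):
    """Return the posterior mean and variance after one sensor reading."""
    gain = prior_var / (prior_var + sensor_var)   # how much to trust the sensor
    post_mean = prior_mean + gain * (measurement - prior_mean)
    post_var = (1.0 - gain) * prior_var           # uncertainty shrinks after fusion
    return post_mean, post_var

# Example: a vague prior over a damage parameter, then one piezoelectric reading.
mean, var = 0.0, 1.0
mean, var = bayesian_update(mean, var, measurement=0.8, sensor_var=0.25)
print(mean, var)   # posterior moves toward the reading; variance decreases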

 

(also see SNET Papers and Data)




 

Symmetry as a Basis for Cognition


 

Imagine a robot that (1) when first powered up spends a few days learning about its own physical structure, including sensing and actuation capabilities, (2) then connects to the internet and can find appropriately encoded knowledge useful to its current environment, (3) next asks humans to teach it tasks of interest to them, and (4) finally enters its life cycle in its designated role, creating its own knowledge that can be shared.  Such a scenario depends upon shared semantic grounding of the embodied agents' concepts, as well as computationally effective and efficient conceptualization processes. We exploit symmetry as a general framework to achieve this grounding, using a specific set of symmetry operators for the recognition, representation, and exploitation of sensorimotor data streams to achieve robust, autonomous robot behavior.  This is a new approach to robot architecture that uses a set of innate symmetry theories to parse sensorimotor data into constructs that coordinate the simultaneous control of actuator/sensor sequences in order to bootstrap affordances from exploratory actions.

 

Specifically, we are developing (1) robust symmetry representations and associated detectors for 1D, 2D, and 3D data, (2) symmetry-controlled actuators for physical robots, (3) combined sensorimotor symmetry operators which define desired robot behaviors, and (4) a symbolic language for robots to share representations and behaviors.  These behaviors are expressed in such a way as to allow interpretation on a variety of platforms for which the semantics is defined.  Such conceptualizations represent and maintain robust invariances of the robot with respect to the environment (e.g., upright pose, forward motion).  Finally, we propose to validate these ideas on a sequence of Symbots - robots whose design and construction are based on these principles - and to measure their performance in real-world scenarios.
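As one plausible realization of a 1D symmetry detector (the actual operators used in this work may differ), the following Python sketch scores mirror symmetry about each candidate axis position of a 1D signal; the window size and test signal are illustrative assumptions.

# Hedged sketch: score reflective (mirror) symmetry about each interior point
# of a 1D signal by comparing a window on the left with the mirrored window on
# the right.  Higher scores mean stronger symmetry.
import numpy as np

def reflective_symmetry_scores(signal, half_width):
    signal = np.asarray(signal, dtype=float)
    scores = np.full(len(signal), -np.inf)
    for c in range(half_width, len(signal) - half_width):
        left = signal[c - half_width:c]
        right = signal[c + 1:c + 1 + half_width][::-1]   # mirrored right-hand window
        scores[c] = -np.mean((left - right) ** 2)
    return scores

t = np.linspace(-1.0, 1.0, 201)
bump = np.exp(-t ** 2)                                   # symmetric about index 100
print(np.argmax(reflective_symmetry_scores(bump, half_width=40)))   # prints 100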

 

 (also see SE(3))




 

RobotShare: Robot Knowledge Sharing

 


Knowledge representation is a traditional field in artificial intelligence. Researchers have developed various ways to represent and share information among intelligent agents. Agents that share resources, data, information, and knowledge perform better than agents working alone. However, previous research also shows that sharing knowledge among a large number of entities in an open environment remains an unsolved problem. Intelligent robots are designed and produced by different manufacturers; they have various physical attributes and employ different knowledge representations. Therefore, any technology that is not standardized or widely adopted is unlikely to provide a satisfactory solution to the knowledge sharing problem. In this research, we pose robot knowledge sharing as an activity to be developed in an open environment - the World Wide Web. Just as search engines like Google provide enormous power for information exchange and sharing among humans, we believe a search mechanism designed for intelligent agents can provide a robust approach to sharing knowledge among robots. We have developed: (1) a knowledge representation for robots that allows Internet access, (2) a knowledge organization and search indexing engine, and (3) a query/reply mechanism between robots and the search engine.
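The query/reply mechanism can be sketched roughly as follows (in Python); the field names, identifiers, URIs, and scoring shown are hypothetical illustrations, not the actual RobotShare protocol.

# Hedged sketch of the query/reply idea: a robot describes the object it needs
# knowledge about, and the search engine replies with ranked pointers to other
# robots' shared representations.  All field names, identifiers, and URIs here
# are hypothetical.
import json

query = {
    "robot_id": "robot-42",                                  # hypothetical requester
    "seek": "object-model",
    "features": {"shape": "cylinder", "color": "red", "graspable": True},
}

reply = json.loads("""
{
  "results": [
    {"score": 0.92, "owner": "robot-17", "uri": "http://example.org/models/cup"},
    {"score": 0.61, "owner": "robot-03", "uri": "http://example.org/models/can"}
  ]
}
""")

best = max(reply["results"], key=lambda r: r["score"])       # pick the top match
print(json.dumps(query), "->", best["uri"])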


 (also see RobotShare)




 

Technical Drawing Analysis

 


 

Engineering drawing analysis involves the automatic semantic interpretation of scanned images of engineering drawings; this includes text extraction and interpretation, dimension-set analysis, graphics extraction, etc. Raster map image analysis involves the automatic semantic interpretation of map images; in this case, it is necessary to extract roads, road types, road intersections, waterways, elevation lines, land types, etc.
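As a rough first-step illustration (not the Viper system itself), the following Python/OpenCV sketch binarizes a scanned drawing and labels its connected components so that small components (candidate text characters) can later be separated from large ones (graphics); the input file name and area threshold are assumptions chosen for illustration.

# Hedged first-step sketch: binarize a scanned drawing and label connected
# components as a crude split between text-like and graphics-like regions.
# The input file name and the area threshold are illustrative assumptions.
import cv2

img = cv2.imread("drawing_scan.png", cv2.IMREAD_GRAYSCALE)            # assumed input
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

text_like = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] < 500]
graphics_like = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= 500]
print(len(text_like), "text-like components,", len(graphics_like), "graphics-like")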

 

(also see Viper)