PRISM Intelligent System
Introduction
Good afternoon everybody and welcome to my presentation on the PRISM intelligent system. I'm Sudha Sivashanmugam and I'm doing my master's in computer science. I've been associated with ITTC for the last 2 years. I've been working as a grad research assistant under Dr. Costas Tsatsoulis.
Let’s get started with the presentation.
PRISM stands for Polar Radar for Ice Sheet Measurements, so from the title of the project itself you get an idea of what the project is about. We are developing a polar radar for ice sheet measurements. It's not going to be just one radar but a set of radars, and not simple radars, but mobile, autonomous, intelligent radars.
Mobile because the radar sensors are going to be placed on autonomous robotic vehicles, autonomous because they’re going to be independent and intelligent because they know what to measure, how to measure and where exactly to measure.
Mass Balance
What exactly do we measure? These radar sensors would measure certain glacier properties that would contribute to what is called the determination of mass balance. Glaciologists say that mass balance is the net gain or loss of ice.
You know that recent analysis has shown that sea level has been rising, and scientists postulate that melting of polar ice sheets has contributed to sea level rise. But there's insufficient data to confirm these theories. Actually there is a large degree of uncertainty associated with whether the polar ice sheets are actually shrinking or growing.
So if we determine the mass balance (the net gain or loss) of ice over a certain period of time, the data would help us actually determine that.
There are more than 40 students and faculty members involved with the PRISM project, and they are trying to develop radar sensors that would measure three different properties of ice. I will say what those properties are in the next slide.
They are measuring three different properties of ice and with those data they would provide information to the scientific community. And the scientific community would use this information to construct ice sheet models.
So PRISM involves research in radars and remote sensing, intelligence systems, mobile robotics and communication systems. Communication systems because we are using two vehicles and these vehicles talk with each other via a wireless connection. And people who go to the experiment site communicate with the KU campus via an Iridium satellite link. That's where the communication systems come into the picture.
Different Radars Used By PRISM
There are basically three different radars that are being developed as part of the project. There is a Synthetic Aperture Radar, which is abbreviated and popularly known as SAR. This radar generates 2-D reflectivity maps of the bedrock to determine the basal conditions, including the presence and distribution of water.
We have a wideband dual-mode radar, which basically employs a radar depth sounder and an accumulation radar. The radar depth sounder would give information about the ice thickness, and it would be used to map deep internal layers down to a depth of a few kilometers. The accumulation radar would map near-surface internal layers, which are layers that are closer to the surface, to a depth of a few hundred meters.
Of these, the most important is the SAR, because its design is exclusive to this project and the SAR is unique in its own way. It has two operation modes, monostatic and bistatic, and it can operate at three different frequencies: 60, 150, and 350 MHz.
Monostatic vs Bistatic
So here is a diagrammatic illustration of the monostatic and bistatic measurements.
This illustrates a monostatic SAR measurement. You have the transmitter and receiver on one vehicle. When the ice is thick enough and the bedrock is rough, then most of the incident energy is reflected back. So in that case you put both the transmitter and receiver on a single unit and you go for the monostatic measurement.
In the bistatic mode, when the bedrock is very smooth, what happens is that most of the incident energy is not reflected straight back. So we separate the transmitter and the receiver by a certain geographic distance and we increase the measurement spot. This way, we place the receiver in the anticipated path of the reflected signal. The radar guys call it the "quasi-specular forward signal". I'm not sure what that is; probably if you ask any of the radar guys they would know. So that's how the bistatic measurements are made.
So (pointing to diagram) this would be the monostatic receiver and in that picture the receiver that's on the other side would be called the bistatic receiver.
Vehicles
We have a manned vehicle that's going to host the SAR transmitter, the monostatic SAR receiver, and one dual-mode radar.
We also have an autonomous rover that has the bistatic SAR receiver and another dual-mode radar. The autonomous robotic vehicle is obviously not driven by a human operator, and it's equipped with the necessary power supply system; sensory systems like the vision finder, the bump sensors, the temperature sensor, and the GPS sensor; the different control and computing devices; and outreach equipment like the cameras.
Rover Movement
The rover movement is going to be different in the bistatic mode and in the monostatic mode. In the monostatic mode, the rover would move in nearly straight-line paths parallel to the direction of the base vehicle.
In the case of bistatic measurement, the rover would do a zig-zag motion. The quasi-specular signal could be anywhere in between the vehicles. So that's why the rover does a zig-zag motion as illustrated in the figure.
It does what is called cross-track and along-track transects. Cross-track when it is perpendicular to the direction and along-track when it's parallel.
Onboard Intelligent Systems
So I already mentioned that the PRISM radars and SARs are intelligent. How are they intelligent? The rover is equipped with an onboard intelligence system.
The on-board intelligence system allows autonomous and semi-autonomous control of the rover. Semi-autonomous control in the sense that the rover can be controlled by the human operator who's sitting on the base vehicle.
And it must dynamically select the optimum sensor configuration for imaging the bed (the bedrock under the glacier). That is, it depends on the measurements recently made by the radar sensors and on other information, such as data from satellite sensors and other preexisting information. So it has to combine all this information in real time and come up with the operation parameters to control the radar and the rover.
I've listed the different operation parameters it must primarily determine:
- The SAR operation mode: Monostatic or bistatic?
- The operation frequency, which is going to be a combination of the three different frequencies;
- The distance of the vehicles from the center of the measurement spot. We saw that there were two vehicles and we have a measurement swath. So what is the distance of the vehicles in this swath?
- The cross-track and along-track spacing of the rover's zig-zag movement, that is, the spacing of the bistatic SAR measurements;
- What is the desired resolution for the SAR image?
- And the speed of the rover.
Now all these decisions are actually dependent on real-time measurements. They could be influenced by bedrock conditions (whether the bedrock is smooth or rough), whether there is basal water or not, the signal strength of the radar at a particular site, and the level of interest shown by scientists in the site.
So it is governed by a lot of information and all this information has to be fused in real time to come up with these decisions.
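To make these parameters concrete, here is a minimal Java sketch of how such a bundle of decision outputs might be grouped. It is only an illustration under my own naming; the class, fields, and units are assumptions, not the actual PRISM code.

```java
// Illustrative sketch only: the operation parameters the intelligent system
// must decide on, grouped in one place. Names and units are hypothetical.
public class RadarRoverParameters {

    public enum SarMode { MONOSTATIC, BISTATIC }

    SarMode sarMode;                              // monostatic or bistatic operation
    boolean[] activeFrequencies = new boolean[3]; // which of 60, 150, 350 MHz are switched on
    double vehicleDistanceMeters;                 // distance of the vehicles from the center of the measurement spot
    double crossTrackSpacingMeters;               // spacing of the rover's cross-track transects
    double alongTrackSpacingMeters;               // spacing of the rover's along-track transects
    double desiredResolutionMeters;               // desired resolution of the SAR image
    double roverSpeedMetersPerSecond;             // speed of the rover
}
```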
Responsibilities
There are some additional responsibilities that the intelligent system has. It must act as an intelligent interface between the radar's image analysis system and the rover. It must ensure overall system integrity. It should keep monitoring the rover's health, e.g., the computing power that's available, and so on.
It must do planning and resource scheduling, hazard avoidance, health monitoring and fault tolerance. It must determine the patterns of the rover movements relative to the base vehicle. And it must maintain precise coordination so both the radar units have to be synchronized. So the intelligent system has to take care of the coordination between the two radar units.
And it must also combine any commands issued by the remote human operators, who are on the base vehicle or at the KU campus, when it is in semi-autonomous mode.
It must be able to combine these commands with its local intelligence and come up with a decision.
Decisions
As you see, the decision-making process of PRISM is actually complex and it has to be done in real time. So we need to analyze and fuse any recently collected sensor data with a priori information, for example data from satellites like RADARSAT. RADARSAT is basically a Canadian radar satellite, launched for the Canadian Space Agency by NASA, that carries a synthetic aperture radar and has been used to image the polar ice sheets.
Then, there is uncertainty associated with the environment and with our sensors. What you observe is not exactly what it is. So how confident can you be in the sensor inputs?
When I say operate in the bistatic mode, how confident am I? So there is reasoning under uncertainty involved. Plus there are multiple criteria that need to be considered, like the potential scientific benefits, the fuel level, the computing resources, the wear and tear, and the quality of the judgment.
Basically, the approach that we adopt is a decision-theoretic approach that determines the potential tradeoff between the quality of the data collected and the cost of collecting it. Is it really worth it to do the measurements? Is it actually worth it to spend such resources?
So these are the four aspects you should remember about the PRISM decision-making: a) it’s real time, b) data has to be combined in real time, c) reasoning has to be done under uncertainty, and d) the decisions depend on several criteria.
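In standard decision-theoretic terms (my own notation, not taken from the slides), that tradeoff amounts to choosing the action with the highest expected utility, where the utility of an action weighs the quality of the data it would collect against the cost of collecting it:

```latex
a^{*} = \arg\max_{a} \mathrm{EU}(a)
      = \arg\max_{a} \sum_{s} P(s \mid e)\, U(a, s),
\qquad
U(a, s) = \mathrm{quality}(a, s) - \mathrm{cost}(a, s)
```

Here e is the evidence fused in real time, s ranges over the possible states of the glacier and the system, and a ranges over the decision alternatives (mode, frequencies, spacing, and so on).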
Keywords
Before I start on the intelligence system, let me introduce you to some keywords. Those of you who attended Dr. Tsatsoulis' lecture on Tuesday may already be familiar with this, but for the benefit of the others I will just tell you what I mean when I say agent or multi-agent systems.
An agent can be in the real world, as Dr. Tsatsoulis said. You have agent 007 who is a secret agent. You have travel agents who book tickets for you. You have agents on Ebay. So basically an agent can be a human being, a robot, or it could be a computer program.
An agent has certain goals to be satisfied. It acts on your behalf, as your representative, and can act autonomously. It can interact with others and come up with decisions to meet the desired goals.
An agent has sensory systems which allow it to sense the environment. It also has local intelligence. In other words, it knows about the environment it's sensing, and it comes up with actions that can change the environment. When I say agents during the course of this lecture, I am referring to intelligent agents: agents that can act independently and can reason on their own.
A multi-agent system would be a system of such agents. Agents in a multi-agent system can be collaborative or they could be competitive. For example, take a game of soccer where you consider each player as an agent. In this case, agents in one team collaborate but agents of opposite teams compete with each other. Agents have their own self interests and they collaborate or compete to satisfy those interests.
Next we have Matchmaking. Matchmaking is basically a type of multi-agent system architecture that we've applied. I will talk about matchmaking in detail in later slides, but for now just know that it dynamically matches agent capabilities and needs. Basically you have a set of producers and a set of consumers, and it matches the needs with the capabilities and comes up with a suitable match.
Then we have an Inference Engine. What is meant by inference? Inference is basically something like reasoning, like A implies B. So the inference engine would be the reasoning component of a system.
Agents
The PRISM intelligence system is a distributed computing framework of agents. We have a multi-agent system of intelligent collaborative agents and a Bayesian decision agent.
The Bayesian decision agent is actually the inference engine, and it’s probabilistic because the decision making is based on the axioms of probability theory.
So we have these different agents, which would collect data from the different sources, analyze and distribute it in real time. And we have certain agents, which control the radar and the rover, and those agents would consult the inference engine, the Bayesian decision agent, and would do the reasoning.
When I say reasoning, it has to reason under uncertainty based on several criteria. It also has to be done in real time, and data has to be fused. It’s called a Bayesian Decision Engine because it’s based on what is called a Bayes Rule, which we will talk about in later slides.
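For reference, Bayes' rule in its standard form is

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

that is, the probability of a hypothesis H (say, "the bedrock is smooth") given the evidence E (a radar measurement) is computed from the likelihood of the evidence under that hypothesis and the prior probability of the hypothesis.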
The type of architecture we have here (PRISM) would be a multi-agent collaborative architecture where agents are going to interact to achieve some common goal.
Multi-agents
So the multi-agent system has, as I said, multiple agents.
- Data provider agents, which talk with the rover sensors, the radar sensors, or satellites and grab information. What we do is write wrapper agents around the data sources; a data source could be software or it could be hardware, and we just write a wrapper around it.
- Decision agents, which consult the Bayesian decision engine and come up with radar and rover control decisions.
- Control agents, which directly talk with the radar and rover actuators and set the different operation parameters.
- Data processing agents, which perform services for other agents. When an agent is not able to do something, such as understand some data, it can consult a data processing agent, which comes up with useful intermediate decisions or processes the data for it.
- A matchmaker, which is basically a component that allows dynamic matching of agent capabilities and needs.
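As a rough illustration only, the agent roles above could be sketched as a small Java interface hierarchy. Every name here is hypothetical (it reuses the RadarRoverParameters sketch from earlier); this is not the actual PRISM code.

```java
// Hypothetical interfaces illustrating the agent roles described above.
interface Agent {
    String name();                       // identifier registered with the matchmaker
}

interface DataProviderAgent extends Agent {
    Object readSource();                 // wraps a data source: a rover sensor, a radar, or a satellite feed
}

interface DataProcessorAgent extends Agent {
    Object process(Object rawData);      // performs a service for other agents, e.g., SAR data processing
}

interface DecisionAgent extends Agent {
    RadarRoverParameters decide();       // consults the Bayesian decision engine
}

interface ControlAgent extends Agent {
    void apply(RadarRoverParameters p);  // talks to the radar and rover actuators and sets the parameters
}
```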
Diagram of Agents In the System
So here is a diagrammatic illustration; we have four types of agents here. The agents in green, that you see, would be the data provider agents. For example, these (points to the diagram) are the rover agents and there (points again) are the radar agents. You have a position agent which will talk with a GPS sensor and would give you the location. A heading agent would give you the direction in which the vehicle is heading.
So as you can see, we have different data provider agents.
Then we have the ones that you see in a beige color or a brown color. These would be the Bayesian Decision Agents. These agents would consult the decision engine and would come up with decisions to control the radar and the rover.
The ones you see in yellow would be the actual control agents. You have one for the rover control and one for the radar control. Then we have a data processor agent that is shown there (points to diagram) that is going to process the SAR data.
Then we have the Matchmaker shown in the pink color.
Matchmaking
So, what is Matchmaking? You have a set of producers and you have a set of consumers. The matchmaker does a dynamic matching of agent capabilities and needs. Consumers can register for data using time-driven or event-driven subscriptions.
Here is an example. You have a Matchmaker. You have a producer agent, a position agent, which can give you information about the GPS location. And then you have a consumer, the Rover Bayesian Decision Agent.
So when you switch on the GPS sensor, this position agent would pop up and it would register its capabilities with the Matchmaker. It would say, "Hey, I'm here. This is my contact information, you can reach me with it, and I can provide GPS information."
Then, when a consumer agent pops up, it would advertise its needs. It would say, "Hey, can you show me somebody who would provide me with the GPS information? I want to know where I am exactly. Can you tell me?"
And what the Matchmaker does is inform the consumer about the availability of the producer. Then the consumer agent subscribes to the provider agent for data. It asks, "Hey, I want GPS information, can you provide it?" And then the provider agent replies to the consumer agent.
Now, coming back to the previous slide: I said consumers can register for data with the providers using two different kinds of subscription, time-driven or event-driven.
A time-driven subscription would be a periodic subscription. It's something like, "Hey, I want data every 2000 milliseconds, and I want it at that rate for the next hour."
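Here is a minimal, in-process Java sketch of that interaction. It is only an illustration under assumed names (Matchmaker, subscribePeriodic, and so on); the real PRISM agents are remote Java objects exchanging FIPA/RDF messages, described in the next section.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy sketch of the matchmaking flow. All names are hypothetical.
class Matchmaker {
    private final Map<String, ProducerAgent> producersByCapability = new HashMap<>();

    // A producer registers a capability, e.g., "gps-position".
    void register(String capability, ProducerAgent producer) {
        producersByCapability.put(capability, producer);
    }

    // A consumer advertises a need; the matchmaker returns a matching producer, if any.
    Optional<ProducerAgent> match(String need) {
        return Optional.ofNullable(producersByCapability.get(need));
    }
}

interface ProducerAgent {
    // Time-driven subscription: deliver data every periodMs for durationMs.
    void subscribePeriodic(ConsumerAgent consumer, long periodMs, long durationMs);

    // Event-driven subscription: deliver data when the value crosses a threshold.
    void subscribeOnEvent(ConsumerAgent consumer, String comparator, double threshold);
}

interface ConsumerAgent {
    void receive(String capability, Object data);
}
```

For example, the position agent would call register("gps-position", this) on the matchmaker, and a consumer that gets matched to it could then call subscribePeriodic(this, 2000, 3600000) to receive the position every 2000 milliseconds for the next hour.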
Java Objects
We have actually implemented agents as remote Java objects. We chose Java because it's platform independent; otherwise Java does not offer any unique features. I mean, you could write your system in C or C++, but we need to communicate with different systems and we wanted a platform-independent language, so we chose Java.
The system provides for finer-grained control of subscriptions than other Java-based architectures. There is a Java agent framework called JADE (the Java Agent DEvelopment Framework), but it wasn't flexible enough for our needs, so we came up with our own framework.
Our agent messaging conforms to FIPA standards. FIPA (the Foundation for Intelligent Physical Agents) publishes standards for agent communication; they have their own specification for an agent communication language (ACL).
We’ve also used RDF (Resource Description Framework). It’s based on an entity relationship model and it’s a way to describe resources, their properties and values.
So, we used RDF and conformed to FIPA standards and came up with our own messaging language.
Example Of A Message (diagram)
For example, this is a time-driven subscription. This would be a message that a consumer agent would send to the temperature agent, that can provide information about the temperature.
So here, if you see, we have something called a Subscribe Periodic. That would be the function that it would invoke on that agent.
- Temperature - Celsius: tells us what it needs. It needs the temperature relayed in Celsius!
- 2,000 - 20,000 means "send me the temperature in Celsius every 2000 milliseconds, for the next 20,000 milliseconds".
This is a time-driven subscription. And every RDF conversation, a conversation that you have, would be associated with an RDF ID. That's like a conversation ID. When that guy responds he has to use the same conversation ID. And this is an example of an event-driven subscription message.
So we have the actors: again, a temperature agent. Subscribe Event would be the function that would be invoked. And this would be the comparator; again, these specify the function name here. So when I say, "When the temperature goes above 35 degrees Celsius, then send me the information," it will do so.
Bayesian Decision Engine
Here is a data response (points to drawing), so this would be something like a description of the temperature (or some other value), and it would say, "TRUE," meaning I've got some temperature data. This is an example for the time-driven case; this doesn't apply to the event-driven case.
Since we have a single matchmaker, there is a single point of failure: if the matchmaker goes down, then the agents don't know what to do. This isn't critical in our case, because if it goes down we have time to start the agent again. We could have a back-up matchmaker, but we actually don't have one right now. The system is still being developed and a lot of design issues are still pending. But in this case I don't think we need a back-up, because if the matchmaker goes down we could just raise a flag and start the agent once again.
Bayesian Network
What is an expert system? An expert system is basically a system that acts and plays the role of an expert; it could be any kind of expert, for example a medical expert.
It could be written in different ways, it could be written using probabilities. It could be written using simple rules, if-then rules; it's still an expert system.
A Bayesian network is a very popular expert system model. It's used to model a domain that contains uncertainty, which is why it is a good fit for our project. Basically it is a directed acyclic graph: a set of variables (the nodes) connected by directed edges.
So you have a set of variables and you have directed edges connecting them, and it's acyclic because if you follow the directed edges you can never get back to the node you started from; there is no reverse path.
And the edges between the nodes reflect dependency, the cause-effect relations within the domain, and the strength of the effect is modeled as a probability. For example, if you have a variable A with parents B1 to Bn, meaning B1 through Bn influence A, then there is an attached conditional probability table, P(A | B1, ..., Bn); the pipe symbol '|' means 'given' (given the values of B1, B2, and so on).
- For example, what is the probability that the ground is wet given it has rained? That's going to be 100%.
- What is the probability that the ground is wet given that the sprinklers are on? Again that's 100%.
If A has no parents, then the table reduces to unconditional probabilities. It's just the prior distribution, that's P(A).
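Written in standard notation (not from the slides), a Bayesian network over variables A1, ..., An encodes the joint distribution as the product of these conditional tables:

```latex
P(A_1, \dots, A_n) = \prod_{i=1}^{n} P\bigl(A_i \mid \mathrm{parents}(A_i)\bigr)
```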
Probability
(Refers to drawing) OK, let's say this denotes electricity failure. And this is a light and say this is a computer.
And this (draws) could be a malfunction. OK. So the light could be on or off. These are the two different states. A computer could be powered on or it could be powered off. And the electricity failure could be a yes or a no. And the malfunction with the computer could be a yes or a no. For example, we have prior probabilities with this and the probability table of this would be something like…
What is the probability of a computer failure (draws)? A computer could fail because of an electricity failure.
- Electricity: Yes or No
- Malfunction: Yes or No
Now this is the on state or this is the off state. So when there is an electricity failure it is obviously going to be powered off and if there is no electricity failure, it's going to be powered off only if there is a malfunction.
So we actually have prior probabilities associated with these. We fill in these tables, and then say the electricity has failed, so that node is a Yes. Then we do a probabilistic inference, meaning we determine the probabilities of the other nodes in the network. That is what is meant by "probabilistic inference."
In evidence propagation, we observe evidence, propagate it through the network, and come up with the probabilities of the other variables of interest.
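To make evidence propagation concrete, here is a tiny worked Java example on just the electricity/light fragment of the drawing. The probability numbers are made up purely for illustration; they are not values from the PRISM network.

```java
// Toy evidence propagation on the electricity/light fragment, done by hand
// with Bayes' rule. All probability values below are invented for illustration.
public class ToyBayes {
    public static void main(String[] args) {
        double pFail = 0.10;                // prior: P(electricity failure = yes)
        double pLightOffGivenFail = 1.00;   // P(light off | failure = yes)
        double pLightOffGivenNoFail = 0.05; // P(light off | failure = no), e.g., the bulb burned out

        // Evidence: we observe that the light is off. Propagate it with Bayes' rule:
        // P(failure | light off) = P(light off | failure) * P(failure) / P(light off)
        double pLightOff = pLightOffGivenFail * pFail
                         + pLightOffGivenNoFail * (1 - pFail);
        double pFailGivenLightOff = pLightOffGivenFail * pFail / pLightOff;

        System.out.printf("P(electricity failure | light off) = %.2f%n", pFailGivenLightOff);
        // Prints about 0.69: observing the dark light raises our belief in an electricity failure.
    }
}
```

A real engine such as Hugin performs this kind of computation over the whole network and for all variables at once.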
PRISM Bayesian Network
So if you take the PRISM Bayesian network, we have the parent nodes representing the decision inputs, the leaf nodes representing the decision outputs, and the other nodes would represent intermediate decision parameters.
And the links, the edges between the nodes, identify the cause-effect relationships and are quantified by the conditional probability tables.
OK, you might have a question here: "Where do we come up with these values?" It's purely by expert opinion. We consult the experts and ask questions like, "What do you think would be the probability that this could happen, given that this is true?" We talk with the different experts and then we come up with the values. The conditional probability tables then specify the prior joint distribution of the nodes, and they are the way we actually encode or set the preferences over different inputs.
Preferences
We have a case where if the scientists show a greater level of interest in the area, irrespective of the measurements made by the radar, the radar has to operate in bistatic mode.
So that's a preference. Say we have a case where there is a variable X and we have different inputs; let's call them X1, X2, X3, X4. Maybe one of them has a greater preference over the others, and you need to set preferences. So when we write these conditional probability tables, we come up with the numbers in such a way that if X1 is true, it carries more weight than the others.
Then, irrespective of the values of the other variables X2, X3, X4, it would say something like "90%, so this has to be bistatic." So we set preferences, and we assign confidence values while combining them.
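As an illustration of how such a preference could look inside a conditional probability table, here is a small sketch with made-up numbers; the node names, states, and values are my assumptions, not the actual PRISM tables.

```java
// Hypothetical CPT fragment showing a preference: when scientist interest is
// HIGH, P(bistatic) is set near 0.9 regardless of the other input, so that
// input dominates the decision. All numbers are invented for illustration.
public class PreferenceCpt {
    // rows: scientist interest {HIGH, LOW}
    // columns: bedrock roughness {SMOOTH, ROUGH}
    // entries: P(SAR mode = bistatic | interest, roughness)
    static final double[][] P_BISTATIC = {
        { 0.90, 0.90 },  // interest HIGH: bistatic preferred whatever the bedrock looks like
        { 0.80, 0.20 },  // interest LOW: the bedrock roughness drives the choice
    };

    public static void main(String[] args) {
        int interest = 0;   // HIGH
        int roughness = 1;  // ROUGH
        System.out.println("P(bistatic) = " + P_BISTATIC[interest][roughness]); // prints 0.9
    }
}
```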
Propagation Algorithms
Our Bayesian network is built on what is called the Hugin software, which is a probabilistic engine. Once we set all these inputs, we define everything in Hugin; it's basically a software package.
When we set everything and then propagate evidence, Hugin uses certain algorithms. There's something called a sequential propagation algorithm that's popularly used in Bayesian networks for probabilistic inference and evidence propagation. It runs that algorithm and comes up with the values. We don't do the probabilistic inference part ourselves; we just tell the Hugin software, "Here are the probability values, run them through the network, and give me the probabilities of the outputs."
Process
So how does the process take place? We have these decision engines which are going to reason about the radar and the rover parameters. So they receive and propagate the input evidence as follows: First they analyze and transform source data into what are called decision states.
The source data cannot be used raw; it has to be processed. So the engine does the processing and transforms the data into what are called decision states. What we actually do is take measurements of the inputs, for example the bedrock roughness, which could be mapped to a real number from zero to...that's where the radar guys would say, "Hey, the roughness here is point 5 (.5)."
So what we do is map the continuous variables into discrete states, like say, high, low, medium or something like that. We map them into decision states then we identify the corresponding input node in the network.
And we populate the node's Conditional Probability Table by setting a value of one to the decision state observed and zero to the others. For example if the bedrock roughness is "high," then we would set a 1 to the rough state and zero to the rest of it.
And then we would call Hugin functions to do the evidence propagation.
After the evidence is propagated, based on the probability values, we use the probability distribution of the output nodes, the leaf nodes like this (points to diagram), and we come up with a rank, or risk, value. This risk value helps us choose the best possible decision alternative.
For example after the propagation if XY denotes the actual level of interest in the site, then you know, XY could have different states like high, low, medium or it could be interesting, moderately interesting, and uninteresting. And each state would have a probability value with it and we use that value to assess which decision alternative has to be taken.
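Here is a hypothetical end-to-end Java sketch of that flow. The Engine interface stands in for the Hugin API (whose real calls are not reproduced here), the node names and discretization thresholds are made up, and picking the highest-probability state is a simplification of the rank/risk computation just described.

```java
import java.util.Map;

// Hypothetical sketch of the decision flow: discretize a measurement, set it
// as evidence, propagate, and rank the output states. Names are illustrative.
public class DecisionFlow {

    interface Engine {
        void setEvidence(String node, String state);    // set the observed state to 1, the others to 0
        void propagate();                               // run evidence propagation
        Map<String, Double> beliefs(String outputNode); // posterior distribution of a leaf node
    }

    // Map a continuous roughness measurement (e.g., 0.5 from the radar) to a decision state.
    static String roughnessState(double roughness) {
        if (roughness < 0.3) return "smooth";   // thresholds invented for illustration
        if (roughness < 0.7) return "medium";
        return "rough";
    }

    static String decideSarMode(Engine engine, double measuredRoughness) {
        engine.setEvidence("BedrockRoughness", roughnessState(measuredRoughness));
        engine.propagate();

        // Use the posterior over the output node to rank the decision alternatives.
        Map<String, Double> modeBeliefs = engine.beliefs("SarMode");
        return modeBeliefs.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("monostatic");
    }
}
```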
Software
Here's a screen shot of the Hugin software. We've defined these different nodes, and we've defined the tables, which are not shown here, and we have a GUI (graphical user interface) for the Hugin software. The GUI actually shows you the different nodes, the different states, and the probabilities associated with them.
I don't know if you are able to read this, but it says, “There's an Overall Level of Interest node here which has states: high, medium and low.” And it would show probability of each state.
So you can actually do the propagation on Linux using Java and the API functions, but you could also use the GUI: you can click on a node, define the probability here, and run it. When you run it, it will show you the probabilities of the other variables of interest.
Conclusion
So, the results and the conclusion: we have implemented the matchmaker and the basic agent architecture, the multi-agent collaborative architecture where agents interact to achieve a common goal.
We’ve implemented the probability network, we’ve tested it. We’ve wrapped available sources, and simulated sources that are not available.
For example, the SAR and the other radars are still being built; they are not ready yet, so we have simulated those radar inputs. Some simulations are basically random numbers in a defined range. So we've simulated some inputs, and the existing inputs such as the temperature sensor, the GPS, and so on have been wrapped, and we've tested this architecture.
This year there is a Greenland experiment that is going to be conducted. We're going to have the software on the rover and we're going to test it in the real operating environment and determine potential transmission bottlenecks. For example, does the synchronization take place properly? Do the different messages get encoded and decoded properly?
There are a lot of timing issues involved, and we've used a lot of threads, so we want to make sure everything works in the real environment.
In the future, the plans are to wrap all the sources, develop the different analysis algorithms, and keep building out what the radar and rover intelligence has to deal with.
We haven’t done any resource scheduling and planning as of now.
It's all threads right now, and we just call them. We haven't set priorities over the threads, and we haven't defined any scheduling algorithms. All that has to be done; all those rules have to be defined.