Multi-agent Systems
Presentation to Undergraduates and High School Students
Dr. Costas Tsatsoulis, University of Kansas,
Summer, 2004
Introduction
What's the topic today?
Audience: Multi-agent systems.
Multi-agent systems. All right. What does "multi" mean?
Audience: Many, more than one.
Many. OK. These are not hard. By the way, I don't ask hard questions. The questions will be easy to the level of stupidity. What is an agent?
Audience: A program.
A program? If you opened the dictionary and looked up the definition of "agent," I can guarantee you it doesn't say "program." What's an agent? What does 007 tell you? 007 is a secret agent. Right? So, an agent could be a spy. How about if you're an athlete or a performer? What is an agent? What does an agent mean in professional athletics?
Audience: The guy behind the scenes who gives you (inaudible)
Guy who takes 10 percent and represents you. He's your representative. A secret agent is a secret representative of the government. Somebody who goes and collects information and so on. An agent is somebody who represents you. So an agent is your representative. That's what an agent is.
Multi-agent Systems
So an agent is a representative. Now we're coming to the program part. Now some of you are running Unix. And you have that little mailbox icon in the corner and every time you get new mail it lifts a little flag. So you don't have to go and check constantly if you've got new mail. Or it beeps or it says, "You've got mail," or something like this.
That's a very simple agent. It's a representative of you. You have empowered that program to do something for you so you don't have to check. Obviously you are thinking this is a very stupid agent. It doesn't do much. It's just a little daemon. As a matter of fact, it is a daemon. It's a Unix daemon, and by convention Unix daemons will not be called agents.
Can you think of a sophisticated agent that could do stuff for you? Or not so sophisticated. It doesn't have to be sophisticated. A nonhuman entity that has the ability to do something for you and represent you, like on eBay.
Audience: In a search engine?
On eBay you can go and say, "Let me know when the price of this item that I want to bid on goes up in value." You can go to Travelocity and say, "I want you to find tickets from A to B that are less than this price," and it will search and check for you. A very simple agent, but still it's a real representative. So an agent is a representative of a human that does simple or very complex work. We won't be interested in the Travelocity agent or the e-mail daemon or anything like that. We'll be interested in intelligent agents.
Intelligent Agents
So we are interested in systems of multiple agents, but the agents must be intelligent. And if you look at some of the early work on multi-agent systems, it has nothing to do with computer programs. All the agents are humans. It's interested in the dynamics and the psychology of very large human organizations, because those are multi-agent systems. Now, we can argue about whether humans are usually intelligent or not, but at least for the benefit of this discussion we will assume they are.
As computer technology advanced, some of the agents became computerized agents. And some of those are, arguably, intelligent. But a multi-agent system, in our case, will be a system of multiple intelligent agents, some of them human and some nonhuman.
I also heard you say that an agent is a computer program. That's partially true. An agent could be a disembodied computer program. In other words, it can be just software. Or it could be a computer program that has a body, like a robot, or a manipulator or a robotic arm, and so on. Now, not every program is an agent, and as we said, not every agent's a program. So there are some minimal requirements that a program must satisfy in order to be considered an agent.
First of all, it must have the ability to sense the world. OK?
Then, it must have the ability to manipulate or affect the world. So it must be able to sense what's going on around it and must be able to change something of the world.
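As a minimal sketch of those two requirements, here is what an agent's sense/act contract might look like in Python. Everything below is invented for illustration (the class names, the spool path, the flag behavior); it just echoes the e-mail daemon example, whose whole world is one mail spool file.

```python
import os
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent contract: it must be able to sense its world
    and to act on (manipulate) that world."""

    @abstractmethod
    def sense(self):
        """Observe the part of the world the agent cares about."""

    @abstractmethod
    def act(self, percept):
        """Change the world based on what was sensed."""

# Hypothetical example in the spirit of the mail daemon: its whole
# "world" is one spool file, and its only action is raising a flag.
class MailFlagAgent(Agent):
    def __init__(self, spool_path="/var/mail/me"):  # assumed path
        self.spool_path = spool_path
        self.last_size = 0

    def sense(self):
        # The world, to this agent, is just the spool file's size.
        try:
            return os.path.getsize(self.spool_path)
        except OSError:
            return 0

    def act(self, percept):
        if percept > self.last_size:
            print("You've got mail!")   # the "flag" going up
        self.last_size = percept

agent = MailFlagAgent()
agent.act(agent.sense())
```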
Use of Multi-Agent Systems
Now, give me an example of a sensor. GPS would be a sensor. It senses, specifically, location.
Anything else? A camera would be a sensor.
You're thinking of the world. I know we're talking about the world, clearly. We want to sense the world, but the definition of the world is different. Let's go back to something again -- the little flag on your mailbox. What is the world according to the e-mail daemon? Does the e-mail daemon care about how many people are in this room right now? Or the GPS location? Or our faces? What does it care about?
The world according to the e-mail daemon is the mail spool. It only needs to be able to sense that. So the definition of the world, which is out here, is not THE world, but the part of the world that's of any interest to the agent.
So we need to have the ability to sense it and we said it could be real sensors like GPS and cameras, but it could be something that senses, for example, user input. It could be just reading off the keyboard or reading a query in Google. As long as it’s something that can be sensed.
Now we've defined the world to be a little more flexible than the real world, or maybe more constrained, depending on your view of it. Here's the world again (he draws a picture on the whiteboard). We need our manipulators to be able to change the world.
Again, don't think of manipulators as actual physical entities. You can change the world by providing the user with a message, that says, "Core meltdown. Run away." That hopefully is going to change the world because they're going to get up and leave.
So the way that you affect the world does not have to be a physical effect, although it could be. You could have an actual robot; you could have an autonomous vehicle that drives around and changes its location. You could have, as in the PRISM project, something that changes the configuration of the radars. So you could actually effect changes to the world.
But it could be something as simple as printing something on the screen and providing somebody with information.
Agents
So one reason that you have multi-agent systems is that the agents are representatives of different entities. Any other reasons?
Let me give you another example. I am a company and I have a factory floor, and I need to schedule different machines. And I have an intelligent scheduling system that finds out what's needed. It does predictions, goes back to the warehouse, orders stuff, and so on.
Why not have a single system that does all this?
Audience: Sometimes it's not possible to accomplish that using just one.
Yeah, sometimes it's not possible. Or it appears not to be.
Audience: It can have a specializing ___?
So you can have each one of the machines on your shop floor be represented by a single agent. Why not just have one big program? Here is that big program (he draws a circle on the board).
Audience: If something goes wrong in a part of that program, then it all goes down.
So you want distribution, sometimes because this is the nature of the problem. If you have multiple programs collaborating, you may want distribution because it gives you robustness. You don't have a single point of failure. One of them fails, you still have the others running.
Different Multi-Agent Systems
How can you have homogeneous, heterogeneous, collaborative and adversarial multi-agent systems? And what are the issues associated with having them?
The reason I kept this drawing is because it's a good example of a distributed, collaborative, homogeneous system.
Now you're trying to triangulate and track these planes in order to achieve a solution to this problem. In order to achieve collaboration, what do you need? What do these agents, these radar agents, need to do in order to be able to solve the problem?
Audience: They need to share the data.
Share the data?
Audience: Communication
Communication?
You said "coherence"? Very good.
Communication and Collaboration
Let's start with communication. Obviously in order for them to be able to collaborate they need to have some level of communication. I'm sorry. I take it back. I said "obviously." Nothing is obvious.
You're all claiming that in order to achieve some level of collaboration, it requires communication. Why? What are you communicating?
Audience: If they don't communicate, then they're just separate entities.
(He tosses something to a student). I didn't tell her anything. She still caught it. I didn't transmit any information. I just threw something at her. If I'd thrown it hard enough, she would have ducked. You don't need communication to achieve collaboration, necessarily. I just gave you an example.
Can you imagine playing a soccer game or a hockey game and talking all the time? "I will kick you the ball. Run this way." (Audience laughs.) So, don't assume that communication is absolutely necessary. It's useful, but I want to find out from you what levels of communication...and what you are communicating.
Negotiation
Let's assume the following: these radars can only see a limited area. Now they have some areas where they overlap. But they can't all see the whole thing.
One important thing when you have a distributed system, especially if it's geographically distributed, or if it's heterogeneous, is that each agent doesn't have the same view of the world. Each one of them has a limited view of the world.
So let's say that the two of us are tracking this aircraft and we want your help with this one. We say, "You know, if you stop tracking this one and help us with this one when he gets inside the area you can see, we've got him."
So I'm providing you with information because there is a conflict. The conflict is that I am trying to do one thing and you ask me to do something else. So communication may be required for sharing of local viewpoints.
I'm giving you what I know. Why do I need to give you that? Well, one of the reasons is negotiation. One way to achieve collaboration is to negotiate. If you have more than one request for a resource, where do you give that resource? If you are a collaborative system, you want to give the resource where it will maximize the effectiveness of the whole solution.
So if you tell me that this plane is armed, while the plane I'm tracking is not, and an armed plane is more dangerous, I may agree to go and look at that one instead of the one I'm tracking. But how do I know that it's armed?
You have to give me that information because you are the one with the local viewpoint. So then we can negotiate about resources.
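Here is a toy sketch of that trade, not from the talk: each radar shares the local threat estimate only it can see, and the tracking resource goes wherever it maximizes overall effectiveness. The agent names and threat numbers are invented.

```python
# Toy negotiation over one tracking resource: each agent shares its
# local viewpoint (a threat estimate it alone can see), and the
# resource is granted where it helps the overall solution most.
# All names and numbers here are illustrative.

local_viewpoints = {
    "radar_A": {"plane_1": 0.3},   # A can only see plane 1
    "radar_B": {"plane_2": 0.9},   # B knows plane 2 is armed
}

def negotiate(viewpoints):
    # Pool the shared information (this is what communication buys us).
    shared = {}
    for view in viewpoints.values():
        shared.update(view)
    # A collaborative system assigns the resource to the target that
    # maximizes global effectiveness, not local preference.
    return max(shared, key=shared.get)

print("Track:", negotiate(local_viewpoints))   # -> plane_2
```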
Coherence
If we don't come to an agreement and we keep moving the same resources, undoing what the other one does, we can never achieve coherence.
The reality is that in a very large system, you will never achieve perfect coherence. Perfect coherence requires perfect communication. Perfect communication is equivalent to having a centralized system. If I tell you everything, if I share everything I do, I'll spend most of my time talking instead of doing.
But what you want is the system to be converging toward a solution instead of oscillating.
So if I were to graph it and let's say this is the solution (he draws a graph with an endpoint to the right). I want to reach that point.
A bad system would be this (he draws an up-and-down graph that doesn't reach it). OK? It never achieves coherence.
Another bad system would be this (he draws a slower oscillation that touches the end point). I'm asymptotically approaching it, but it's going to take forever. OK?
What I want is a system that will fairly quickly converge to a solution. And it might not be the perfect solution. You know, it might be like this. But it's generally going to be a good solution.
Communication Cost
Communication has a cost. The cost is a computational cost, it's a delay cost, it's a transmission line cost. It's also a cost in that it could be wrong communication.
It could be for some reason corrupted. You're getting the wrong data.
So communication has a cost and we try to minimize it, but sometimes it's necessary. And again, sharing the local viewpoint is what is important. But communication is not always absolutely necessary.
I put down coherence. I'll give you an example of coherence. We have both decided, the two of us, to move this really heavy object (he walks out into the audience and works with a student).
"Please grab one end of it and move it and now pull it your way."
This is not a coherent solution because we're pulling in opposite directions. We both want to move it, but we haven't agreed on where to move it. We're each pulling in a different direction. Coherence means that as you're performing problem-solving, all the agents' actions are leading toward a solution. An incoherent system is one where the agents' actions are undoing other agents' actions.
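A toy numeric illustration of that difference (mine, not the speaker's): two agents push a shared object toward a goal, or undo each other's work. The step sizes and goal are made-up numbers.

```python
# Two agents try to move a shared object from position 0 toward 10.
# Coherent: both push the same way and the system converges.
# Incoherent: each undoes the other's work and nothing ever moves.

def run(steps_a, steps_b, rounds=8):
    position = 0.0
    for _ in range(rounds):
        position += steps_a   # agent A acts
        position += steps_b   # agent B acts
        print(f"position = {position:+.1f}")
    return position

print("Coherent agents (both push toward +10):")
run(steps_a=+1.0, steps_b=+1.0)

print("Incoherent agents (pulling in opposite directions):")
run(steps_a=+1.0, steps_b=-1.0)
```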
Self-Interested Systems
So the major issues we are talking about are collaboration and coherence and how we deal with communication. So far we've been talking about collaborative systems, systems that want to do the same thing.
The other kind is adversarial systems. You won't find a lot of work on real adversarial systems in the literature. One of the adversarial systems that people use is what's called "The Fox and the Hounds." There is a fox and there are hounds. The hounds try to get the fox, and the fox tries to run away. There are very few adversarial systems, other than in war gaming or soccer or other games. Adversarial systems were a popular topic in the 80s and 90s, but there's not as much interest in them now.
Most of what you will see now is not so much adversarial systems, but something people call self-interested systems, which I think are a lot more interesting. Self-interested systems are systems in which each agent wants to gain something out of the interaction between them.
For example, a self-interested system is a system of a seller and a buyer. The buyer wants to buy, the seller wants to sell. But the buyer wants to buy at the minimum price and the seller wants to sell at the maximum price. They're self-interested. But they're not adversarial, because they're both interested in finishing the transaction.
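A minimal haggling loop, under invented opening offers and concession rates, shows why self-interested is not adversarial: both sides concede toward each other precisely because both want the deal to close.

```python
# Self-interested, not adversarial: buyer and seller pull the price
# in opposite directions, but both want the transaction to finish.
# Opening offers and concession steps below are made-up numbers.

buyer_offer, seller_ask = 60.0, 150.0

while buyer_offer < seller_ask:
    buyer_offer += 5.0      # buyer concedes upward a little
    seller_ask -= 10.0      # seller concedes downward a little

price = (buyer_offer + seller_ask) / 2
print(f"Deal closed at ${price:.2f}")
```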
BlackBoards, Contract Nets and Matchmakers
In our case we're going to focus on multi-agent systems that are collaborative, and we're going to focus on three basic architectures: 1) Blackboards, 2) Contract nets, and 3) Matchmaking systems.
Just focus on these three architectures, just to give you an idea of some of the work that has gone on. This is actually completed work. Blackboards: this is 80s and 90s stuff. Contract nets are also 80s. And matchmaking is 90s.
So this is fairly well understood work. This is nothing completely new, but it gives you some idea of the work that has come previously.
Blackboard With Agents
These are agents and this is the blackboard (he points to a drawing he has made). The agents can read from the blackboard and they can write to the blackboard.
So an agent is, as we said, a representative. These agents need to communicate, they need to exchange some information. And the question is: how do they exchange that information?
That's what the blackboard does. The blackboard is a common area where they post information that they want to share. So one way of resolving the communication aspect is, instead of having each one of them talk to all the others and have a fully connected network, you just say: here is a place. You go post stuff, and then occasionally you go and read it and see if there's anything interesting.
Blackboard Coherence
So, agents post information and agents read information from the blackboard.
In addition to this, the blackboard does something else. These are all heterogeneous agents, meaning each one of them can solve a different problem. Now, one of the issues with intelligent systems, or multi-agent systems, as we said, is the issue of coherence - that you are moving toward a solution.
If each one of these agents is allowed to work whenever it feels like it and change the blackboard in any way it considers logical, you may not have coherence.
So what the blackboard does is provide a level of indirection between the agents. We'll call this "control." When things appear on the blackboard, when data appears on the blackboard, the control determines which one of these agents should work.
So you have a centralized location that knows the state of the problem and then decides which of the agents should work on it.
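Here is a compact sketch of that pattern, with hypothetical agents, blackboard keys, and trigger conditions standing in for real ones: agents read and write one shared store, and control inspects the blackboard to decide who works next.

```python
# Blackboard sketch: agents share one data store, and "control"
# decides which agent gets to work next based on the current state
# of the blackboard. Agents, keys, and triggers are invented.

blackboard = {"raw_data": [3, 1, 2], "sorted": None, "summary": None}

def sorter(bb):
    bb["sorted"] = sorted(bb["raw_data"])

def summarizer(bb):
    bb["summary"] = f"min={bb['sorted'][0]}, max={bb['sorted'][-1]}"

# Control pairs each agent with the condition under which it should
# be activated. Control looks at the blackboard, not at the agents.
agents = [
    (lambda bb: bb["sorted"] is None, sorter),
    (lambda bb: bb["sorted"] is not None and bb["summary"] is None,
     summarizer),
]

while blackboard["summary"] is None:
    for triggered, agent in agents:
        if triggered(blackboard):
            agent(blackboard)   # control picks exactly who works
            break

print(blackboard["summary"])    # -> min=1, max=3
```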
Blackboard Theory
As an example of this, it's like that movie about the math genius at MIT, "Good Will Hunting," where a couple of grad students and their advisors sit there and solve these complex integral equations, and then they come to a point where they can't figure something out, so they put a question mark, say "I don't know how to solve it," and go away.
But this problem has been posted on the blackboard. So another expert comes by and says, "Oh, no, no! Why a question mark? You should do this." And as more and more experts walk by this blackboard and have access to the information, they add slowly to the solution.
In the same way, these agents slowly add to the solution from the data they see on the virtual blackboard. So that's why it's called the Blackboard.
Issues with Decomposing the Problem
After you've decomposed the problem, you have individual solutions for each component. Then, what do you need to do? Take the solutions, the partial solutions, and create one solution.
You cannot do that in all problems. What I'm saying is, all problems are decomposable, but not in all problems can the partial solutions be composed into one solution without conflicts.
Here is a simple example (he draws labeled squares on the whiteboard). These are little toy blocks: blocks A, B, C. C is on top of A. I've stacked them. Can you imagine that? Stacking the blocks? Now I want you to stack them and unstack them and come up with this arrangement.
"C is on B and B is on A." All right?
These are two different tasks. I've decomposed my overall task into two tasks. So here comes the robotic arm and says, "Oh, C on top of B. That's easy." I've satisfied the first goal.
Let's do the second one. Well, you can't! You need to unstack C from B before you can move B. So there are some problems. It's a very simple example. Satisfying one goal doesn't mean you can satisfy the second one. You may need to undo the first one, so combining the solutions of partial goals to solve the whole problem is not always possible.
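This is the classic goal-interaction problem (essentially the Sussman anomaly). A tiny state model, invented for illustration, makes the conflict concrete: satisfying "C on B" first makes "B on A" unachievable without undoing the first goal.

```python
# The stacking example above: start with C on A, goals are "C on B"
# and "B on A". Satisfying the first goal naively blocks the second.
# "on" maps each block to whatever it sits on.

on = {"A": "table", "B": "table", "C": "A"}   # initial stack: C on A

def clear(block):
    """A block can be moved only if nothing sits on top of it."""
    return block not in on.values()

def move(block, dest):
    if not clear(block):
        raise RuntimeError(f"cannot move {block}: something is on it")
    on[block] = dest

move("C", "B")                    # goal 1: C on B -- easy
print("C on B satisfied:", on["C"] == "B")

try:
    move("B", "A")                # goal 2: B on A -- blocked!
except RuntimeError as err:
    print(err)                    # must undo goal 1 first
```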
Learning (Contract Nets)
I award a contract to you. OK? And you promise me. I awarded it because you bid to do it at such a cost, in such time, and at such quality. And it comes back and it's late and it's bad work.
Then another task comes in and you bid again. Now in reality, if I were a human I would think twice about giving you the contract.
So one of the things needed to make an intelligent system intelligent is that it must learn. So a lot of work is now going into "Can the systems remember the performance of their bidders, how well they did the work?" and into learning how to assign the bids in the future.
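As a sketch of that idea, here is a contract-net manager that remembers each bidder's past performance and discounts the bids of unreliable contractors. The reliability scores and the weighting rule are invented for illustration, not a standard protocol.

```python
# Learning in a contract net: the manager tracks how well each
# bidder performed on past contracts and inflates the effective
# cost of unreliable contractors. Scoring rule is made up.

reliability = {"agent_1": 1.0, "agent_2": 1.0}   # start with full trust

def award(bids):
    # Lowest effective cost wins; poor history inflates the cost.
    return min(bids, key=lambda agent: bids[agent] / reliability[agent])

def record_outcome(agent, quality):
    # quality in (0, 1]: 1.0 means on time and good work.
    reliability[agent] = 0.5 * reliability[agent] + 0.5 * quality

winner = award({"agent_1": 100.0, "agent_2": 120.0})
print("first contract goes to", winner)     # agent_1: cheapest bid

record_outcome("agent_1", quality=0.2)      # late and bad work

winner = award({"agent_1": 100.0, "agent_2": 120.0})
print("next contract goes to", winner)      # agent_2 now wins
```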
So there is a lot of stuff going on in contract nets.
Matchmaking
Finally, just briefly, matchmaking. Matchmaking is a fairly simple concept. Imagine the World Wide Web, where agents spring up all the time. Services get created all the time. You don't know all the web pages that are out there, all the software that is out there. It's impossible.
(He draws a couple of figures on the whiteboard.) Let's call these people the "Suppliers" and these would be, let's say, "Users." These agents supply a service. The service could be the ability to order tickets, the ability to find stock quotes, the ability to solve differential equations, or target airplanes. They are just suppliers of a specific ability.
These agents (the Users) need these abilities. Now, they can't all know of each other. So what do we do? In the middle we put a Matchmaker.
And what happens is that all the suppliers and all the users know the Matchmaker. They don't know of each other, but everybody knows where the Matchmaker is. It's a single address you need to know. There's only one Matchmaker. Look at it like the Yellow Pages. OK? So the suppliers go and advertise capabilities. They say things like "I can solve differential equations in three seconds," and "No, I can solve them in two seconds," and "I can give you GPS data." They advertise what their capabilities are.
Then the Users show up and find out, "You know, I need to solve this differential equation." They're going to go to the Matchmaker and advertise their need and say, "Do you have anybody who can do this for me?" And that's what the Matchmaker does - it matches capabilities with needs.
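A minimal sketch of that, with invented capability strings and supplier names: suppliers advertise to one well-known intermediary, and users ask it who can meet a need, so nobody has to know anybody else's address.

```python
# Minimal matchmaker sketch: suppliers advertise capabilities to one
# well-known intermediary; users ask it who can meet a need.
# Capability names and supplier names are invented.

class Matchmaker:
    def __init__(self):
        self.ads = {}                        # capability -> suppliers

    def advertise(self, supplier, capability):
        self.ads.setdefault(capability, []).append(supplier)

    def match(self, capability):
        return self.ads.get(capability, [])  # who can do this?

yellow_pages = Matchmaker()                  # the one known address
yellow_pages.advertise("solver_A", "solve differential equations")
yellow_pages.advertise("gps_unit", "provide GPS data")

# A user shows up needing a capability, knowing only the matchmaker:
print(yellow_pages.match("solve differential equations"))  # ['solver_A']
```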
Matchmaker Capabilities
It's a fairly simple idea, but it is very powerful because it allows you to have multiple agents show up in an environment dynamically, not having to know about the existence of everybody else, because they're only interested in who can supply them with specific capabilities.
There are a lot of issues associated with that. For example, I advertise capabilities, and then an aircraft comes over me, drops a bomb, and kills me. I'm gone. As a Matchmaker, I need to know that you're gone. But you can't always say "goodbye" if a bomb was dropped on you.
So there is a lot of machinery that needs to deal with the logistics of this. For example, the Matchmaker is used a lot (remember we talked about self-interested systems) in selling and buying. They will use it as an intermediary, because Seller A doesn't want Buyer B to know who they are. And the same with the buyer. I may agree to contract with you because I really need to sell this stuff, so I agree to sell it for $100. I don't want this to spread around the internet, because I want to sell to other people for $150. So they might use the Matchmaker as an intermediary to basically hide who they are.
Another thing. As I said, these programs are automated, and they're smart enough that if you start bidding and negotiating for bids, they can figure out what methodology you're using: what percentage you go up or down on each bid. So instead they use the intermediary. You send a bid and the intermediary just says, "Yes or no?" So the intermediary is the one that accepts or rejects the bid. So they do a lot of different things to hide a lot of the strategy.
Conclusion
We'll talk a little bit about this because we're using this in the PRISM project, to allow dynamically appearing data sources on the rovers and the radars to communicate with the intelligent systems that require these data. So Sudha is going to talk a little more about this next week.
Anyway, these are just some very general ideas to tell you where the general research directions are. A lot of them have to do with coherence and collaboration; being able to solve a problem together when you have a lot of agents is really difficult.
And another thing that's very interesting is learning. Learning has been used in intelligent systems to make a system smarter. And what do I mean by smarter? It can either solve the same problem better, meaning faster, cheaper, with better quality, or it can solve problems it couldn't solve before. OK? But that's learning as an individual. When you have a multi-agent system, each one of these agents has its own sphere of influence and its own viewpoint, so it learns different things, since it views different things in the world and has different experiences.
Can all these systems that learn individually somehow exchange some of the learned knowledge and learn better? That's one issue.
And another issue is whether they can learn how to communicate, how to collaborate. Can they learn strategies for collaboration?
So there are some interesting issues involved in coherence, learning, and collaborative behavior.