
At 08:23 AM 12/17/96 -0500, Dave Payne wrote:

>The concern my colleagues and I have long heard from model builders and
>owners is that a "naive" user cannot be guaranteed correct answers from
>a model, because they don't appreciate or understand the underlying
>assumptions or how to interpret the output. Some modelers even contend
>that the user has to know when and how to "adjust" the operation of the
>model based on the situation and emerging results! I often counter that
>a well designed, debugged, and VV&A'd model should not require much
>"tweaking", but the real world constraint is that the model owner (often
>an Agency, not a person) holds the keys.

Is it possible that this is a symptom of the differences between the characteristics of the distributed simulation architecture and the characteristics of what is being simulated? In other words, the differences between the two impose a need for assumptions going into the simulation, tweaking during the simulation, and interpretation of the results afterward. The difference in characteristics that I am referring to is the difference between the use of RPC to achieve distribution in the simulation and the concepts of mobility, migration, and locality that are found in the real world of distributed objects. In an attempt to provide an architecture that maps more closely to our distributed reality, remote programming is based on those same concepts of mobility, migration, and locality. So it would seem to me that the similarity between our own reality and the characteristics of systems using remote programming techniques would lead to a need for fewer assumptions, less tweaking, and less interpretation. Implementing distributed simulations through RPC exclusively is like a game of Battleship, while implementing these systems using RP is more like WYSIWYG.
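To make the contrast concrete, here is a minimal, hypothetical sketch of the two styles. All of the names (`SimHost`, `Agent`, `rpc_query`, `accept`) are my own illustrations, not taken from Telescript or any real simulation framework; the point is only the architectural difference between asking a remote host one question at a time and shipping the whole behavior to the host.

```python
class SimHost:
    """A node in a distributed simulation (hypothetical example)."""

    def __init__(self, name):
        self.name = name
        self.state = {"position": (0, 0)}

    # RPC style: the caller stays remote and asks one question per
    # network round trip, seeing only what the interface exposes.
    def rpc_query(self, key):
        return self.state.get(key)

    # Remote-programming style: an entire agent migrates to the host
    # and runs locally, with direct access to the local state.
    def accept(self, agent):
        return agent.run(self)


class Agent:
    """Carries its own behavior to the data instead of calling across the wire."""

    def __init__(self, task):
        self.task = task

    def run(self, host):
        return self.task(host)


host = SimHost("vehicle-model")

# RPC: like Battleship -- one guess, one answer, per exchange.
print(host.rpc_query("position"))

# RP: the agent executes on-site and can inspect everything locally.
print(host.accept(Agent(lambda h: dict(h.state))))
```

The RPC caller can only interrogate the interface it was given, which is where the assumptions and interpretation creep in; the migrated agent observes the model directly, in the same way that co-located objects observe one another in the real world.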
>I can't imagine a model owner
>(whether it's a math programming model, a stochastic or heuristic model,
>a simulation, a rule-based system, or an adaptive system) would happily
>allow an agent for some other user to start up their model, and then
>augment that model with a modified algorithm or an additional rulebase.

If my hypothesis above is right, and remote programming allows for the execution of models that map more closely to what is being simulated, resulting in fewer assumptions, less tweaking, and less interpretation, then why not? Yes, remote programming is a paradigm shift, and yes, it is being met with some resistance. But then again, Christopher Columbus was met with resistance too. Admittedly an extreme example, but I am trying to make a point. Certainly there is a greater need for security when processes are allowed to migrate to other systems, but in the real world it happens all the time. We go to the mall, we go to work, we go to school, we go to all kinds of places that allow us to walk right in and do certain things. There is risk, but it happens anyway. Sometimes people steal, cheat, and even kill other people. But that is the real world, and we are trying to model the real world, right? Some places in the real world are very secure. If you walk into a stranger's home they might shoot you, so you don't do that. The same thing should happen in simulations. Remote programming is a very appropriate technique for the modeling and simulation of things in the real world. If a model owner is part of a distributed simulation and some part of the simulation attempts to migrate to his part of it, it seems silly to me that he would disallow the migration and permit interaction only through RPC, even when the migration is what would happen in the real world.
This is like saying, "I don't care what happens in the real world; in my simulation you will have to tell me through some RPC mechanism what you are doing or want to do, and I will do it for you." That is really close to the game of Battleship, and it is, in my opinion, this attitude that results in the need for so much interpretation.

>Even if that was allowed, what are the implications for VV&A of the
>results?

The implications are significant. The closer the simulation maps to the reality, the easier it is to verify and validate.

>One additional thought is that the Telescript remote programming could
>work very well on specially designated servers that serve as
>well-regulated markets and colleges for peoples' agents. Agents would
>'meet' at the servers their owners trust, and carry on negotiations with
>other agents. Then the agents would return 'home' or call other agents
>working for the same user to operate the models and transaction
>processors in the agreed upon manner. The agent would return to the
>regulated server to deliver the results. Obviously, this is not a
>microsecond real time exchange, but would be timely enough for most
>aggregate level modeling requirements, and could augment real time
>visual simulations as a pre-exercise (or intra-exercise) data gathering
>effort.

This is a good model. It is close to how things work in the real world. My paper _Islands In The Net_ uses scientists in a similar scenario.

--->CBB
Chris Bloom
Mobile Agent Specialist
