Saturday, March 13, 2021

Notes on "An Introduction to Multi-Paradigm Modelling and Simulation"

[Experiment: I'm taking active notes while reading some papers of real interest to me. I may as well make these notes public too. The notes are taken in an as-I-read fashion; in theory they would be very long annotations on the paper itself, but I can't publish that.]

As this is a first set of notes, it makes sense to give some context: I'm trying to understand "modelling". Abstractly, I already do. But I'm trying to get a slightly more formal idea of things like "what is a model". Of course, that's likely to remain an informal notion. Nevertheless, I'm trying.

Also, it's important to remember that I bring my own context, knowledge and biases to my understanding. So the kinds of math that I know, the things in CS that I prefer (PLs, type theory, symbolic computation, etc) will colour my view of everything.

Good start: defining concepts! It establishes that modelling is with respect to reality, i.e. a model M of a system S. There is an implicit assumption that we're in a dynamic situation, i.e. that the system S changes over time, and thus that the model M is something that can be 'run' over time too, here called 'simulation'. The use of 'simulation' instead of 'run' makes sense in that 'simulation' brings in the additional baggage of somehow tracking something else (which it does).

Furthermore, the definition of an Experimental Frame (i.e. context of use, constraints) is quite interesting. That is definitely a strong enabler of (valid) abstraction/approximation. Then there is the rather important point that both the real world and the simulation are assumed to be measurable (observable). And, of course, the whole point is to validate these against each other, i.e. there must be some kind of coherence relation between the observables on both sides.
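
To fix these definitions in my own (PL-flavoured) terms, here is a minimal Haskell sketch. All the names and representation choices are mine, not the paper's; it is just how I would type the concepts, under the simplifying assumption of fixed-step time advance:

  -- A model is something runnable: a state, a way to advance it in
  -- time, and a way to observe it.
  data Model st obs = Model
    { initial :: st
    , step    :: Double -> st -> st   -- advance state by time increment dt
    , observe :: st -> obs            -- what we are allowed to measure
    }

  -- An experimental frame: the context and constraints under which the
  -- model and the real system are compared. Here, just sample times and
  -- which observations are in scope.
  data Frame obs = Frame
    { sampleTimes :: [Double]
    , withinFrame :: obs -> Bool
    }

  -- Simulation: run the model, collecting observations at sample times.
  simulate :: Model st obs -> Frame obs -> [obs]
  simulate m f = [ observe m (stateAt t) | t <- sampleTimes f ]
    where
      dt        = 0.001              -- fixed step size, for simplicity
      stateAt t = iterate (step m dt) (initial m) !! max 0 (floor (t / dt))

  -- Validity within the frame: the coherence relation between the
  -- observables on both sides, up to a tolerance 'close'.
  validIn :: Frame obs -> (obs -> obs -> Bool) -> [obs] -> [obs] -> Bool
  validIn f close real simulated =
    and [ close r s | (r, s) <- zip real simulated, withinFrame f r ]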

The definition of simulation makes it clear that 'model' is indeed supposed to be very loose, and yet also something that can be run. Two goals are worth repeating (verbatim from the paper):

  • the goal of modelling is to meaningfully describe a system, presenting information in an understandable, re-usable way,
  • the aim of simulation is to be fast and accurate,

where the emphasis was in the original as well. So here models are drawn from a wide class (they mention Petri nets and DAEs, but surely all sorts of state machines would be fine too) and yet are also quite constrained, because they have to be runnable.

There are some comments about symbolic methods being better than numeric methods that are quite interesting: way ahead of their time, and still largely unrealized in practice, even though the paper is from 2002.

The whole point of a model is to have a "good enough" approximation that not only validates against some measurements (within the given experimental frame, which is crucial), but could also predict new behaviour. Falsification becomes important as a feedback mechanism.

Section 2 presents an interesting example. The first model, via DEs, is obvious. The second model, a discretization via a 9-state FSA, is less so. It corresponds to an analysis of the measurements: there are 2 discrete ones, each of which can be in 3 states, thus the 9. The transition function then reflects the underlying geometry of the continuous state space. The claim that there is a model homomorphism is more interesting (it neglects sensor hysteresis and 'bounce' issues for values exactly on the boundary). A priori, it is also not clear, at least in general, that all transitions in the discrete model are realizable. So an actual proof of homomorphism might be quite non-trivial.
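
To make the counting and the homomorphism claim concrete, here is my reconstruction in Haskell; the reading names and thresholds are hypothetical, not from the paper:

  data Reading = Lo | Mid | Hi
    deriving (Eq, Ord, Show, Enum, Bounded)

  -- The discrete state space: one reading per measurement, 3 * 3 = 9.
  type DState = (Reading, Reading)

  allDStates :: [DState]
  allDStates = [ (a, b) | a <- [minBound ..], b <- [minBound ..] ]

  -- Abstraction from a continuous quantity to a reading; the
  -- thresholds t1 < t2 are hypothetical sensor boundaries.
  quantize :: Double -> Reading
  quantize x
    | x < t1    = Lo
    | x < t2    = Mid
    | otherwise = Hi
    where (t1, t2) = (1.0, 2.0)

  abstract :: (Double, Double) -> DState
  abstract (x, y) = (quantize x, quantize y)

  -- The homomorphism claim is that this square commutes: abstract then
  -- step discretely = step continuously then abstract. A real proof has
  -- to worry about values exactly on a threshold (where 'quantize' makes
  -- an arbitrary choice) and about unrealizable discrete transitions.
  commutes :: (Double -> (Double, Double) -> (Double, Double))  -- continuous step
           -> (DState -> DState)                                -- discrete step
           -> Double -> (Double, Double) -> Bool
  commutes contStep discStep dt s =
    discStep (abstract s) == abstract (contStep dt s)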

It then goes on to present Systems Theory and Formalism Classification. All makes sense.

[Side comment: somebody wrote $off$ in LaTeX. And no one caught it. Sigh.]

[Side comment 2: all of this seems to not allow UML as a 'model' in this view. I approve.]

Section 3, Multi-Formalism Modelling. A PL person (such as me) would recognize this as the interacting-DSL problem, a.k.a. the language integration problem. Then the FSD formalism is presented. If I understand correctly, the idea is that many other formalisms can be embedded into ODEs plus algebraic constraints. Whether this is a good idea is not questioned, interestingly. [Simulating these kinds of ODEs + algebraic constraints is hard, and doesn't scale all that well.]
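
To be concrete about what "ODEs plus algebraic constraints" means: the usual semi-explicit DAE form is x' = f(t, x, y) together with 0 = g(t, x, y), where y are the algebraic variables. A sketch of that as data, with the classic pendulum-in-Cartesian-coordinates example (my representation, not the paper's notation):

  -- A semi-explicit DAE:  x' = f t x y  and  0 = g t x y.
  data DAE = DAE
    { f :: Double -> [Double] -> [Double] -> [Double]  -- derivatives of x
    , g :: Double -> [Double] -> [Double] -> [Double]  -- constraint residuals
    }

  -- Classic example: a planar pendulum in Cartesian coordinates.
  -- Differential variables [px, py, vx, vy]; algebraic variable [lam],
  -- the rod tension. This is an index-3 DAE: exactly the kind of system
  -- that is hard to simulate naively.
  pendulum :: DAE
  pendulum = DAE
    { f = \_t [px, py, vx, vy] [lam] ->
            [vx, vy, negate (lam * px), negate (lam * py) - grav]
    , g = \_t [px, py, _, _] _ -> [px * px + py * py - len * len]
    }
    where (len, grav) = (1.0, 9.81)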

Section 3.3 talks about coupled models (and transformation). It even talks about types! It is also quite clear about interfaces and abstraction.
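
The typing point deserves emphasis: coupling only makes sense when the output interface of one component matches the input interface of another, and that is something a type system can enforce. A tiny sketch (mine, not the paper's notation):

  -- A component with typed input and output ports.
  newtype Component a b = Component { runComponent :: a -> b }

  -- Coupling is composition, and it only type-checks when the
  -- interfaces line up: the output type of the first component must
  -- equal the input type of the second.
  couple :: Component a b -> Component b c -> Component a c
  couple (Component f) (Component g) = Component (g . f)

  -- A mismatch is rejected statically; bridging it requires an explicit
  -- adapter component, which is exactly where abstraction leaks show up.
  adapter :: (b -> c) -> Component b c
  adapter = Component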

Buried here is a very important remark, one that goes back to the goal of modelling: "Certain questions about a system can only be answered in certain formalisms, ..." I have seen people lose sight of the point of modelling and fall in love with a particular formalism. The point of modelling is to answer certain questions; if your formalism can't, you are wasting everyone's time.

Section 4 introduces meta-modelling. Models about models. What the authors don't seem to notice is that this is inconsistent with most of their previous definitions! These meta-models are of a rather different nature, because they are not really 'dynamic'. What does it mean to simulate the model of Figure 12? Does Figure 11 represent something in the "real world"? What are the allowed observations / measurements?

A meta-model, in some sense, is 'just a model', with a change of level. The picture of Figure 1 gets shifted: a meta-model ought to stand in the same relation, (Reality | Model), but one level up, as (Model | Meta-Model). But it doesn't, not really. Yes, the meta-model can be run 'generatively' to produce models -- but that's not the (Reality | Model) relation, is it? No! The relation here is akin to the (term | type) relation, where many terms are proofs that a type is inhabited. Certainly the whole 'simulation' aspect has been thrown out the window.

The fundamental aspect of modelling has not been thrown out, though: meta-models can indeed be quite useful for answering certain questions. But it seems that the character of the questions has changed non-trivially, and this is simply not addressed.
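
To spell the analogy out: a meta-model classifies models the way a type classifies terms, with 'conformance' playing the role of type-checking. A sketch with a toy (entirely hypothetical) graph-like meta-model:

  import qualified Data.Set as Set

  -- A meta-model for graph-like models: the node kinds that exist, and
  -- which (source kind, target kind) edges are allowed.
  data MetaModel k = MetaModel
    { nodeKinds    :: Set.Set k
    , allowedEdges :: Set.Set (k, k)
    }

  -- A model at the level below: a graph whose nodes carry kinds.
  data Graph k n = Graph
    { kindOf :: n -> k
    , nodes  :: [n]
    , edges  :: [(n, n)]
    }

  -- Conformance is 'this term has this type': a conforming graph is one
  -- inhabitant of the meta-model, and there are typically many others.
  -- Note that nothing here is dynamic; nothing gets simulated.
  conformsTo :: Ord k => Graph k n -> MetaModel k -> Bool
  conformsTo m mm =
    all (\n -> kindOf m n `Set.member` nodeKinds mm) (nodes m)
    && all (\(s, t) -> (kindOf m s, kindOf m t)
                         `Set.member` allowedEdges mm) (edges m)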

Section 4.1 then talks about meta-model transformations. It does mention 'semantics' as attached to these... but what they seem to mean is 'operational semantics' only. Let me be clear: neither denotational nor axiomatic semantics really gets a mention.
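
For contrast, here is the distinction on a formalism small enough to fit in a few lines (a toy of mine, not from the paper): the operational semantics says how a configuration steps; the denotational semantics says, compositionally, what a program means as a mathematical object.

  -- A toy formalism: counter programs with two instructions.
  data Instr = Inc | Dbl
  type Program = [Instr]

  -- Operational semantics: a small-step transition on configurations.
  stepOp :: (Program, Integer) -> Maybe (Program, Integer)
  stepOp ([],      _) = Nothing
  stepOp (Inc : p, n) = Just (p, n + 1)
  stepOp (Dbl : p, n) = Just (p, 2 * n)

  -- Denotational semantics: a program denotes a function on Integer,
  -- built from the denotations of its parts.
  denote :: Program -> (Integer -> Integer)
  denote = foldr (\i rest -> rest . denoteInstr i) id
    where
      denoteInstr Inc = (+ 1)
      denoteInstr Dbl = (* 2)

Adequacy (that the two agree) is exactly the kind of theorem one would want for meta-model transformations too.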

I learned a lot from reading that paper. So it has certainly achieved a purpose, including the purpose for which I was reading it: getting better acquainted with a certain set of definitions.

Yes, I am rather unhappy with it once things go meta. That's where I'm hoping future papers will do better.

Link to PDF of paper 

@inproceedings{vangheluwe2002introduction,
  title={An introduction to multi-paradigm modelling and simulation},
  author={Vangheluwe, Hans and De Lara, Juan and Mosterman, Pieter J},
  booktitle={Proceedings of the AIS’2002 conference (AI, Simulation and Planning in High Autonomy Systems), Lisboa, Portugal},
  pages={9--20},
  year={2002}
}
