Cogitations on AI in the Game of Diplomacy: Representing Beliefs
Here is a full-detail brain dump of my current Cogitations about Representing Beliefs. (Later Cogitations override earlier ones if contradictory.)
[This is an extended version of what I originally presented in DipAi post #8046, with the [corrected in DipAi post #8052] Code Samples I presented in DipAi post #8050, followed by further General Notes.]
I call any intelligent entity a "mind". So (the player of) a power has a mind, but so can a cooperating collection of powers (which I call a "camp" [renamed from "cabal" in my original post] – a committee can "make up its mind", has a "will", and so forth), as can an autonomous component of a power (which I call an "advisor" – each concentrating on a different aspect or goal of the game, maybe varying in number, having to cooperate with other advisors). A mind at each level has its own agenda, the lower levels cooperating better than higher ones! So, for example, "trust" could exist between minds at any level, albeit only a power can issue press or orders. However, in the context of Diplomacy, this is probably overkill, and any mind considered would probably be that of a power; nevertheless, to retain the generality, here I just consider minds in the abstract.
Each mind maintains a mental model of the real world (the real game) – as is surely done by even the simplest bot or human player. But if a power is to properly model other powers his mind needs to model the different (egocentric) viewpoints or worlds of each mind, as it believes each mind (including itself) perceives them. But all these minds are assumed also to model all minds likewise, and so on, recursively. So a logically infinite tree of minds must be represented. However, only the egocentric beliefs of any mental world – the ego – need actually be in the ego-tree; beliefs that are assumed to be necessarily common to all minds (such as positions of units) can be held globally.
I call the data object that represents and models an ego an Ego – in my bot it would be an instance of a C++ Ego class. An Ego contains data members to represent (directly, or indirectly via pointers) all the beliefs of its corresponding ego, except those that are assumed necessarily common to all Ego~s, which are held globally. (For precision, I use "~" to avoid corrupting any formal identifier by prefixes or suffixes, and to distinguish an identifier from a normal word if at the start of a sentence.) Each mind contains its own (private) ego (and a corresponding Ego) – this is its root ego (and Ego), at depth 0. Since the root ego of any mind has beliefs about any root egos (amongst other beliefs), each depth 0 Ego may contain pointers to child Ego~s, at depth 1, to represent such beliefs. Each depth 1 Ego may then point to Ego~s at depth 2 to represent the depth 1 egos of any mind, and so on. An ego always knows itself (and the best model of itself is itself), so the pointer in any Ego corresponding to the same mind always points to itself to represent that. No depth n+1 Ego is ever realized for such a branch; a depth n Ego then also represents a depth n ego, and an infinite chain of deeper egos of the same mind. But if parent and child are different minds, a depth n+1 Ego is needed to represent a depth n ego. However, to avoid generating a physically infinite tree, Ego~s must only be realized when needed or convenient to do so. Until then no pointer is even needed, though I would always allocate a pointer for each mind, being null if its Ego is not yet realized. As well as the distinction between ego and Ego, the distinction between mind (or agent or player or camp) and ego (or model or world) is subtle but vital! In particular, a given mind can have many egos (and Ego~s), even at a given depth in the ego-tree (and Ego-tree) of a given bot (if they have different parents) – deeper ones being further from reality, so less reliable, but all working in the same way.
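To make the lazy-realization and self-pointer rules concrete, here is a minimal standalone sketch (with hypothetical names – LazyEgo, Obtain – not the Ego class presented later in this page): children are realized only on demand, and the slot for the same mind is preset to the node itself, so the logically infinite same-mind chain costs nothing.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal sketch of lazy Ego realization; illustrative names only.
struct LazyEgo {
    int Key;                     // index of the power this ego models
    int Depth;
    LazyEgo* Parent;
    std::vector<LazyEgo*> Child; // one slot per power; null until realized

    LazyEgo(int key, int nPowers, LazyEgo* parent)
        : Key(key), Depth(parent ? parent->Depth + 1 : 0), Parent(parent),
          Child(nPowers, (LazyEgo*)0) {
        // An ego always knows itself, so the same-mind pointer is preset
        // to {this}: no deeper Ego is ever realized down that branch.
        Child[key] = this;
    }
    LazyEgo* Obtain(int key) {   // realize a child Ego only on demand
        if (!Child[key])
            Child[key] = new LazyEgo(key, (int)Child.size(), this);
        return Child[key];
    }
};
```

Following the self-pointer any number of times never deepens the tree, which is how the logically infinite chain of same-mind egos is represented finitely.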
Typically many beliefs are probabilistic. Generally, any world (real or mental) is just assumed to be one of an ensemble of possibilities (a multiverse), with some probability distribution, albeit typically only determinable empirically from general experience and observations (of press and orders in Dip). At any instant, an Ego holds a sample from the ensemble of possible sets of beliefs that its ego could plausibly hold, given what it knows – representing what it is currently considering. (This is done because an ensemble can usually only be analysed properly in terms of specific cases.) A fresh sample is selected from time to time (some reselection being done at each cycle of a Monte Carlo (MC) procedure, but otherwise fixed for consistency), according to the currently (explicitly or implicitly) expected probability distribution. So an Ego sequentially realizes (over a period of time) its currently expected ensemble, embodied by the distribution of its recent samples. (There might typically be no more than one occurrence of any specific sample of an Ego, but the distribution of component belief assumptions (see below) should be close to the expected probability distribution, including any appropriate correlations.) A probabilistic ensemble of possibilities of a given Ego implies, probabilistically, what its ego and its ancestor egos (nearer the root) currently expect it will do. (Any ego knows itself and all its descendants but none of its ancestors.) When a real power decides to make a play (issue press or orders) he plays according to the latest sample of his root Ego (assuming it has had time to stabilize adequately – has low enough "temperature").
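The reselection step of the MC can be illustrated in isolation: each cycle an assumption is redrawn according to its current weights, so the stream of samples embodies the expected ensemble. A minimal sketch (illustrative only; SampleIndex and the uniform-deviate argument are my own naming, not BlabBot code):

```cpp
#include <cassert>
#include <vector>

// Draw an index from a discrete distribution given by non-negative weights.
// Illustrative of the per-cycle reselection of an assumption; {u01} is a
// uniform deviate in [0,1), supplied by whatever RNG the bot uses.
int SampleIndex(const std::vector<double>& w, double u01) {
    double total = 0;
    for (int i = 0; i < (int)w.size(); ++i) total += w[i];
    double x = u01 * total;
    for (int i = 0; i < (int)w.size(); ++i) {
        x -= w[i];
        if (x < 0) return i;      // this component is the fresh sample
    }
    return (int)w.size() - 1;     // guard against rounding when u01 is near 1
}
```

Over many MC cycles, the stream of indices drawn this way converges on the weight distribution – which is what "sequentially realizes its currently expected ensemble" amounts to.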
For convenience, any world (real or mental) may be split into sub-worlds of more specific kinds (and corresponding data classes), such as currently selected operations (as used in my GT method). But any sub-worlds are considered to be part of the ego/Ego concerned, even if only pointed to. A belief that is necessarily common to all egos I call a "fact" (unrelated to FCT). (A "common" belief is not just "shared" by the egos in which it is common; those egos also all believe that all those other egos hold that same belief in common, and so on.) Facts are stored globally; other data are held (contained or pointed to) by the associated Ego. A belief that directly affects the probability distribution of a world I call an "axiom"; a belief that is currently being considered according to that distribution (assumed "for the sake of argument") I call an "assumption". As the MC proceeds, axioms should stabilize; assumptions may not – but at any instant any belief is treated as true. Axioms and assumptions in one ego/Ego may be undefined, or even contradicted, in another, even in different egos/Ego~s of the same mind; they could even be counterfactual; for example, when analysing what would have been better play for future reference. A member variable that is unknown to the ego that it is modelling, but necessary or useful in its implementation, is not a belief, but what I call a "property". For example, for convenience, an Ego may hold a pointer to its parent Ego, but the ego it is modelling would not know anything of, or have any belief about, its parent ego – an important distinction when modelling possible deductions. (I use "knowledge" to mean just "accessible information", rather than "justified true belief", which is not useful here as truth and justification often depend on context.)
Where practicable, shareable beliefs (whether or not common) should be represented by pointers to shareable data objects (more to minimize updating costs and complexity than to save space); for example, the set of plausible operations for a given power held by any of its egos/Ego~s. By Occam's Razor, any belief held by a child ego/Ego should be equal to that of its parent ego/Ego, unless and until there is sufficient reason to believe they should differ. If shareable, a belief should record the topmost Ego that points to it as its "owner". Only the owner can update it, thereby instantly making the updated version available to any descendants that use the (dynamically) inheritable default. A non-owner must take a copy (of which it is then the owner) before doing any update. The owner (only) must finally delete it, after first deleting any child Ego~s (so that it still exists when they check ownership).
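Here is a toy standalone illustration of that copy-on-write rule (hypothetical ToyEgo/ToyBelief types – the real mechanism, with slots and tree-wide propagation, is in the class sketch later in this page): a non-owner takes an owned copy before its first update, leaving the shared default untouched.

```cpp
#include <cassert>
#include <cstddef>

// Toy copy-on-write belief sharing; illustrative names, not the BB classes.
struct ToyEgo;

struct ToyBelief {
    ToyEgo* Owner;  // topmost ego that points to this belief
    double Val;
};

struct ToyEgo {
    ToyBelief* Belief;  // possibly shared with the parent by default
    ToyEgo() : Belief(0) {}
    // Ensure {this} owns its belief before any update (copy-on-write).
    ToyBelief* OwnBelief() {
        if (!Belief || Belief->Owner != this) {
            ToyBelief* copy = new ToyBelief;
            copy->Owner = this;
            copy->Val = Belief ? Belief->Val : 0;  // inherit the default value
            Belief = copy;
        }
        return Belief;
    }
};
```

The parent keeps seeing the shared default; only the updating child's pointer diverges.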
A new (child) Ego is always initialized to be a copy of all and only those parts of its parent that its parent knows it must know (copying just the pointers where sharing is appropriate). Clearly it must not be assigned beliefs that are private to its parent (such as press between the parent and other powers); but it should not be deprived of beliefs that its parent would expect it to have in the absence of contrary indications (such as press between the parent and that child power, or general heuristic values that the parent uses, such as propensity to hold – even if the root would consider them misguided). The child Ego pointers in a new Ego should be null (if realized later they will be distinct branches on the Ego-tree), except that the one that corresponds to itself should point to itself (the best model of itself is itself, and it simplifies other logic to preset it). Once realized, any Ego persists for the duration of the game, and once set, a pointer to a child Ego is never changed.
A very simple bot, even when using press, need not have or model any ego, even his own, so needs no Ego-tree at all – it just is his (unconscious) mind! But an Ego-tree (consciousness!?) is needed to more fully model (in more expanded form) press, the MC of my GT method and/or deep explanations of observed press and orders. A simple no-press bot using my GT method would need to realize 2 levels of Ego: the root that directly models itself (as always), with a child Ego for each other power. Direct press (non-nested FRM messages) can be fully represented, in a compact form, as facts, without any Ego-tree, but if an Ego-tree is to be used it would only require 1 level to represent such press within it from the viewpoint of the bot itself. However, if and when viewpoints of other minds were required (for instance, to analyse press from the viewpoints of other minds), 3 or 4 potentially unique levels could be realized: a root modelling itself, with a child Ego for every mind with whom press has been "observed" (sent or received), each with a grandchild Ego representing what the root knows that that child knows about the root. Further unique grandchildren and corresponding great-grandchildren could also be realized if there was more than one recipient of some press. (Even when representing the same mind, the grandchild Ego generally has different beliefs from the root, as the grandchild does not know about press between the root and any other child – only what the grandchild's parent knows it knows! As always, Ego~s could be realized at any deeper levels, but each would only hold a copy of the beliefs from the most recent ancestor of the same mind (many beliefs being the same for all minds), including any deductions they may hold (albeit child pointers initially merely point to potential (lazy) copies, and properties may vary, such as depth and parent). But they only currently share beliefs: they are equal but not identical Ego~s, as their beliefs may diverge later.)
Indirect "hearsay" press (nested FRM messages) or sophisticated explanations require still more levels to represent the beliefs in expanded form, depending on the depth of beliefs about beliefs (meta-beliefs) involved. However, the confidence of any ego in any descendent ego it models is likely to fall rapidly with depth in the Ego-tree.
To avoid undue costs, even if logically needed, an Ego should only be realized if expected to be cost-effective to do so (though probably usually cheap in space and time) and only used in any analysis if expected to be cost-effective for the given purpose – expectations being empirical (heuristic). However, even if my GT method, say, deeply explores the Ego-tree, hopefully many beliefs would be shared by several Ego~s so that relatively few distinct operations, say, need be considered. (Even if evaluated from within a deep Ego, the same data would be updated and converged. Indeed, as stated above, beyond a certain depth, Ego~s merely duplicate the beliefs of the closest ancestor of the same mind.) Or else differences would hopefully often be too small or uncertain to be worth treating as different. A very sophisticated bot could also represent explanations of any observed press and orders, in terms of Ego~s nested to any depth; for example, to model enemy alliances, who the enemy believe are allied against them, and so forth. Any belief – normally a fact or an axiom, rather than an assumption, since the latter are adjusted automatically by the MC – could be replaced temporarily by another to explore hypothetical, even counterfactual, worlds, including look-ahead and might-have-been (to improve future play).
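The temporary replacement of a belief to explore a hypothetical or counterfactual world can be done safely with a scope guard; a sketch under my own naming (TempBelief), restoring the real value automatically when the exploration ends:

```cpp
#include <cassert>

// Scope guard that temporarily overrides a value, restoring it on exit.
// Illustrative sketch for exploring hypothetical/counterfactual worlds;
// not the BlabBot mechanism itself.
template <class T>
class TempBelief {
    T& Ref;
    T Saved;
public:
    TempBelief(T& ref, const T& hypothetical) : Ref(ref), Saved(ref) {
        Ref = hypothetical;        // install the counterfactual value
    }
    ~TempBelief() { Ref = Saved; } // restore reality on scope exit
};
```

Look-ahead or might-have-been analysis then runs inside the guard's scope, and the real belief is intact afterwards even if the analysis exits early.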
If a bot directly observes a given press "document" (FRM message), knowledge of it is common to all its observer egos (its "insiders" or "camp") and unknown to all other egos (its "outsiders"). A sequence of documents within the same camp, and any deductions based solely upon it (a "discussion"), should be believed by all minds that observed it. A pointer to the discussion belief is added to each realized child Ego of the owner (hence itself, since, as stated above, it is one of its own children!) – immediately if the child is already realized, or later when it is realized. So when realized, any Ego always knows about (points to) all and only the discussion beliefs of camps it is in. It will also know about distinct child Ego~s (when realized) representing other partners of the camp, but these will be bespoke Ego~s that only know about the discussion beliefs that the parent knows they know about, not any others that the parent or any more distant ancestor Ego may know. (Absence of belief does not imply belief of absence; for example, by hearsay – albeit only what is believed is recorded.) Any deductions from press, such as an "agreement" (contemporaneous common assent to some statement), should be made once and stored in the associated discussion belief (unless there is good reason to believe that some partner would not be able to make the deduction – which in the case of an agreement would invalidate the deduction for all partners anyway!).
Receipt of hearsay (a nested FRM message) requires a similar procedure for each level of its nesting. {Set the current Ego to the root and the current message to the whole message. While the current message is a FRM, do {process the current message with respect to the current Ego, as above; set the current Ego to the child corresponding to the current sender and the current message to the current content (press_message/reply field).}} (As always, new Ego~s are realized on demand.) Each level of Ego then holds in its discussion belief the document that it purportedly observed, albeit a given ego is only 100% confident of the one that it directly observed; deeper ones being ever more dubious to it. And even if a document certainly existed, its content may not represent the truth – again, deeper ones being ever more dubious. Strictly, the sender of hearsay is only saying (possibly lying) that he sent, received or deduced the document – not that he believes its content to be true. (A deduction – possibly flawed – is implied when the sender of the hearsay was not the sender or a recipient of the immediately inner document.) Furthermore, there is no guarantee, even if all hearsay received were true, that it represents the whole truth: in principle there could be omissions (in the absence of honoured FWD and BCC agreements), reordering and/or undue delays. (However, although (in the absence of FWD and BCC agreements) an honourable power need not (and usually would not) forward all documents he observes, I would recommend that, to avoid confusion, if he sends any document in a "discussion" on a given "topic" (that is, a statement and its replies) he should also send all documents on that topic by the partners of that camp, timely and in sequence (albeit arbitrarily interspersed with other discussions), including old documents where necessary (where a new message is sent to comply with a new FWD or BCC agreement).
This is essential to allow the recipients to determine the latest opinion purportedly expressed on the topic within the hearsay camp, and hence any purported implied agreement.) Since a given ego could receive – possibly different – hearsay about the same discussion from any other power, what it actually believes about the discussion is generally different again (and generally not common knowledge, so not appropriate to store in the discussion belief). (It could piece together what discussion it believes really took place, but it is probably best only (tentatively) to decide the "gist" – such as expected latest opinions and agreements – from all available evidence – as a separate belief.)
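The per-level procedure for nested hearsay given above translates naturally into a loop; a standalone sketch with hypothetical stand-ins (Msg for a FRM document, HearsayEgo for an Ego-tree node realized on demand – not the real press or Ego classes):

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-ins: Msg for a (possibly nested) FRM document,
// HearsayEgo for an Ego-tree node realized on demand.
struct Msg {
    int Sender;
    Msg* Inner;  // nested FRM content, or null if not itself a FRM
};

struct HearsayEgo {
    int Key;
    int Recorded;  // count of documents filed in this ego's discussions
    std::vector<HearsayEgo*> Child;
    HearsayEgo(int key, int n)
        : Key(key), Recorded(0), Child(n, (HearsayEgo*)0) {
        Child[key] = this;  // an ego always knows itself
    }
    HearsayEgo* Obtain(int key) {
        if (!Child[key]) Child[key] = new HearsayEgo(key, (int)Child.size());
        return Child[key];
    }
};

// Walk down one Ego level per FRM nesting level, filing each document
// with the ego that purportedly observed it.
void ProcessHearsay(HearsayEgo* root, Msg* whole) {
    HearsayEgo* ego = root;
    for (Msg* m = whole; m; m = m->Inner) {
        ++ego->Recorded;               // "process the current message"
        ego = ego->Obtain(m->Sender);  // descend to the sender's child Ego
    }
}
```

For FRM(s1)(...)(FRM(s0)(...)(c0)), the outer document is filed with the root and the inner one with the child Ego for s1, mirroring who purportedly observed what.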
Here is a sketch of the main classes for an ego-tree that I have described or alluded to. Not tested, but compiles in BlabBot – which already has the missing classes. See notes below.
//////////////////////////
template <class t, class k>
class Map {
// Container of elements of type {t}, with internal key of type {k}.
public:
t& Obtain(k key); // find, or create if not found
//...
};
class Operation {
// An operation, as described in my GT method.
//...
};
class Discussion {
// A discussion: a sequence of press of a specific camp.
// Implemented in BB.
//...
};
class Power {
// A game power.
// Implemented in BB.
//...
};
class Camp {
// A set of Power~s.
// Implemented in BB.
//...
};
class Ego;
class Belief {
// Base class of any belief.
public:
Ego* Owner;
Belief() {}
Belief(Ego* owner): Owner(owner) {}
};
class RealBelief: public Belief {
// A belief about a Real quantity.
public:
Real Val;
RealBelief() {}
RealBelief(Ego* owner, RealBelief* x): Belief(owner), Val(x ? x->Val : 0) {} // {x} may be null if no previous default
};
class OperationSetBelief: public Belief {
// A belief about a set of Operation~s, as in my GT method.
Vec<Operation*> MyOperationVec;
//...
};
class DiscussionBelief: public Belief {
// A belief about a Discussion.
Discussion MyDiscussion;
//...
};
template <class slotT>
class SlotM {
// Locates data in any Ego that can be specified by member.
// {slotT} is type of pointer.
public:
typedef slotT SlotT;
slotT* Ego::* Loc1;
SlotM() {}
SlotM(slotT* Ego::* loc1): Loc1(loc1) {} // loc1 is member in Ego
slotT*& operator() (Ego* ego) {return ego->*Loc1;}
};
template <class slotT, class loc1T>
class SlotMM {
// Locates data in any Ego that can be specified by member and inner member.
public:
typedef slotT SlotT;
loc1T Ego::* Loc1;
slotT* loc1T::* Loc2;
SlotMM() {}
SlotMM(loc1T Ego::* loc1, slotT* loc1T::* loc2): Loc1(loc1), Loc2(loc2) {}
// loc1 is member in Ego; loc2 is member in loc1
slotT*& operator() (Ego* ego) {return (ego->*Loc1).*Loc2;}
};
template <class slotT, class loc1T>
class SlotMI {
// Locates data in any Ego that can be specified by member and index.
public:
typedef slotT SlotT;
loc1T Ego::* Loc1;
int Loc2;
SlotMI() {}
SlotMI(loc1T Ego::* loc1, int loc2): Loc1(loc1), Loc2(loc2) {}
// loc1 is member in Ego; loc2 is index in loc1
slotT*& operator() (Ego* ego) {return (ego->*Loc1)[Loc2];}
};
template <class slotT, class loc1T, class loc2T>
class SlotMK {
// Locates data in any Ego that can be specified by member and key.
public:
typedef slotT SlotT;
loc1T Ego::* Loc1;
loc2T Loc2;
SlotMK() {}
SlotMK(loc1T Ego::* loc1, loc2T loc2): Loc1(loc1), Loc2(loc2) {}
// loc1 is member in Ego; loc2 is key in loc1
slotT*& operator() (Ego* ego) {return (ego->*Loc1).Obtain(Loc2);}
// Obtain = find, or create if not found
};
class CampWorld: public Belief {
// Common beliefs of camp {Key}.
// My private beliefs about the camp must be stored separately.
public:
Camp* Key;
DiscussionBelief* MyDiscussionBelief;
CampWorld(Ego* owner, CampWorld* x): Belief(owner), Key(x ? x->Key : 0), MyDiscussionBelief(0) {} // {x} may be null if no previous default
//...
};
class ExampleWorld {
// Example only.
public:
RealBelief* Example;
//...
};
class Ego {
public:
Power* Key;
Nat Depth;
Ego* Parent;
Vec<Ego*> ChildVec;
Map<CampWorld*, Camp*> CampMap;
OperationSetBelief* PlausibleOperationSet;
OperationSetBelief* SampleOperationSet;
RealBelief* Aggression;
RealBelief* MetaDebt;
ExampleWorld MyExampleWorld;
Vec<RealBelief*> ExampleVec;
//...
template <class var>
void PropagateDefaultBelief(Ego* top, var& v, typename var::SlotT* b) {
// Propagate new Belief {b} down the ego-tree.
// Replace the Belief located by {v} in {this} Ego,
// and in each descendant Ego whose belief is still owned by {top}.
// No need to search deeper if a bespoke belief is found.
v(this) = b;
Vec<Ego*>::Iter it;
for (it = ChildVec.Begin(); it != ChildVec.End(); ++it) {
Ego* e = *it;
if (v(e) && v(e)->Owner == top)
e->PropagateDefaultBelief(top, v, b);
}
}
template <class var>
void EnsureOwnBelief(var v) {
// Call before updating the Belief located by {v}, unless sure Ego {this} owns it.
// ({v} is passed by value so that a temporary Slot can be used.)
typename var::SlotT* b = v(this);
if (!b || b->Owner != this) { // not owner; rare
// Pass the old owner as {top}, so descendants still sharing the old
// default are switched to the new copy owned by {this}.
PropagateDefaultBelief(b ? b->Owner : this, v, new typename var::SlotT(this, b));
}
}
void Example() {
// Examples of ensuring Belief~s exist and are owned by {this}
// before updating them.
SlotM<RealBelief> s;
s.Loc1 = &Ego::Aggression;
EnsureOwnBelief(s);
Aggression->Val = 1.23;
s(this)->Val = 1.23; // equivalent update via the slot
EnsureOwnBelief(SlotM<RealBelief>(&Ego::Aggression));
Aggression->Val = 1.23;
Ego* e = new Ego;
e->EnsureOwnBelief(SlotM<RealBelief>(&Ego::MetaDebt));
e->MetaDebt->Val = 1.23;
SlotMM<RealBelief, ExampleWorld> sj;
sj.Loc1 = &Ego::MyExampleWorld;
sj.Loc2 = &ExampleWorld::Example;
EnsureOwnBelief(sj);
MyExampleWorld.Example->Val = 1.23;
sj(this)->Val = 1.23;
EnsureOwnBelief(SlotMM<RealBelief, ExampleWorld>(&Ego::MyExampleWorld, &ExampleWorld::Example));
EnsureOwnBelief(SlotMI<RealBelief, Vec<RealBelief*> >(&Ego::ExampleVec, 10));
Camp* camp = 0; // should be the desired camp!
EnsureOwnBelief(SlotMK<CampWorld, Map<CampWorld*, Camp*>, Camp*>(&Ego::CampMap, camp));
}
}; // Ego
//////////////////////////
In BlabBot, a Real is a double; a Vec is derived from std::vector. A Map provides fast lookup given a sparse key in the object – it could be a sorted Vec (since it is small and insertions are infrequent) or a hash table – or a std::map could be used with a little adjustment; a Camp object is unique for a given set of powers (any order, ignoring duplicates); a Discussion contains Declaration~s, each of which records press on a given topic, the latest opinion by each partner and any agreement.
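For illustration, a sorted-Vec Map of that kind might implement Obtain roughly as follows (a sketch using std::vector and std::lower_bound rather than the BB classes; keys here are stored beside the element rather than inside it):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Sorted-vector map: binary-search lookup, rare insertions.
// Sketch only; BB's Map stores the key inside the element instead.
template <class T, class K>
class SortedMap {
    typedef std::pair<K, T> Entry;
    std::vector<Entry> Data;  // kept sorted by key
    static bool KeyLess(const Entry& a, const Entry& b) {
        return a.first < b.first;  // compare keys only
    }
public:
    T& Obtain(const K& key) {  // find, or create if not found
        Entry probe(key, T());
        typename std::vector<Entry>::iterator it =
            std::lower_bound(Data.begin(), Data.end(), probe, KeyLess);
        if (it == Data.end() || it->first != key)
            it = Data.insert(it, probe);  // rare: shift and insert
        return it->second;
    }
    std::size_t Size() const { return Data.size(); }
};
```

Lookups are O(log n); the occasional insertion shifts a few elements, which is cheap when, as the note says, the container is small and insertions are infrequent.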
Ego::Key is the power that has this ego; it is unique within Parent; used to look up within ChildVec.
All Belief* must be null if not realized. Simplest if all Belief* are direct members of Ego. (See extra complexity of "slots" needed to locate beliefs within ExampleWorld, ExampleVec and CampWorld in Example function.)
Ego::PlausibleOperationSet and SampleOperationSet are beliefs about sets of operations, as used in my GT method.
Ego::Aggression is an example of a belief about an observed bias – a measure of aggression, against camps in general, which could be used by the heuristic functions of my GT method.
Ego::MetaDebt is an example of a meta-belief. A meta-belief is a belief of the parent Ego about the child – in this case a measure of what the parent believes the child owes it.
An Ego~s destructor must delete all its child Ego~s, then all Belief~s that it owns (not the other way round, or the Belief~s would already have been deleted when child Ego~s checked ownership).
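A standalone sketch of that destruction order (toy MiniEgo/MiniBelief types, with a Log recording the order purely for demonstration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the destruction-order rule: delete child Ego~s first, then the
// Belief~s this Ego owns, so children can still check ownership while dying.
// Illustrative types only; {Log} records the order for demonstration.
static std::vector<std::string> Log;

struct MiniBelief {
    std::string Name;
    MiniBelief(const std::string& n) : Name(n) {}
    ~MiniBelief() { Log.push_back("belief:" + Name); }
};

struct MiniEgo {
    std::string Name;
    std::vector<MiniEgo*> Children;
    std::vector<MiniBelief*> OwnedBeliefs;
    MiniEgo(const std::string& n) : Name(n) {}
    ~MiniEgo() {
        for (std::size_t i = 0; i < Children.size(); ++i)
            delete Children[i];          // children first
        Log.push_back("ego:" + Name);
        for (std::size_t i = 0; i < OwnedBeliefs.size(); ++i)
            delete OwnedBeliefs[i];      // then the beliefs this Ego owns
    }
};
```

Deleting the root logs the child, then the root, then the root's owned belief – the owned Belief outlives every descendant that might consult it.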
[0] An identical object could often serve in multiple locations; for example, in deeper Ego~s of the same mind. By pointing to a shared object, space and, more importantly, time can be saved – especially as only one copy need be updated, and the updated version is automatically available everywhere it is needed. See note [8].
[1] Power, player (and the world of a player, a.k.a. the running program when a bot) are almost, but not quite, synonymous as far as DAIDE is concerned (since a player is only identified by the power that he is playing in the current game, and we only care about his ego). (Anthropomorphically and lazily, I normally refer to a power as "he", rather than a PC-style "she, he or it"; no gender implication intended – or insult to anyone or anything! This allows an easier mental slip to "current player", who may be human, and anyway reminds us that any power needs special care, since the associated player (or world) is an intelligent, reactive agent, unlike the other components of the game, which are more straightforwardly mechanical.) In other contexts, "power" just means his tangible units, SCs, HCs, their topology, and so forth, on the board. But this can be considered to just represent resources of, or constraints on, the player, so there is no need to distinguish (if we bear in mind that different beliefs are common to different sets of powers – these data just happen to be common to all – see below). So for convenience, when clear from context, I normally only use the term "power"; otherwise, or for emphasis, "board-power" or "player's-power". I also often say "I" when I mean what I would do if I were my bot! Similarly, it is convenient to use the same data object for both.
However, there is a subtle difference between powers and players when considering multiple games. For example, it might be better for (the board-power) ENG (or its current player) in this game to build armies, whereas averaged over many games it might be better to build fleets when playing ENG. On the other hand, different properties of different types of player might be discovered, though I would probably not attempt to recognise specific humans or bot-types (certainly not clones of self, which is against Etiquette!). Instead, I might observe, and take advantage of, the fact that some powers consistently tend to order a hold when I would not in the same circumstances; that is, they have apparently undue weight for a hold, say – but I would probably not look for a HoldBot or Mr Bean, say, as such, as it would probably be too specific, for long-term use, to be worth coding. Similarly, I would also not associate any Diplomacy game-independent beliefs with a specific board-power (even within a specific variant, whether identified by its name or, better, its formal description). Instead, I would associate such beliefs with various properties (such as relative amount of coast), such that, if necessary, they would only apply to ENG (in that variant) but may be applicable to other – possibly yet unseen – variants.
[2] I use "common" in its most usual technical sense, meaning the belief is "shared"; that is, believed by all partners in the camp, but also that each partner believes that all other partners believe that the others believe this, recursively. (That is not to say that all partners represent a given belief the same way.) Pinker uses the term "mutual", because "common" is used ambiguously colloquially, but also says some specialists use "mutual" to mean just "shared"! I assume any technical term takes precedence in a technical context!
[3] For clarity, I capitalize the initial letter in any non-local (global or member) identifier that I invent (and synonym others that do not conform, where practicable), but not the word or phrase of the concept it represents, nor for local identifiers (dummy arguments and local variables). To avoid ambiguity, I never start a sentence with a plain identifier. It can be convenient in a comment to inflect an identifier; for example, for a plural or a conjugation, but for precision I separate any prefix or suffix by a tilde (~), thereby avoiding corrupting the identifier itself. For example, the real-world "reply", a local identifier "reply" and the non-local identifier "Reply", could be inflected to "replies", "reply~s" and "Reply~s", respectively.
In contexts where it may be at all ambiguous whether I am referring to a formal symbol (or expression) I enclose it in back-primes; for example, "`Reply` is used to ...." C++ keywords invariably need back-primes for clarity, `this` being a common case. I generally enclose local identifiers in back-primes in comments, even when unambiguous, for consistency, and to avoid them accidentally becoming ambiguous after renaming by a repetitive edit. The following are equivalent: "`x` = 0" and "`x == 0`"; the latter being more formal.
In C++ source, I generally stick to documenting and discussing the program-level concepts, rather than the corresponding real-world concepts that they model. For example, within the source, "Power" refers to the class, and "power" may refer to an instance of it, an object that represents a real-world (that is, Diplomacy) power; it being assumed in the source that the real-world meaning is understood (or is documented externally unless trivial). This simplifies searching and repetitive edits, and anyway it is generally the definition of the implementation that most needs documenting – the real-world concept being taken as read. Outside the source I use "type" or (usually, more specifically) "class" for the program-level entity that corresponds to a real-world "kind" of entity. Which I talk about directly depends on which aspect I want to emphasise, but the meaning of the other is implicit, as are their instances.
[4] In principle, even facts and axioms have some uncertainty (from possible hardware, software or conceptual/convention errors), but this should be negligible and not worth modelling.
[5] By "hearsay" I mean an inner FRM message (that is, FRM press). Receipt of a (whole message) FRM(s1)(r1)(FRM(s0)(r0)(c0)), where recipients r0 contains sender s1, indicates to all recipients r1 that sender s1 has received message FRM(s0)(r0)(c0). But there is no implication about how confident r1 should be that r0 did, indeed, receive that inner message, nor if received, the confidence of s1 (or, independently, the rest of r0) about c0 – it might even be zero. So to determine their confidence about c0, r1 need to judge the honesties of s1 and s0 within the camps concerned. Similarly, if s0 = s1 then r1 are being informed that s0 (that is, s1) sent c0 to r0, but no indication of how confident r1 should be about whether the inner message was, indeed, sent to r0, nor the validity of c0, nor expected confidence of r0 about c0.
If neither s0 nor r0 contain s1, s1 cannot have sent or received c0 directly. It could be interpreted as necessarily a lie, but then it would be silly and serve no purpose – maybe even brand s1 as a liar and/or idiot. So it would be more useful to interpret it as hearsay, albeit embedded within an undefined directly observed message or otherwise deduced. (The Syntax Document does not explicitly say an embedded FRM must be directly observed, even though that might well have been assumed! It says "sent or received" but does not explicitly say "directly".) If such a message can be indirect, FRM(s0)(r0)(c0) must be hearsay or otherwise deduced as far as s1 is concerned, but with the implication that, inasmuch as s1 is honest, he has reasonably high confidence that it was sent as stated – not just > 50%, but enough to risk some reputation upon!
[6] Apart from hearsay, the only purpose I can think of that could need deeper levels of Ego to be realized than for direct press is for representing deep reasons for possible hypothetical or counterfactual worlds that could explain observed/non-observed, hypothetical or counterfactual messages – such as possible friendships between my enemies and their guesses at my friendships. Needless to say, due to the complexities, doubts and probable small returns of hearsay and explanations, they, and their deeper worlds, will be well down my priority list! But it is an example of where deeper worlds could be needed, and it is easy and efficient enough to allow for in the general belief mechanism, permitting immediate use if need be.
[7] BlabBot (BB), the source and documentation of which I shall release, contains a Forum class, which records a sequence of non-empty Session~s for a given camp. A Session records all press messages (with times) within a given camp in a given turn, together with various summaries, including: "discussions" on different "topics", the latest "opinion" of each partner power on each topic, if any, and any resultant "agreements" ("treaties" if the topic is a PRP or INS, else "accords") [11]. (A full log of all press observed between all powers is also available in BB as a "fact".) I may also include the Ego-tree classes in BB (but probably not in the initial release).
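The Forum/Session shape described above can be sketched as follows. This is a guess at the skeleton only – the member names, and the `PressMsg` helper, are illustrative assumptions, and the real BB classes carry the summary data (discussions, opinions, agreements) omitted here:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of BB's per-camp press history, as described in [7].
struct PressMsg {
    int time;          // receipt time within the turn (assumed representation)
    std::string text;  // raw press text
};

struct Session {       // all press within one camp in one turn
    int turn;
    std::vector<PressMsg> messages;
    // summaries (discussions, topics, opinions, agreements) would live here
};

struct Forum {         // sequence of non-empty Sessions for one camp
    std::vector<Session> sessions;
    // Return the Session for this turn, appending one on first use,
    // so that Sessions are only created when press actually occurs.
    Session& current(int turn) {
        if (sessions.empty() || sessions.back().turn != turn)
            sessions.push_back(Session{turn, {}});
        return sessions.back();
    }
};
```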
[8] In BB, a Nat is an unsigned int; a Real is a double; a Vec is derived from std::vector; Power [9], Camp [10], Forum and Session [7] are more complex classes, which would typically have bot-specific extensions. There is only one Power object for each power. There is no more than one Camp object for each canonical (sorted) set of powers – created when first needed. A Map is any suitable container that provides fast access for a unique key stored within the object. (A sorted Vec would probably be best here, thereby allowing binary search; insertions being rare and normally with few elements to shift. BB also has hash table classes.) An Operation represents a combination of one or more orders, as outlined in my GT method.
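The sorted-Vec variant of a Map mentioned in [8] can be sketched as below – binary search for lookup, with insertion shifting the (few) later elements. The class and member names are my own; BB's actual containers may differ:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Sketch of a Map backed by a sorted Vec, per [8]: O(log n) lookup via
// binary search; insertions are rare and shift few elements.
template <class Key, class Value>
class VecMap {
    std::vector<std::pair<Key, Value>> items_;  // kept sorted by Key
    auto lower(const Key& k) {
        return std::lower_bound(items_.begin(), items_.end(), k,
            [](const std::pair<Key, Value>& p, const Key& key) {
                return p.first < key;
            });
    }
public:
    // Return a pointer to the value for k, or nullptr if absent.
    Value* find(const Key& k) {
        auto it = lower(k);
        return (it != items_.end() && it->first == k) ? &it->second : nullptr;
    }
    // Insert, or overwrite an existing entry for the same key.
    void insert(const Key& k, Value v) {
        auto it = lower(k);
        if (it != items_.end() && it->first == k) it->second = std::move(v);
        else items_.emplace(it, k, std::move(v));
    }
};
```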
[9] A Power provides its TLA and a unique index from a compact set (0...n-1), suitable for fast table lookup of Power, or associated data, in vectors.
[10] Each distinct Camp represents a distinct set of Powers (stored in a canonical (address) order, no duplicates) for any purpose (such as a camp, a set of recipients, or the powers yet to reply to some press). A "camp" is a set of powers that has access to some data that is common knowledge to all and only its partner powers. (Whether other, possibly overlapping, camps have (common or shared) knowledge is undefined.) The concept is used in BB to label press observed by (exchanged between) a given set of powers (irrespective of which of them was the sender), and any other camp-specific data, such as any agreements. I say that all powers in a given camp "observe" the same press message, whether they were the sender or a recipient, making press analogous to any other kind of common belief, such as what alliances or informal friendships I believe they have, even in the absence of press. There is a sparse set of camps, but each distinct Camp has a unique address (itself findable in a hash table from its component Power~s), suitable for lookup in a Map.
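The "no more than one Camp object per canonical set of powers, created when first needed" discipline from [8] and [10] amounts to interning. A minimal sketch follows; note that BB canonicalizes by address order and uses its own hash tables, whereas this illustration uses alphabetical TLA order and `std::map` as stand-ins, and the `CampTable` name is mine:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// One Camp per distinct set of powers, canonical order, no duplicates.
struct Camp {
    std::vector<std::string> powers;  // canonical order
    // camp-specific data (observed press, agreements, ...) would live here
};

class CampTable {
    std::map<std::set<std::string>, Camp> table_;  // key: canonical power set
public:
    // Find or create the unique Camp for this set of powers; because the
    // Camp is never moved, its address itself serves as a unique key.
    Camp* get(const std::vector<std::string>& powers) {
        std::set<std::string> key(powers.begin(), powers.end());  // sorts, dedups
        auto [it, fresh] = table_.try_emplace(key);
        if (fresh) it->second.powers.assign(key.begin(), key.end());
        return &it->second;
    }
};
```

Because the same set of powers always yields the same address, Camp pointers can be compared directly and used as keys in a Map, exactly as [10] requires.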
[11] A few press messages, such as TRY, are dealt with specially, as they have no meaningful response (which is not necessarily a "reply"), so they are not considered to be part of any discussion. Other approaches could be used instead (but I probably would not), such as having a derived press class for just a few kinds of press that may be of interest.
[12] Conceptually, any ego/Ego has a unique name, comprising the TLAs of its powers, in alphabetic order, separated by hyphens, preceded by the name of its parent ego, if any, separated by a dot, thereby denoting its position in an infinite logical ego-tree. For example, the root ego named AUS (corresponding to the real player concerned) is represented by an Ego containing pointers to Ego~s named AUS.AUS, AUS.ENG, AUS.FRA, AUS.AUS-ENG, and so on. (AUS-ENG represents a camp of AUS and ENG.) Conceptually, a corresponding ego has infinitely many names; for example, here, AUS is also named AUS.AUS because its (one) corresponding child pointer points to the same Ego; it is also named AUS.AUS.AUS, and so on. AUS.ENG must point to a distinct Ego, because it represents a different power, ENG. (Additional consecutive identical TLAs are always redundant and should be removed when canonicalizing.) The shortest (and canonical) name is that obtained via successive parents.
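The canonicalization rule in [12] – consecutive identical components are redundant – reduces any of an ego's infinitely many names to its shortest form. A sketch, treating a name as a vector of components (each a TLA or a hyphenated camp name; the function name is my own):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Canonicalize an ego name per [12]: collapse runs of consecutive
// identical components, so e.g. AUS.AUS.AUS becomes AUS, and
// AUS.ENG.ENG becomes AUS.ENG. Non-consecutive repeats (AUS.ENG.AUS)
// are meaningful and must be kept.
inline std::vector<std::string>
canonicalize(const std::vector<std::string>& name) {
    std::vector<std::string> out;
    for (const auto& part : name)
        if (out.empty() || out.back() != part)
            out.push_back(part);
    return out;
}
```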
[13] Data for inactive (eliminated or disconnected) powers should be retained, as they may be needed to record, directly or indirectly, things such as explanations for the current Ego-tree. However, data for such powers should be ignored for most purposes; for example, during my GT method.
[14] "Realized" conveniently means both "explicitly thought about" and "explicitly represented as a real (that is, physical) data object" (hence also "reified" if the entity being represented is abstract). In both cases this is opposed to a mere potential to do so when required. A null pointer to a belief can conveniently represent belief that is not realized; other data would merely be undefined, which must be clear from context, flags, or similar.
[Originally presented in DipAi post #8047.]
I said that, beyond a certain depth in the Ego-tree, an Ego merely duplicates the beliefs of its closest ancestor of the same power. But how can a parent realize such a child when an ego does not know any of its ancestors (that is, an Ego, when simulating its ego, is not allowed to access any ancestor Ego, even though, for convenience, the Ego may know its parent, and so forth)? However, I also said that an Ego should be a copy of its parent, unless there are good reasons to do otherwise. There is, of course, good reason for any ego to believe what its own power is, which the parent can easily assign! In theory, this could be the only difference between parent and child in all Ego~s below some depth. Each corresponding ego could believe itself to be in the same world as its siblings, but that it was a different power.
Alternatively, other differences must be held explicitly by the parent, as "meta-beliefs", representing its beliefs about the default beliefs of potential child egos of specific powers or sets of powers. (These are the dynamic default beliefs I alluded to: any changes to such beliefs by the parent would be instantaneously propagated to all descendants that point to them, but descendants could override them by pointing to bespoke beliefs when, exceptionally, there was evidence for divergence of belief. Such bespoke beliefs would become default beliefs for any deeper descendants. When an Ego first changes from an inherited to a bespoke belief, any of its descendants sharing the original default must also be changed to point to the new default.) An example could be that any ego would typically inherit the ancestral sets of plausible and sampled operations specific to its power – albeit possibly overriding them – so each ego/Ego needs to know the sets for each power, not just itself. An example of a meta-belief that is not specific to a single power could be where the root might believe (from experience in previous games) that its opponents tend to be more aggressive than it believes is optimal for its own play. Rather than replicating such beliefs, and to capture the semantics most powerfully, any meta-belief should be associated with a camp (a specific set of powers) rather than a power.
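The propagate-by-default, override-by-exception scheme above is naturally expressed by having each Ego hold a pointer to a belief object: while the pointer targets the ancestor's object, the ancestor's updates are seen instantly; overriding just repoints at a bespoke object. A minimal sketch under that assumption (the `Belief`/`BeliefSlot` names, and the use of `std::shared_ptr`, are illustrative, not BB's):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// A belief value shared by all egos that have not diverged from it.
template <class T>
struct Belief { T value; };

// An ego's slot for one belief: inherited by default, bespoke on demand.
template <class T>
class BeliefSlot {
    std::shared_ptr<Belief<T>> p_;
public:
    explicit BeliefSlot(std::shared_ptr<Belief<T>> inherited)
        : p_(std::move(inherited)) {}
    const T& get() const { return p_->value; }
    std::shared_ptr<Belief<T>> ptr() const { return p_; }
    // Diverge: repoint at a fresh object; ancestors' updates no longer apply.
    void setBespoke(T v) {
        p_ = std::make_shared<Belief<T>>(Belief<T>{std::move(v)});
    }
    bool isBespoke(const BeliefSlot& parent) const { return p_ != parent.p_; }
};
```

A fuller implementation would also handle the repointing step noted above: when an Ego first goes bespoke, descendants still sharing the original default must be switched to the new default.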
Although implied by my analysis of hearsay (nested FRM) press, I did not emphasise that a camp (a specific set of powers with some common beliefs) of some ego (or its corresponding CampWorld) need not include its own power as a partner. Its power is then an "outsider" of such a camp, which represents a (partially) coordinated potential enemy (set of powers).
Less obvious is that a camp could, at least conceptually (whether or not allowed in an implementation), comprise a single power, which is synonymous with a power's private beliefs. Conversely – it has just struck me – conceptually at least, egos could, and perhaps should, be associated with camps rather than powers – single-power camps being just a special, albeit very common and important, case. A camp would then analogously put "corporate identity" on the same scale as "personal identity". So just as one power could believe that another power was trustworthy or had certain propensities, say, more generally a camp could believe that another camp was trustworthy or had certain propensities. Of course, adding more powers to a camp tends to make its actions much less controllable (with a very big jump from one to two!), but going from one to many is, perhaps, just a matter of degree rather than kind. I will give the idea more thought and may implement it in that more general way.