John Newbury       23 September 2008


Cogitations on AI in the Game of Diplomacy: Low Discrepancy


Here is a brain dump, in full detail, of my current Cogitations about Low Discrepancy. (Later Cogitations override earlier ones if contradictory.)

Contents

2008-02-02

[Originally presented in DipAi post #8027.]

See http://en.wikipedia.org/wiki/Low-discrepancy_sequence: a deliberately balanced distribution, better matching the desired probability (especially for small samples), is called "low discrepancy" (as in a low-discrepancy sequence, compared with a random or pseudorandom sequence). (Example 2-d plots show a similarity to how artists, especially on a stage set, say, often fail to make a realistic distribution of stars in the sky – too regular; not enough clumping.) Typically such sequences would be calculated by some quick algorithm (as a pseudorandom number would be), but I see no reason why this should not be some loosely coupled mechanism, such as I would have to use to generate my "boost" for "intensity" (and hence "achieved probability").
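To make the idea concrete, here is a minimal sketch (not part of my program) of one standard quick algorithm for such a sequence, the van der Corput sequence: it reflects the base-b digits of the index about the radix point, and even a tiny sample spreads evenly over [0, 1) rather than clumping as a pseudorandom sample may.

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput low-discrepancy sequence:
    reverse the base-b digits of n about the radix point."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

# The first eight points already cover [0, 1) evenly, halving the
# largest remaining gap at each step.
points = [van_der_corput(i) for i in range(1, 9)]
print(points)  # [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```

Pairing two such sequences in different bases (2 and 3, say) gives the 2-d Halton points seen in the Wikipedia plots.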

But, it says, as I suggested and would expect, low discrepancy can apparently speed convergence (since, presumably, unless random clumping is important, it more rapidly represents the desired distribution). Using such a sequence makes the simulation a "quasi-Monte Carlo" method, though a combination with pseudorandomness is possible (apparently with advantage for large samples).
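The faster convergence is easy to demonstrate in a toy setting unrelated to my program: estimating pi by the fraction of points falling inside the unit quarter-circle, once with 2-d Halton points (van der Corput in bases 2 and 3) and once with a pseudorandom sample of the same size. The quasi-Monte Carlo error typically shrinks like O(log N / N) rather than O(1/sqrt(N)).

```python
import random

def van_der_corput(n, base):
    """n-th element of the base-b van der Corput sequence."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def estimate_pi(points):
    """Four times the fraction of 2-d points inside the unit quarter-circle."""
    inside = sum(1 for x, y in points if x * x + y * y <= 1.0)
    return 4.0 * inside / len(points)

N = 2000
halton = [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, N + 1)]
random.seed(0)
pseudo = [(random.random(), random.random()) for _ in range(N)]

print("quasi-Monte Carlo:", estimate_pi(halton))
print("plain Monte Carlo:", estimate_pi(pseudo))
```

With N = 2000 the Halton estimate is usually within a few thousandths of pi, while the pseudorandom one wanders by a few hundredths, seed by seed.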

So I now think that low discrepancy should not be seen as just a bonus of my mechanism for removing bias due to differences in the difficulty of placing different operations; its use for both purposes should be considered carefully.

I always worry about plausible pathological cases – even if I later decide they are too rare to be worth checking (who checks every possible overflow?) – they just need not to be significant compared with the total reliability of the rest of the program, data, operating system, hardware and so forth. That is especially so with heuristic methods, and especially when they may be affected in unexpected ways, possibly long after I had come to think of them as a dependable part of the foundations; better to minimize the number that unexpectedly bite me later!

My worry about pathological interactions with boost updating may be unfounded in practice, but, as an extreme example in principle, an over-sensitive boost adjustment could produce a strictly fixed or alternating pattern, except where perturbation is needed to correct the exact proportion. It seems plausible that such almost-deterministic sequences in different interacting models of powers could become phase-locked (or almost so), and end up with long repeated sequences that explore only relatively few combinations. (It's the sort of thing simple algorithms (even when coupled) tend to do – they get stuck in simple loops! Hence the difficulty and care needed when designing a good pseudorandom number generator!)
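The extreme case above is easy to exhibit with a toy (hypothetical, not my actual boost mechanism): a selector over two options, each with target probability 0.5, that deterministically picks whichever option's achieved frequency lags its target the most. With no randomness at all it collapses into strict alternation, and two such selectors coupled to each other would likewise repeat a short joint cycle rather than explore combinations.

```python
def deterministic_selector(target=0.5, steps=12):
    """Toy over-sensitive 'boost' rule: always pick the option whose
    achieved frequency falls furthest below its target probability.
    Being fully deterministic, it locks into a strict pattern."""
    counts = {"A": 0, "B": 0}
    history = []
    for step in range(1, steps + 1):
        # Deficit = target frequency minus achieved frequency so far;
        # ties break alphabetically, so the rule has no randomness.
        deficits = {k: target - counts[k] / step for k in counts}
        pick = max(sorted(deficits), key=deficits.get)
        counts[pick] += 1
        history.append(pick)
    return "".join(history)

print(deterministic_selector())  # "ABABABABABAB" - strict alternation
```

The achieved proportion is exactly right, yet the sequence explores only one of the many orderings a genuinely random sampler would visit, which is precisely the phase-locking risk worried about above.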

