Due to unexpected failures in GitHub's LaTeX parsing (which were not evident until I published this, and have persisted since), and since the mathematical parts are important here, I have migrated this proposal to a blog post with identical content but correctly formatted equations.
Please continue to put any comments here.
Right, the whole fork issue (which always amused me because it's very reminiscent of the mechanics of the security reduction arguments used to guarantee soundness in crypto proofs: rewind time, play back the same algorithm with the same starting state but inject fresh randomness, and extract the secret information!) seems not to apply here, but thinking about it firms up the content of what @chris-belcher was saying and makes me rethink my response there. Look at it from two perspectives: if you are only allowed to use your utxo once (so the counter value, as discussed in the doc, is always kept at 1), then a double spend is your own fault as a user. But if we allow counters > 1 for more flexibility, as discussed, we need to ensure that there is no linkage between the rings chosen for counter values 1, 2, 3, etc. It's publicly verifiable what counter value is being used (the verifier plugs in, say, j=2 for every key in the ring). So the long and short of it is that I think the right approach is deterministic random generation of the decoy/ring set based on public info, i.e. verifier policy: btc amount, utxo age and counter value.
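To make the idea concrete, here is a minimal sketch of what deterministic ring selection from public policy inputs could look like. All the names here (`ring_seed`, `select_decoys`, the exact serialization of the policy inputs) are hypothetical illustrations, not part of the proposal; the point is only that both signer and verifier, given the same public inputs, reproduce the identical decoy set with no signer-chosen randomness to leak linkage.

```python
import hashlib

def ring_seed(btc_amount_sats: int, utxo_age_blocks: int, counter: int) -> bytes:
    # Hypothetical seed derivation: all inputs are public and
    # verifier-checkable, so the seed itself is reproducible by anyone.
    data = f"{btc_amount_sats}|{utxo_age_blocks}|{counter}".encode()
    return hashlib.sha256(data).digest()

def select_decoys(candidates: list[str], seed: bytes, ring_size: int) -> list[str]:
    # Rank every eligible candidate utxo by H(seed || candidate) and take
    # the lowest ring_size. Deterministic: same seed, same candidate set,
    # same ring, on both the signer's and the verifier's side.
    def rank(c: str) -> str:
        return hashlib.sha256(seed + c.encode()).hexdigest()
    return sorted(candidates, key=rank)[:ring_size]
```

Because the seed includes the counter, the rings for counter values 1, 2, 3, ... are independent pseudorandom selections, and any disjointness between them carries no signer-chosen information.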
Another way to say it: the set of decoys being disjoint between different runs is only a problem if there is linkage between two runs of the protocol. With counters we don't get that; only a "real" double spend gets that, which means reusing the same counter value, which we mustn't do.
Enforcement is clearly a bit trickier here, but I agree. Not being on a consensus ledger makes it different, but the concept still applies since there's interaction. Verifiers can individually enforce a rule, and a protocol spec can clarify what the rule MUST be, which gets us most of the way to enforcement (if both sides, signer and verifier, choose not to apply it, then they're just doing their own thing, and we can't help).
Yeah, this one looks like a very interesting read for considering some of the details. There's some overlap, but also some differences. I see that a deterministic randomness is generated from the key images of the inputs (and other things). That makes sense there but not here: we are not chaining these ring signatures, so we have no concept of an 'input'.