Probability & Randomness
Weighted RNG, Poisson disk sampling, and blue noise distributions
System Overview
Weighted random number generation selects outcomes with different probabilities. Unlike uniform random selection, weighted RNG gives you control over the distribution of outcomes. Comparing a histogram of actual versus expected frequencies reveals bias and fairness.
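A minimal sketch of weighted selection, using a linear scan over cumulative weights (the function name `weighted_choice` and the loot-table weights are illustrative, not from a specific library):

```python
import random

def weighted_choice(outcomes, weights, rng=random):
    """Pick one outcome; probability of each is weight / sum(weights)."""
    total = sum(weights)
    r = rng.random() * total
    cumulative = 0.0
    for outcome, weight in zip(outcomes, weights):
        cumulative += weight
        if r < cumulative:
            return outcome
    return outcomes[-1]  # guard against floating-point edge cases

# Example loot table: common drops dominate, rares stay scarce.
loot = ["common", "rare", "epic"]
weights = [80, 15, 5]
counts = {item: 0 for item in loot}
for _ in range(10_000):
    counts[weighted_choice(loot, weights)] += 1
```

Over many draws the counts should approach the 80/15/5 split; the deviation at small sample sizes is exactly the convergence issue discussed under Failure Modes below.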
Poisson disk sampling produces evenly spaced points with a guaranteed minimum distance between them. This prevents clustering and yields visually pleasing blue-noise distributions, ideal for procedural placement.
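A brute-force "dart throwing" sketch of the idea (the helper `poisson_disk_darts` is illustrative; production code typically uses Bridson's grid-accelerated algorithm instead):

```python
import random

def poisson_disk_darts(width, height, radius, max_attempts=5_000, rng=random):
    """Throw random darts; accept one only if it is at least `radius` away
    from every accepted point. Each dart checks all accepted points, so a
    spatial grid (as in Bridson's algorithm) is needed to scale."""
    points = []
    r2 = radius * radius
    for _ in range(max_attempts):
        x, y = rng.random() * width, rng.random() * height
        if all((x - px) ** 2 + (y - py) ** 2 >= r2 for px, py in points):
            points.append((x, y))
    return points

samples = poisson_disk_darts(100.0, 100.0, radius=10.0)
```

Note how acceptance gets rarer as the domain fills up; this is the rejection cost listed under Failure Modes below.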
Why Games Use This
- Loot Tables: Weighted drops with controlled rarity
- Procedural Placement: Even distribution of objects, enemies, resources
- Fairness: Ensure players get expected outcomes over time
- Visual Quality: Blue noise prevents visible patterns
- Performance: Simple algorithms, fast execution
Key Parameters
- Weights: Relative probabilities of each outcome
- Poisson Radius: Minimum distance between points
- Samples: Number of random calls per frame
- Seed: Determines sequence (affects fairness perception)
Failure Modes
- Poor seeding: Bad seeds create visible patterns
- Small sample size: Distribution doesn't converge to expected
- Extreme weights: Very small weights rarely trigger
- Poisson rejection: High density causes many failed attempts
- Perception bias: Players see streaks even with fair RNG
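The perception-bias point is easy to demonstrate by simulation: streaks are the expected behavior of a fair RNG, not evidence against it. A sketch (the helper `has_streak` and the trial counts are illustrative):

```python
import random

def has_streak(flips, length):
    """True if `flips` contains a run of `length` consecutive True values."""
    run = 0
    for f in flips:
        run = run + 1 if f else 0
        if run >= length:
            return True
    return False

# How often does a fair coin produce at least one 5-streak in 100 flips?
trials = 2_000
hits = sum(
    has_streak([random.random() < 0.5 for _ in range(100)], 5)
    for _ in range(trials)
)
streak_rate = hits / trials  # roughly 0.8 in expectation
```

A player who sees five identical outcomes in a row and cries foul is reacting to something a fair generator produces most of the time.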
Scaling Behavior
Weighted RNG is O(n) per draw for n outcomes, but binary search on a precomputed cumulative distribution reduces each draw to O(log n). Naive Poisson disk sampling is O(n²); spatial hashing (as in Bridson's algorithm) reduces it to O(n).
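The O(log n) variant can be sketched with the standard library's `bisect` and `itertools.accumulate` (the factory name `make_sampler` is illustrative):

```python
import bisect
import itertools
import random

def make_sampler(outcomes, weights, rng=random):
    """Precompute cumulative weights once (O(n)); each draw is then a
    binary search over them (O(log n))."""
    cumulative = list(itertools.accumulate(weights))
    total = cumulative[-1]

    def sample():
        # rng.random() < 1, so the index is always in range.
        return outcomes[bisect.bisect_right(cumulative, rng.random() * total)]

    return sample

draw = make_sampler(["common", "rare", "epic"], [80, 15, 5])
tallies = {"common": 0, "rare": 0, "epic": 0}
for _ in range(10_000):
    tallies[draw()] += 1
```

The setup cost is paid once per loot table, so this pays off whenever the same table is sampled repeatedly.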
Memory is O(n) for storing samples and distributions. For real-time use, cap the stored sample history.
Related Algorithms
- Alias Method: O(1) weighted sampling after O(n) setup
- Blue Noise: Spectral property of well-spaced samples; Poisson disk sampling approximates it
- Stratified Sampling: Divide space into equal regions
- Low-discrepancy Sequences: Quasi-random for better coverage
- Pseudo-random: Deterministic sequences that appear random
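The alias method mentioned above can be sketched as follows, using Vose's construction (function names `build_alias` and `alias_sample` are illustrative):

```python
import random

def build_alias(weights):
    """Vose's alias method: O(n) setup producing (prob, alias) tables
    that support O(1) weighted sampling."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]
    prob = [0.0] * n
    alias = [0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l
        # Donate the excess mass of `l` to fill slot `s`.
        scaled[l] = (scaled[l] + scaled[s]) - 1.0
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, rng=random):
    """Pick a slot uniformly, then flip a biased coin to stay or redirect."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias([80, 15, 5])
hits = [0, 0, 0]
for _ in range(10_000):
    hits[alias_sample(prob, alias)] += 1
```

Each draw costs one uniform integer, one uniform float, and one table lookup regardless of how many outcomes the table holds, which is why alias tables suit large, frequently-sampled loot tables.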
Free Tools & Libraries
- d3-random: Various random distributions
- poisson-disk-sampling: JavaScript implementation
System-Thinking Prompts
- What happens with extreme weights? 1% vs 99% probability?
- Where does randomness break? Visible patterns, poor seeds?
- How do players perceive fairness? Streaks vs actual distribution?
- Which parameter dominates? Weights or sample size?
- What's the minimum sample size? When does distribution converge?
- How does Poisson radius affect density? Too small vs too large?
- Can we guarantee fairness? Or is perception more important?