Weather ≈ toy model
Newspaper “chance of rain” tables from the 1950s were literally 3-state Markov chains—exactly what this tool’s example preset reproduces.
Tips: Use Normalize to fix row sums. Press Enter to run. Keep probabilities between 0 and 1.
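If you're curious what Normalize amounts to, it's just dividing each row by its sum. A minimal sketch of the idea (the tool's actual button may handle edge cases differently):

```python
def normalize_rows(matrix):
    """Rescale each row so it sums to 1 (leaves all-zero rows untouched)."""
    result = []
    for row in matrix:
        total = sum(row)
        result.append([x / total for x in row] if total > 0 else row)
    return result

print(normalize_rows([[3, 1, 1], [1, 1, 2]]))
# [[0.6, 0.2, 0.2], [0.25, 0.25, 0.5]]
```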
A Markov chain is a simple model for systems that hop between a finite set of states—like Sunny, Cloudy, and Rainy—where the next state depends only on the current state. The behavior is encoded in a transition matrix \(P\). Each row of \(P\) corresponds to a “from” state and lists the probabilities of moving to the “to” states on the next step. The probabilities in each row must add up to 1, so \(P\) is a row-stochastic matrix. Entry \(P_{ij}\) means “if I’m in state \(i\) now, I move to state \(j\) next with probability \(P_{ij}\).”
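To make the row-sum rule concrete, here is a small check in Python (NumPy assumed; the three-state weather matrix below is an illustrative preset, not the tool's exact numbers):

```python
import numpy as np

#            Sunny  Cloudy  Rainy     ("to" states)
P = np.array([[0.6,  0.3,   0.1],   # from Sunny
              [0.3,  0.4,   0.3],   # from Cloudy
              [0.2,  0.4,   0.4]])  # from Rainy

assert np.all((P >= 0) & (P <= 1)), "probabilities must lie in [0, 1]"
assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to 1 (row-stochastic)"
```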
You begin by entering the state labels (e.g., Sunny, Cloudy, Rainy). The tool creates a square table with the same labels on rows and columns.
Pressing Run performs a random walk. If your current state index is \(i\), the next index \(j\) is sampled according to row \(i\) of \(P\). Repeating this for the number of steps you asked for produces a path like:
Sunny → Sunny → Cloudy → Cloudy → Rainy → …
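Here is a sketch of what one run does under the hood (assuming NumPy; `rng.choice` draws the next index with the weights in row \(i\), and the matrix is the same illustrative preset as above):

```python
import numpy as np

rng = np.random.default_rng()

def random_walk(P, start, steps):
    """Follow the chain: at state i, draw the next state from row i of P."""
    path = [start]
    state = start
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])  # row `state` is the next-step distribution
        path.append(state)
    return path

labels = ["Sunny", "Cloudy", "Rainy"]
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])
print(" → ".join(labels[i] for i in random_walk(P, start=0, steps=8)))
```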
When you run multiple simulations (“Runs”), the tool also reports visit counts and an empirical distribution: the fraction of time spent in each state. By the law of large numbers, if the chain is ergodic (irreducible and aperiodic), these frequencies converge to the chain’s stationary distribution \(\pi\), which satisfies \(\pi = \pi P\). In plain language, \(\pi\) is the long-run proportion of time the chain spends in each state, regardless of where it starts.
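You can also approximate \(\pi\) without simulating at all, by iterating \(\pi \leftarrow \pi P\) until it stops changing (power iteration; NumPy assumed, same illustrative matrix):

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])

pi = np.full(len(P), 1.0 / len(P))  # any starting distribution works if the chain is ergodic
for _ in range(500):
    pi = pi @ P                     # one step of pi <- pi P
print(pi, pi @ P)                   # the two vectors agree: pi ≈ pi P
```

For an ergodic chain, the empirical visit frequencies from long runs should settle near this vector.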
Summary: build a valid row-stochastic table \(P\), choose an initial distribution \(\pi_0\) (or simply a start state), and simulate. The shown path is a single realization; the bar chart displays aggregate behavior across steps and runs, your window into the chain’s underlying dynamics.
A chain can forget the past and still create rich stories: casino craps outcomes, vowel/consonant text generators, even board-game AI all rely on Markov hops.
The stationary distribution isn’t a state but a probability vector. Once the walker’s distribution reaches it, the histogram stops drifting even though the walker itself keeps moving.
Google PageRank is just a giant Markov chain with random “teleport” jumps to guarantee ergodicity; swap in your own teleport probability via the matrix (see the sketch after these notes).
Run the chain multiple times from different starts. If the visit bars agree quickly, you’ve likely got a fast-mixing chain; if not, there may be nearly disconnected regions.
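The PageRank trick mentioned above mixes any transition matrix with a uniform “teleport” matrix. A hedged sketch, assuming NumPy and the commonly quoted 0.85 damping factor:

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])

d = 0.85                                    # follow P with prob d, teleport anywhere with prob 1 - d
n = len(P)
G = d * P + (1 - d) * np.ones((n, n)) / n   # every entry now positive, so the chain is ergodic
assert np.allclose(G.sum(axis=1), 1.0)      # still row-stochastic
```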