Markov Chain — Random Walk Generator

Define states and a transition matrix, then simulate random walks. Private by design—everything runs locally in your browser.

Inputs

  • States and table: enter a comma-separated list of states, then press Enter or click Build Table to create the transition matrix.
  • Start: pick a single state, or select “Custom distribution” and fill the row labelled π₀ below.
  • Steps per run · Runs: the length of each simulated walk and the number of walks (for Monte Carlo stats).

Result

After you enter states, build the table, and click Run, this panel shows the sampled path together with an “Empirical distribution (visits)” chart: the fraction of time each state was visited.

Tips: Use Normalize to fix row sums. Press Enter to run. Keep probabilities between 0 and 1.

About Markov chains (and how this table works)

A Markov chain is a simple model for systems that hop between a finite set of states—like Sunny, Cloudy, and Rainy—where the next state depends only on the current state. The behavior is encoded in a transition matrix \(P\). Each row of \(P\) corresponds to a “from” state and lists the probabilities of moving to the “to” states on the next step. The probabilities in each row must add up to 1, so \(P\) is a row-stochastic matrix. Entry \(P_{ij}\) means “if I’m in state \(i\) now, I move to state \(j\) next with probability \(P_{ij}\).”
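
To make that concrete, here is a minimal TypeScript sketch of a three-state chain with a row-sum check. The matrix values and the name isRowStochastic are illustrative, not the tool’s internals:

```ts
// Hypothetical representation of a three-state weather chain.
const states = ["Sunny", "Cloudy", "Rainy"];

// P[i][j] = probability of moving from state i to state j.
const P: number[][] = [
  [0.7, 0.2, 0.1], // from Sunny
  [0.3, 0.4, 0.3], // from Cloudy
  [0.2, 0.4, 0.4], // from Rainy
];

// A matrix is row-stochastic when every entry is in [0, 1]
// and every row sums to 1 (within floating-point tolerance).
function isRowStochastic(matrix: number[][], tol = 1e-9): boolean {
  return matrix.every(
    (row) =>
      row.every((p) => p >= 0 && p <= 1) &&
      Math.abs(row.reduce((a, b) => a + b, 0) - 1) < tol
  );
}

console.log(isRowStochastic(P)); // true
```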

Building your transition table

  1. List states: Enter a comma-separated list (e.g., Sunny, Cloudy, Rainy). The tool creates a square table with the same labels on rows and columns.
  2. Fill rows with probabilities: Each cell is a number between 0 and 1. A row like \( [0.7, 0.2, 0.1] \) for Sunny says: stay Sunny with probability 0.7, move to Cloudy with 0.2, or to Rainy with 0.1 on the next step.
  3. Ensure row sums = 1: Click Normalize to rescale a row whose entries total close to, but not exactly, 1. The tool validates rows before running.
  4. Choose a starting point: Pick a single state (a “one-hot” start) or provide a custom initial distribution \(\pi_0\), a row of nonnegative values that the tool normalizes to sum to 1. Both normalizations are sketched below.
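
The two normalization steps above are simple to express in code. Assuming rows are plain arrays of numbers, a “Normalize” action and the two kinds of start might look like this sketch (helper names are hypothetical):

```ts
// What a "Normalize" action might do: rescale a row so it sums to 1.
function normalizeRow(row: number[]): number[] {
  const sum = row.reduce((a, b) => a + b, 0);
  if (sum <= 0) throw new Error("row needs at least one positive entry");
  return row.map((p) => p / sum);
}

// One-hot start: all probability mass on a single state index.
function oneHot(n: number, i: number): number[] {
  return Array.from({ length: n }, (_, j) => (j === i ? 1 : 0));
}

// A custom start is any nonnegative row, normalized the same way.
const pi0 = normalizeRow([2, 1, 1]);
console.log(pi0, oneHot(3, 0)); // [0.5, 0.25, 0.25] [1, 0, 0]
```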

What the “Run” button actually does

Pressing Run performs a random walk. If your current state index is \(i\), the next index \(j\) is sampled according to row \(i\) of \(P\). Repeating this for the number of steps you asked for produces a path like:
Sunny → Sunny → Cloudy → Cloudy → Rainy → …
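
In code, one standard way to take a single step is to walk the cumulative distribution of the current row until a uniform draw falls inside a bucket. A minimal sketch, assuming Math.random() for the draws (sampleNext and randomWalk are illustrative names, not the tool’s actual implementation):

```ts
// Draw the next state from row i of P: accumulate probabilities
// until the uniform draw u falls inside a bucket.
function sampleNext(P: number[][], i: number): number {
  const u = Math.random();
  let cumulative = 0;
  for (let j = 0; j < P[i].length; j++) {
    cumulative += P[i][j];
    if (u < cumulative) return j;
  }
  return P[i].length - 1; // guard against floating-point shortfall
}

// A walk is just repeated sampling from the current state's row.
function randomWalk(P: number[][], start: number, steps: number): number[] {
  const path = [start];
  for (let t = 0; t < steps; t++) {
    path.push(sampleNext(P, path[path.length - 1]));
  }
  return path;
}
```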

When you run multiple simulations (“Runs”), the tool also reports visit counts and an empirical distribution: the fraction of time spent in each state. By the law of large numbers, if the chain is ergodic (irreducible and aperiodic), these frequencies converge to the chain’s stationary distribution \(\pi\), which satisfies \(\pi = \pi P\). In plain language, \(\pi\) is the long-run proportion of time the chain spends in each state, regardless of where it starts.
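
Both quantities are straightforward to compute. The sketch below counts visits across paths (such as those produced by the randomWalk sketch above) and approximates \(\pi\) by power iteration, repeatedly applying \(\pi \leftarrow \pi P\) from a uniform start; the function names are illustrative:

```ts
// Empirical distribution: fraction of steps spent in each state,
// pooled across every simulated path.
function visitFrequencies(paths: number[][], nStates: number): number[] {
  const counts = new Array<number>(nStates).fill(0);
  let total = 0;
  for (const path of paths) {
    for (const state of path) {
      counts[state]++;
      total++;
    }
  }
  return counts.map((c) => c / total);
}

// Approximate the stationary distribution by iterating pi <- pi P;
// for an ergodic chain this converges to the unique pi with pi = pi P.
function stationary(P: number[][], iters = 1000): number[] {
  const n = P.length;
  let pi = new Array<number>(n).fill(1 / n);
  for (let k = 0; k < iters; k++) {
    const next = new Array<number>(n).fill(0);
    for (let i = 0; i < n; i++) {
      for (let j = 0; j < n; j++) next[j] += pi[i] * P[i][j];
    }
    pi = next;
  }
  return pi;
}
```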

Tips, edge cases, and good practice

  • Absorbing states: A state whose row has a 1 on the diagonal (e.g., \([1,0,0]\) for the first state) is never left once entered; paths that reach it get “stuck”. A quick check is sketched after this list.
  • Sparsity: Zeros in a row mean impossible jumps. This can split the state space into components; the long-run behavior then depends on where you start.
  • Rounding: Small rounding errors are common; Normalize cleans them up.
  • Interpretation: Short runs show variability; increase steps and runs to see stable, meaningful visit frequencies.
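
For the first of these, a one-line check is easy to write (absorbingStates is a hypothetical helper, not part of the tool):

```ts
// Detect absorbing states: a 1 on the diagonal means the walk
// can never leave that state once it arrives.
function absorbingStates(P: number[][], states: string[]): string[] {
  return states.filter((_, i) => P[i][i] === 1);
}

console.log(absorbingStates([[1, 0], [0.5, 0.5]], ["Stuck", "Free"])); // ["Stuck"]
```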

Summary: build a valid row-stochastic table \(P\), choose \(\pi_0\), and simulate. The shown path is a single realization; the bar chart displays aggregate behavior across steps and runs—your window into the chain’s underlying dynamics.
