**Advisory:** We only operate services from the RANDOM.ORG domain. Other sites that claim to be operated by us are impostors. If in doubt, contact us.

RANDOM.ORG generates true randomness via atmospheric noise. This page shows the statistics for the Runs Test.

(Each graph is from a different radio. Click on the graphs to enlarge them.)

The focus of the Runs Test is the number of *runs* in a
given block of random numbers, where a run is an uninterrupted
series of identical bits bounded on both sides by bits of opposite
values (or the beginning or end of the block). The Runs Test
measures whether the numbers of runs of ones and zeroes of various
lengths are as would be expected for a truly random sequence. Each
graph shows how the numbers produced by a given radio performed on a
particular day. New graphs are generated automatically shortly
after midnight (UTC) every day. Each radio has its own name (e.g.,
copenhagen-hw0), and each graph is labelled with the name of the
radio to which it belongs. Not all radios are active on all
days.
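As an illustration of the definition above (a hypothetical sketch, not code that RANDOM.ORG itself runs), the runs in a short block can be listed by grouping consecutive identical bits:

```python
from itertools import groupby

def list_runs(bits: str) -> list[str]:
    """Split a bit string into its runs of consecutive identical bits."""
    return ["".join(group) for _, group in groupby(bits)]

# The block 1100010 contains four runs: 11, 000, 1 and 0.
print(list_runs("1100010"))  # ['11', '000', '1', '0']
```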

The Runs Test works by examining the stream of numbers as a
series of blocks. For each block, we iterate over the bits and
count the number of transitions between values (i.e., 0→1 or
1→0). The number of runs is this count plus one; from it we
compute a *P-value*, which indicates whether the number of runs
in the block is as we would
expect. A block fails the test if its P-value is too small, meaning
that there were fewer or more runs than we would expect.

The graphs show the distribution of P-values across the range. In the configuration used here, blocks with P-values less than 0.01 failed the test. For a truly random sequence, we expect a relatively even distribution of P-values across the range. Remember that a good random number generator will also produce blocks that don't look random, so we expect some of the blocks to fail the test. (In fact, we should be suspicious if all blocks passed the test.) You will find more details about this on the Statistical Analysis page.

Full details of the Runs Test are given on page K.4 of Charmaine Kenny's Analysis of RANDOM.ORG and on page 18 of NIST Special Publication 800-22 (2001 revision, PDF, 1.4 MB). Note that there is a newer version available, SP800-22b (2008 revision, PDF, 7.1 MB).

The Frequency (Monobit) Test measures whether the numbers of 0s and 1s produced by the generator are approximately the same, as would be expected for a truly random sequence. Each graph shows how the numbers produced by a given radio performed on a particular day. New graphs are generated automatically shortly after midnight (UTC) every day. Each radio has its own name (e.g., copenhagen-hw0), and each graph is labelled with the name of the radio to which it belongs. Not all radios are active on all days.

The test works by examining the stream of numbers as a series
of blocks. For each block, we compute a *P-value*, which
indicates whether the ratio of 0s to 1s is as close to 0.5 as we
would expect for a truly random sequence. A block fails the
test if its P-value is too small, meaning that the ratio of 0s
to 1s is further from 0.5 than we would expect. The frequency
(monobit) test is a pretty basic test, and if a given block of
numbers fails it, we can expect that block also to fail many of
the other tests.
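The per-block computation can be sketched as follows. This is an illustrative implementation of the Frequency (Monobit) Test as specified in NIST SP 800-22, not RANDOM.ORG's own code; the function name is our own:

```python
import math

def monobit_p_value(bits: str) -> float:
    """P-value of the NIST SP 800-22 Frequency (Monobit) Test."""
    n = len(bits)
    # Map 0 -> -1 and 1 -> +1 and sum; a balanced block sums near zero.
    s_n = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s_n) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

p = monobit_p_value("1011010101")
print(p >= 0.01)  # True: this short example block passes at the 0.01 level
```

The further the ratio of 0s to 1s drifts from 0.5, the larger the normalised sum and the smaller the P-value, so badly unbalanced blocks fall below the 0.01 threshold and fail.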

The graphs show the distribution of P-values across the range. In the configuration used here, blocks with P-values less than 0.01 failed the test. For a truly random sequence, we expect a relatively even distribution of P-values across the range. Remember that a good random number generator will also produce blocks that don't look random, so we expect some of the blocks to fail the test. (In fact, we should be suspicious if all blocks passed the test.) You will find more details about this on the Statistical Analysis page.

Full details about the Frequency (Monobit) Test are given on page K.1 of Charmaine Kenny's Analysis of RANDOM.ORG and on page 14 of NIST Special Publication 800-22 (2001 revision, PDF, 1.4 MB). Note that there is a newer version available, SP800-22b (2008 revision, PDF, 7.1 MB).