Randomness forms a vital backbone of modern society, providing fairness in lotteries and jury pool selection. It is also used in digital communication to ensure that the private keys used in encryption are secret and unpredictable. In organizations, randomness is also required to implement differentially private machine learning protocols.
As deterministic systems, classical computers cannot create true randomness on demand. As a result, to offer true randomness in classical computing, we often resort to specialized hardware that harvests entropy from unpredictable physical sources, for instance, by looking at mouse movements, observing fluctuations in temperature, monitoring the movement of lava lamps or, in extreme cases, detecting cosmic radiation. These measures are unwieldy, difficult to scale and lack rigorous guarantees, limiting our ability to verify whether their outputs are truly random.
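As a point of reference, harvested entropy usually reaches programs through the operating system's entropy pool. The short Python sketch below shows this conventional, trust-based approach; note that nothing in it lets the caller verify the bytes are genuinely random.

```python
# Classical programs typically draw "true" randomness from the operating
# system's entropy pool, which mixes unpredictable physical events such as
# device timings and hardware noise. The caller cannot verify that these
# bytes are genuinely random; it simply has to trust the source.
import os

key_material = os.urandom(32)  # 32 bytes from the OS entropy pool
print(key_material.hex())
```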
Compounding the challenge is the fact that no test can prove a sequence of bits is truly random: any fixed sequence could, in principle, have been produced by a cleverly designed deterministic process. Given the difficulties in sourcing and verifying randomness, we fall back on trust: we simply must trust that the hardware generates fresh randomness.
The requirement of trust, however, becomes an issue when randomness is used to make decisions among parties distrustful of each other, such as when competitors resort to a coin toss to settle a dispute. Who gets to toss the coin? If the parties are participating remotely, how can one verify that there was a coin toss at all?
An ideal solution would be a kind of randomness with the following three characteristics:

- It is truly random: unpredictable even to someone with complete knowledge of the device that produced it.
- It is verifiable: the recipient can mathematically confirm that the bits are genuinely random and freshly generated, not replayed.
- It requires no trust in the generating hardware or the party operating it.
This kind of randomness is called Certified Randomness.
One possible way of verifying that a sequence of numbers truly came from a random source is by demanding some kind of signature or proof, perhaps embedded in those numbers themselves, that could not have been faked using predictable sources. You could, for example, ask that your randomness provider only draw numbers from a specific probability distribution that is challenging to imitate using non-random sources. You could then verify that the numbers you receive came from your chosen distribution and therefore must have been truly random.
As it turns out, such a protocol is impossible to realize using conventional computers but can be accomplished using a quantum computer.
Randomness is intrinsic to a quantum computer: a quantum bit can be in a superposition of zero and one, and measuring it is an intrinsically random process. Furthermore, since quantum computers are still bound by the constraints of computational complexity theory, their outputs can be rigorously analyzed. Together, these two properties provide a path to using a quantum computer to generate numbers that can be mathematically proven to be random.
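As a toy illustration of this intrinsic randomness, the following sketch simulates measuring a qubit prepared in an equal superposition, with outcome probabilities given by the Born rule (the squared magnitudes of the amplitudes). This is a classical simulation for illustration only, not the hardware experiment.

```python
import numpy as np

rng = np.random.default_rng()

# A single qubit in the equal superposition (|0> + |1>) / sqrt(2).
state = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: each outcome occurs with probability |amplitude|^2.
probs = np.abs(state) ** 2  # -> [0.5, 0.5]

# Each "measurement" yields a genuinely random bit; repeating the
# preparation and measurement gives an unpredictable bit sequence.
samples = rng.choice([0, 1], size=16, p=probs)
print(samples)
```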
Concretely, a quantum computer executing a quantum program (also called a circuit) produces outputs that are intrinsically random and unique to that program. While an honest remote randomness provider can quickly run the submitted circuits on a quantum computer and return their outputs, a malicious server would struggle to pass off non-random bits as outputs of those circuits. Anticipating the likely outputs of a quantum program is extremely difficult for a conventional computer because simulating the program classically takes exponentially long, even on the most powerful supercomputer.
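Schematically, the challenge-response loop looks like the sketch below. All names and interfaces here (UntrustedServer, the seed standing in for a challenge circuit) are hypothetical placeholders, not the actual systems used in the experiment; only the spirit of the timing check matches the protocol.

```python
import secrets
import time

class UntrustedServer:
    """Stand-in for the remote quantum computer. This stub replies with
    arbitrary bits; an honest server would execute the circuit and measure."""
    def run(self, circuit_seed: int, n_qubits: int) -> str:
        return format(secrets.randbits(n_qubits), f"0{n_qubits}b")

def run_protocol(server, n_rounds=5, n_qubits=56, time_limit_s=2.5):
    transcript = []
    for _ in range(n_rounds):
        seed = secrets.randbits(64)        # defines a fresh challenge circuit
        start = time.monotonic()
        bits = server.run(seed, n_qubits)  # untrusted server must answer fast
        elapsed = time.monotonic() - start
        # A reply slower than the limit could have been produced by classical
        # simulation, so it cannot be certified and is rejected.
        if elapsed > time_limit_s:
            raise TimeoutError("reply too slow to certify")
        transcript.append((seed, bits))
    return transcript  # correlations are checked later during verification

print(run_protocol(UntrustedServer()))
```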
In our latest work, published in Nature, we demonstrate an implementation of such a protocol with the help of our collaborators at Quantinuum, Oak Ridge National Laboratory, Argonne National Laboratory and The University of Texas at Austin. We treat the 56-qubit Quantinuum System Model H2 trapped-ion quantum computer as an untrusted server to which we send challenge quantum circuits one by one.
These circuits produce probability distributions that are extremely difficult to emulate using conventional computers: while we expected the quantum computer to run each circuit in about two seconds, we observed that simulating the execution of the same circuit takes the largest supercomputer in the world about one hundred seconds. For each such circuit, we required Quantinuum to send us an output within two and a half seconds. Finally, to verify randomness, we calculated the correlation between the numbers we received and the circuits we had submitted, and mathematically verified that at least a certain fraction of the bits received from Quantinuum were obtained from fundamentally random quantum measurements of the originally submitted circuits.
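To make the verification step concrete, here is a self-contained toy version of a cross-entropy-style test (the exact statistic and thresholds in the paper differ). It relies on the fact that the output probabilities of hard random circuits approximately follow an exponential (Porter-Thomas) distribution, so genuine quantum samples land on high-probability bitstrings noticeably more often than fast classical guessing can.

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 10       # toy size; the experiment used 56-qubit circuits
N = 2 ** n_qubits

def random_circuit_distribution():
    # Mimic the exponential (Porter-Thomas) shape of a hard random
    # circuit's output probabilities.
    w = rng.exponential(size=N)
    return w / w.sum()

def average_score(sampler, n_rounds=1000):
    # Score each reply by N times the ideal probability of the returned
    # bitstring under that round's circuit, averaged over all rounds.
    total = 0.0
    for _ in range(n_rounds):
        p = random_circuit_distribution()
        x = sampler(p)      # the server's reply for this challenge
        total += N * p[x]
    return total / n_rounds

honest = lambda p: rng.choice(N, p=p)   # genuine quantum-style sampling
spoofer = lambda p: rng.integers(N)     # fast classical guessing (uniform)

print(average_score(honest))   # ~2.0: consistent with real measurements
print(average_score(spoofer))  # ~1.0: fails the certification test
```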
Performing this verification is computationally demanding. In our demonstration, to determine the correlation between our circuits and the data we received, we used four supercomputers, including Frontier, the largest supercomputer in the world at the time of the experiment, which is owned by the U.S. Department of Energy and hosted at Oak Ridge National Laboratory. The key insight that makes this protocol scalable is that we only need to verify our randomness occasionally, whereas a malicious agent would need to continually expend an unsustainable amount of resources to evade detection.
From our demonstration, we verified that at least 71,313 bits of entropy are secure against a hypothetical malicious adversary at least four times more powerful than the world's largest supercomputer. Crucially, we are guaranteed randomness even if the quantum computer were acting maliciously, had been compromised by a third party, or was being impersonated. The randomness our protocol generates, therefore, does not require trusting any external entity.
Such verifiable quantum randomness is enabled by quantum computers capable of running programs much faster than any conventional computer can simulate them. Owing to its simplicity, the task of sampling from quantum circuits has established itself as a benchmark for showcasing the power of quantum computers and has been demonstrated in many laboratory experiments. Our work is the first to leverage this sampling task to implement a potentially useful cryptographic primitive.