Entanglement is a remarkable example of quantum weirdness. Each particle in an entangled pair contains information that is totally random, yet perfectly correlated with that of its partner. These quantum correlations violate local realism, the combination of two assumptions: locality (no influence can travel faster than light) and realism (measurement outcomes reflect properties that exist before the measurement).
In our lab, we can make a series of measurements on a source of entangled photons to demonstrate the violation of local realism. This experiment is called a Bell test. The illustration above is from “The Mystery of the Quantum Cakes,” which explains the idea of a Bell test in an intuitive way.
The catch is that some assumptions go into this experiment, leaving loopholes for local realism. The two main loopholes are “detection” and “timing.” Single-photon detectors are not perfect, and even the best ones will miss some fraction of incoming photons. To close the detection loophole, this fraction must be small enough to ensure that the photons we didn’t detect can’t change the outcome. To close the timing loophole, the measurement devices that analyze each photon in an entangled pair must be far enough apart that a signal traveling between them at the speed of light–the fastest that any information can travel–could not change the outcome.
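The size of the violation a Bell test looks for can be sketched in a few lines. The calculation below uses the standard CHSH analyzer angles for polarization qubits and assumes an ideal maximally entangled state; it is a textbook illustration, not a model of our apparatus:

```python
from math import cos, radians

def correlation(a_deg, b_deg):
    """Polarization correlation E(a, b) predicted by quantum mechanics
    for an ideal maximally entangled state: E = cos(2(a - b))."""
    return cos(2 * radians(a_deg - b_deg))

# Standard CHSH analyzer angles (degrees) for polarization qubits.
a, a_alt, b, b_alt = 0.0, 45.0, 22.5, 67.5

S = (correlation(a, b) - correlation(a, b_alt)
     + correlation(a_alt, b) + correlation(a_alt, b_alt))

print(f"S = {S:.4f}")  # quantum mechanics predicts 2*sqrt(2) ~ 2.828
print("local realism requires |S| <= 2")
```

Any experimentally measured S above 2 (with the loopholes closed) rules out local realism.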
Both of these loopholes had been closed independently before 2015. Then, in 2015, several experiments, including one by a collaboration between NIST, our group, and several other institutions, closed both loopholes simultaneously for the first time. (See Recent Publications & News, above.)
The data from our 2013 detection-loophole-free C-H Bell test (B. G. Christensen et al. Phys. Rev. Lett. 111, 130406 (2013)) are publicly available and can be downloaded from the link below. The data organization is explained in the accompanying text file. The data file size is approximately 280 MB. Please direct any questions or comments to Bradley Christensen (bgchris2@illinois.edu).
Many quantum information protocols require two communicating parties (Alice and Bob) to share entanglement, often in the form of pure, maximally entangled pairs. In a future quantum communication network, these entangled pairs might be produced at a central location and distributed to users. The quality of this shared entanglement might be degraded by transmission through noisy channels, which would affect the performance of the system. Before using the resulting noisy entanglement for quantum information tasks, Alice and Bob will want to distill it: recover as many maximally entangled pairs as possible using only local operations (they can perform any operations on their respective qubits) and classical communication (they can communicate with each other about operations they perform and measurement outcomes). If they can extract any maximal entanglement this way, the noisy state is said to be distillable.
Distillability offers one example of how entanglement becomes increasingly complex at higher dimensions: all entangled 2⊗2 (two-qubit) states and 2⊗3 (one qubit and one three-level system) states can be distilled. However, at higher dimensions (more particles or more degrees of freedom per particle) there exist undistillable entangled states. This phenomenon, called bound entanglement, has been observed in trapped ion systems and simulated in linear optics systems. Bound-entangled states are clearly unsuitable for direct use in the most well known quantum information tasks, but they do have a number of other potential applications. More importantly, they are of fundamental interest.
The Smolin state is a four-qubit bound-entangled state. We have built a source (see Sources of Entanglement for background) to produce this state by encoding qubits on the polarizations and orbital angular momenta of photon pairs. The figure on the right shows how we can use these degrees of freedom to define logical 0 and 1 states for the qubits. Once complete, this would be the first true demonstration of bound entanglement in a photonic system.
If you were to set a classical pot of water on a stovetop and watch it, it would eventually boil; however, that wouldn’t be the case with a quantum pot. The quantum Zeno effect is the paradoxical phenomenon that you can “freeze” a quantum state in its current configuration by repeatedly measuring it, i.e., once a quantum state is measured and assumes a definite state, you can prevent it from evolving beyond that definite state by constantly projecting it back via measurement.
We are experimentally demonstrating the quantum Zeno effect by showing that there will be less state decay when repeated, projective measurements are introduced to a decaying system, compared to when no measurements are present. In doing so, we can potentially learn more about using the idea of measurement to limit the severity of state decay when transmitting quantum states.
To get an idea of how we are demonstrating the effect, consider the following example:
Suppose we have a beam of light that traverses space and hits a target.
If we add some pieces of refractive material throughout the path, the spatial mode of this beam will gradually evolve away from its original position. After a sufficient number of perturbations, it will no longer hit the target!
Now suppose we center a pinhole in the location of the unperturbed beam after the first refractive slab. Although the beam has evolved beyond its original position, a little bit of light will pass through the pinhole and will be projected back to its initial spatial state! If we repeatedly project the spatial mode of the beam back to its original state, a fraction of the original light will eventually hit the target. Hence, the quantum Zeno effect!
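The intuition in this example can be captured in a toy calculation. Assuming the beam’s spatial state is rotated by some fixed total angle, split evenly across the perturbations, the probability of still finding it in its initial state after repeated projections behaves as follows (a sketch of the textbook Zeno scaling, not a model of our actual apparatus):

```python
from math import cos, radians

def zeno_survival(total_angle_deg, n_measurements):
    """Probability of still finding the beam in its initial spatial state
    when a rotation by total_angle_deg is split into n_measurements equal
    steps, each followed by a projection back onto the initial state."""
    step = radians(total_angle_deg / n_measurements)
    return cos(step) ** (2 * n_measurements)

# One big step: a 90-degree rotation leaves essentially zero overlap...
print(zeno_survival(90, 1))
# ...but frequent projections pin the state near its starting configuration.
for n in (2, 10, 100, 1000):
    print(n, round(zeno_survival(90, n), 4))
```

As the number of intermediate measurements grows, the survival probability approaches 1: the watched pot never boils.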
There are about a hundred billion photons in a well-lit room. To get just one photon, you might imagine dimming a light source until the photons come out one at a time. But light sources such as bulbs, LEDs, lasers, and the sun all have something in common: they emit photons randomly. If you want to predict exactly how many photons will be emitted in one second, the best you can do is to calculate a probability distribution. Even if the source is very dim, it’s impossible to guarantee that only one photon will be emitted at a time. In many quantum information experiments, a “photon gun” - a source that only emits one photon at a time - would be more useful.
Even better, and more challenging to build, is a photon gun that can be fired at will. We are engineering a high-efficiency source of single photons that can be used for quantum information applications including quantum cryptography and optical quantum computing.
A common technique for creating single photons is to utilize spontaneous parametric downconversion (SPDC), which involves pumping a nonlinear crystal with a laser. A nonlinear process then converts the higher-energy pump photons into pairs of lower-energy daughter photons (“signal” and “idler”). Conservation laws require the daughter photons to always be produced in pairs, so if we detect the presence of one, we know with near certainty that its partner has also been created. So, while the signal photon is destroyed upon detection, the idler photon is now “heralded” and available for use in quantum information applications. This source is an example of a “heralded single-photon source” (HSPS).
However, this technique has significant drawbacks that prevent its adoption for large-scale optical quantum information processing. Photon-pair creation via downconversion is a probabilistic process, so only a certain fraction of pump pulses will give rise to photon pairs, preventing the on-demand creation of single photons.
This is further compounded by the fact that the probability of photon-pair generation is dependent on the pumping power. The harder one pumps a SPDC source, the more likely a photon pair is generated. However, this also increases the probability of creating multiple photon pairs at once. Since we only want single pairs, we suppress the multi-pair probability by pumping the crystal at lower powers. This limits the single-photon generation rate which prevents this technique from being deployed at large scales.
To address these main drawbacks of conventional HSPSs, we utilize a time-multiplexing technique. Each time-multiplexing cycle consists of pumping a SPDC crystal with a fixed number of low-energy pulses such that the multiple-pair probability is suppressed. So even though the corresponding probability of generating a single pair is low, there is a high probability that at least one of the pump pulses in the multiplexing cycle generated a photon pair. When a signal photon is detected within a multiplexing cycle, we know that its partner (the idler photon) has also been created. We then use the heralding signal to activate an optical switch to divert the idler photon into a variable-length storage loop. Since SPDC is a probabilistic process, the photon will appear at different times within each multiplexing cycle. With our adjustable loop, we can choose the photon storage time so that the photon is always released at the end of each time-multiplexing cycle. This yields a high probability of producing a single photon at a predetermined time while simultaneously minimizing the probability of producing more than one photon - a photon gun that can be fired at will.
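The payoff of multiplexing can be sketched with a toy calculation (the per-pulse pair probability below is illustrative, not a measured value for our source):

```python
def heralding_prob(p_single, n_pulses):
    """Probability that at least one of n_pulses low-power pump pulses in a
    multiplexing cycle produces a photon pair, given a per-pulse pair
    probability p_single."""
    return 1 - (1 - p_single) ** n_pulses

# A weak pump keeps multi-pair events rare on any single pulse...
p = 0.05
print(heralding_prob(p, 1))
# ...yet 40 multiplexed pulses still make a heralded photon very likely.
print(heralding_prob(p, 40))
```

Because the heralded photon is stored and released at a fixed time, the cycle as a whole behaves much more like an on-demand source than any single pulse does.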
We have demonstrated up to 66.7 +/- 2.4% presence probability of single-photon states collected into a single-mode optical fiber by multiplexing 40 periodic pumping events, a 10x enhancement over a non-multiplexed HSPS. We are currently focusing on enhancing the performance of our source by increasing the multiplexing repetition rate with faster optical switches and improving heralding performance with high-efficiency superconducting nanowire single-photon detectors. We have also begun developing auxiliary hardware for demultiplexing the single photons from our source (which are collected in a single spatial mode) into individual spatial modes for quantum information processing applications.
Entanglement, the non-classical, non-local “connection” that can exist between particles, is the foundation of many quantum information applications. We have built and continue to improve one of the world’s purest and brightest sources of entangled photons, using the process of spontaneous parametric downconversion in a nonlinear optical crystal.
Our polarization-entanglement source (left) uses two such nonlinear crystals back-to-back. A high-energy ultraviolet pump photon entering one of these crystals has a small (about one in a billion) chance of splitting into two low-energy red photons. The first crystal transforms single horizontally polarized parent photons into two vertically polarized daughter photons; the second transforms single vertically polarized parent photons into two horizontally polarized daughter photons. By sending a superposition of horizontal and vertical light (light at 45 degrees) into these crystals, we obtain a superposition of two horizontal and two vertical downconverted photons: entanglement. In this state, neither photon has any definite polarization at all–but as soon as the polarization of either photon is measured, the polarization of the other is immediately determined, no matter how far away it is.
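In bra-ket notation, the resulting polarization-entangled state can be written as follows, where the relative phase φ depends on the details of the crystals and compensation optics:

```latex
\begin{equation}
  |\psi\rangle \;=\; \frac{1}{\sqrt{2}}
  \left( |H\rangle_1 |H\rangle_2 \;+\; e^{i\phi}\, |V\rangle_1 |V\rangle_2 \right)
\end{equation}
```

Neither photon alone carries a definite polarization; only the joint state is well defined.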
Our paper on optimizing type-I polarization-entangled photons (Radhika Rangarajan, Michael Goggin and Paul Kwiat, Optics Express (2009)) discusses spatial and temporal phase compensation techniques to improve the quality of entanglement sources. The original paper and the necessary files for spatial-temporal phase compensation calculations are contained in this .zip file. See the Read_ME file and the included paper for details.
As we move towards real implementations of these sources and other quantum information protocols, bulk optics can be resource-intensive and unstable outside of the laboratory. Instead, integrated optics on nonlinear waveguides allow us to take these large, complicated setups and miniaturize them onto a single chip that is light enough to be deployed on a drone, and stable enough to be shot into space. In our group, we are developing a highly nondegenerate source of polarization-entangled photon pairs on a periodically-poled KTP waveguide chip. Our current design takes a horizontally polarized 532-nm parent photon and downconverts it into an 810-nm daughter photon and a 1550-nm daughter photon. This source also uses type-II phase-matching, which simply means that our daughter photons are produced in orthogonal polarizations; if the 810-nm photon is vertically polarized, then the 1550-nm photon will be horizontally polarized. Future designs may also include a PPLN-based chip and/or degenerate downconversion pairs.
We have also expanded the scope of how photons can be entangled. By focusing a pump laser on nonlinear crystals, we demonstrated the first example of hyperentanglement–photons entangled in every degree of freedom. These photons are entangled in polarization, orbital angular momentum and emission time.
More entanglement isn’t always better for quantum information applications. A downconversion source is typically used to produce photons entangled in polarization, but it may also create unwanted correlations in energy and spatial mode. These unwanted correlations degrade the purity of entangled pairs, and also cause heralded single photons to be in a mixed state, preventing the interference between photons from different sources that quantum information applications rely on.
Using group velocity matching and a broad bandwidth pump, we have developed an “engineered” pure source that is brighter than sources that fight unwanted entanglement with spectral filtering. The plot to the right shows an example measured joint spectrum of the photon pairs from an engineered source, which exhibit weak spectral correlations.
Quantum information protocols and computational algorithms require certain input states, and researchers need a way to verify the input state as they investigate a particular application.
In digital electronics, the input state of a logic circuit could be verified by measuring voltages at a few points. This method won’t work for a quantum input state. For example, measuring one of a pair of entangled photons would reveal the state of that photon after the measurement, but would not indicate the entanglement. Such a measurement would also give different results when repeated many times. It is possible, however, to identify the quantum state of a system by making enough of the right measurements. By looking at a given quantum state from several different “directions” (projecting into several different bases), we can perform quantum state tomography and learn how that state looks from every direction.
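As a minimal illustration, a single qubit can be reconstructed from its expectation values along the three Pauli “directions” (the textbook Bloch-vector formula, sketched here in plain Python):

```python
def density_matrix(rx, ry, rz):
    """Single-qubit state tomography: reconstruct the density matrix from
    the measured Pauli expectation values <X>, <Y>, <Z> using
        rho = (I + rx*X + ry*Y + rz*Z) / 2.
    Returned as a 2x2 nested list of complex numbers."""
    return [
        [(1 + rz) / 2, (rx - 1j * ry) / 2],
        [(rx + 1j * ry) / 2, (1 - rz) / 2],
    ]

# Example: <X> = <Y> = 0 and <Z> = 1 corresponds to the pure state |0><0|.
rho = density_matrix(0.0, 0.0, 1.0)
print(rho)
```

Multi-qubit tomography works the same way in principle, but the number of required measurement settings grows exponentially with the number of qubits.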
Quantum computing works through precise operations on qubits. These operations can be characterized using a quantum process tomography (QPT). One approach to QPT is to apply a process to many known and identical quantum states, then use quantum state tomography to identify the output states. However, the number of measurements needed for this method grows exponentially as 16^N, where N is the number of qubits. Entangled photon pairs can be used to reduce the number of measurements needed for quantum process tomography, because each measurement made on an entangled photon reveals extra information about its partner. Direct Characterization of Quantum Dynamics (DCQD) uses a specific set of optimized entangled states as inputs to a process. By measuring the output, it is possible to reconstruct a process without performing a complete state tomography, reducing the number of measurements to 4^N.
Unfortunately, the 4^N measurements needed for DCQD (a full Bell-state analysis of the output) are known to be impossible for standard optical qubits. However, the use of qubits entangled in more than one degree of freedom (hyperentangled) overcomes this limitation and allows the full implementation of DCQD. We have obtained the first experimental results using photons entangled in multiple degrees of freedom to characterize single-photon processes by DCQD. We plan to extend this experimental setup to characterize various multi-qubit processes, to demonstrate the ability to characterize certain decohering processes in a single measurement.
section currently under development
Many quantum information applications have need for a low-loss, state-preserving photon storage device. For example, quantum memory is a fundamental component of the Quantum Repeater, which makes quantum communication robust against imperfect transmission channels over arbitrary distances. A memory can also be used as a component of single-photon sources (such as the ones we build in our laboratory).
The storage system we are developing relies on a series of three free-space storage cavities. A cavity consists of a loop with a fixed optical path length, governed by a Pockels cell “switch” and a polarizing beam splitter. When a photon is polarized horizontally, it reflects off the polarizing beam-splitter and is stored in the loop; as soon as we switch the photon polarization to vertical, the photon transmits instead of reflects, and is subsequently released. The concept of a cavity is not new - optical cavities are often used for storage devices in various experimental apparatuses. However, the transmission of each loop is limited by losses in the Pockels cell (~3-4% per pass), so the maximum number of round trips is limited.
To overcome transmission losses in our optical cavity, we have configured a system of three loops varying in length: 12.5 ns, 125 ns, and 1.25 μs. By using three loops, we can store single photons with high efficiency over variable delays (N x 12.5 ns, 1 ≤ N ≤ 999). The theoretical bandwidth of our system is approximately ±70 nm. If we assume an operating wavelength of 690 nm, this gives us an estimate of 5x10^8 (40 THz x 12.5 μs) for our time-bandwidth product, which is several orders of magnitude higher than typical memories based on atomic ensembles or solid-state approaches.
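One way a three-loop system with a factor of ten between loop lengths can realize an arbitrary delay N × 12.5 ns is by a digit-style decomposition, with each loop handling one decimal digit of N. The sketch below illustrates the idea; the actual switching logic in our setup may differ:

```python
def loop_passes(n):
    """Decompose a requested delay of n x 12.5 ns (1 <= n <= 999) into round
    trips through three storage loops whose lengths differ by factors of
    ten: 12.5 ns, 125 ns, and 1.25 us."""
    if not 1 <= n <= 999:
        raise ValueError("delay must be 1 to 999 units of 12.5 ns")
    short = n % 10          # passes through the 12.5 ns loop
    medium = (n // 10) % 10  # passes through the 125 ns loop
    long_ = n // 100        # passes through the 1.25 us loop
    return short, medium, long_

# Example: a delay of 347 x 12.5 ns = 4.3375 us
s, m, l = loop_passes(347)
print(s, m, l)  # 7 passes of 12.5 ns + 4 of 125 ns + 3 of 1.25 us
```

The appeal of this decomposition is that even the longest delay requires at most 27 passes through a Pockels cell, rather than up to 999 round trips of a single short loop.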
Since we want the ability to store optical quantum information in various degrees of freedom, i.e., polarization, timing, spatial mode, etc., we need a method of converting polarization qubits to time-bin qubits that can be stored in the above-described memory. We have built a polarization time-qubit transducer for this purpose, which allows us to convert superpositions of horizontal and vertical polarization to superpositions of two different time-bins.
We’ve also been exploring the ability to store orbital angular momentum (OAM) modes within our system. We use a hologram and spatial light modulator to produce and measure modes in all six bases.
Many optical experiments, including those conducted in our group, require high-speed low-loss optical switches to route photons through the system. For most applications in our lab, we use low-loss Pockels cells in conjunction with polarized light. These devices rely on the Pockels effect to induce birefringence within a crystal with an applied electrical field, enabling the polarization manipulation of photons as they travel through the crystal. This, combined with a polarizing beamsplitter at the output, allows one to control whether the photon is sent into the transmitted or reflected path.
However, it is challenging to simultaneously achieve low-loss and ultrafast switching rates with Pockels cells as the high voltages required introduce issues that degrade the switching performance, such as crystal heating. Since an ultrafast low-loss optical switch has many applications (e.g., our time-multiplexed single photon source), we are investigating the possibility of employing optical nonlinearities to achieve switching rates in the GHz range while maintaining less than 2% transmission loss. Promising techniques include the optical Kerr effect, in which a laser is used to alter birefringence of a nonlinear material in a manner similar to how an external electric field is used to alter the birefringence of a crystal in a Pockels cell.
A typical quantum information lab is a jungle of mirrors, lenses, polarizers and fiber optic cables. In a real quantum computer, quantum logic will need to be contained in a “chip”–a micro-optic or waveguide integrated optic system.
A drawback of this approach is that one can typically never transfer more than 90% of light from an optical fiber into a waveguide, because light gets lost on the way in. Even though the fiber and the waveguide may both be single-mode devices, they are not necessarily in the same mode. An adaptive optics system can help match the two modes and decrease the loss of light. Adaptive optics can also help collect more light from various types of qubits, including single ions, atoms, and quantum dots. To understand the practical limits for optimized mode-matching, we are currently using a computer-controlled deformable mirror running a genetic algorithm to optimize coupling between a simulated ion and a single-mode fiber.
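As a rough illustration of the optimization approach, here is a toy genetic algorithm that tunes a handful of “mirror parameters” to maximize a stand-in for the coupling efficiency. Everything here is hypothetical, including the Gaussian fitness model and the target settings; our actual system optimizes a measured coupling signal:

```python
import random
from math import exp

TARGET = [0.3, -0.7, 1.1, 0.2]  # hypothetical "ideal" mirror settings

def coupling(params):
    """Toy fitness: coupling efficiency modeled as a Gaussian overlap that
    peaks at 1.0 when the deformable-mirror parameters match the target."""
    return exp(-sum((p - t) ** 2 for p, t in zip(params, TARGET)))

def genetic_optimize(pop_size=30, generations=80, seed=1):
    """Minimal genetic algorithm: rank the population by coupling
    efficiency, keep the top quarter, and refill the population with
    mutated copies of the survivors."""
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=coupling, reverse=True)
        survivors = population[: pop_size // 4]
        population = survivors + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=coupling)

best = genetic_optimize()
print(f"optimized coupling efficiency: {coupling(best):.3f}")
```

The algorithm needs no model of the optical system, only a fitness signal, which is what makes it attractive for mode-matching problems where the ideal mirror shape is unknown.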
Quantum teleportation allows information such as the quantum state of a single photon or atom to be perfectly transferred between two locations without passing through the space between them. It relies on a classical communication channel like an internet connection between the two locations, and therefore does not permit information to be transmitted faster than the speed of light. The two locations must also share some quantum entanglement–for example, one photon each from an entangled pair.
Standard quantum teleportation can only be used reliably to transmit quantum states with two parameters. For more than two, the process becomes probabilistic and only works on a fraction of attempts. Other strategies, such as remote state preparation, can be used to reliably transmit more than two parameters, but they quickly require a large number (growing with the square of the number of parameters) of expensive single-photon detectors and complex measurement techniques.
Superdense teleportation is a scheme that can accomplish reliable transmission of quantum information with fewer resources than remote state preparation. It works by encoding the parameters one wishes to transmit on a special class of quantum states called equimodular states. These states are analogous to points on the surface of a hypertorus.
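Concretely, a d-dimensional equimodular state has amplitudes of equal modulus, so its free parameters are the (d−1) relative phases, which is what makes the hypertorus picture natural:

```latex
\begin{equation}
  |\psi\rangle \;=\; \frac{1}{\sqrt{d}} \sum_{j=0}^{d-1} e^{i\phi_j}\, |j\rangle ,
  \qquad \phi_0 \equiv 0
\end{equation}
```

Each phase φ_j is an angle, so the set of such states maps onto the surface of a (d−1)-dimensional torus.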
We have experimentally demonstrated superdense teleportation of a four-parameter state using photons hyperentangled in polarization and spatial mode. (See sources of entanglement and multipartite entanglement for more about hyperentangled states.)
In the case of polarization and time bins, we see this system as a prime candidate for deployment in a space-to-earth channel. To that end, we characterized the protocol’s performance while simulating in the lab several conditions that would be experienced in the space-to-earth channel. The analysis showed that we should be able to successfully execute this protocol in a space-to-earth channel.
Quantum cryptography uses entanglement to enable secure communication that immediately alerts its users to eavesdropping. Click on the figure to the right for an animation showing an example quantum cryptography protocol called BB84.
Entangled photons are produced and sent to two parties who wish to communicate. This is effectively the transfer of two sets of random bits, each bit perfectly correlated with its partner in the other set. After appropriate post-processing, these bits comprise a secret key that allows the two parties to securely encrypt a message. Any intermediate measurement by an eavesdropper will immediately corrupt the correlations between the entangled photons and alert the users.
In our lab we are developing a novel ultra-high speed quantum cryptography system. Previous quantum cryptography systems have used entanglement in a single degree of freedom to distribute the key, typically sending less than one bit per photon. We use entanglement in multiple degrees of freedom to send over 10 random bits per photon. Using this method we hope to achieve a secure data rate exceeding 1 GB/s, 100 to 1000 times faster than existing systems.
Using hyperentanglement in polarization and time bins, we have demonstrated a higher dimensional QKD protocol. Using 4 different bases, we have 12 basis combinations that generate 1-2 bits of key and 4 basis combinations that are mutually unbiased. This protocol is implemented with the same system that executes SDT, as described above. We found that this protocol would also be successful during deployment in a space-to-earth channel. Intrinsic error rates are <5%, allowing for operation in the finite key regime, even during deployment in a space-to-earth channel where the transmission can be quite low.
We are seeking to establish a quantum secure network between drones in flight using Quantum Key Distribution (QKD). The primary motivations include solving the “last mile” problem of long-distance quantum networks, as well as establishing practical wireless quantum networks, and extending quantum security to cover the emerging applications of drones. We view this research effort as a first step in providing more advanced quantum technology on drones such as entanglement generation and distribution via optical waveguides.
While quantum cryptography in principle guarantees secure key transfer, practical implementations are often subject to side-channel attacks that exploit engineering and technical imperfections of QKD hardware. The detectors of a QKD system are particularly susceptible to attacks since many detectors have non-ideal properties such as deadtimes and sensitivity to blinding. As a solution, measurement-device-independent QKD (MDI-QKD) was introduced as a version of QKD resistant to detector side-channel attacks.
A general MDI-QKD implementation involves two individuals (Alice and Bob) who desire to share secure cryptographic keys with each other. Alice and Bob send qubit-encoded photons to a third party (Charlie) who identifies the correlation between Alice’s and Bob’s qubits via a Bell State Measurement (BSM) involving both photons. Since this requires Alice and Bob to generate photons simultaneously, implementing this with conventional SPDC-based heralded single photon sources is not very scalable due to the low probability of obtaining a coincident event between two random, non-deterministic sources, which limits the key generation rate.
To address this issue, we use a modified version of our time-multiplexed heralded single photon source to serve as a quantum memory, allowing Charlie to synchronize Alice’s and Bob’s HSPSs. Whenever Charlie receives an early-arrival photon from Alice (Bob), he stores it in the quantum memory. He releases it when Bob’s (Alice’s) late-arrival photon arrives such that Charlie now has two coincident photons and may make the Bell State Measurement.
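A toy model illustrates why the memory helps (the probabilities below are illustrative, not measured values for our sources):

```python
def coincidence_prob(p, n_slots, with_memory):
    """Toy model of Charlie's Bell-state-measurement rate. Alice's and
    Bob's heralded sources each fire independently with probability p per
    time slot over a cycle of n_slots slots. Without a memory, both must
    fire in the SAME slot; with a memory, the earlier photon is stored, so
    each source merely has to fire somewhere in the cycle."""
    if with_memory:
        fires = 1 - (1 - p) ** n_slots  # P(a source fires at least once)
        return fires ** 2
    return 1 - (1 - p * p) ** n_slots   # P(a same-slot coincidence occurs)

p, n = 0.02, 40  # illustrative numbers only
no_mem = coincidence_prob(p, n, with_memory=False)
mem = coincidence_prob(p, n, with_memory=True)
print(f"without memory: {no_mem:.4f}, with memory: {mem:.4f}")
print(f"enhancement: {mem / no_mem:.1f}x")
```

Even in this simple model, relaxing the same-slot requirement boosts the coincidence rate by more than an order of magnitude for weak sources.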
Using this configuration, we have demonstrated a factor of 30 enhancement for the rate of simultaneous photons from Alice and Bob. This enabled us to make the first demonstration of HSPS-based MDI-QKD where we achieved a secure key rate of 0.851 bit/s. This is a significant improvement over the no-memory case where we were unable to generate secure keys due to very low coincidence rates.
We have also used hyperentanglement to encode a record amount of information on a binary property of a single photon, beating the channel capacity limit for linear photonic superdense coding.
In information theory, one of the defining properties of a noisy communication channel is its channel capacity, or the maximum rate at which information can be faithfully transmitted. Classically this is found via the mutual information of the channel, a measure of the information shared by source and receiver. The quantum analogue is the coherent information of the channel, which also takes into account entanglement effects.
The coherent information of a quantum channel has the peculiar property of “superadditivity.” This is in stark contrast to classical communication, where mutual information is additive: if a source sends twice as much information through the channel, a receiver should be able to recover twice as much. Since the mutual information scales linearly with channel use, a single use of the channel defines its properties. However, for quantum channels, coherent information can scale nonlinearly. That is, the recoverable information from multiple channel uses can exceed - albeit only slightly - the amount we would expect in classical communication. This allows for better scaling of information transmission when only a finite number of channel uses is possible. In the extreme case, quantum channels can even exhibit “superactivation,” whereby repeated channel use allows information to be recovered from otherwise unusable channels.
Superadditivity of coherent information has been established theoretically in the case of the dephrasure channel, which combines erasure and dephasing. The dephrasure channel exhibits superadditivity for as few as two channel uses; this, combined with the channel’s simple form, makes it a good candidate for experimental investigation. We are currently in the process of developing an optical system to simulate the dephrasure channel and test its properties of superadditivity and superactivation.
At the lower limit of vision, the eye needs just a few photons. No one knows exactly how few–psychological and physiological research has suggested that single-photon vision may be possible, but the question has never been directly tested. Rod cells in the human retina (see figure on right) respond to single photons, but it is not known whether these signals continue through the visual pathway and lead to the perception of light. All previous studies have used attenuated classical light sources, and have estimated the detection threshold with model-fitting methods.
Working with University of Illinois psychology professor Dr. Frances Wang, we are conducting several experiments to characterize the lower limit of human vision using a true source of single photons. If you are interested in participating as a volunteer observer, you can find more information and an interest form here.
Demonstrating single-photon vision will not only answer an open question in psychology, but could also make it possible to test the predictions of quantum mechanics in the visual system. On a submicroscopic scale, everything can be described with quantum mechanics. Particles like electrons and photons can diffract and interfere like waves, and they show strange behavior such as entanglement and quantum tunnelling. Everything around us is made of quantum particles, but you won’t find your spoon entangled with your breakfast cereal any time soon, or see a soccer ball diffract around the net. Exactly why quantum weirdness seems to disappear in the macroscopic world remains a deep mystery.
Making quantum effects available to direct human perception addresses fundamental questions about how strange quantum rules produce the familiar world around us. We may be able to investigate whether a human observer perceives a difference between a photon in a statistical mixture of two polarization states, and one that is in a quantum superposition of two polarizations. Eventually we might even be able to demonstrate quantum non-locality in a Bell test with human observers instead of single-photon detectors.
Unlike in classical physics, where measurements can be thought of as independent observations that do not affect the outcome of an experiment, measurements on quantum systems can change the state of the system. In a “weak” quantum measurement, the measured system is linked to the measuring device by only a very weak connection–for example, a small amount of light may “leak” out of an experiment onto a sensor. This allows us to measure the system without disturbing it.
The “weak value” obtained from such a measurement can also amplify small effects so we can measure them with greater precision. The absolute resolution of the measurement, however, typically remains unchanged compared to a conventional measurement. We are currently experimenting with “recycling” the light that is not measured (for example, the light that does not leak onto a sensor) and repeating the weak measurement many times to improve the signal-to-noise ratio. Using a preliminary version of this method, we have increased the signal-to-noise ratio and measured the tilt of a mirror by just 50 picoradians–about three billionths of a degree! If a laser beam changed its direction by 50 picoradians, it would be displaced by less than the width of a human hair after traveling the distance across the continental United States.
In the past, we have also used weak values to make the first measurement of the spin Hall effect of light, in which oppositely circularly polarized light beams experience opposite (but tiny) transverse spatial shifts when passing through an air-glass interface at an angle (see animation on left). With our weak-measurement amplification scheme, we achieved displacement resolutions on the order of 1 nm.
section currently under development
Photon detectors with high efficiencies and low error probabilities are essential for optical quantum computing, scalable quantum information protocols, and fundamental physical studies such as loophole-free Bell tests. Conventional single-photon detectors, such as avalanche photodiodes (APDs) and photomultiplier tubes (PMTs), have quantum efficiencies of only up to 75% and 10–20%, respectively.
In contrast, VLPCs (Visible Light Photon Counters) and SSPMs (Solid-state Photomultipliers) are solid-state devices that utilize an avalanche multiplication effect, differing (from each other) only in their wavelength sensitivities. VLPCs are sensitive only in the visible range, while the SSPMs are sensitive from the visible to beyond 10 microns. The figure to the left shows a chip containing several VLPCs, and the custom-built copper block used in our system for cooling the VLPCs with liquid helium.
Both VLPCs and SSPMs feature unique capabilities that APDs and PMTs cannot offer, including high quantum efficiency and photon-number resolution–the ability to count multiple photons that arrive simultaneously. The VLPCs have an inferred internal quantum efficiency of 94% ± 5% (at 694 nm) and the SSPMs 96% ± 3% (at 660 nm). In the past, problems with delivering the light to the detector have limited measured efficiencies to less than 88%.
We are currently improving these detectors by a) using custom anti-reflection (AR) coatings for the detectors, b) reducing losses and background due to the optical coupling fibers, c) using improved low-noise electronics, and d) incorporating novel cryogenic cooling designs. With these improvements, we have recently achieved a record system efficiency of 91% for a VLPC.
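The gap between internal and system efficiency is just multiplicative bookkeeping: the overall system efficiency is the product of the transmission or detection probability of every stage between the source and the readout. In the sketch below, only the 94% internal quantum efficiency comes from the text; the coupling and coating values are hypothetical placeholders chosen to illustrate how the stages combine:

```python
# System efficiency = product of per-stage efficiencies.
# Only the 0.94 internal QE is from the text (VLPC at 694 nm);
# the other stage values are hypothetical illustrations.

stages = {
    "fiber coupling / transmission": 0.99,  # assumed
    "AR-coated detector surface":    0.98,  # assumed
    "internal quantum efficiency":   0.94,  # VLPC, from text
}

system_efficiency = 1.0
for name, eff in stages.items():
    system_efficiency *= eff

print(f"{system_efficiency:.3f}")  # product of the stages above
```

This kind of budget makes clear why every stage matters: even with a near-perfect detector, a few percent of loss in each of the coupling and coating stages caps the achievable system efficiency.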
Pick a number. Your choice may be hard to guess, but it’s far from truly random. Flipping a coin or rolling a die comes close to true randomness, but both are slow–and both could be predicted in advance if the exact “launch conditions” of the coin or die were known. Computer algorithms are often used to simulate random numbers when many of them are needed–for example, in an online casino–but imperfect randomness can be disastrous. Clever blackjack players can exploit the fact that virtual card-shufflers don’t produce a truly random deck arrangement. A cryptography system that failed to generate truly random keys could be cracked.
Quantum mechanical uncertainty offers a source of truly random numbers. A 50-50 beamsplitter transmits half the light that hits it and reflects the other half. Because light is made up of individual photons, each photon has a 50% chance of being transmitted. The outcome for each photon is completely random and can’t be predicted beforehand. A simple quantum random number generator might produce binary numbers by assigning a “0” if a photon is transmitted and a “1” if it is reflected.
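A toy simulation of this scheme makes the mapping concrete; note that Python’s pseudorandom generator merely stands in for the genuinely unpredictable quantum coin flip at the beamsplitter:

```python
import random

# Toy beamsplitter QRNG: each photon is independently transmitted
# ("0") or reflected ("1") with 50% probability.  A classical PRNG
# stands in for the quantum randomness here.

def beamsplitter_bits(n_photons: int, seed=None) -> str:
    rng = random.Random(seed)
    return "".join("0" if rng.random() < 0.5 else "1" for _ in range(n_photons))

bits = beamsplitter_bits(10_000, seed=1)
print(bits[:16])                    # first few raw bits
print(bits.count("1") / len(bits))  # close to 0.5 for a fair beamsplitter
```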
The rate of random number generation in such a system is limited by how quickly a single-photon detector saturates and loses the ability to count more photons. We have pursued a different strategy, using fast electronics to record the precise arrival time of each photon. Such a measurement can yield multiple random bits per detection event, making it possible to exceed the normal limits of quantum random number generation by at least an order of magnitude. We achieved a random number generation rate of 110 Mbits/s as of April 2010, which exceeded the previous best rate by a factor of 10. Graduate student Michael Wayne is pictured with Paul Kwiat and the record-breaking random number generator.
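The multi-bit idea can be sketched as follows: divide each measurement window into 2^k time bins and read the bin index of the photon’s arrival as k random bits. The 1 µs window and 256 bins below are illustrative values, not the actual experimental parameters:

```python
import random

# Arrival-time bit extraction: a photon arriving anywhere in a window
# of 2**k time bins yields k bits (the binary bin index), instead of
# the single bit per detection of the beamsplitter scheme.
# Window length and bin count are illustrative, not experimental values.

def arrival_time_bits(arrival_s: float, window_s: float = 1e-6, k: int = 8) -> str:
    """Map an arrival time within [0, window_s) to k bits (the bin index)."""
    n_bins = 2 ** k
    bin_index = min(int(arrival_s / window_s * n_bins), n_bins - 1)
    return format(bin_index, f"0{k}b")

rng = random.Random(7)
arrival = rng.uniform(0.0, 1e-6)   # simulated uniform arrival time
print(arrival_time_bits(arrival))  # 8 bits from a single detection event
```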
Frequency upconversion is the process by which two photons of different frequencies can join together in a non-linear crystal and create a new photon whose frequency is the sum of the first two. We use this process to convert single photons from 1550 nm to the visible spectrum where they can be detected by silicon avalanche photodiodes. These detectors have much higher efficiency than the InGaAs detectors normally used for detecting infrared (IR) photons, thereby allowing high efficiency detection of single IR photons. We have also demonstrated the coherence of this process by performing the upconversion on both paths of an interferometer and observing fringes in the upconverted light. Preserving the coherence is a key requirement for handling qubits, thereby making this process suitable for applications such as quantum cryptography and quantum networking.
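The output wavelength follows from energy conservation: f_out = f_1 + f_2, or in wavelength terms 1/λ_out = 1/λ_1 + 1/λ_2. The 1064 nm pump below is an assumed example value, not necessarily the pump used in the experiment:

```python
# Sum-frequency generation: f_out = f1 + f2, i.e.
# 1/lambda_out = 1/lambda_1 + 1/lambda_2.
# The 1064 nm pump wavelength is an assumed example.

signal_nm = 1550.0  # telecom-band single photon, from text
pump_nm = 1064.0    # assumed pump wavelength

upconverted_nm = 1.0 / (1.0 / signal_nm + 1.0 / pump_nm)
print(round(upconverted_nm, 1))  # ~631 nm, visible light where Si APDs are efficient
```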
In a particularly unusual quantum information scheme called “counterfactual computation” (CFC), one can perform a measurement on a quantum computer and obtain information about the solution to a problem without the computer actually running. We demonstrated the first experimental realization of counterfactual computation. In addition to theoretically demonstrating some error-suppressing capabilities of a non-running quantum computer, we proposed a scheme which showed an apparent breakdown of the previously established bounds on how good a CFC one can achieve. This initiated an ongoing debate on the meaning of counterfactuality in quantum processes.
Quantum key distribution (QKD) allows perfectly secure communication between distant parties. The perfect security of QKD, one of the first classically impossible information protocols to be implemented, relies upon the nonlocal nature of quantum mechanics. Using entangled photons and a low-loss quantum storage system, we realized an improvement to this quantum mechanical protocol through the addition of relativistic constraints. To our knowledge, this is the first experimental realization of an information protocol which simultaneously relies upon both quantum mechanics and special relativity.
Click the figure on the right for an animation showing how the relativistic protocol works.