<p><em>QuaSci.com is a quasi-scholarly blog on quantum science; posts are related to research as well as general research life.</em></p>
<h1 id="classical-vs-quantum-computation">Classical vs Quantum Computation</h1>
<p><em>by Eddie Schoute</em></p>
<p>The IBM ThinkQ conference was held recently in New York with a focus on near-term quantum computing applications.
It seems that businesses have successfully been developing larger quantum computers
—we’re at around 50 qubits now!— but are now looking for the “killer app” of small quantum computers.
There were some variations on the “what to do with your quantum computer” theme
and I will talk about some of the applications that were discussed.</p>
<p>All talks and recordings of them are available at the <a href="https://www.research.ibm.com/ibm-q/thinkq/agenda.html">online schedule</a>.</p>
<p>A <em>quantum advantage</em> refers to an application where a quantum computer performs a computation
that a classical computer currently cannot perform.
Previously known as <em>quantum supremacy</em>, it has now been renamed after an internal discussion
within the community about its political correctness<sup id="fnref:supremacydiscussion"><a href="#fn:supremacydiscussion" class="footnote">1</a></sup>.</p>
<p>So far we do not know unconditionally whether quantum computing is more powerful than classical computing
(i.e. $BQP \not\subseteq BPP$).
But through the problems of Boson Sampling<sup id="fnref:boson1"><a href="#fn:boson1" class="footnote">2</a></sup> and Instantaneous Quantum Polynomial-time (IQP) circuits<sup id="fnref:iqp1"><a href="#fn:iqp1" class="footnote">3</a></sup>
we do know that the polynomial hierarchy ($PH$) would collapse if classical computers could solve them efficiently.</p>
<h2 id="simulating-quantum-processes">Simulating Quantum Processes</h2>
<p>One side of the discussion looks at determining which quantum processes can be efficiently simulated
by a classical computer.
We recently had Hakop Pashayan visit <abbr title="Joint Center for Quantum Information and Computer Science">QuICS</abbr>,
who revealed to us some of the intricacies involved in this line of research.
In their paper, Pashayan et al. explain the concept of ε-simulation<sup id="fnref:hakop1"><a href="#fn:hakop1" class="footnote">4</a></sup>.</p>
For any quantum circuit $\mathcal C$ with fixed inputs there exists some probability distribution
$\mathcal P_\mathcal C$ over the outcomes $X=(X_1, X_2, …, X_k)$,
which is just a classical random variable.
Any noiseless circuit $\mathcal C$ can be described as starting with the <abbr title="Uncorrelated quantum state.">product state</abbr> $ρ_1 ⊗ … ⊗ ρ_n$
on $n$ qubits,
followed by some unitary operation $U$, and finally measuring qubits $1$ through $k$.</p>
<p>For example, one could ask: what is the probability of measuring $(X_1, X_2) = (1,0)$, ignoring $X_3,…,X_k$?
Or in other words, what is $\mathcal P(X_1 = 1, X_2 = 0)$?
An algorithm that can produce the answer to this (and similar) questions is called a <em>strong simulator</em>.
This is quite a powerful notion: it is stronger than even an ideal quantum computer,
which can only produce a sample from the output distribution.
A slightly weaker notion is that of <em>weak simulation</em>:
Instead of the exact probability,
output a sample in accordance with the output distribution $\mathcal P_\mathcal C$.
Even constructing a <em>weak simulator</em> is probably too lofty a goal,
because no real quantum computer will be completely noiseless
and thus cannot sample exactly from $\mathcal P_\mathcal C$.
It is therefore perhaps unreasonable to expect a classical computer to do either of these
(except for simple circuits),
so we will instead define ε-simulation, which relaxes the constraints further.</p>
<p class="center"><img src="https://www.quasci.com/img/quantum-simulation-circuit.svg" alt="A general quantum circuit" /><br />
<em>A general quantum circuit, with $n$ <abbr title="Uncorrelated quantum state.">product state</abbr> inputs, a unitary evolution,
and then measurements on $k$ of the qubits.</em></p>
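<p>To make the distinction concrete, here is a minimal Python sketch (using NumPy) of both notions for a small two-qubit circuit. The circuit itself, a Hadamard followed by a CNOT, is a hypothetical choice for illustration: computing the exact output distribution plays the role of a strong simulator, and drawing from it plays the role of a weak simulator.</p>

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical 2-qubit circuit: product input |0>|0>, then U = CNOT (H ⊗ I).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
U = CNOT @ np.kron(H, np.eye(2))      # prepares a Bell state from |00>

state = U @ np.array([1.0, 0, 0, 0])  # fixed all-zero input
probs = np.abs(state) ** 2            # "strong" simulation: the exact P_C

def weak_sample():
    # "weak" simulation: one sample from P_C per call
    return rng.choice(len(probs), p=probs)

print(probs)  # approximately [0.5, 0, 0, 0.5]: the Bell-state distribution
```

<p>A strong simulator answers probability queries exactly; the weak simulator only ever hands back individual outcomes, just like a run of the physical device would.</p>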
<h3 id="ε-simulation">ε-Simulation</h3>
<p>Pashayan et al. deal with the notion of ε-simulation,
which allows the simulator to make some ε-sized error in the $\ell_1$ distance.</p>
<dl>
<dt><strong>Definition:</strong> $\ell_1$ norm and distance for vectors</dt>
<dd>For a vector $\mathbf v$ the $\ell_1$ norm is defined as
<script type="math/tex; mode=display">\norm{\mathbf v}_1 = \sum_{i} \abs{\mathbf v(i)}.</script>
<p>The $\ell_1$ distance between two discrete probability distributions $\mathcal P$ and $\mathcal Q$
(that are just vectors in some respects) is then</p>
<script type="math/tex; mode=display">\norm{\mathcal{P} - \mathcal{Q}}_1 = \sum_{i} \abs{\mathcal P(i) - \mathcal Q(i)}</script>
<p>which just takes the absolute vector difference of the two probability distributions.
(This notion also generalises to other norms, such as the $\ell_2$ norm and $\ell_\infty$ norm.)</p>
</dd>
<dt><strong>Definition:</strong> ε-sampling <sup id="fnref:hakop1:1"><a href="#fn:hakop1" class="footnote">4</a></sup></dt>
<dd>Let $\mathcal P$ be a discrete probability distribution.
We say that an algorithm can ε-sample $\mathcal P$ iff for any $ε>0$
it can sample from a probability distribution $\mathcal P^ε$ such that
$\norm{\mathcal P - \mathcal P^ε}_1 ≤ ε$.
In addition, its run-time should scale polynomially in $1/ε$.</dd>
</dl>
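<p>The $\ell_1$ distance of the definition above is a one-liner; this sketch (with my own illustrative numbers) checks whether a perturbed distribution is an acceptable ε-sample of a target distribution.</p>

```python
import numpy as np

def l1_distance(p, q):
    # ||p - q||_1 = sum_i |p(i) - q(i)|, as in the definition above
    return float(np.sum(np.abs(np.asarray(p) - np.asarray(q))))

p = [0.5, 0.5, 0.0]        # target distribution (illustrative numbers)
q = [0.45, 0.45, 0.10]     # a perturbed distribution a sampler might output

eps = 0.25
print(l1_distance(p, q))         # ≈ 0.2
print(l1_distance(p, q) <= eps)  # True: q is an acceptable eps-sample of p
```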
<p>We say that an algorithm can ε-simulate a quantum circuit $\mathcal C$
if it can ε-sample from the associated probability distribution $\mathcal P_\mathcal C$.
Basically, an ε-simulator is a weak simulator of a probability distribution that is
ε-close to the real probability distribution.
A result of Pashayan et al.<sup id="fnref:hakop1:2"><a href="#fn:hakop1" class="footnote">4</a></sup> is that an ε-simulator of $\mathcal C$ is indistinguishable from
$\mathcal C$ and is also efficient due to the polynomial run-time constraints.
Not only that, but it is also <em>necessary</em> to be an ε-simulator for any kind of simulation scheme
to be efficient and indistinguishable from $\mathcal C$<sup id="fnref:hakopscenario"><a href="#fn:hakopscenario" class="footnote">5</a></sup>.</p>
<h3 id="poly-boxes-and-simulations">Poly-Boxes and Simulations</h3>
<p>To be able to ε-simulate a circuit $\mathcal C$ we can first estimate the probabilities
for some outcomes of its output probability distribution $\mathcal P_\mathcal C$.
A <em>poly-box</em> is a metaphorical device that estimates such probabilities in polynomial time.
It is (presumably) not possible to efficiently estimate probabilities for general quantum circuits using
a classical computer, but it may be possible to construct poly-boxes for certain restricted circuit
families.</p>
<dl>
<dt><strong>Definition:</strong> Poly-box</dt>
<dd>Given a finite <abbr title="A set of characters. For example {0,1} is the binary alphabet. Usually combined with * to indicate zero or more repetitions of characters in the alphabet.">alphabet</abbr> $\Sigma$,
let $\Sigma^*$ be the set of strings of characters from $\Sigma$ (including the empty string).
Then $\Sigma^*$ defines a family of quantum circuits
$\mathbb S = \set{\mathcal C_a \middle| a ∈ Σ^*}$.
The associated family of probability distributions is
$\mathbb P = \set{\mathcal P_\mathcal C \middle| \mathcal C ∈ \mathbb S}$.<br />
We want to be able to estimate probabilities for output strings $S ∈ \set{0,1,\bullet}^{n+1}$
with a “$\bullet$” meaning “don’t care”: Match both $0$ and $1$.
Then a poly-box is a classical algorithm that can estimate $\mathcal P(S)$
for all $\mathcal P ∈ \mathbb P$
efficiently in the number of qubits, $n$, and the inverse error, $ε^{-1}$.</dd>
</dl>
<p class="center"><img src="https://www.quasci.com/img/polybox.svg" alt="What does a polybox do" /><br />
<em>With a poly-box we are able to estimate the probability of outcomes for a quantum circuit
in polynomial time.
Additionally, we can estimate marginal probabilities for all strings $S ∈ \set{0,1,\bullet}^{n+1}$
where “$\bullet$” represents a “don’t care”: It matches both $0$ and $1$.
The number of samples $s∈ℕ$ can be computed from the intended error $ε$.</em></p>
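<p>As a reference point for what a poly-box must output, the following brute-force Python sketch computes $\mathcal P(S)$ for strings with don’t-cares (written here as <code>'.'</code> instead of “$\bullet$”) directly from a full outcome distribution. It takes time exponential in the number of bits, which is exactly what a poly-box avoids for restricted circuit families; the uniform example distribution is hypothetical.</p>

```python
import numpy as np

def marginal(probs, S):
    """P(S): total mass of all bitstrings matching S, where '.' matches 0 and 1."""
    m = len(S)
    total = 0.0
    for idx, p in enumerate(probs):
        bits = format(idx, f'0{m}b')
        if all(s == '.' or s == b for s, b in zip(S, bits)):
            total += p
    return total

probs = np.full(8, 1 / 8)       # uniform distribution over 3 bits
print(marginal(probs, '0..'))   # 0.5: half of all strings start with 0
print(marginal(probs, '...'))   # 1.0: every string matches
```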
<h4 id="poly-boxes-are-not-sufficient-for-ε-simulation-a-counter-example">Poly-boxes are not sufficient for ε-simulation. A counter-example.</h4>
<p>Circuit families must admit a poly-box to be ε-simulable,
but admitting one is not sufficient.
We will give a fairly simple example of a circuit that does admit a poly-box
but does not admit an ε-simulator (unless $BQP ⊆ BPP$).
Let us define a circuit $\mathcal C_e$ that takes in some quantum circuit description $a ∈ Σ^*$.
The circuit $\mathcal C_e$ samples a single bit $X$ from the quantum circuit
described by $a$, $\mathcal C_a$.
(Note that for general quantum circuits it
is already hard to efficiently produce this single bit classically, assuming $BPP ⊊ BQP$.)
Finally, $\mathcal C_e$ samples a uniform string $Y ∈ \set{0,1}^n$ and outputs
$(X ⊕ \text{Parity}(Y), Y) ∈ \set{0,1}^{n+1}$.<sup id="fnref:parityandxor"><a href="#fn:parityandxor" class="footnote">6</a></sup>
Basically, we are obfuscating the hard-to-produce $X$ with a uniform $Y$,
but given the entire output it is easy to figure out $X$.
(Compute Parity$(Y)$ and XOR that together with the first output bit.)</p>
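<p>The obfuscation step of $\mathcal C_e$ is easy to write down explicitly. In this sketch the hard-to-sample bit $X$ is replaced by a random placeholder, since producing the real $X$ is exactly the hard part; everything else follows the construction above.</p>

```python
import random

def parity(Y):
    return sum(Y) % 2              # 1 iff Y contains an odd number of ones

def obfuscate(X, n, rng):
    # C_e's output: (X xor Parity(Y), Y) for a uniform Y in {0,1}^n
    Y = [rng.randrange(2) for _ in range(n)]
    return [X ^ parity(Y)] + Y

def recover(output):
    # given the whole string, X is easy: XOR the first bit with Parity(Y)
    return output[0] ^ parity(output[1:])

rng = random.Random(0)
X = rng.randrange(2)               # placeholder for the hard-to-sample bit
out = obfuscate(X, n=8, rng=rng)
print(recover(out) == X)           # True
```

<p>Marginally, every single output bit is uniform, yet the full string reveals $X$ immediately.</p>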
<p>The real kicker, however, is that $\mathcal C_e$ is not ε-simulable,
because if it were then it would be possible to sample $X$ (and that’s hard).
But it is actually easy to construct a poly-box for $\mathcal C_e$ for any given error $0<ε≤1$:</p>
<ol>
<li>If there are $0<k≤n+1$ “don’t cares” in the string $S ∈ \set{0,1,\bullet}^{n+1}$
for which we need to estimate the probability $\mathcal P(S)$ then output $1/2^{n+1-k}$ as a guess.</li>
<li>Otherwise, if $ε < 1/2^n$, explicitly compute the probability $P(X=1)$ by brute force.
This will take time $O(2^n) ⊆ O(ε^{-1})$ so it is still efficient in $ε^{-1}$.</li>
<li>Large ε: if $ε ≥ 1/2^n$ simply output the probability $1/2^{n+1}$ as a guess.</li>
</ol>
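<p>The three cases translate directly into code. In this sketch the brute-force quantum computation of $\Pr[X=1]$ is replaced by a hypothetical placeholder argument, since that value is exactly the expensive part; don’t-cares are written as <code>'.'</code>.</p>

```python
def polybox_Ce(S, eps, p_x1):
    """Three-case probability estimate for C_e's output string S.

    S is over {'0','1','.'} with '.' a don't-care; p_x1 is a placeholder
    for the brute-force O(2^n)-time computation of Pr[X = 1].
    """
    n = len(S) - 1                        # the output has n + 1 bits
    k = S.count('.')
    if k > 0:                             # case 1: guess the uniform marginal
        return 1 / 2 ** (len(S) - k)
    if eps < 1 / 2 ** n:                  # case 2: brute force is now allowed,
        parity_y = sum(int(b) for b in S[1:]) % 2   # since O(2^n) <= O(1/eps)
        p_first = p_x1 if int(S[0]) ^ parity_y else 1 - p_x1
        return p_first / 2 ** n           # Y is uniform over 2^n strings
    return 1 / 2 ** (n + 1)               # case 3: large eps, uniform guess

print(polybox_Ce('1.0', eps=0.1, p_x1=0.9))  # 0.25 (case 1: one don't-care)
print(polybox_Ce('10', eps=0.1, p_x1=1.0))   # 0.5  (case 2: n = 1)
```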
<p>Now, through some straightforward computation, you can show that, in all three cases,
this does meet the requirements of a poly-box, as it is sufficiently close to the real $P(S)$.
The problem here is that we have thinned the probability of any one string occurring so much
that, for a sufficiently low error ε, it becomes easier to compute the quantum probability explicitly.</p>
<h4 id="poly-boxes--sparsity--ε-simulation">Poly-boxes + sparsity = ε-simulation</h4>
<p>If, instead, the circuit has only a polynomial number of outcomes with significant probability
then we can ε-simulate like we would want to.
We say that such outcome distributions are <em>poly-sparse</em>.
More specifically, there must be a polynomially-sized upper bound on the number of relevant outcomes,
$t = O\left(\text{poly}(n/ε)\right)$, with $n$ the size of the input string and $ε$ the error.
Poly-sparsity guarantees us that there exists a parameter $t$,
so that we can construct a distribution $\mathcal P^ε$
with only $t$ outcomes with non-zero probability such that</p>
<script type="math/tex; mode=display">\norm{\mathcal P - \mathcal P^ε}_1 \leq ε .</script>
<p class="center"><img src="https://www.quasci.com/img/epsilon-close-probability.svg" alt="Epsilon-close probability distribution" /><br />
<em>On the left is some probability distribution $\mathcal P$.
On the right we have approximated $\mathcal P$ by an ε-close distribution that is sparser:
We have fewer nonzero entries.</em></p>
<p>We can estimate the $t$ relevant outcomes with a poly-box for $\mathcal C$
and <em>explicitly</em> reconstruct $\mathcal P^ε$.
This distribution $\mathcal P^ε$ is ε-close to the real output distribution $\mathcal P_{\mathcal C}$
and thus suffices for ε-simulation of $\mathcal C$.</p>
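<p>The sparsification step can be illustrated directly: keep the $t$ heaviest outcomes, renormalise, and measure the resulting $\ell_1$ error. The example distribution is my own; note that the error works out to twice the dropped probability mass.</p>

```python
import numpy as np

def sparsify(p, t):
    # keep the t heaviest outcomes and renormalise to a distribution
    p = np.asarray(p, dtype=float)
    keep = np.argsort(p)[-t:]
    q = np.zeros_like(p)
    q[keep] = p[keep]
    return q / q.sum()

p = np.array([0.4, 0.3, 0.2, 0.05, 0.03, 0.02])  # illustrative distribution
q = sparsify(p, t=3)
err = float(np.abs(p - q).sum())                 # the l1 distance ||p - q||_1

print(err)  # ≈ 0.2: twice the 0.1 of probability mass that was dropped
```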
<p><strong>Theorem 1</strong><sup id="fnref:hakop1:3"><a href="#fn:hakop1" class="footnote">4</a></sup>: “Let $\mathcal C$ be a family of quantum circuits with corresponding probability
distributions $\mathbb P$.
Suppose there exists an efficient poly-box over $\mathcal C$, and $\mathbb P$ is poly-sparse.
Then, there exists an ε-simulator of $\mathcal C$.”</p>
<p><em>Proof</em>: Let $a \in \Sigma^*$ and $ε > 0$.
The poly-box over the circuit family $\mathcal C$
allows us to efficiently estimate probabilities from the probability distribution
$P_a(S)$ for $S \in \set{0,1,\bullet}^{n+1}$.
Using the poly-box construction above and some smart searching using “don’t care” values (“$\bullet$”),
it is possible to efficiently estimate probabilities from the
ε-close (in $\ell_1$ distance) distribution $P^ε_a(S)$.
And because of the poly-sparsity of $\mathbb P$ there exists a $P^ε_a$
with a polynomial upper bound $t = O\left(\text{poly}(ε^{-1})\right)$
on relevant outputs.
So we construct an ε-simulator for $\mathcal C$ by reconstructing the probability distribution over the $t$ possible outcomes in the poly-sparse $P^ε_a$.
We can do this by recursively searching $S$ using “don’t cares” for the $t$ relevant outcomes (the rest has probability mass $0$) in polynomial time<sup id="fnref:schwarz2013"><a href="#fn:schwarz2013" class="footnote">7</a></sup>.
With $P^ε_a$ explicitly computed it is straightforward to sample from it.$\square$</p>
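<p>The recursive “don’t care” search in the proof can be sketched as follows. Here <code>query</code> stands in for the poly-box and answers exactly from a small hypothetical distribution; pruning any prefix whose marginal mass falls below the threshold keeps the number of queries polynomial when the distribution is poly-sparse.</p>

```python
def heavy_outcomes(query, n, threshold):
    """Find every n-bit outcome of probability >= threshold via marginal queries."""
    found = []
    def expand(prefix):
        # marginal mass of all outcomes starting with this prefix
        if query(prefix + '.' * (n - len(prefix))) < threshold:
            return                       # prune: no heavy outcome below here
        if len(prefix) == n:
            found.append(prefix)
            return
        expand(prefix + '0')
        expand(prefix + '1')
    expand('')
    return found

# hypothetical sparse distribution and an exact stand-in for the poly-box
dist = {'000': 0.6, '101': 0.3, '111': 0.1}
def query(S):
    return sum(p for s, p in dist.items()
               if all(c == '.' or c == b for c, b in zip(S, s)))

print(heavy_outcomes(query, n=3, threshold=0.25))  # ['000', '101']
```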
<h2 id="separating-quantum-from-classical">Separating Quantum from Classical</h2>
<p>A natural question to ask after looking at simulating quantum processes is
what <em>can’t</em> we simulate?
Where is there a <em>quantum advantage</em>?
Recently there has been a lot of work on trying to come up with such an algorithm that
is believed to be both hard to simulate classically (e.g. by some complexity results)
and also easy to implement on an existing quantum computer.
We will look at <em>Instantaneous Quantum Polynomial-time</em> (IQP)<sup id="fnref:iqp1:1"><a href="#fn:iqp1" class="footnote">3</a></sup>,
but there is a multitude of approaches that may be covered in later blog posts.</p>
<h3 id="instantaneous-quantum-polynomial-time-iqp">Instantaneous Quantum Polynomial-time (IQP)</h3>
<p>An approach to showing a quantum advantage referred to as IQP is
to perform a unitary that is diagonal in the $X$-basis ($(\ket 0 \pm \ket 1)/\sqrt 2$)
on the all-zero input $\ket{00\dots 0}$.
Alternatively, we could describe a unitary $D$ diagonal in the $Z$-basis
and conjugate with $H^{\otimes n}$.
The strings $w \in \Sigma^{*}$ then describe diagonal elements of $D_w$ for the circuits</p>
<script type="math/tex; mode=display">\mathcal C_w = H^{\otimes n} D_w H^{\otimes n}</script>
<p>where the language $\Sigma^{*}$ describes circuits in the family $\set{C_w}$.
This turns out to be difficult to simulate for classical computers under suitable hardness assumptions <sup id="fnref:iqpnoise"><a href="#fn:iqpnoise" class="footnote">8</a></sup>.
We will show the main result from <sup id="fnref:iqp1:2"><a href="#fn:iqp1" class="footnote">3</a></sup>:
If it is possible to weakly classically simulate IQP circuits to within a constant multiplicative factor,
then the Polynomial Hierarchy would collapse to the third level.</p>
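<p>At small sizes an IQP circuit is of course easy to simulate by brute force; this NumPy sketch builds $\mathcal C_w = H^{\otimes 2} D_w H^{\otimes 2}$ for a hypothetical choice of diagonal phases and computes the output distribution on the all-zero input. Hardness only kicks in at scale, where the $2^n$-dimensional linear algebra becomes intractable.</p>

```python
import numpy as np

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = np.kron(H1, H1)                            # H^{⊗2}

# D_w: diagonal in the Z basis; these phases are an arbitrary choice of w
phases = np.exp(1j * np.array([0.0, 0.3, 0.7, 1.1]))
D = np.diag(phases)

state = H @ D @ H @ np.array([1.0, 0, 0, 0])   # C_w |00>
probs = np.abs(state) ** 2

print(round(float(probs.sum()), 6))            # 1.0: a valid output distribution
```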
<p>The Polynomial Hierarchy is an infinite hierarchy of complexity classes
of increasing computational power.
Defining it requires the notion of an <em>oracle</em>,
a black box that can be queried in one time step for an answer in its complexity class.
For complexity classes $A$ and $B$, we have that $A^B$ is the set of languages
that can be decided by an algorithm in $A$ with access to an oracle for $B$,
i.e. the algorithm may decide any language in $B$ by querying the oracle in one time step.
Now let the polynomial hierarchy be defined as
$\Delta_{k+1} = P^{N\Delta_k}$, with $\Delta_1 = P$ and $N\Delta_k$ the nondeterministic class
associated to $\Delta_k$ (like $NP$ is associated to $P$).
We have that</p>
<script type="math/tex; mode=display">Δ_1 \subseteq Δ_2 \subseteq \dots</script>
<p>It is known that if $\Delta_i = \Delta_{i+1}$ for some $i$ then $\Delta_i = \Delta_j$ for all $j > i$.<sup id="fnref:aurorabarak"><a href="#fn:aurorabarak" class="footnote">9</a></sup>
This is referred to as a <em>collapse of the polynomial hierarchy</em> to the $i$-th level.
Such a collapse is not expected to be the case
and is often likened to $P = NP$ (a collapse to the first level)
though less extreme.</p>
<p>Another notion that we need is post-selection.
We can view this as running a classical or quantum circuit and asserting that the outcomes on
the post-selected wires will all be zero before looking at the output wires.
This is, of course, not a natural assumption since, if you were to run the circuit,
you are in no way guaranteed that the outputs on those wires will be zero.
Nonetheless, it is a useful notion as we will see later.
But first let us define post-selected circuits more formally.</p>
<dl>
<dt><strong>Definition:</strong> Post-selected Complexity Classes<sup id="fnref:iqp1:3"><a href="#fn:iqp1" class="footnote">3</a></sup></dt>
<dd>A language $L$ is in $\text{Post-}A$ for complexity class $A$ (either $BPP$, $BQP$, or $IQP$)
if and only if there is an error tolerance $0 < ε < 1/2$
and a family of circuits ${\mathcal C_w}$ of post-selected $A$ circuits
with output $\mathcal O_w$ and post-selection wires $\mathcal P_w$ such that
<ul>
<li>if $w \in L$ then $\Pr\left[\mathcal O_w = 1 \middle\vert \mathcal P_w = 0\ldots 0 \right] \geq 1 - ε$ and</li>
<li>if $w \not\in L$ then $\Pr\left[\mathcal O_w = 0 \middle\vert \mathcal P_w = 0\ldots 0 \right] \geq 1 - ε$.</li>
</ul>
</dd>
</dl>
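<p>Post-selection can be mimicked classically by rejection sampling: run the circuit, discard every run whose post-selection wires are not all zero, and look at the output wire among the survivors. This Python sketch does so for a small hypothetical joint distribution.</p>

```python
import random

# Hypothetical joint distribution: one output wire O_w, two post-selection
# wires P_w, with joint[(o, p)] = Pr[O_w = o and P_w = p].
joint = {(1, '00'): 0.08, (0, '00'): 0.02,
         (1, '01'): 0.10, (0, '10'): 0.30, (0, '11'): 0.50}

rng = random.Random(1)
outcomes, weights = zip(*joint.items())
samples = rng.choices(outcomes, weights, k=20000)

# rejection step: keep only runs whose post-selection wires read 00
kept = [o for (o, p) in samples if p == '00']
estimate = sum(kept) / len(kept)   # estimates Pr[O_w = 1 | P_w = 00]

print(round(estimate, 2))          # close to 0.08 / 0.10 = 0.8
```

<p>Note how wasteful this is when $\Pr[\mathcal P_w = 0\ldots 0]$ is tiny: the post-selected classes are powerful precisely because they get this conditioning for free.</p>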
<p>It is known that $\text{Post-}BPP \subseteq \Delta_3$.<sup id="fnref:han"><a href="#fn:han" class="footnote">10</a></sup>
And from $P^{P^A} = P^A$ we have</p>
<script type="math/tex; mode=display">P^{\text{Post-}BPP} \subseteq P^{\Delta_3} = P^{P^{NΔ_2}} = P^{NΔ_2} = \Delta_3.</script>
<p>Furthermore, by results of Aaronson and by Toda’s Theorem we get
that post-selected quantum decision problems contain the entire polynomial hierarchy, i.e.
<script type="math/tex; mode=display">PH \subseteq P^{\text{Post-}BQP}.</script>
<p>Bremner, Jozsa and Shepherd<sup id="fnref:iqp1:4"><a href="#fn:iqp1" class="footnote">3</a></sup> showed that $\text{Post-}IQP = \text{Post-}BQP$.
We will show that if $IQP$ circuits could be weakly simulated, this would imply
$\text{Post-}IQP \subseteq \text{Post-}BPP$,
thus resulting in a collapse of the Polynomial Hierarchy to the third level.
Therefore, it is unlikely that $IQP$ circuits will ever be perfectly simulable by a classical algorithm.</p>
<p><strong>Theorem 2:</strong><sup id="fnref:iqp1:5"><a href="#fn:iqp1" class="footnote">3</a></sup> If the output distributions of families of $IQP$ circuits could be weakly simulated
to within multiplicative error $1\leq c < \sqrt{2}$, then $\text{Post-}IQP \subseteq \text{Post-}BPP$.</p>
<p><em>Proof:</em>
Let $L \in \text{Post-}IQP$ be decided by a post-selected circuit family $\set{C_w}$
where $w \in \Sigma^*$.
We can split the output into post-selection wires $\mathcal P_w$ and output wire $\mathcal O_w$.
From our definition of $\text{Post-}IQP$ we have</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{cases}\Pr\left[\mathcal O_w = 1 \middle\vert \mathcal P_w = 0\ldots 0\right] \geq 1 - ε & \text{if $w\in L$},\\
\Pr\left[\mathcal O_w = 1 \middle\vert \mathcal P_w = 0\ldots 0\right] \leq ε & \text{if $w\not\in L$}.
\end{cases} %]]></script>
<p>for some $0 < ε < 1/2$.
Now let $\mathcal Y_w$ be all $m$ output wires of $\mathcal C_w$.
We assumed that there exists a classical randomized weak simulator of $\mathcal C_w$,
called $\widetilde{\mathcal C}_w$,
with associated output wires $\widetilde{\mathcal Y}_w$ such that</p>
<script type="math/tex; mode=display">\frac{1}{c} \Pr\left[\mathcal Y_w = y_1\ldots y_m\right]
\leq \Pr\left[\widetilde{\mathcal Y}_w = y_1\ldots y_m\right]
\leq c \Pr\left[\mathcal Y_w = y_1 \ldots y_m\right].</script>
<p>This also holds for any subset of the registers of $\widetilde{\mathcal Y}_w$,
such as the output wire $\widetilde{\mathcal O}_w$ and post-selection wires $\widetilde{\mathcal P}_w$.
Now we have for $x \in \set{0,1}$</p>
<script type="math/tex; mode=display">\Pr\left[\widetilde{\mathcal O}_w = x \middle\vert \widetilde{\mathcal P}_w = 0 \ldots 0\right]
= \frac{\Pr\left[\widetilde{\mathcal O}_w = x \land \widetilde{\mathcal P}_w = 0 \ldots 0\right]}{\Pr\left[\widetilde{\mathcal P}_w = 0 \ldots 0\right]}\\
\leq c^2 \Pr\left[\mathcal O_w = x \middle\vert \mathcal P_w = 0\ldots 0\right],</script>
<p>and a similar calculation shows</p>
<script type="math/tex; mode=display">\Pr\left[\widetilde{\mathcal O}_w = x \middle\vert \widetilde{\mathcal P}_w = 0 \ldots 0\right] \geq \frac{1}{c^2} \Pr\left[\mathcal O_w = x \middle\vert \mathcal P_w = 0\ldots 0\right].</script>
<p>We combine these two results and fill in $x=1$, together with the first equation in the proof, to get</p>
<script type="math/tex; mode=display">\begin{cases}
w\in L: \Pr\left[\widetilde{\mathcal O}_w = 1 \middle\vert \widetilde{\mathcal P}_w = 0 \ldots 0\right] \geq \frac{1}{c^2} \Pr\left[\mathcal O_w = 1 \middle\vert \mathcal P_w = 0\ldots 0\right] \geq \frac{1}{c^2}\left(1-ε\right),\\
w\not \in L: \Pr\left[\widetilde{\mathcal O}_w = 1 \middle\vert \widetilde{\mathcal P}_w = 0 \ldots 0\right] \leq {c^2} \Pr\left[\mathcal O_w = 1 \middle\vert \mathcal P_w = 0\ldots 0\right] \leq c^2 ε.\\
\end{cases}</script>
<p>We just need to adjust $c$ to make sure that $L$ can be decided in $\text{Post-}BPP$:
the simulator must decide correctly more often than not,
and there needs to be a constant-sized gap between the $w\in L$ and $w\not \in L$ decisions.
So we get $\frac{1}{c^2} (1-ε) > 1/2$ for $w \in L$, leading to $c^2/2 < 1-ε$.
Since $0 < ε < 1/2$, we have that $1 \leq c < \sqrt{2}$ meets these constraints
and is sufficient to show that $L \in \text{Post-}BPP$.$\square$</p>
<p>The main result follows directly from the previous Theorem and facts stated directly prior to it.</p>
<p><strong>Corollary 3:</strong><sup id="fnref:iqp1:6"><a href="#fn:iqp1" class="footnote">3</a></sup>
If there is a weak simulator of families of $IQP$ circuits to within
multiplicative error $1 \leq c < \sqrt{2}$ then
the Polynomial Hierarchy would collapse to the third level.</p>
<p><em>Proof</em>:
We have</p>
<script type="math/tex; mode=display">PH \subseteq P^{\text{Post-}BQP} = P^{\text{Post-}IQP} \subseteq P^{\text{Post-}BPP} \subseteq \Delta_3. \square</script>
<h2 id="conclusion">Conclusion</h2>
<p>We have shown that even for such limited quantum circuits as $IQP$ circuits,
it is unlikely that they could be weakly simulated classically,
since otherwise the Polynomial Hierarchy
would collapse to the third level.
And we also introduced the notion of ε-simulation and poly-boxes to more precisely capture
the notion of classically simulating quantum circuits.</p>
<p>There are follow-up results that show that sampling from $IQP$ circuits
is hard even within an additive error ($\ell_1$ norm),
assuming an average-case hardness conjecture<sup id="fnref:iqpadditive"><a href="#fn:iqpadditive" class="footnote">11</a></sup>.
In this blog post we only looked at sampling within a multiplicative distance.
Furthermore, later, the same authors show that noise can make it easy to simulate $IQP$
circuits classically<sup id="fnref:iqpnoise:1"><a href="#fn:iqpnoise" class="footnote">8</a></sup>.
But at the same time they introduce new notions of fault-tolerance to correct for this.
It is clear that researchers are still looking for new ways to precisely define
what it means to have a <em>quantum advantage</em>.</p>
<p><em>Thanks to Andrew Guo and Abhinav Deshpande for their help in writing this post.</em></p>
<h2 id="references--notes">References / Notes</h2>
<div class="footnotes">
<ol>
<li id="fn:supremacydiscussion">
<p>See e.g. one of the most heated <a href="https://scirate.com/arxiv/1705.06768">discussions</a> I’ve seen on Scirate, which also touches on the Latin origin of the term <em>ancilla</em> (“housemaid”, colloquially: helper qubit). While almost certainly an internet troll, <em>ancilla the supremacist</em> has become somewhat of a joke in my environment so I guess it has served its purpose. <a href="#fnref:supremacydiscussion" class="reversefootnote">↩</a></p>
</li>
<li id="fn:boson1">
<p>Aaronson, Scott, and Alex Arkhipov. “The computational complexity of linear optics.” Proceedings of the forty-third annual ACM symposium on Theory of computing. ACM, 2011. <a href="https://doi.org/10.1145/1993636.1993682">doi:10.1145/1993636.1993682</a> <a href="#fnref:boson1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:iqp1">
<p>Bremner, Michael J., Richard Jozsa, and Dan J. Shepherd. “Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy.” Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. The Royal Society, 2010. <a href="https://doi.org/10.1098/rspa.2010.0301">doi:10.1098/rspa.2010.0301</a> <a href="#fnref:iqp1" class="reversefootnote">↩</a> <a href="#fnref:iqp1:1" class="reversefootnote">↩<sup>2</sup></a> <a href="#fnref:iqp1:2" class="reversefootnote">↩<sup>3</sup></a> <a href="#fnref:iqp1:3" class="reversefootnote">↩<sup>4</sup></a> <a href="#fnref:iqp1:4" class="reversefootnote">↩<sup>5</sup></a> <a href="#fnref:iqp1:5" class="reversefootnote">↩<sup>6</sup></a> <a href="#fnref:iqp1:6" class="reversefootnote">↩<sup>7</sup></a></p>
</li>
<li id="fn:hakop1">
<p>Pashayan, Hakop, Stephen D. Bartlett, and David Gross. “From estimation of quantum probabilities to simulation of quantum circuits.” <a href="https://arxiv.org/abs/1712.02806">arXiv:1712.02806 [quant-ph]</a> (2017). <a href="#fnref:hakop1" class="reversefootnote">↩</a> <a href="#fnref:hakop1:1" class="reversefootnote">↩<sup>2</sup></a> <a href="#fnref:hakop1:2" class="reversefootnote">↩<sup>3</sup></a> <a href="#fnref:hakop1:3" class="reversefootnote">↩<sup>4</sup></a></p>
</li>
<li id="fn:hakopscenario">
<p>Pashayan et al.<sup id="fnref:hakop1:4"><a href="#fn:hakop1" class="footnote">4</a></sup> describe a specific hypothesis testing scenario for which they show this two-way implication. <a href="#fnref:hakopscenario" class="reversefootnote">↩</a></p>
</li>
<li id="fn:parityandxor">
<p>$\text{Parity}(Y) = 1$ iff the number of ones in $Y$ is odd. $\oplus$ is the exclusive or. <a href="#fnref:parityandxor" class="reversefootnote">↩</a></p>
</li>
<li id="fn:schwarz2013">
<p>Schwarz, Martin, and Maarten Van den Nest. “Simulating quantum circuits with sparse output distributions.” <a href="https://arxiv.org/abs/1310.6749">arXiv:1310.6749 [quant-ph]</a> (2013). <a href="#fnref:schwarz2013" class="reversefootnote">↩</a></p>
</li>
<li id="fn:iqpnoise">
<p>Bremner, Michael J., Ashley Montanaro, and Dan J. Shepherd. “Achieving quantum supremacy with sparse and noisy commuting quantum computations.” Quantum 1 (2017): 8. <a href="https://doi.org/10.22331/q-2017-04-25-8">doi:10.22331/q-2017-04-25-8</a> <a href="#fnref:iqpnoise" class="reversefootnote">↩</a> <a href="#fnref:iqpnoise:1" class="reversefootnote">↩<sup>2</sup></a></p>
</li>
<li id="fn:aurorabarak">
<p>Arora, Sanjeev, and Boaz Barak. Computational complexity: a modern approach. Cambridge University Press, 2009. <a href="#fnref:aurorabarak" class="reversefootnote">↩</a></p>
</li>
<li id="fn:han">
<p>Han, Yenjo, Lane A. Hemaspaandra, and Thomas Thierauf. “Threshold computation and cryptographic security.” SIAM Journal on Computing 26.1 (1997): 59-78. <a href="https://doi.org/10.1137/S0097539792240467">doi:10.1137/S0097539792240467</a> <a href="#fnref:han" class="reversefootnote">↩</a></p>
</li>
<li id="fn:iqpadditive">
<p>Bremner, Michael J., Ashley Montanaro, and Dan J. Shepherd. “Average-case complexity versus approximate simulation of commuting quantum computations.” Physical Review Letters 117.8 (2016): 080501. <a href="https://doi.org/10.1103/physrevlett.117.080501">doi:10.1103/physrevlett.117.080501</a> <a href="#fnref:iqpadditive" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
<p><em>Published Fri, 01 Jun 2018.</em></p>
<h1 id="simulating-quantum-chemistry">Simulating quantum chemistry</h1>
<p><em>By Andrew Guo</em></p>
<p>Day Two of the 2017 IBM ThinkQ Conference featured <a href="https://www.research.ibm.com/ibm-q/thinkq/agenda.html">talks</a> by an impressive array of experts in both the fields of quantum Hamiltonian simulation (Andrew Childs, Robin Kothari, Ryan Babbush, Nathan Wiebe) and quantum chemistry (Ryan Babbush, Garnet Chan). The following post aims to summarize the intersection of those fields and explain how they provide a “killer application” for a quantum computer.</p>
<h2 id="introduction">Introduction</h2>
<p>Much ink has been spilled regarding the quest to find flashy applications for the 50-qubit quantum computers (QCs) that will be available over the next few years. The goal in researching these problems has always been to find a task on which classical computers struggle to make headway, but that a quantum computer could do in its sleep<sup id="fnref:scott"><a href="#fn:scott" class="footnote">1</a></sup>. Such a vaunted demonstration of a “quantum computational advantage” would cement the public image of quantum computing as “<a href="https://www.nytimes.com/2018/01/16/opinion/while-you-were-sleeping.html">Star Wars technology</a>,” and certainly merits further investigation.</p>
<p>But while some researchers are searching for problems for which near-term QCs would have a computational advantage, others are focusing their attention on a longer-term goal—one that’s been around since the very beginning of the field—the simulation of quantum chemistry and quantum materials. In fact, the idea of quantum simulation can be traced all the way back to the late (and great) physicist, Richard P. Feynman.</p>
<h2 id="simulation-of-quantum-mechanics">Simulation of quantum mechanics</h2>
<p>Feynman was one of the #GreatMinds of 20th-century physics. He shared the Nobel prize in 1965 for co-discovering quantum electrodynamics, gave his name to a half-dozen fundamental concepts in particle physics, and authored a trilogy of famous physics textbooks (as well as a pair of rambunctious, semi-autobiographical works). More relevantly to this blog post, he also anticipated the power of quantum computing in a 1982 talk titled “Simulating Physics with Computers.”<sup id="fnref:feynman"><a href="#fn:feynman" class="footnote">2</a></sup></p>
<p>In classical physics, one can simulate the dynamics of systems by solving the equations of motion. These systems of differential equations can be solved numerically for all but the most complicated of systems, such as turbulent fluids. While quantum computers won’t necessarily be able to help with intractable problems in fluid dynamics, they will be useful in situations where the <em>laws of classical mechanics</em> themselves break down. In Feynman’s own words:</p>
<blockquote>
<p>
Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy.
</p>
<footer><cite title="Richard P. Feynman">Richard P. Feynman</cite></footer>
</blockquote>
<p>Feynman foresaw that the challenge of simulating quantum mechanics would stump the world’s most powerful computers—including modern-day supercomputers. Thirty years after his death<sup id="fnref:feynman2"><a href="#fn:feynman2" class="footnote">3</a></sup>, Feynman’s words still feature in the motivating slides for many talks on quantum simulation<sup id="fnref:1"><a href="#fn:1" class="footnote">4</a></sup>.</p>
<p>So why is simulating quantum chemistry so hard for classical computers? Well, it goes back to the curse of dimensionality: the more particles you have in a system, the more parameters you need to describe its quantum state. In the worst case, the number of parameters can grow exponentially in the number of electrons! In practice, this makes it impractical to simulate molecular systems with more than a few atoms. And while such exacting quantum mechanical descriptions aren’t typically required to model most reactions<sup id="fnref:2"><a href="#fn:2" class="footnote">5</a></sup>, there are some important exceptions<sup id="fnref:except"><a href="#fn:except" class="footnote">6</a></sup>.</p>
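<p>The curse of dimensionality is easy to put in numbers: storing the full state vector of $n$ qubits (or, roughly, $n$ spin orbitals) requires $2^n$ complex amplitudes. A quick back-of-the-envelope sketch:</p>

```python
def state_vector_bytes(n, bytes_per_amplitude=16):
    # complex128: 16 bytes per amplitude, 2**n amplitudes for n qubits
    return 2 ** n * bytes_per_amplitude

for n in (30, 40, 50):
    gib = state_vector_bytes(n) / 2 ** 30
    print(f"n = {n}: {gib:,.2f} GiB")
```

<p>Around $n = 50$ the state vector alone needs roughly 17 million GiB of memory, far beyond any classical machine, which is why such exacting quantum descriptions are reserved for small, strongly-correlated cores of a molecule.</p>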
<p>One example of a chemical phenomenon that skews more quantum than classical is the mechanism of catalysts—molecular agents used to speed up chemical reactions without themselves being consumed. The heart of a catalyst’s reactive power is in its active site, which in many catalysts (enzymes, for instance) consists of a geometric arrangement of molecules centered around one (or more) transition metal atoms. The electrons of these metal atoms interact strongly with other electrons, and play an important role in stabilizing the transition states of a reaction. It is exactly these strongly-interacting systems that cannot be simulated with classical methods!</p>
<p>One such catalyst that researchers have focused on recently is nitrogenase, the enzyme responsible for nitrogen-fixation in bacteria<sup id="fnref:svore"><a href="#fn:svore" class="footnote">7</a></sup>. While nitrogen is ubiquitous in nature—comprising 78% by volume of the air we breathe and featuring in each of the twenty amino acids that make up our proteins—most of the nitrogen on earth takes the form of inert dinitrogen gas. The nitrogenase enzyme can chemically activate nitrogen at standard temperature and pressure, thereby <em>fixing</em> it.</p>
<p>In industry, an analogous reaction known as the Haber-Bosch process is used to fix nitrogen, specifically in the form of ammonia. The ammonia can then be used to make fertilizer (or nitro-based explosives—a fact which likely helped extend the duration of WWI by a few years). The reaction consumes one nitrogen molecule and three hydrogen molecules to form two ammonia molecules:
\begin{align}
\text{N}_2 + 3 \text{H}_2 \rightarrow 2 \text{NH}_3 \quad (\Delta H^\circ = -45.8 \text{ kJ/mol NH}_3)
\end{align}
Although this reaction is exothermic, the process requires temperatures of 400 $^\circ$C and pressures of 200 atm to proceed. So even though nitrogen fixation is thermodynamically favorable at room temperature, it proceeds slowly due to the immense activation energy required to break the triple bond. The extreme temperatures and pressures required to activate the metallic catalyst use up 1-2% of the world’s annual energy budget—an order of magnitude more than is used to mine Bitcoin! The development of a better nitrogen-fixation catalyst would make a sizable impact on reducing the world’s energy consumption.</p>
<p>In order to find a better catalyst, we need to first understand the catalytic reaction mechanism, including all of the reaction intermediates and transition structures that may occur between the reactant and product stages. Phrased in physical terms, we must map out the potential energy surface that the system explores throughout the reaction. In this model, the transition structures and reaction intermediates sit at local maxima and minima of the surface, respectively. Here, quantum simulation can come in handy. By determining the ground-state energies of the chemical structures, quantum computers can help determine the optimal pathway through the reaction space.</p>
<h2 id="hybrid-quantum-classical-algorithms">Hybrid quantum-classical algorithms</h2>
<p>How do we go about mapping the potential energy surface of a chemical reaction? According to Ryan Babbush, we would need to map the potential energy surfaces to the chemical accuracy of 1 kcal/mol in order to get reaction rates on the correct order of magnitude<sup id="fnref:arrhenius"><a href="#fn:arrhenius" class="footnote">8</a></sup>. For comparison, the activation energy for the uncatalyzed nitrogen-fixation reaction is a little less than 100 kcal/mol<sup id="fnref:3"><a href="#fn:3" class="footnote">9</a></sup>. So if we could find the ground state energies for all of the transition states and reaction intermediates with an error of at most one part in a hundred, we could make nontrivial headway in finding new reaction pathways!</p>
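<p>To see why chemical accuracy is the right target, plug an energy error into the Arrhenius equation from the footnote; this small sketch (illustrative numbers of my own, not from the talk) computes the resulting multiplicative error in the rate constant:</p>

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol K)

def rate_ratio(energy_error_kcal: float, temp_k: float) -> float:
    """Factor by which an error in the activation energy skews the
    Arrhenius rate constant k = A * exp(-E / (R * T))."""
    return math.exp(energy_error_kcal / (R * temp_k))

# An error of 1 kcal/mol at room temperature already throws the
# predicted rate off by roughly a factor of five.
print(rate_ratio(1.0, 298.0))
```

<p>Push the error much past 1 kcal/mol and the predicted rate is off by an order of magnitude or more, so looser estimates tell us little about which pathway dominates.</p>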
<p>Unfortunately, finding ground state energies is a hard task in general—even for a quantum computer<sup id="fnref:4"><a href="#fn:4" class="footnote">10</a></sup>! And it could be a decade or more before QCs will be powerful and robust enough to improve over current chemical simulation methods. But there is some hope for the near-term: researchers have developed some heuristic algorithms that can perform approximate quantum simulation using the first generation of quantum devices<sup id="fnref:vqe"><a href="#fn:vqe" class="footnote">11</a></sup>. While these algorithms do not guarantee any asymptotic quantum speed-up, they might perform suitably well in practice.</p>
<p>A simple way to find the ground state energy of a Hamiltonian $H$ is by preparing a system in its ground state $\ket{\psi}$ and measuring the expectation value of its energy: $\bra{\psi}H\ket{\psi}$. But this exact state preparation is still hard<sup id="fnref:qma"><a href="#fn:qma" class="footnote">12</a></sup>. So the next best thing would be to prepare a trial state that is close in energy to the ground state. Let’s say we have a quantum circuit that takes in a set of input parameters and applies a sequence of gates to prepare a specific quantum state. By measuring the expectation value of the energy of that state, we obtain an upper bound to the system’s true ground state energy. Then, we use classical optimization techniques to minimize the energy as a function of the input parameters. After feeding the optimized parameters back into the quantum circuit, we repeat the process all over again. If we’re smart about this, then we’ll end up getting close to the ground-state energy in a reasonable number of iterations.</p>
<p>This proposal for finding ground state energies goes by the name of a variational quantum eigensolver (VQE), and is an example of a so-called <em>hybrid quantum-classical algorithm</em>. Here, the classical part consists of a souped-up, descent-based optimizer, whereas the quantum part is responsible for the state preparation and measurement. Another way to think about VQE is that the measurement of the state’s energy produces data on which we can train the state preparation protocol. This may sound familiar to those of you with a machine learning background; indeed, Ryan Babbush has said that the idea of VQE and other hybrid algorithms is to “train shallow quantum circuits like a neural network.” Since these low-depth quantum circuits do not require error-correction, these algorithms are prime candidates for harnessing the power of near-term QCs.</p>
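<p>As a toy illustration of this loop, consider a single qubit with Hamiltonian $H = Z$ and the one-parameter ansatz $\ket{\psi(\theta)} = \cos\theta \ket{0} + \sin\theta \ket{1}$, whose energy is $\cos 2\theta$. The sketch below is my own simplification: a noiseless classical simulation stands in for the quantum processor, and finite-difference gradient descent stands in for the optimizer.</p>

```python
import math
import random

def energy(theta):
    """Expectation value <psi|H|psi> for H = Z and the ansatz
    |psi> = cos(theta)|0> + sin(theta)|1>, which equals cos(2*theta).
    On real hardware this would be estimated from repeated measurements;
    here we compute it exactly."""
    return math.cos(theta) ** 2 - math.sin(theta) ** 2

def vqe(iterations=200, lr=0.2):
    """Minimal hybrid loop: the 'quantum' side evaluates energy(theta),
    the classical side updates theta by finite-difference gradient descent."""
    theta = random.uniform(0.1, math.pi - 0.1)  # avoid the unstable maxima
    eps = 1e-4
    for _ in range(iterations):
        grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return energy(theta)

random.seed(0)
print(vqe())  # converges to the true ground-state energy, -1
```

<p>On real hardware, each call to <code class="highlighter-rouge">energy</code> would be an average over many shots, and the classical optimizer would have to cope with the resulting statistical noise.</p>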
<h2 id="conclusion">Conclusion</h2>
<p>This wraps up our brief tour of quantum simulation. I hope that this post has elucidated why some people consider simulating chemistry to be one of <em>the</em> premier applications for a quantum computer. And there are numerous other practical implications; in addition to catalysts, QCs could be used in the future to study the reactivity of chemical structures like proteins and crystals, or even probe exotic materials like high-temperature superconductors<sup id="fnref:5"><a href="#fn:5" class="footnote">13</a></sup>.</p>
<p>It’s important, however, not to get ahead of ourselves: quantum computing isn’t the only revolutionary technology capable of transforming quantum chemistry. Artificial intelligence and machine learning in particular are poised to make a significant impact on the field — long before universal QCs are expected to take the stage<sup id="fnref:6"><a href="#fn:6" class="footnote">14</a></sup>. One challenge in the meantime is to find expert collaborators who can help identify materials or molecular systems for which quantum simulation is well suited, but who also have the patience to play the long game. As with other forms of “Star Wars technology,” quantum computers will have their day—in a spacetime far, far away.</p>
<p><em>Thanks to Leigh Martin and Stephen Ting for helpful discussions. Special thanks to Andrew Childs for his course on quantum algorithms at the University of Maryland, which planted the original inspiration for this post.</em></p>
<hr />
<div class="footnotes">
<ol>
<li id="fn:scott">
<p>Quote attributed to <a href="https://www.scottaaronson.com/talks/speedup.ppt">Scott Aaronson</a> (download warning: 1.3MB PPT file). <a href="#fnref:scott" class="reversefootnote">↩</a></p>
</li>
<li id="fn:feynman">
<p>Richard P. Feynman, <em>Simulating physics with computers</em>, International Journal of Theoretical Physics <strong>21</strong> (1982), no. 6-7, 467–488. <a href="https://doi.org/10.1007/BF02650179">doi:10.1007/BF02650179</a> <a href="#fnref:feynman" class="reversefootnote">↩</a></p>
</li>
<li id="fn:feynman2">
<p>Feynman passed away on February 15, 1988. Fun fact: the 100th anniversary of his birthday will take place on May 11, 2018. <a href="#fnref:feynman2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:1">
<p>See, for example, the talks by <a href="https://www.youtube.com/watch?v=VSTzptzsNp0">Andrew Childs</a> and <a href="https://www.youtube.com/watch?v=OrRLB-fGS_U">Garnet Chan</a>. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>To simulate the dynamics of biochemical systems like proteins, it usually suffices to model them using classical molecular mechanics—already a computationally-intensive task in and of itself. To compute molecular orbitals in solid-state systems and quantum chemistry, the mean-field method of density functional theory (DFT) has had many successes. Unsurprisingly, DFT still struggles to model systems of strongly-correlated electrons efficiently. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:except">
<p>As is the case for most “rules” in chemistry. <a href="#fnref:except" class="reversefootnote">↩</a></p>
</li>
<li id="fn:svore">
<p>Reiher, M., Wiebe, N., Svore, K. M., Wecker, D., & Troyer, M., <em>Elucidating Reaction Mechanisms on Quantum Computers</em>. PNAS 2017 July, 114 (29) 7555-7560. <a href="https://doi.org/10.1073/pnas.1619152114">doi:10.1073/pnas.1619152114</a> <a href="#fnref:svore" class="reversefootnote">↩</a></p>
</li>
<li id="fn:arrhenius">
<p>Using the Arrhenius equation: $k = Ae^{-\Delta E/RT}$, where $\Delta E$ is the activation energy of the reaction or one of its intermediate steps. <a href="#fnref:arrhenius" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>Jayant M. Modak, <em>Haber Process for Ammonia Synthesis,</em> General Article, Volume 7, Issue 9, September 2002 pp 69-77. <a href="http://www.ias.ac.in/article/fulltext/reso/007/09/0069-0077">Fulltext</a> <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>More specifically, the $k$-local Hamiltonian problem is QMA-complete for $k \ge 2$. See the following paper for details: Kempe, J., Kitaev, A., & Regev, O., <em>The complexity of the local Hamiltonian problem</em>, SIAM Journal on Computing, 35 (5): 1070–1097. (2006) <a href="https://doi.org/10.1137/S0097539704445226">doi:10.1137/S0097539704445226</a> <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:vqe">
<p>Jarrod R McClean, et al., <em>The theory of variational hybrid quantum-classical algorithms,</em> New J. Phys. 18 023023, (2016). <a href="https://doi.org/10.1088/1367-2630/18/2/023023">doi:10.1088/1367-2630/18/2/023023</a> <a href="#fnref:vqe" class="reversefootnote">↩</a></p>
</li>
<li id="fn:qma">
<p>QMA-hard, in fact. <a href="#fnref:qma" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>See Garnet Chan’s <a href="https://www.youtube.com/watch?v=OrRLB-fGS_U">talk</a> for more on these applications. <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>For a comprehensive outlook on technological forces poised to “disrupt” the field of chemical simulation, check out this recent paper: Aspuru-Guzik, A., Lindh, R., & Reiher, M., <em>The Matter Simulation (R)evolution,</em> ACS Cent. Sci., Article ASAP, (2018). <a href="https://doi.org/10.1021/acscentsci.7b00550">doi:10.1021/acscentsci.7b00550</a> <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Thu, 15 Feb 2018 00:00:15 +0000
https://www.quasci.com/conference/2018/02/15/thinkqsim-Andrew/

ThinkQ Afterthoughts

<p><em>Editor’s note: At the conclusion of the conference, the Vice President of IBM Q, Dario Gil, convened a panel discussion of quantum computing experts from industry and academia. They discussed the current outlook as well as the future of quantum computation. The three of us (Eddie Schoute, Andrew Guo, and Aniruddha Bapat) had a conversation about the themes brought up during the discussion.</em></p>
<p><img src="https://www.quasci.com/img/thinkq-panel.jpg" alt="The ThinkQ panel" />
<em>The ThinkQ 2017 Panel consisted of (left-to-right): Dave Bacon (Google), Andrew Childs (U. of Maryland), Richard Jozsa (U. of Cambridge), Dario Gil (IBM Q), Eleanor Rieffel (NASA), Matthias Steffen (IBM), and Jungsang Kim (Duke U., IonQ).</em></p>
<p><strong>Eddie:</strong> Andrew and Ani, I have to ask you what you thought about the ThinkQ conference.</p>
<p><strong>Andrew:</strong> It was an excellent conference, and IBM was a gracious host. Personally, I thought the best sound bites came at the end, during the panel discussion with experts from both industry and academia. Dario Gil, the senior executive in charge of IBM Q, certainly did a nice job moderating this “ThinkQ&A.”</p>
<p><strong>Ani:</strong> I agree, the conference was very well-organized (kudos to IBM), and I felt that the panel discussion effectively brought together many of the themes that had informed the talks, posters, and even the conversations over coffee or lunch. One theme that came up a lot during the discussion was the joint role of industry and academia in quantum computing, and the selection of panelists seemed to reflect that reality.</p>
<p><strong>Eddie:</strong> Academia is hesitant to accept industry in the development of the field. A decent amount of time was spent on this topic at the panel discussion. I can understand some of the concerns: Academic freedom must be maintained. As researchers we should be able to work on topics that do not seem to have direct applications. There were multiple panelists expressing concern over their ability to stay independent of industry interests; I feel that the independent process should be defended by academia.</p>
<p><strong>Andrew:</strong> I think that the freedom of academics to pursue knowledge without direct application is not particularly at risk at the moment. Currently, governments still provide the bulk of the funding for research in quantum information. And given the field’s relevance to national cybersecurity, I do not foresee that dynamic shifting anytime soon. What could be problematic is if industrial partners were to use their private capital to fund individual academic research projects directly; that could lead to conflicts of interest, which would impact the credibility of the research. So far, it seems companies are content with funding in-house research teams and sponsoring academic conferences—not a bad deal for us!</p>
<p>Eleanor Rieffel brought up another, more subtle issue: whether industrial involvement might shift the culture of the field from its current atmosphere—one of open academic collaboration—to a more restrictive and competitive environment. Such pressures may stem from market forces, or the overall “hype” from media publicity. If researchers feel compelled to seek patents for their quantum algorithms or chip fabrication techniques, it could stifle innovation and impede the field’s overall progress.</p>
<blockquote>
<p>
This is a very open field, with everybody talking to everybody, and I’ve started to have a little bit of a fear of a change in culture because of the market forces and the hype. I think we all need to push against that.
</p>
<footer><cite title="Eleanor Rieffel">Eleanor Rieffel</cite></footer>
</blockquote>
<p><strong>Ani:</strong> I generally agree that right now, the spirit of long-term academic pursuit and open collaboration in quantum computing (QC) seems to be safe from “market forces,” which—very broadly—incentivize quick results and being possessive about intellectual property. However, as we get closer to having commercial QC devices, companies (and the research groups they support) might be forced to change strategy and look exclusively to the near-term.</p>
<p><strong>Andrew:</strong> Right. And the desire for quick results could certainly lead to a culture of “move fast and break things,” a philosophy to which a number of start-ups tend to subscribe. As a cautionary tale, recall the enormous media hype behind D-Wave Systems in the early 2010s. The Vancouver-based start-up bucked the academic consensus that one must build a “universal” quantum computer to achieve a quantum computational advantage, and instead developed a quantum annealer that can only tackle highly constrained problems—ones for which there’s no guarantee of a quantum speed-up. D-Wave effectively chose quantity over quality by creating devices with thousands of qubits with poor coherence times. Such behavior threatens a “race to the bottom,” where upstart companies would use their marketing heft to capitalize on hype and end up building “quantum” computers that are no more powerful than classical ones.</p>
<p><strong>Eddie:</strong> It is probably not fair to have such an antagonistic view of “industry”; I do seem to recall some very successful research groups at various companies, e.g. Microsoft, Google and IBM, so I do think there is potential for a synergistic existence. Even now, I do not think that other fields with a significant industry influence such as Computer Science are doing so poorly. Industry interest also drives general interest, resulting in more potential candidates for research—another goal both Jungsang Kim and Eleanor Rieffel saw worthy of mention in the panel discussion.</p>
<p><strong>Ani:</strong> Good or bad, change is coming, and many in the panel addressed research strategies in the “post-industrial” scenario. For instance, Jungsang Kim said that we should expect a kind of tipping point, much like the one we saw in Silicon Valley. He basically argued that in the next (say) five years, the stakes will rise dramatically, and our approach (as a community) to the research will determine our success in producing good QC devices.</p>
<p>There were a few different aspects to this argument: first, that we should maintain a healthy balance of skeptics and “believers”—i.e. people who are motivated and optimistic enough about their own ideas to see them through to the bitter end. Moreover, we need a kind of brute-force, distributed approach, where we attract many young and smart people into the field and make research tools (such as IBM’s quantum computer) available to the community at large. This way, we expand the scope of explored possibilities and increase the probability of finding promising applications early on. Indeed, the spirit of open collaboration is a big part of this.</p>
<p><strong>Eddie:</strong> IBM has had a cloud QC platform for a long time now, and so has the University of Bristol. I would predict that most groups with a <em>reliable</em> QC will make their devices available to the general public (possibly at cost). There is currently just not enough demand for quantum devices for parties to purchase and maintain hardware themselves. Currently known applications are mostly aimed at large-scale devices, but most would admit that they are scrambling to find “killer” applications for a small device.</p>
<p>Barring some significant breakthrough, it will take a while for the general public to have a reason to interface with quantum devices. I am also fairly skeptical of Dave Bacon’s position that opening up quantum computers to the general public would lead to a flood of new ideas, since much of what is quantum is so unintuitive.</p>
<p><strong>Andrew:</strong> Certainly in the near-term, giving the public access to the current generation of devices will have little scientific impact. But even a trickle of interest today could have a big impact downstream, especially in bringing forward a new generation of quantum researchers. As Richard Feynman once said, “We always have had … a great deal of difficulty in understanding the world view that quantum mechanics represents.” While older generations have struggled to “understand” quantum mechanics, today’s physicists are basically steeped in it. And the “modern” approach to teaching quantum has made it easier for us to wrap our heads around it—at least to the point where we can “shut up and compute.”</p>
<p><img src="https://www.quasci.com/img/smbc-earlier-papers.gif" alt="SMBC Earlier Papers" />
<em>“Kids these days are publishing their first QC papers younger and younger,”
courtesy of Scott Aaronson and (<a href="https://www.smbc-comics.com/comic/the-talk-3">reproduced</a> with permission of) Zach Weinersmith</em></p>
<p>Think of what could happen when programming a quantum computer comes as naturally as programming a classical computer does for many people today. Twenty-year-old Bill Gates co-founded Microsoft; nineteen-year-old Mark Zuckerberg created Facebook. Neither could’ve made his impact without early and consistent access to computers. Fast forwarding a few decades into the future, the “killer app” for a universal quantum computer might be devised by someone who hasn’t even been born yet!</p>
<p><strong>Eddie:</strong> Maybe you’re right—I certainly hope so! Either way, making quantum computation more accessible is a good thing. I also personally would like to involve more Computer Scientists in the field, but the right tools aren’t there yet. At least academia & industry are working hard to make the field more accessible and easier to interact with, which does give me hope for the future.</p>
<p><strong>Ani:</strong> I agree that quantum “toys” and early devices will—in addition to guiding the development of commercial applications—become educational tools for a future generation of quantum computer scientists. I’m hopeful that people with academic and industry background and expertise will figure out their respective roles in this, and work together towards what one could call the technological integration of quantum computing into society. That’s certainly an exciting prospect!</p>
<p><em>Watch the video recording of the panel <a href="https://www.youtube.com/watch?v=ZLLBApPuz8c">here</a></em></p>
Sat, 27 Jan 2018 12:01:00 +0000
https://www.quasci.com/conference/high-level/2018/01/27/thinkq-panel/

QCrypt 2016

<p>Last year I was fortunate enough to attend the <a href="http://2016.qcrypt.net/">QCrypt 2016</a> conference, held in Washington, DC.
It was my first time attending this conference, and I will highlight some of the presentations that
were of particular interest to me.
It was also a great opportunity to meet with friends and colleagues after moving overseas
from Europe to the United States.</p>
<p><em><img src="https://www.quasci.com/img/conference-room.jpg" alt="The conference room" /></em>
<em>The conference was hosted at this beautiful lecture hall at the Carnegie Institute for Science.</em></p>
<h2 id="quantum-key-distribution">Quantum Key Distribution</h2>
<p>A major topic at QCrypt this year was, of course, Quantum Key Distribution (QKD).
This topic of research stems from the famous <a href="https://en.wikipedia.org/wiki/BB84">BB84 protocol</a>, published in a 1984 paper by Bennett and Brassard<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.
It proposes a protocol that uses quantum communication to distribute a secret key between two parties.
There are many classical (i.e. non-quantum) protocols such as <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">RSA</a>
which would make this unnecessary,
were it not for the fact that the BB84 protocol is provably secure.
This means even an adversary with <em>unlimited</em> (quantum) resources cannot figure out what the key is;
the same cannot be said about RSA!
As a side note, even a quantum computer with merely finite (polynomial) resources can break RSA<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>.</p>
<p>Unfortunately for us theorists, practice and theory are not the same.
As evidenced by the many (poster) presentations, implementing QKD is far from a done deal.
E.g., see presentations on <a href="http://2016.qcrypt.net/invited-talks#jennewein">free-space QKD</a>,
or the very amusing <a href="http://2016.qcrypt.net/contributed-talks/#sajeed">laser damage creates backdoors in quantum cryptography</a>,
or <a href="http://2016.qcrypt.net/contributed-talks/#dixon">field trials of high-speed QKD</a>;
as well as industry exhibits and a lot of <a href="http://2016.qcrypt.net/posters/">posters</a>.</p>
<h2 id="device-independent-cryptography">Device Independent Cryptography</h2>
<p>Another topic that may come as a surprise (it certainly was for me) is
<a href="https://en.wikipedia.org/wiki/Device-independent_quantum_cryptography">device independent quantum cryptography</a>.
It is quite natural to assume that you have complete control of your device
and that an adversary would just attack your communications;
it is less so to remove this assumption.
Let’s say you want to generate a secure random number.
You could, for example, read a pseudo-random byte from <code class="highlighter-rouge">/dev/urandom</code> on Linux:</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>od <span class="nt">-vAn</span> <span class="nt">-N1</span> <span class="nt">-tu1</span> < /dev/urandom
59
<span class="nv">$ </span>od <span class="nt">-vAn</span> <span class="nt">-N1</span> <span class="nt">-tu1</span> < /dev/urandom
148
</code></pre></div></div>
<p>But how would you <em>guarantee</em> that a bit is actually secret,
i.e. that no one else knows any information about it?
With randomness generation (<a href="http://2016.qcrypt.net/tutorials-2/#colbeck">tutorial</a>)
it is possible to certify that two black boxes capable of a protocol similar to
QKD have indeed generated random bits.</p>
<p>This remarkable result is made possible by an inherently quantum property, the monogamy of entanglement:
Perfect entanglement between two qubits implies that there is no third qubit entangled with the first two.
Given just</p>
<ol>
<li>two black boxes that supposedly perform what is known as a Bell test (similar to QKD), and</li>
<li>monogamy of entanglement</li>
</ol>
<p>we can certify randomness.</p>
<p>By performing a Bell test we can certify that the two black boxes indeed share (close to) perfect entanglement
which precludes any third party from listening in (by monogamy).
Now we just have to randomly interleave the Bell tests with measurements that produce random bits,
and the black boxes won’t even know if they are being tested or actually generating randomness.</p>
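<p>For intuition, the classical ceiling of such a Bell test can be checked by brute force. The sketch below (a standard textbook calculation, not code from the tutorial) enumerates every deterministic classical strategy for the CHSH game, where the boxes receive input bits $x$ and $y$ and win when their output bits satisfy $a \oplus b = x \wedge y$:</p>

```python
from itertools import product
import math

def chsh_classical_max():
    """Best winning probability of the CHSH game for deterministic
    classical boxes (shared randomness cannot do better than the
    best deterministic strategy)."""
    best = 0.0
    # a0, a1: Alice's answers to inputs 0 and 1; likewise b0, b1 for Bob.
    for a0, a1, b0, b1 in product((0, 1), repeat=4):
        wins = sum(
            ((a0, a1)[x] ^ (b0, b1)[y]) == (x & y)
            for x, y in product((0, 1), repeat=2)
        )
        best = max(best, wins / 4)
    return best

print(chsh_classical_max())        # classical boxes cap out at 0.75
print(math.cos(math.pi / 8) ** 2)  # entangled boxes reach ~0.854
```

<p>Boxes that win noticeably more than 75% of the time therefore cannot be behaving classically—which is exactly the leverage the certification protocol uses.</p>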
<h2 id="zero-knowledge-proofs">Zero-Knowledge Proofs</h2>
<p>Sometimes, and especially in cryptography, we want to show that we know some secret recipe
to cook a delicious dinner — or rather an algorithm for solving some problem.
As part of <a href="https://youtu.be/olPJLHzjwbU">her presentation</a>
Anne Broadbent gives an explanation a five-year-old can understand.</p>
<p>An analogy that I’ve come across likens zero knowledge proofs to being able to distinguish
Coca-Cola from Pepsi.
Let’s assume that we think it’s really hard to distinguish between the two colas.
Someone comes along and they claim to know how to distinguish the two and will prove it to us!
We just have to choose one of the colas at random, keep the choice secret,
and give it to them (a <a href="https://en.wikipedia.org/wiki/Blinded_experiment#Single-blind_trials">single-blind trial</a>).
They will then tell us which one it was.</p>
<p>Either they were lying and simply guessed, or they really do know the difference.
To see through any such deceiver, we repeat the experiment $n$ times.
The probability that a deceiver guesses correctly in all $n$ experiments is $2^{-n}$.
So you can <em>very</em> quickly figure out who is leading you by the nose and who is a connoisseur (of coke).
Note that you have not figured out the secret recipe to distinguishing the two cokes
which is the point of zero-knowledge.</p>
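<p>The arithmetic of the repeated trial is simple enough to spell out:</p>

```python
def cheat_probability(n: int) -> float:
    """Chance that someone who merely guesses passes all n rounds
    of the single-blind cola test."""
    return 0.5 ** n

for n in (1, 10, 20):
    print(f"{n} rounds: pass probability {cheat_probability(n):.2e}")
```

<p>After twenty rounds a guesser survives less than one time in a million, so only a handful of repetitions are needed to be convinced.</p>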
<p>Perhaps surprisingly, it is known that there exists a zero-knowledge proof for co-NP problems<sup id="fnref:classicalzk"><a href="#fn:classicalzk" class="footnote">3</a></sup>.
Under sufficient assumptions this can be extended to NP and even all of IP<sup id="fnref:ipzk"><a href="#fn:ipzk" class="footnote">4</a></sup>.</p>
<h3 id="quantum-zero-knowledge">Quantum Zero-Knowledge</h3>
<p>Now for the quantum results.
Previously it was proved that we can perform <em>quantum rewinding</em>,
an operation that is important in classical zero-knowledge proofs.
With this result Watrous was able to show that classical zero-knowledge protocols
remain secure against quantum attackers<sup id="fnref:watrouszk"><a href="#fn:watrouszk" class="footnote">5</a></sup>.</p>
<p>Now for the results presented at QCrypt 2016 (and <a href="https://stationq.microsoft.com/qip-2017-conference-schedule/#m-psIA-song">QIP 2017</a> which I may get to some time),
they extend the zero-knowledge result from classical NP to its quantum analogue, QMA.
Showing that (again under sufficient assumptions)
there are zero-knowledge proofs for all problems in QMA (Quantum Merlin Arthur).</p>
<p>If you wish to know more about Quantum zero-knowledge proofs and interactive provers,
have a look at Vidick and Watrous’s survey “Quantum Proofs”<sup id="fnref:quantumproofs"><a href="#fn:quantumproofs" class="footnote">6</a></sup>.
I had the pleasure of reading through most of it in a reading group, and it is a great way to get started.
Admittedly, we did have some difficulty with the zero-knowledge chapter (the rest was great, promise!)
so having some background on classical zero-knowledge first may help.
It contains all you need to get started with QMA, Quantum Interaction Proofs (QIP),
Quantum (Statistical) Zero-Knowledge (QSZK), and even Multi-Prover QIP (QMIP).</p>
<h2 id="conclusion">Conclusion</h2>
<p>Many thanks to the organisers for making this great conference happen.
Even though I cannot attend this year, I’m looking forward to <a href="http://2017.qcrypt.net/previous-next-conferences/">QCrypt 2017</a>.
I’m hoping the presentations will again be recorded and made available afterwards.</p>
<p><em><img src="https://www.quasci.com/img/qcrypt-2017-banner.jpg" alt="The conference room" /></em>
<em><a href="http://2017.qcrypt.net/previous-next-conferences/">QCrypt 2017</a> will be hosted at Cambridge, UK.</em></p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>C. H. Bennett and G. Brassard. “Quantum cryptography: Public key distribution and coin tossing”. In Proceedings of IEEE International Conference on Computers, Systems and Signal Processing, volume 175, page 8. New York, 1984. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>See <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor’s Algorithm</a> <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:classicalzk">
<p>Goldreich, Oded, Silvio Micali, and Avi Wigderson. “Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems.” Journal of the ACM (JACM) 38, no. 3 (1991): 690-728. <a href="https://doi.org/10.1145/116825.116852">doi:10.1145/116825.116852</a> <a href="#fnref:classicalzk" class="reversefootnote">↩</a></p>
</li>
<li id="fn:ipzk">
<p>Ben-Or, M. et al., Everything Provable is Provable in Zero-Knowledge. Lecture Notes in Computer Science, pp.37–56. <a href="https://doi.org/10.1007/0-387-34799-2_4">doi:10.1007/0-387-34799-2_4</a> <a href="#fnref:ipzk" class="reversefootnote">↩</a></p>
</li>
<li id="fn:watrouszk">
<p>Watrous, J., 2009. Zero-Knowledge against Quantum Attacks. SIAM Journal on Computing, 39(1), pp.25–58. <a href="https://doi.org/10.1137/060670997">doi:10.1137/060670997</a> <a href="#fnref:watrouszk" class="reversefootnote">↩</a></p>
</li>
<li id="fn:quantumproofs">
<p>Vidick, T. & Watrous, J., 2016. Quantum Proofs. Foundations and Trends® in Theoretical Computer Science, 11(1-2), pp.1–215. <a href="https://doi.org/10.1561/0400000068">doi:10.1561/0400000068</a> <a href="#fnref:quantumproofs" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
Mon, 24 Jul 2017 17:33:00 +0000
https://www.quasci.com/conference/basic/2017/07/24/qcrypt-2016/