ADAPT
Documentation for ADAPT.
ADAPT.Basics.Callbacks
ADAPT.Basics.Operators
ADAPT.ADAPT_QAOA.DiagonalQAOAAnsatz
ADAPT.ADAPT_QAOA.DiagonalQAOAAnsatz
ADAPT.ADAPT_QAOA.PlasticQAOAAnsatz
ADAPT.ADAPT_QAOA.PlasticQAOAAnsatz
ADAPT.ADAPT_QAOA.QAOAAnsatz
ADAPT.ADAPT_QAOA.QAOAAnsatz
ADAPT.ADAPT_QAOA.QAOAObservable
ADAPT.AbstractAnsatz
ADAPT.AbstractCallback
ADAPT.AbstractCallback
ADAPT.AbstractCallback
ADAPT.AdaptProtocol
ADAPT.Basics.Ansatz
ADAPT.Basics.Ansatz
ADAPT.Basics.Callbacks.FloorStopper
ADAPT.Basics.Callbacks.ParameterPrinter
ADAPT.Basics.Callbacks.ParameterStopper
ADAPT.Basics.Callbacks.ParameterTracer
ADAPT.Basics.Callbacks.Printer
ADAPT.Basics.Callbacks.ScoreStopper
ADAPT.Basics.Callbacks.Serializer
ADAPT.Basics.Callbacks.SlowStopper
ADAPT.Basics.Callbacks.Tracer
ADAPT.Basics.OptimOptimizer
ADAPT.Basics.OptimOptimizer
ADAPT.Basics.VanillaADAPT
ADAPT.CallbackList
ADAPT.Data
ADAPT.Degenerate_ADAPT.DegenerateADAPT
ADAPT.Energy
ADAPT.EnergyList
ADAPT.Generator
ADAPT.GeneratorList
ADAPT.Observable
ADAPT.OptimizationFreeADAPT.OptimizationFree
ADAPT.OptimizationProtocol
ADAPT.Parameter
ADAPT.ParameterList
ADAPT.QuantumState
ADAPT.Score
ADAPT.ScoreList
ADAPT.TETRIS_ADAPT.TETRISADAPT
ADAPT.Trace
Base.Matrix
Base.Matrix
Base.Matrix
ADAPT.ADAPT_QAOA.QAOApools.qaoa_double_ops
ADAPT.ADAPT_QAOA.QAOApools.qaoa_double_pool
ADAPT.ADAPT_QAOA.QAOApools.qaoa_mixer
ADAPT.ADAPT_QAOA.QAOApools.qaoa_single_pool
ADAPT.ADAPT_QAOA.QAOApools.qaoa_single_x
ADAPT.ADAPT_QAOA.__make__costate
ADAPT.Basics.MyPauliOperators.cis!
ADAPT.Basics.MyPauliOperators.measure_commutator
ADAPT.Basics.Operators.hubbard_hamiltonian
ADAPT.Basics.Operators.hubbard_hamiltonian
ADAPT.Basics.Operators.qubitexcitation
ADAPT.Basics.Pools.fullpauli
ADAPT.Basics.Pools.minimal_complete_pool
ADAPT.Basics.Pools.one_local_pool
ADAPT.Basics.Pools.oneandtwo_local_pool
ADAPT.Basics.Pools.qubitadaptpool
ADAPT.Basics.Pools.qubitexcitation
ADAPT.Basics.Pools.qubitexcitationpool
ADAPT.Basics.Pools.qubitexcitationpool_complemented
ADAPT.Basics.Pools.tile_operators
ADAPT.Basics.Pools.two_local_pool
ADAPT.Basics.__make__costate
ADAPT.Basics.__make__costate
ADAPT.Hamiltonians.MaxCut.random_regular_max_cut_hamiltonian
ADAPT.Hamiltonians.get_unweighted_maxcut
ADAPT.Hamiltonians.get_weighted_maxcut
ADAPT.Hamiltonians.hubbard_hamiltonian
ADAPT.Hamiltonians.hubbard_hamiltonian
ADAPT.Hamiltonians.maxcut_hamiltonian
ADAPT.Hamiltonians.xyz_model
ADAPT.adapt!
ADAPT.angles
ADAPT.bind!
ADAPT.calculate_score
ADAPT.calculate_scores
ADAPT.evaluate
ADAPT.evaluate
ADAPT.evolve_state
ADAPT.evolve_state
ADAPT.evolve_state!
ADAPT.evolve_state!
ADAPT.evolve_unitary
ADAPT.evolve_unitary
ADAPT.evolve_unitary!
ADAPT.evolve_unitary!
ADAPT.gradient
ADAPT.gradient!
ADAPT.gradient!
ADAPT.is_converged
ADAPT.is_optimized
ADAPT.make_costfunction
ADAPT.make_gradfunction
ADAPT.make_gradfunction!
ADAPT.optimize!
ADAPT.partial
ADAPT.partial
ADAPT.run!
ADAPT.set_converged!
ADAPT.set_optimized!
ADAPT.typeof_energy
ADAPT.typeof_parameter
ADAPT.typeof_score
ADAPT.validate
ADAPT.validate_consistency
ADAPT.validate_evaluation
ADAPT.validate_evolution
ADAPT.validate_gradient
ADAPT.validate_runtime
ADAPT.validate_scores
Base.:*
Base.:*
Base.adjoint
Base.adjoint
ADAPT.@runtime
Core
ADAPT.AbstractAnsatz
— TypeAbstractAnsatz{F,G}
An image of an ADAPT protocol in a frozen state.
The type is so named because the most basic possible such image consists of just the generators and parameters of the ansatz thus far selected, but richer variants of ADAPT will have richer states.
For example, a version of ADAPT which carries information on the inverse Hessian across ADAPT iterations would need to operate on an ansatz type which includes the inverse Hessian.
Nevertheless, every sub-type of AbstractAnsatz implements the AbstractVector interface, where elements are pairs (generator => parameter). So, for example, an ansatz maintaining an inverse Hessian would need to override push!, insert!, etc. to ensure the dimension of the Hessian matches.
Type Parameters
- F: the number type of the parameters (usually Float64)
- G: the subtype of Generator
Implementation
Sub-types must implement the following methods:
__get__generators(::AbstractAnsatz{F,G})::Vector{G}
__get__parameters(::AbstractAnsatz{F,G})::Vector{F}
__get__optimized(::AbstractAnsatz{F,G})::Ref{Bool}
__get__converged(::AbstractAnsatz{F,G})::Ref{Bool}
Each of these is expected to simply retrieve an attribute of the struct. You can call them whatever you'd like, but functionally, here's what they mean:
- generators::Vector{G}: the sequence of generators
- parameters::Vector{F}: the corresponding sequence of parameters
Note that these vectors will be mutated and resized as needed.
- optimized::Ref{Bool}: a flag indicating that the current parameters are optimal
- converged::Ref{Bool}: a flag indicating that the current generators are optimal
Note that these must be of type Ref, so their values can be toggled as needed.
In addition, there must be a compatible implementation for each of:
partial( index::Int, ansatz::AbstractAnsatz, observable::Observable, reference::QuantumState, )
calculate_score( ::AbstractAnsatz, ::AdaptProtocol, ::Generator, ::Observable, ::QuantumState, )::Score
adapt!( ::AbstractAnsatz, ::Trace, ::AdaptProtocol, ::GeneratorList, ::Observable, ::QuantumState, ::CallbackList, )
optimize!( ::AbstractAnsatz, ::Trace, ::OptimizationProtocol, ::Observable, ::QuantumState, ::CallbackList, )
That said, most basic implementations of these methods are defined for abstract ansatze, so you oughtn't need to worry about them.
Please see individual method documentation for details.
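For concreteness, the accessor contract above can be sketched with a self-contained mock (hypothetical type and stand-in String generators, not the package's actual Ansatz, which would subtype ADAPT.AbstractAnsatz and use PauliOperators generators):

```julia
# Minimal mock of the documented ansatz interface, using only Base.
struct ToyAnsatz{F,G} <: AbstractVector{Pair{G,F}}
    generators::Vector{G}
    parameters::Vector{F}
    optimized::Ref{Bool}
    converged::Ref{Bool}
end

ToyAnsatz{F,G}() where {F,G} = ToyAnsatz{F,G}(G[], F[], Ref(false), Ref(false))

# The accessors the docs require (each simply retrieves an attribute):
__get__generators(a::ToyAnsatz) = a.generators
__get__parameters(a::ToyAnsatz) = a.parameters
__get__optimized(a::ToyAnsatz) = a.optimized
__get__converged(a::ToyAnsatz) = a.converged

# The AbstractVector interface over (generator => parameter) pairs:
Base.size(a::ToyAnsatz) = size(a.generators)
Base.getindex(a::ToyAnsatz, i::Int) = a.generators[i] => a.parameters[i]

ansatz = ToyAnsatz{Float64,String}()
push!(ansatz.generators, "XY")
push!(ansatz.parameters, 0.1)
```

After these definitions, length(ansatz) and ansatz[i] behave like any vector of pairs, which is what the default adapt! and optimize! implementations rely on.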
ADAPT.AbstractCallback
— TypeAbstractCallback
A function to be called at each adapt iteration, or vqe iteration, or both.
Common Examples
- Tracers: update the running trace with information passed in data
- Printers: display the information passed in data to the screen or to a file
- Stoppers: flag the ADAPT state as converged, based on some condition
In particular, the standard way to converge an ADAPT run is to include a ScoreStopper. Otherwise, the run will keep adding parameters until every score is essentially zero.
More details can be found in the Callbacks module, where many standard callbacks are already implemented.
Implementation
Callbacks are implemented as callable objects, with two choices of method header (one for adaptations, one for optimization iterations).
(::AbstractCallback)( ::Data, ::AbstractAnsatz, ::Trace, ::AdaptProtocol, ::GeneratorList, ::Observable, ::QuantumState, )
(::AbstractCallback)( ::Data, ::AbstractAnsatz, ::Trace, ::OptimizationProtocol, ::Observable, ::QuantumState, )
If your callback is only meant for adaptations, simply do not implement the method for optimizations. (Behind the scenes, every AbstractCallback has default implementations for both methods, which just don't do anything.)
Precisely what data is contained within the data argument depends on the protocol. For example, the ScoreStopper expects to find the key :scores, whose value is a ScoreList, one score for each pool operator. Generally, the callback should assume data has whatever it needs, and if it doesn't, that means this callback is incompatible with the given protocol. That said, see the Callbacks module for some standard choices.
The callback is free to mutate the ansatz. For example, the ScoreStopper signals a run should end by calling set_converged!. But, if the callback wants to signal the run should end in an UN-converged state, it should simply return true.
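The callable-object pattern can be sketched in a self-contained way, with a Dict standing in for Data and a plain vector standing in for the ansatz (a real callback would dispatch on the package's AbstractCallback and protocol types; the stopper below is hypothetical):

```julia
# Hypothetical stopper: terminate the run once the ansatz reaches `limit` parameters.
struct MaxParameterStopper
    limit::Int
end

# "Adaptation" flavor of the callback: returning true signals that ADAPT
# should terminate without updating the ansatz (an UN-converged stop).
function (cb::MaxParameterStopper)(data, ansatz, trace, adapt, pool, H, ψ0)
    return length(ansatz) >= cb.limit
end

cb = MaxParameterStopper(2)
# One parameter so far: the callback lets the run continue (returns false).
should_stop = cb(Dict(:scores => [0.1]), [0.4], nothing, nothing, [], nothing, nothing)
```

Because the protocol passes every callback the same arguments, a callback is free to ignore the ones it doesn't need, as this one does.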
ADAPT.AbstractCallback
— Method(::AbstractCallback)(
::Data,
::AbstractAnsatz,
::Trace,
::AdaptProtocol,
::GeneratorList,
::Observable,
::QuantumState,
)
Callback for adapt iterations, called immediately prior to the ansatz update.
Note that the ansatz is already updated in the optimization callback, but not in the adaptation callback.
Parameters
- Almost all parameters for the adapt! method. See that method for details.
- data: (replaces callbacks) additional calculations the ADAPT method has made. Keys depend on the protocol. See the Callbacks module for some standard choices.
Returns
- true iff ADAPT should terminate, without updating the ansatz
ADAPT.AbstractCallback
— Method(::AbstractCallback)(
::Data,
::AbstractAnsatz,
::Trace,
::OptimizationProtocol,
::Observable,
::QuantumState,
)
Callback for optimization iterations, called AFTER ansatz update.
Note that the ansatz is already updated in the optimization callback, but not in the adaptation callback.
Parameters
- Almost all parameters for the optimize! method. See that method for details.
- data: (replaces callbacks) additional calculations the optimization method has made. Keys depend on the protocol. See the Callbacks module for some standard choices.
Returns
- true iff optimization should terminate
ADAPT.AdaptProtocol
— TypeAdaptProtocol
A distinctive protocol for adding new parameters after optimizing an initial ansatz.
Implementation
Sub-types must implement the following method:
typeof_parameter(::AbstractAnsatz)::Type{<:Parameter}
In addition, new sub-types probably need to implement:
calculate_score( ::AbstractAnsatz, ::AdaptProtocol, ::Generator, ::Observable, ::QuantumState, )::Score
adapt!( ::AbstractAnsatz, ::Trace, ::AdaptProtocol, ::GeneratorList, ::Observable, ::QuantumState, ::CallbackList, )
Finally, new sub-types might be able to provide better implementations of:
calculate_scores( ansatz::AbstractAnsatz, ADAPT::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )
For the most part, sub-types should be singleton objects, ie. no attributes. Arbitrary hyperparameters like gradient tolerance should be delegated to callbacks as much as possible. It's okay to insist a particular callback always be included with your ADAPT protocol, so long as you are clear in the documentation. That said, this rule is a "style" guideline, not a contract.
ADAPT.CallbackList
— TypeCallbackList
Semantic alias for a vector of callbacks.
ADAPT.Data
— TypeData
Semantic alias for trace-worthy information from a single adapt or vqe iteration.
You'll never actually have to deal with this object unless you are implementing your own protocol or callback.
Keys can in principle be any Symbol at all. You can design your own protocols to fill the data, and your own callbacks to use it. That said, see the Callbacks module for some standard choices.
ADAPT.Energy
— TypeEnergy
Semantic alias for the expectation value of an observable.
ADAPT.EnergyList
— TypeEnergyList
Semantic alias for a vector of energies.
ADAPT.Generator
— TypeGenerator
The Union of any type that could be used as a pool operator.
Implemented Types
Any type at all can be used as a generator if there is a compatible implementation of the methods listed in the Implementation section.
The following types have implementations fleshed out in this library already:
- PauliOperators.Pauli: A single Pauli word
- PauliOperators.ScaledPauli: A single Pauli word, alongside some scaling coefficient
- PauliOperators.PauliSum: A Hermitian operator decomposed into the Pauli basis
- PauliOperators.ScaledPauliVector: Same but with a different internal data structure
For each of the above, the generator G generates the unitary exp(-iθG). Hermiticity in G is not enforced, so be careful when constructing your pool operators.
Implementation
This list should be extended as new sub-types are implemented.
New sub-types probably need to implement:
evolve_state!( ::Generator, ::Parameter, ::QuantumState, )
In addition, new sub-types might be able to provide better implementations of:
calculate_scores( ansatz::AbstractAnsatz, adapt::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )
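The exp(-iθG) convention can be illustrated by brute force with a dense single-qubit generator (stdlib only; the package's evolve_state! performs the same rotation without building the matrix):

```julia
using LinearAlgebra

G = ComplexF64[0 1; 1 0]   # Pauli X as the generator
θ = π / 2
U = exp(-im * θ * G)       # the unitary generated by G
ψ = ComplexF64[1, 0]       # |0⟩ reference state
ψ′ = U * ψ                 # the evolved state
```

Since G here is Hermitian, U is unitary and the norm of ψ′ is preserved; a non-Hermitian generator would silently break that, which is why the warning above matters.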
ADAPT.GeneratorList
— TypeGeneratorList
Semantic alias for a vector of generators.
ADAPT.Observable
— TypeObservable
The Union of any type that could define a cost function.
The type is so named because the typical cost-function is the expectation value of a Hermitian operator, aka a quantum observable.
Implemented Types
Any type at all can be used as an observable if there is a compatible implementation of the methods listed in the Implementation section.
The following types have implementations fleshed out in this library already:
- PauliOperators.Pauli: A single Pauli word
- PauliOperators.ScaledPauli: A single Pauli word, alongside some scaling coefficient
- PauliOperators.PauliSum: A Hermitian operator decomposed into the Pauli basis
- PauliOperators.ScaledPauliVector: Same but with a different internal data structure
For each of the above, the evaluation of H with respect to a quantum state |Ψ⟩ is ⟨Ψ|H|Ψ⟩.
Implementation
This list should be extended as new sub-types are implemented.
Sub-types must implement the following method:
typeof_energy(::Observable)::Type{<:Energy}
In addition, new sub-types probably need to implement:
evaluate( ::Observable, ::QuantumState, )::Energy
Finally, new sub-types might be able to provide better implementations of:
gradient!( ansatz::AbstractAnsatz, observable::Observable, reference::QuantumState, )
calculate_scores( ansatz::AbstractAnsatz, adapt::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )
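The ⟨Ψ|H|Ψ⟩ convention can be checked by brute force for a small Pauli-sum observable built as a dense matrix (stdlib only; evaluate computes the same number from the structured types):

```julia
using LinearAlgebra

Z  = ComplexF64[1 0; 0 -1]
I2 = Matrix{ComplexF64}(I, 2, 2)

# A two-qubit observable H = 0.5·(Z⊗I) + 0.25·(Z⊗Z), as a dense matrix.
H = 0.5 * kron(Z, I2) + 0.25 * kron(Z, Z)

ψ = zeros(ComplexF64, 4)
ψ[1] = 1                         # the |00⟩ statevector

energy = real(ψ' * H * ψ)        # ⟨Ψ|H|Ψ⟩ = 0.5 + 0.25
```

This is exactly the quantity the validate_evaluation test compares against the library's structured evaluation.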
ADAPT.OptimizationProtocol
— TypeOptimizationProtocol
A distinctive protocol for refining parameters in an ansatz.
Implementation
Sub-types must implement the following method:
optimize!( ::AbstractAnsatz, ::Trace, ::OptimizationProtocol, ::Observable, ::QuantumState, ::CallbackList, )::Bool
ADAPT.Parameter
— TypeParameter
Semantic alias for the coefficient of a generator in an ansatz.
ADAPT.ParameterList
— TypeParameterList
Semantic alias for a vector of parameters.
ADAPT.QuantumState
— TypeQuantumState
The Union of any type that could define a quantum state.
Implemented Types
Any type at all can be used as a quantum state if there is a compatible implementation of the methods listed in the Implementation section.
The following types have implementations fleshed out in this library already:
- Vector{<:Complex}: A dense statevector in the computational basis
- PauliOperators.SparseKetBasis: A dict mapping individual kets (PauliOperators.KetBitString) to their coefficients
Implementation
This list should be extended as new sub-types are implemented.
There must be a compatible implementation for each of:
evolve_state!( ::Generator, ::Parameter, ::QuantumState, )
evaluate( ::Observable, ::QuantumState, )::Energy
partial( index::Int, ansatz::AbstractAnsatz, observable::Observable, reference::QuantumState, )
calculate_scores( ansatz::AbstractAnsatz, adapt::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )
ADAPT.Score
— TypeScore
Semantic alias for the importance-factor of a pool operator, eg. the expectation value of its commutator with an observable.
ADAPT.ScoreList
— TypeScoreList
Semantic alias for a vector of scores.
ADAPT.Trace
— TypeTrace
Semantic alias for a compact record of the entire ADAPT run.
The adapt!, optimize!, and run! functions require a Trace object, which will be mutated throughout according to callbacks. Initialize an empty trace object by trace = Trace().
Keys can in principle be any Symbol at all. You can design your own protocols to fill the data, and your own callbacks to use it. That said, see the Callbacks module for some standard choices.
Base.Matrix
— MethodBase.Matrix([F,] N::Int, ansatz::AbstractAnsatz)
Construct the unitary matrix representation of the action of an ansatz.
Parameters
- F: float type; the resulting matrix will be of type Matrix{Complex{F}}
- N: size of Hilbert space (ie. the number of rows in the matrix)
- ansatz: the ansatz to be represented
Base.Matrix
— MethodBase.Matrix([F,] N::Int, G::Generator, θ::Parameter)
Construct the unitary matrix representation of $exp(-iθG)$.
Parameters
- F: float type; the resulting matrix will be of type Matrix{Complex{F}}
- N: size of Hilbert space (ie. the number of rows in the matrix)
- G: the generator of the matrix
- θ: a scalar coefficient multiplying the generator
ADAPT.adapt!
— Methodadapt!(
::AbstractAnsatz,
::Trace,
::AdaptProtocol,
::GeneratorList,
::Observable,
::QuantumState,
::CallbackList,
)
Update an ansatz with a new generator(s) according to a given ADAPT protocol.
Typically, each call to this function will select a single generator whose score has the largest magnitude, but richer variants of ADAPT will have richer behavior.
For example, an implementation of Tetris ADAPT would add multiple generators, based on both the score and the "disjointness" of the generators (which is something that implementation would have to define).
Parameters
- ansatz: the ADAPT state
- trace: a history of the ADAPT run thus far
- ADAPT: the ADAPT protocol
- pool: the list of generators to consider adding to the ansatz
- H: the object defining the cost-function
- ψ0: an initial quantum state which the ansatz operates on
- callbacks: a list of functions to be called just prior to updating the ansatz
Returns
- a Bool, indicating whether or not an adaptation was made
Implementation
Any implementation of this method must be careful to obey the following contract:
1. If your ADAPT protocol decides the ansatz is already converged, call set_converged!(ansatz, true) and return false, without calling any callbacks.
2. Fill up a data dict with the expensive calculations you have to make anyway. See the default implementation for a minimal selection of data to include.
3. BEFORE you actually update the ansatz, call each callback in succession, passing it the data dict. If any callback returns true, return false without calling any more callbacks, and without updating the ansatz.
4. After all callbacks have been completed, update the ansatz, call set_optimized!(ansatz, false), and return true.
Standard operating procedure is to let callbacks do all the updates to trace. Thus, implementations of this method should normally ignore trace entirely (except in passing it along to the callbacks). That said, this rule is a "style" guideline, not a contract.
ADAPT.angles
— Methodangles(::AbstractAnsatz)
Fetch all parameters in the ansatz as a vector.
ADAPT.bind!
— Methodbind!(::AbstractAnsatz, ::ParameterList)
Replace all parameters in the ansatz.
ADAPT.calculate_score
— Methodcalculate_score(
ansatz::AbstractAnsatz,
ADAPT::AdaptProtocol,
generator::Generator,
H::Observable,
ψ0::QuantumState,
)
Calculate an "importance" score for a generator, with respect to a particular ADAPT state.
Parameters
- ansatz: the ADAPT state
- ADAPT: the ADAPT protocol. Different protocols may have different scoring strategies.
- generator: the generator to be scored
- H: the object defining the cost-function
- ψ0: an initial quantum state which the ansatz operates on
Returns
- score: a scalar number, whose type is typeof_score(ADAPT)
Implementation
In addition to implementing this method (which is mandatory), strongly consider overriding calculate_scores also, to take advantage of compact measurement protocols, or simply the fact that you should only need to evolve your reference state once.
ADAPT.calculate_scores
— Methodcalculate_scores(
ansatz::AbstractAnsatz,
ADAPT::AdaptProtocol,
pool::GeneratorList,
observable::Observable,
reference::QuantumState,
)
Calculate a vector of scores for all generators in the pool.
Parameters
- ansatz: the ADAPT state
- ADAPT: the ADAPT protocol. Different protocols may have different scoring strategies.
- pool: the list of generators to be scored
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- scores: a vector whose elements are of type typeof_score(ADAPT). scores[i] is the score for the generator pool[i].
ADAPT.evaluate
— Methodevaluate(
ansatz::AbstractAnsatz,
H::Observable,
ψ0::QuantumState,
)
Evaluate a cost-function with respect to a particular ADAPT state.
Parameters
- ansatz: the ADAPT state
- H: the object defining the cost-function
- ψ0: an initial quantum state which the ansatz operates on
Returns
- energy: a scalar number, whose type is typeof_energy(H)
ADAPT.evaluate
— Methodevaluate(
H::Observable,
Ψ::QuantumState,
)
Evaluate a cost-function with respect to a particular quantum state.
Parameters
- H: the object defining the cost-function
- Ψ: the quantum state
Returns
- energy: a scalar number, whose type is typeof_energy(H)
Implementation
Typically, the "cost-function" is the expectation value ⟨Ψ|H|Ψ⟩, but different Observable types could have different definitions.
ADAPT.evolve_state!
— Methodevolve_state!(
ansatz::AbstractAnsatz,
state::QuantumState,
)
Apply an ansatz to the given quantum state, mutating and returning the state.
By default, generators with a lower index are applied to the state earlier. This means that the equation for |Ψ⟩ would list out generators in reverse order. Specific implementations of ansatze may override this behavior.
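The ordering convention can be made concrete with two dense stand-in generators (stdlib only, no package types): the generator at index 1 hits the state first, so its unitary sits rightmost in the matrix product.

```julia
using LinearAlgebra

X = ComplexF64[0 1; 1 0]
Z = ComplexF64[1 0; 0 -1]
θ = [0.3, 0.7]
ψ = ComplexF64[1, 0]

# Lower index applied to the state first...
ψ_stepwise = exp(-im * θ[2] * Z) * (exp(-im * θ[1] * X) * ψ)

# ...so the overall ansatz unitary lists the generators in reverse order.
U = exp(-im * θ[2] * Z) * exp(-im * θ[1] * X)
```

Both routes yield the same state, which is why the docstring says the equation for |Ψ⟩ lists generators in reverse order.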
ADAPT.evolve_state!
— Methodevolve_state!(
G::Generator,
θ::Parameter,
ψ::QuantumState,
)
Rotate a quantum state ψ by an amount θ about the axis defined by G, mutating and returning ψ.
Implementation
Typically, the "rotation" is the unitary operator exp(-iθG), but different Generator types could have different effects.
ADAPT.evolve_state
— Methodevolve_state(
ansatz::AbstractAnsatz,
reference::QuantumState,
)
Calculate the quantum state resulting from applying an ansatz to a given reference state.
ADAPT.evolve_state
— Methodevolve_state(
G::Generator,
θ::Parameter,
ψ::QuantumState,
)
Calculate the quantum state obtained by rotating ψ by an amount θ about the axis defined by G.
ADAPT.evolve_unitary!
— Methodevolve_unitary!(
ansatz::AbstractAnsatz,
unitary::AbstractMatrix{<:Complex},
)
Extend a unitary by applying each generator in the ansatz (on the left).
ADAPT.evolve_unitary!
— Methodevolve_unitary!(
G::Generator,
θ::Parameter,
unitary::AbstractMatrix{<:Complex},
)
Extend a unitary by applying a single generator (on the left).
ADAPT.evolve_unitary
— Methodevolve_unitary(
ansatz::AbstractAnsatz,
unitary::AbstractMatrix{<:Complex},
)
Calculate the matrix extending the given unitary by an ansatz (on the left).
ADAPT.evolve_unitary
— Methodevolve_unitary(
G::Generator,
θ::Parameter,
unitary::AbstractMatrix{<:Complex},
)
Calculate the matrix extending the given unitary by a single generator (on the left).
ADAPT.gradient!
— Methodgradient!(
result::AbstractVector,
ansatz::AbstractAnsatz,
observable::Observable,
reference::QuantumState,
)
Fill a vector of partial derivatives with respect to each parameter in the ansatz.
Parameters
- result: vector which will contain the gradient after calling this function
- ansatz: the ADAPT state
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- result
ADAPT.gradient
— Methodgradient(
ansatz::AbstractAnsatz,
observable::Observable,
reference::QuantumState,
)
Construct a vector of partial derivatives with respect to each parameter in the ansatz.
Parameters
- ansatz: the ADAPT state
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- a vector whose elements are of type typeof_energy(observable)
ADAPT.is_converged
— Methodis_converged(::AbstractAnsatz)
Check whether the sequence of generators in this ansatz is flagged as optimal.
Note that this is a state variable in its own right; its value is independent of the actual generators themselves, but depends on all the protocols and callbacks which brought the ansatz to its current state.
ADAPT.is_optimized
— Methodis_optimized(::AbstractAnsatz)
Check whether the ansatz parameters are flagged as optimal.
Note that this is a state variable in its own right; its value is independent of the actual parameters themselves, but depends on all the protocols and callbacks which brought the ansatz to its current state.
ADAPT.make_costfunction
— Methodmake_costfunction(
ansatz::ADAPT.AbstractAnsatz,
observable::ADAPT.Observable,
reference::ADAPT.QuantumState,
)
Construct a single-parameter cost-function f(x), where x is a parameter vector.
Note that calling f does not change the state of the ansatz (although actually it does temporarily, so this function is not thread-safe).
Parameters
- ansatz: the ADAPT state
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- fn: a callable function f(x), where x is a vector of angles compatible with ansatz
ADAPT.make_gradfunction!
— Methodmake_gradfunction!(
ansatz::ADAPT.AbstractAnsatz,
observable::ADAPT.Observable,
reference::ADAPT.QuantumState,
)
Construct a mutating gradient function g!(∇f, x), where x is a parameter vector.
Using this in place of make_gradfunction for optimization will tend to significantly reduce memory allocations.
Note that calling g! does not change the state of the ansatz (although actually it does temporarily, so this function is not thread-safe).
Parameters
- ansatz: the ADAPT state
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- g!: a callable function g!(∇f, x), where ∇f and x are vectors compatible with ansatz. The first argument ∇f is used to store the result; its initial values are ignored.
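The f(x) and g!(∇f, x) calling conventions above match what most optimizers expect. The toy below (a stand-in quadratic cost, not a real ansatz expectation value) shows the mutating pattern and why it saves allocations: the same ∇f buffer is overwritten every iteration instead of a fresh gradient vector being allocated.

```julia
# Stand-in cost with minimum at x = [1, 1]; f and g! mimic the signatures
# returned by make_costfunction and make_gradfunction!.
f(x) = sum(abs2, x .- 1.0)

function g!(∇f, x)
    @. ∇f = 2 * (x - 1.0)   # overwrite in place: no per-call allocation
    return ∇f
end

x  = zeros(2)
∇f = similar(x)             # allocated once, reused every iteration
for _ in 1:100              # plain gradient descent as a stand-in optimizer
    g!(∇f, x)
    @. x -= 0.1 * ∇f
end
```

An optimizer that accepts a mutating gradient (eg. Optim.jl's optimize(f, g!, x0, ...) form) would use g! the same way.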
ADAPT.make_gradfunction
— Methodmake_gradfunction(
ansatz::ADAPT.AbstractAnsatz,
observable::ADAPT.Observable,
reference::ADAPT.QuantumState,
)
Construct a single-parameter gradient function g(x), where x is a parameter vector.
Note that calling g does not change the state of the ansatz (although actually it does temporarily, so this function is not thread-safe).
Parameters
- ansatz: the ADAPT state
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- gd: a callable function gd(x), where x is a vector of angles compatible with ansatz
ADAPT.optimize!
— Methodoptimize!(
ansatz::AbstractAnsatz,
trace::Trace,
VQE::OptimizationProtocol,
H::Observable,
ψ0::QuantumState,
callbacks::CallbackList,
)
Update the parameters of an ansatz according to a given optimization protocol.
Parameters
- ansatz: the ADAPT state
- trace: a history of the ADAPT run thus far
- VQE: the optimization protocol (it doesn't have to be a VQE ^_^)
- H: the object defining the cost-function
- ψ0: an initial quantum state which the ansatz operates on
- callbacks: a list of functions to be called just prior to updating the ansatz
Implementation
Callbacks must be called in each "iteration". The optimization protocol is free to decide what an "iteration" is, but it should generally correspond to "any time the ansatz is changed". That's not a hard-and-fast rule, though - for example, it doesn't necessarily make sense to call the callbacks for each function evaluation in a linesearch.
Any implementation of this method must be careful to obey the following contract:
1. In each iteration, update the ansatz parameters and do whatever calculations you need to do. Fill up a data dict with as much information as possible. See the Callbacks module for some standard choices.
2. Call each callback in succession, passing it the data dict. If any callback returns true, terminate without calling any more callbacks, and discontinue the optimization.
3. After calling all callbacks, check if the ansatz has been flagged as optimized. If so, discontinue the optimization.
4. If the optimization protocol terminates successfully without interruption by callbacks, call set_optimized!(ansatz, true). Be careful to ensure the ansatz parameters actually are the ones found by the optimizer!
Standard operating procedure is to let callbacks do all the updates to trace. Thus, implementations of this method should normally ignore trace entirely (except in passing it along to the callbacks). That said, this rule is a "style" guideline, not a contract.
The return type of this method is intentionally unspecified, so that implementations can return something helpful for debugging, eg. an Optim result object. If the callbacks interrupt your optimization, it may be worthwhile to check if they flagged the ansatz as converged, and modify this return object accordingly if possible.
ADAPT.partial
— Methodpartial(
index::Int,
ansatz::AbstractAnsatz,
observable::Observable,
reference::QuantumState,
)
The partial derivative of a cost-function with respect to the i-th parameter in an ansatz.
Parameters
- index: the index of the parameter to calculate within ansatz
- ansatz: the ADAPT state
- observable: the object defining the cost-function
- reference: an initial quantum state which the ansatz operates on
Returns
- a number of type typeof_energy(observable)
Implementation
Typically, generators apply a unitary rotation, so the partial consists of a partial evolution up to the indexed generator, then a "kick" from the generator itself, then a final evolution, and a braket with the observable. But, different ansatze may have a different procedure.
ADAPT.run!
— Methodrun!(
ansatz::AbstractAnsatz,
trace::Trace,
ADAPT::AdaptProtocol,
VQE::OptimizationProtocol,
pool::GeneratorList,
H::Observable,
ψ0::QuantumState,
callbacks::CallbackList,
)
Loop between optimization and adaptation until convergence.
The ansatz and trace are mutated throughout, so that if the user runs this method in a REPL, she can terminate it (eg. by Ctrl+C) after however long, and still have meaningful results.
Parameters
- ansatz: the ADAPT state
- trace: a history of the ADAPT run thus far
- ADAPT: the ADAPT protocol
- VQE: the optimization protocol (it doesn't have to be a VQE ^_^)
- pool: the list of generators to consider adding to the ansatz
- H: the object defining the cost-function
- ψ0: an initial quantum state which the ansatz operates on
- callbacks: a list of functions to be called just prior to updating the ansatz
Returns
- true iff the ansatz is converged, with respect to the given protocols and callbacks
ADAPT.set_converged!
— Methodset_converged!(::AbstractAnsatz, ::Bool)
Flag the sequence of generators in this ansatz as optimal.
ADAPT.set_optimized!
— Methodset_optimized!(::AbstractAnsatz, ::Bool)
Flag the ansatz parameters as optimal.
ADAPT.typeof_energy
— Methodtypeof_energy(::Observable)
The number type of a cost-function.
The method is so named because the typical cost-function is the expectation value of a Hamiltonian, aka an energy.
Implementation
Usually a sub-type of AbstractFloat, and probably just about always Float64.
ADAPT.typeof_parameter
— Methodtypeof_parameter(::AbstractAnsatz)
The number type of the variational parameters in this ansatz.
I think this will always be a sub-type of AbstractFloat, and almost always Float64.
ADAPT.typeof_score
— Methodtypeof_score(::AdaptProtocol)
The number type of the score for each pool operator.
Implementation
Usually a sub-type of AbstractFloat, and probably just about always Float64.
ADAPT.validate
— Methodvalidate(
ansatz::AbstractAnsatz,
adapt::AdaptProtocol,
vqe::OptimizationProtocol,
pool::GeneratorList,
observable::Observable,
reference::QuantumState;
kwargs...
)
Validate that ADAPT will work correctly with the given types.
It actually runs ADAPT, so ensure your pool and observable are as simple as the types allow.
The mandatory arguments are exactly those found in the run! method, except there is no trace.
Keyword Arguments
- label: the name of the test-set (useful when validating more than one set of types)
- tolerance: the default tolerance for numerical tests
- evolution: special tolerance for the evolution test, or nothing to skip
- evaluation: special tolerance for the evaluation test, or nothing to skip
- gradient: special tolerance for the gradient test, or nothing to skip
- scores: special tolerance for the scores test, or nothing to skip
ADAPT.validate_consistency
— Methodvalidate_consistency(
ansatz::AbstractAnsatz,
adapt::AdaptProtocol,
pool::GeneratorList,
observable::Observable,
reference::QuantumState,
)
Check that every core ADAPT function is internally consistent (ie. different versions of the same function give consistent results).
ADAPT.validate_evaluation
— Methodvalidate_evaluation(
observable::Observable,
reference::QuantumState;
tolerance=1e-10,
)
Check that observable evaluation matches brute-force matrix-vector results.
The difference between core ADAPT and brute-force must have an absolute value within tolerance.
This function requires the following constructors to be defined:
- Matrix(::Observable)
- Vector(::QuantumState)
ADAPT.validate_evolution
— Methodvalidate_evolution(
generator::Generator,
angle::Parameter,
reference::QuantumState;
tolerance=1e-10,
)
Check that generator evolution matches brute-force matrix-vector results.
The difference vector between core ADAPT and brute-force must have a norm within tolerance.
This function requires the following constructors to be defined:
- Matrix(::Generator)
- Vector(::QuantumState)
ADAPT.validate_gradient
— Methodvalidate_gradient(
ansatz::AbstractAnsatz,
observable::Observable,
reference::QuantumState;
tolerance=1e-10,
)
Check that the gradient function matches the finite difference.
The difference vector between core ADAPT and brute-force must have a norm within tolerance.
ADAPT.validate_runtime
— Methodvalidate_runtime(
ansatz::AbstractAnsatz,
adapt::AdaptProtocol,
vqe::OptimizationProtocol,
pool::GeneratorList,
observable::Observable,
reference::QuantumState;
verbose=true,
)
Check that every core ADAPT function can run for the given types.
If verbose is true, this method also explicitly @time's everything, to catch any super-obvious memory leaks when called manually.
Note that this will run ADAPT for one iteration, so ensure your pool and observable are as simple as the types allow.
ADAPT.validate_scores
— Methodvalidate_score(
ansatz::AbstractAnsatz,
adapt::AdaptProtocol,
pool::GeneratorList,
observable::Observable,
reference::QuantumState;
tolerance=1e-10,
)
Check that the score for each pool operator matches the partial for that pool operator when added to a candidate ansatz.
Of course this only makes sense when the score is the gradient, which depends on the ADAPT protocol. But this is a common-enough choice to justify a standard method. Other ADAPT protocols may override this method, if desired.
The difference vector between core ADAPT and brute-force must have a norm within tolerance.
ADAPT.@runtime
— Macro@runtime do_time, ex
@runtime(do_time, ex)
A macro to check that an expression evaluates without error, optionally including an explicit test for runtime.
Basics
ADAPT.Basics.Ansatz
— TypeAnsatz{F<:Parameter,G<:Generator}(
parameters::Vector{F},
generators::Vector{G},
optimized::Bool,
converged::Bool,
)
A minimal ADAPT state.
Type Parameters
F: the number type for the parameters (usually Float64 is appropriate).
G: the generator type. Any type will do, but it's best to be specific.
Parameter
parameters: list of current parameters
generators: list of current generators
optimized: whether the current parameters are flagged as optimal
converged: whether the current generators are flagged as converged
ADAPT.Basics.Ansatz
— MethodAnsatz(F, G)
Convenience constructor for initializing an empty ansatz.
Parameters
- the parameter type OR an instance of that type OR a vector whose elements are that type
- the generator type OR an instance of that type OR a vector whose elements are that type
The easiest way to use this constructor is probably to prepare your generator pool first, then call Ansatz(Float64, pool). But please note, the ansatz is always initialized as empty, even though you've passed a list of generators in the constructor!
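For example (a sketch; the pool constructor is just one of those documented below, and the field names follow the type documentation above):

```julia
using ADAPT

pool = ADAPT.Basics.Pools.one_local_pool(4)      # any GeneratorList works here
ansatz = ADAPT.Basics.Ansatz(Float64, pool)      # element types inferred from the arguments

# The ansatz starts empty, regardless of the pool passed to the constructor.
@assert isempty(ansatz.parameters) && isempty(ansatz.generators)
```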
ADAPT.Basics.OptimOptimizer
— TypeOptimOptimizer(method, options)
Parameters
method: an optimizer object from the Optim package
options: an options object from the Optim package
IMPORTANT: the callback attribute of options will be generated automatically whenever ADAPT.optimize! is called, to insert dynamic callbacks. If you provide your own callback in options, it will be ignored. Use the ADAPT.Callback framework to gain extra behavior throughout optimization. If this framework does not meet your needs, you'll need to implement your own OptimizationProtocol.
ADAPT.Basics.OptimOptimizer
— MethodOptimOptimizer(method::Symbol; options...)
A convenience constructor to create OptimOptimizers without referring to Optim.
Parameters
method: a symbol-ization of the Optim method
Keyword Arguments
You can pass any keyword argument accepted either by your Optim method's constructor, or by that of Optim.Options. If you try to pass a callback keyword argument, it will be ignored (see above).
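For instance, the two constructors might be used as in this sketch (the keywords g_tol and iterations belong to Optim.Options; check your Optim version):

```julia
using ADAPT
using Optim

# Convenience form: the symbol names the Optim method, and keywords are routed
# automatically to either the method constructor or Optim.Options.
vqe = ADAPT.Basics.OptimOptimizer(:BFGS; g_tol=1e-6, iterations=1000)

# Explicit form, equivalent to the above:
vqe = ADAPT.Basics.OptimOptimizer(Optim.BFGS(), Optim.Options(g_tol=1e-6, iterations=1000))
```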
ADAPT.Basics.VanillaADAPT
— TypeVanillaADAPT
Score pool operators by their initial gradients if they were to be appended to the ansatz. Equivalently, score pool operators by the expectation value of the commutator of the pool operator with the observable.
This protocol is defined for when the pool operators and the observable are AbstractPaulis. Note that fermionic operators are perfectly well-represented with AbstractPaulis.
ADAPT.Basics.__make__costate
— Method__make__costate(G, x, Ψ)
Compute ∂/∂x exp(ixG) |ψ⟩.
ADAPT.Basics.__make__costate
— Method__make__costate(G::ScaledPauliVector, x, Ψ)
Compute ∂/∂x exp(ixG) |ψ⟩.
Default implementation just applies -iG to Ψ then evolves. That's fine as long as the evolution is exact. But evolution is not exact if G is a ScaledPauliVector containing non-commuting terms. In such a case, the co-state must be more complicated.
ADAPT.partial
— Methodpartial(
index::Int,
ansatz::AbstractAnsatz,
observable::Observable,
reference::QuantumState,
)
The partial derivative of a cost-function with respect to the i-th parameter in an ansatz.
The ansatz is assumed to apply a unitary rotation exp(-iθG), where G is the (Hermitian) generator, and generators with a lower index are applied to the state earlier. Ansatz sub-types may change both behaviors.
Parameters
index: the index of the parameter to calculate within ansatz
ansatz: the ADAPT state
H: the object defining the cost-function
ψ0: an initial quantum state which the ansatz operates on
Returns
- a number of type typeof_energy(observable).
ADAPT.Basics.Callbacks
— ModuleCallbacks
A suite of basic callbacks for essential functionality.
Explanation
The final argument of the adapt!, optimize! and run! methods calls for a vector of Callbacks. These are callable objects extending behavior at each iteration or adaptation (or both; see the AbstractCallback type documentation for more details).
The callback is passed a data object (aka. a Dict where the keys are Symbols like :energy or :scores), in addition to the ADAPT state and all the quantum objects. Callbacks may be as simple as displaying the data, or as involved as carefully modifying the quantum objects to satisfy some constraint.
Each callback in this module can be categorized as one of the following:
- Tracers: update the running trace with information passed in data
- Printers: display the information passed in data to the screen or to a file
- Stoppers: flag the ADAPT state as converged, based on some condition
In particular, Stoppers are the primary means of establishing convergence in Vanilla ADAPT. They do this by flagging the ADAPT state as converged, which signals to the run! function that it can stop looping once this round is done. Alternatively, though none of the basic callbacks in this module do so, you may implement a callback that returns true based on some condition. This signals an instant termination, regardless of convergence.
Just to reiterate, Stoppers are the primary means of establishing convergence. If you don't include any callbacks, the run! call may not terminate this century!
Callback Order
Callback order matters. Using the callbacks in this module, I recommend the order listed above (Tracers, then Printers, then Stoppers).
The first callback in the list gets dibs on mutating the trace or the ADAPT state, which could change the behavior of subsequent callbacks. For example, the basic Printer inspects the trace to infer the current iteration, so it naturally follows the Tracer (although the Printer knows to skip this part if there is no Tracer). Some Stoppers (eg. SlowStopper, FloorStopper) inspect the trace to decide whether energy has converged, so the "latest" energy should already be logged. Therefore, these too should follow the Tracer.
Please note that, because the callbacks are called prior to actually updating the ansatz, the Tracer will usually log one last round of updates which are not actually reflected in the ansatz. The only time this does not happen is if convergence is flagged by the protocol itself rather than a Stopper callback (eg. all scores are essentially zero), which is probably never. ^_^ This behavior seems fine, even desirable, to me, but if you'd like to avoid it, you could implement a Stopper which explicitly terminates by returning true (rather than merely flagging the ansatz as converged, like basic Stoppers), and list that Stopper prior to the Tracer.
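Putting the recommendations above together, a typical callback list might look like this sketch (all names are documented in this module; the thresholds are arbitrary):

```julia
using ADAPT
using ADAPT.Basics.Callbacks: Tracer, Printer, ScoreStopper, ParameterStopper

callbacks = [
    Tracer(:energy, :scores),    # Tracers first: log data into the running trace
    Printer(:energy),            # Printers next: may read what the Tracer just logged
    ScoreStopper(1e-3),          # Stoppers last: converge once all scores are small...
    ParameterStopper(30),        # ...or once the ansatz has 30 parameters
]
# Pass `callbacks` as the final argument of your run! call.
```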
Standard keys
The actual keys used in the data argument are determined by the protocol, so you may design custom callbacks to make use of the data in your custom protocols.
However, for the sake of modularity, it is worth keeping keys standardized when possible. Here is a list of recommended keys.
Reserved keys
:iteration: iteration count over all optimizations
:adaptation: the iteration at which an adaptation occurred
These keys are not part of data but are used in the running trace.
Standard keys for adapt!
:scores: vector of scores for each pool operator
:selected_index: index in the pool of the operator ADAPT plans to add
:selected_generator: the actual generator object ADAPT plans to add
:selected_parameter: the parameter ADAPT plans to attach to the new generator
Protocols which add multiple generators in a single adaptation may still use these same keys, replacing the values with vectors.
Standard keys for optimize!
:energy: the result of evaluating the observable. Required for some Stoppers
:g_norm: the norm of the gradient vector (typically ∞ norm, aka. largest element)
:elapsed_iterations: the number of iterations of the present optimization run
:elapsed_time: time elapsed since starting the present optimization run
:elapsed_f_calls: number of function calls since starting the present optimization run
:elapsed_g_calls: number of gradient calls since starting the present optimization run
ADAPT.Basics.Callbacks.FloorStopper
— TypeFloorStopper(threshold::Energy, floor::Energy)
Converge once the energy has gotten close enough to some target value.
Called for adapt! only. Requires a preceding Tracer(:energy).
Parameters
threshold: maximum energy difference before convergence
floor: the target value
ADAPT.Basics.Callbacks.ParameterPrinter
— TypeParameterPrinter(; io=stdout, adapt=true, optimize=false, ncol=8)
Print the current ansatz parameters as neatly and compactly as I can think to.
Parameters
io: the IO stream to print to
adapt: print parameters at each adaptation
optimize: print parameters at each optimization iteration
ncol: number of parameters to print in one line, before starting another
ADAPT.Basics.Callbacks.ParameterStopper
— TypeParameterStopper(n::Int)
Converge once the ansatz reaches a certain number of parameters.
Called for adapt! only.
Parameters
n: the minimum number of parameters required for convergence
ADAPT.Basics.Callbacks.ParameterTracer
— TypeParameterTracer()
Add the ansatz parameters to the running trace, under the key :parameters.
Only compatible when following a Tracer including :selected_index. This is no great handicap since the principal point of this is to be able to reconstruct an ansatz, and you'll need the :selected_index for that also. ;)
Parameters are stored in a matrix. Each column is associated with an angle in the ansatz (vanilla protocol sets the first column as the first parameter added to the ansatz and the first one applied to the reference state). Each row gives the optimized parameters for the corresponding ADAPT iteration.
The adapt callback is responsible for adding a new row (vanilla protocol is to initialize with the previously optimized parameters), and for padding previous rows with zeros. The optimization callback is responsible for keeping the last row updated with the currently-best parameters for this choice of parameters.
Standard practice is to include the ParameterTracer AFTER the regular Tracer, but BEFORE any ADAPT convergence Stoppers. Thus, the parameter matrix INCLUDES columns for the last-selected parameter(s). Standard practice for reconstructing an optimized ansatz of a converged trace is to look at the PENULTIMATE row.
Please note that the default implementation of this callback is unsuitable (or at least the matrix requires some post-processing) if the AdaptProtocol reorders parameters, or even simply inserts new parameters anywhere other than the end, or even (currently) if parameters aren't initialized to zero, or even (currently) if it adds more than one parameter at once. (NOTE: These last two are easily adjusted but will require a more complex trace precondition.) If you need a parameter tracer for such protocols, you'll need to dispatch to your own method.
ADAPT.Basics.Callbacks.Printer
— TypePrinter([io::IO=stdout,] keys::Symbol...)
Print selected data keys at each iteration or adaptation.
The keys arguments are passed in the same way as Tracer; see that method for some examples. Unlike Tracer, the first argument can be an IO object, which determines where the printing is done. By default, it is the standard output stream, ie. your console, or a file if you are redirecting output via >. The io argument allows you to explicitly write to a file, via Julia's open function.
If a key is not present in data, it is ignored. Thus, the same list of keys is used for calls from adapt! and optimize!, so long as keys do not overlap (which should be avoided)!
The keys :iteration and :adaptation are treated specially. These keys will not appear directly in data, and they should not be included in keys. If the trace contains these keys (ie. if a Tracer callback was also included), they are used as "section headers". Otherwise, they are skipped.
ADAPT.Basics.Callbacks.ScoreStopper
— TypeScoreStopper(threshold::Score)
Converge if all scores are below a certain threshold.
Called for adapt! only.
Parameters
threshold: the maximum score
ADAPT.Basics.Callbacks.Serializer
— TypeSerializer(; ansatz_file="", trace_file="", on_adapt=false, on_iterate=false)
Serialize the current state so that it can be resumed more easily.
Please note that robust serialization depends heavily on version control; if the definition of a serialized type has changed since it was serialized, it is very, very difficult to recover. Thus, serialization of this nature should be considered somewhat transient and unreliable. It's good for restarting when your supercomputer crashes unexpectedly mid-job, but not so good for long-term archival purposes.
Parameters
ansatz_file: file to save ansatz in ("" will skip saving ansatz)
trace_file: file to save trace in ("" will skip saving trace)
on_adapt: whether to serialize on adaptations
on_iterate: whether to serialize in every optimization iteration
ADAPT.Basics.Callbacks.SlowStopper
— TypeSlowStopper(threshold::Energy, n::Int)
Converge if all energies in the past n iterations are within a certain range.
Called for adapt! only. Requires a preceding Tracer(:energy).
Parameters
threshold: maximum energy range before convergence
n: number of recent adaptations to check
This function will not flag convergence before at least n adaptations have occurred.
ADAPT.Basics.Callbacks.Tracer
— TypeTracer(keys::Symbol...)
Add selected data keys at each iteration or adaptation to the running trace.
Examples
Tracer(:energy)
Including this callback in a run! call will fill the trace argument with the energy at each optimization iteration, as well as noting in which iteration each adaptation occurred. I cannot think of a circumstance when you will not want to trace at least this much.
Tracer(:energy, :scores)
This example shows the syntax to keep track of multiple data keys: just list them out as successive arguments of the same Tracer. Do NOT include multiple instances of Tracer in the same run, or you will record twice as many iterations as actually occurred! The ParameterTracer is a distinct type and is safe to use with Tracer.
Other Notes
If a key is not present in data, it is ignored. Thus, the same list of keys is used for calls from adapt! and optimize!, so long as keys do not overlap (which should be avoided)!
The keys :iteration and :adaptation are treated specially. These keys will not appear directly in data, and they should not be included in keys.
The :iteration value will simply increment with each call from optimize!. The :adaptation value will be set to the most recent :iteration value.
I highly recommend including at minimum Tracer(:energy) with every single ADAPT run you ever do.
ADAPT.Basics.Operators
— ModuleOperators
A suite of common operators, especially useful for constructing operator pools.
TODO: I haven't decided yet whether observables should live here or not. If they do, I'll want to standardize the interface somehow. In particular, the interface with pyscf for molecules is rather hazy. I think we need a separate package which is a Julia wrapper for openfermion. Then observables will generally be input as qubit operators from that package, or perhaps we have a simple method that converts qubit operators to PauliSums, so we have better control over the arithmetic being performed. In any case, though I may evict them someday, standard lattice systems like Hubbard and Heisenberg, not requiring openfermion, may inhabit this module for the time being.
ADAPT.Basics.Operators.hubbard_hamiltonian
— Methodhubbard_hamiltonian(L::Int, U, t; pbc=false)
Convenience constructor for a 1D nearest-neighbor Hubbard model with L sites.
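For example (a sketch of the documented signature):

```julia
using ADAPT

# 1D nearest-neighbor Hubbard chain: 3 sites, U = 4.0, t = 1.0.
H_open = ADAPT.Basics.Operators.hubbard_hamiltonian(3, 4.0, 1.0)             # open boundaries
H_ring = ADAPT.Basics.Operators.hubbard_hamiltonian(3, 4.0, 1.0; pbc=true)   # periodic boundaries
```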
ADAPT.Basics.Operators.hubbard_hamiltonian
— Methodhubbard_jw(graph::Array{T,2}, U, t)
A Hubbard Hamiltonian in the Jordan-Wigner basis.
Copied shamelessly from Diksha's ACSE repository.
Parameters
graph: an adjacency matrix identifying couplings. Must be symmetric.
U: Coulomb interaction for all sites
t: hopping energy for all couplings
Returns
PauliOperators.PauliSum: the Hamiltonian
ADAPT.Basics.Operators.qubitexcitation
— Methodqubitexcitation(n::Int, i::Int, k::Int)
qubitexcitation(n::Int, i::Int, j::Int, k::Int, l::Int)
Qubit excitation operators as defined in Yordanov et al. 2021.
Note that Yordanov's unitaries are defined as exp(iθG) rather than exp(-iθG), so variational parameters will be off by a sign.
Parameters
n: total number of qubits
i,j,k,l: qubit indices as defined in Yordanov's paper.
Returns
PauliOperators.ScaledPauliVector: the qubit excitation operator
Note that all Pauli terms in any single qubit excitation operator commute, so the ScaledPauliVector representation is "safe".
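For example, on a 4-qubit register (a sketch; the indices follow Yordanov's conventions as noted above):

```julia
using ADAPT

G_single = ADAPT.Basics.Operators.qubitexcitation(4, 1, 3)          # single excitation, i=1 → k=3
G_double = ADAPT.Basics.Operators.qubitexcitation(4, 1, 2, 3, 4)    # double excitation, (i,j) → (k,l)
```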
ADAPT.Basics.Pools.fullpauli
— Methodfullpauli(n::Int)
The pool of all (4^n) n-qubit Pauli operators.
Parameters
n: Number of qubits in the system
Returns
pool: the full Pauli pool.
ADAPT.Basics.Pools.minimal_complete_pool
— Methodminimal_complete_pool(n::Int64)
Return the minimal complete pool on n qubits corresponding to the V pool in the qubit-ADAPT paper (PRX QUANTUM 2, 020310 (2021)).
ADAPT.Basics.Pools.one_local_pool
— Functionone_local_pool(n::Int64, axes=["I","X","Y","Z"])
Returns the one-local pool containing each one-local operator on n qubits.
Parameters
n: Number of qubits in the system
Returns
pool: the one-local pool.
ADAPT.Basics.Pools.oneandtwo_local_pool
— Methodoneandtwo_local_pool(n::Int64)
Returns the union of the one-local and two-local pools on n qubits.
Parameters
n: Number of qubits in the system
Returns
pool: union of one-local and two-local pools.
ADAPT.Basics.Pools.qubitadaptpool
— Methodqubitadaptpool(n_system::Int)
Returns the qubit ADAPT pool on n_system qubits as defined in PRX QUANTUM 2, 020310 (2021). It is generated by taking each qubit-excitation-based operator and breaking it into individual Pauli terms.
Parameters
n_system: Number of qubits in the system
Returns
pool: the qubit-ADAPT pool.
ADAPT.Basics.Pools.qubitexcitation
— Methodqubitexcitation(n::Int, i::Int, k::Int)
qubitexcitation(n::Int, i::Int, j::Int, k::Int, l::Int)
Qubit excitation operators as defined in Yordanov et al. 2021.
Note that Yordanov's unitaries are defined as exp(iθG) rather than exp(-iθG), so variational parameters will be off by a sign.
Parameters
n: total number of qubits
i,j,k,l: qubit indices as defined in Yordanov's paper.
Returns
PauliOperators.ScaledPauliVector: the qubit excitation operator
Note that all Pauli terms in any single qubit excitation operator commute, so the ScaledPauliVector representation is "safe".
ADAPT.Basics.Pools.qubitexcitationpool
— Methodqubitexcitationpool(n_system::Int)
The number of singles excitations is (n choose 2), and the number of doubles is 3*(n choose 4).
Parameters
n_system: Number of qubits in the system
Returns
pool: the qubit-excitation-based pool as defined in Communications Physics 4, 1 (2021).
target_and_source: Dict mapping each pool operator to the target and source orbitals involved in the excitation.
ADAPT.Basics.Pools.qubitexcitationpool_complemented
— Methodqubitexcitationpool_complemented(n_system::Int)
Returns the complemented qubit excitation pool on n_system qubits, inspired by arXiv 2109.01318.
Parameters
n_system: Number of qubits in the system
Returns
pool: the complemented qubit-excitation-based pool.
target_and_source: Dict mapping each pool operator to the target and source orbitals involved in the excitation.
ADAPT.Basics.Pools.tile_operators
— Methodtile_operators(L1::Int, L2::Int, chosen_operators::Vector{Vector{ScaledPauli{N}}}, PBCs)
Constructs the tiled operators for a system of L2 qubits, given a set of operators defined for a smaller problem instance on L1 qubits.
Parameters
L1: number of qubits for small problem instance
L2: number of qubits for large problem instance
chosen_operators: list of operators for small problem instance
PBCs: periodic boundary conditions
Returns
tiled_ops: tiled operators as a Vector{Vector{ScaledPauli}}
ADAPT.Basics.Pools.two_local_pool
— Functiontwo_local_pool(n::Int64, axes=["X","Y","Z"])
Returns the two-local pool containing each two-local operator on n qubits.
Parameters
n: Number of qubits in the system
Returns
pool: the two-local pool.
Other Modules
ADAPT.OptimizationFreeADAPT.OptimizationFree
— TypeOptimizationFree
The optimization protocol which just doesn't do anything.
There are no iterations, so there is no reason to callback. Contract obliged!
Base.Matrix
— MethodMatrix(infidelity)
Convert an infidelity to a matrix.
This implementation assumes:
- The target state infidelity.Φ can be cast to a vector.
- The reference state in evaluate(infidelity, reference) is always normalized.
ADAPT.Degenerate_ADAPT.DegenerateADAPT
— TypeDegenerateADAPT
Score pool operators by their initial gradients if they were to be appended to the ansatz. Equivalently, score pool operators by the expectation value of the commutator of the pool operator with the observable. In the case where the largest scores (gradients) are degenerate between multiple pool operators, choose the operator to append to the ansatz randomly.
ADAPT.TETRIS_ADAPT.TETRISADAPT
— TypeTETRISADAPT
Score pool operators by their initial gradients if they were to be appended to the ansatz. TETRIS-ADAPT is a modified version of ADAPT-VQE in which multiple operators with disjoint support are added to the ansatz at each iteration. They are chosen by selecting from operators ordered in decreasing magnitude of gradients.
ADAPT.ADAPT_QAOA.DiagonalQAOAAnsatz
— TypeDiagonalQAOAAnsatz{F<:Parameter,G<:Generator}(
observable::QAOAObservable,
γ0::F,
generators::Vector{G},
β_parameters::Vector{F},
γ_parameters::Vector{F},
optimized::Bool,
converged::Bool,
)
An ADAPT state suitable for ADAPT-QAOA. The standard ADAPT generators are interspersed with the observable itself.
Type Parameters
F: the number type for the parameters (usually Float64 is appropriate).
G: the generator type.
Parameter
observable: the observable, which is interspersed with generators when evolving
γ0: initial coefficient of the observable, whenever a new generator is added
generators: list of current generators (i.e. mixers)
β_parameters: list of current generator coefficients
γ_parameters: list of current observable coefficients
optimized: whether the current parameters are flagged as optimal
converged: whether the current generators are flagged as converged
ADAPT.ADAPT_QAOA.DiagonalQAOAAnsatz
— MethodDiagonalQAOAAnsatz(γ0, pool, observable)
Convenience constructor for initializing an empty ansatz.
Parameters
- γ0
- pool
- observable
Note that the observable must be a QAOAObservable.
ADAPT.ADAPT_QAOA.PlasticQAOAAnsatz
— TypePlasticQAOAAnsatz{F<:Parameter,G<:Generator}(
observable::QAOAObservable,
γ0::F,
generators::Vector{G},
β_parameters::Vector{F},
γ_parameters::Vector{F},
optimized::Bool,
converged::Bool,
)
An ADAPT state suitable for ADAPT-QAOA. The standard ADAPT generators are interspersed with the observable itself.
The only difference between PlasticQAOAAnsatz and DiagonalQAOAAnsatz is that the latter initializes every new γ value to γ0, while the former initializes every new γ value to that of the previous layer, using γ0 only for the first round of optimization.
Type Parameters
F: the number type for the parameters (usually Float64 is appropriate).
G: the generator type.
Parameter
observable: the observable, which is interspersed with generators when evolving
γ0: initial coefficient of the observable, whenever a new generator is added
generators: list of current generators (i.e. mixers)
β_parameters: list of current generator coefficients
γ_parameters: list of current observable coefficients
optimized: whether the current parameters are flagged as optimal
converged: whether the current generators are flagged as converged
ADAPT.ADAPT_QAOA.PlasticQAOAAnsatz
— MethodPlasticQAOAAnsatz(γ0, pool, observable)
Convenience constructor for initializing an empty ansatz.
Parameters
- γ0
- pool
- observable
Note that the observable must be a QAOAObservable.
ADAPT.ADAPT_QAOA.QAOAAnsatz
— TypeQAOAAnsatz{F<:Parameter,G<:Generator}(
observable::G,
γ0::F,
generators::Vector{G},
β_parameters::Vector{F},
γ_parameters::Vector{F},
optimized::Bool,
converged::Bool,
)
An ADAPT state suitable for ADAPT-QAOA. The standard ADAPT generators are interspersed with the observable itself.
Type Parameters
F: the number type for the parameters (usually Float64 is appropriate).
G: the generator type. Uniquely for QAOA, G must ALSO be a valid Observable type.
Parameter
observable: the observable, which is interspersed with generators when evolving
γ0: initial coefficient of the observable, whenever a new generator is added
generators: list of current generators (i.e. mixers)
β_parameters: list of current generator coefficients
γ_parameters: list of current observable coefficients
optimized: whether the current parameters are flagged as optimal
converged: whether the current generators are flagged as converged
ADAPT.ADAPT_QAOA.QAOAAnsatz
— MethodAnsatz(γ0, observable)
Convenience constructor for initializing an empty ansatz.
Parameters
- γ0
- observable
Note that, uniquely for QAOA, the observable and the pool operators must be of the same type.
ADAPT.ADAPT_QAOA.QAOAObservable
— TypeQAOAObservable(spv::ScaledPauliVector)
Wrap a ScaledPauliVector observable in a view that assumes each element is diagonal, allowing for more memory-efficient state evolution.
The constructor throws an error if any element sp of spv has sp.pauli.x != 0.
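A sketch of how this wrapper might be used. The Pauli keyword constructor here is an assumption about the PauliOperators.jl API (check your version); only QAOAObservable itself is documented above.

```julia
using ADAPT
using PauliOperators

# Z-only terms are diagonal, so the constructor accepts them.
spv = [1.0 * Pauli(4; Z=[1, 2]), 0.5 * Pauli(4; Z=[2, 3])]   # assumed constructor syntax
H = ADAPT.ADAPT_QAOA.QAOAObservable(spv)

# Any term with an X component (sp.pauli.x != 0) would make the constructor throw.
```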
ADAPT.ADAPT_QAOA.__make__costate
— MethodCarbon copy of the usual costate function with Pauli operators.
The only difference is that we don't copy in the method defining special behavior for ScaledPauliVectors in this namespace, so those will be treated as though they consist only of commuting terms.
ADAPT.gradient!
— MethodCarbon copy of the usual gradient with Pauli operators.
The only difference is which __make__costate function is getting called.
ADAPT.ADAPT_QAOA.QAOApools.qaoa_double_ops
— Methodqaoa_double_ops(n::Int64)
Returns the pool containing two-qubit Paulis respecting bit-flip symmetry.
Parameters
n: Number of qubits
Returns
pool: pool containing symmetric two-qubit Paulis
ADAPT.ADAPT_QAOA.QAOApools.qaoa_double_pool
— Methodqaoa_double_pool(n::Int64)
Returns the pool containing symmetric single- and double-qubit Paulis and standard mixer.
Parameters
n: Number of qubits
Returns
pool: double-qubit pool
ADAPT.ADAPT_QAOA.QAOApools.qaoa_mixer
— Methodqaoa_mixer(n::Int64)
Returns the pool containing only the standard qaoa mixer.
Parameters
n: Number of qubits
Returns
pool: one element pool with qaoa mixer
ADAPT.ADAPT_QAOA.QAOApools.qaoa_single_pool
— Methodqaoa_single_pool(n::Int64)
Returns the pool containing single-qubit Pauli Xs and standard mixer.
Parameters
n: Number of qubits
Returns
pool: single-qubit pool
ADAPT.ADAPT_QAOA.QAOApools.qaoa_single_x
— Methodqaoa_single_x(n::Int64)
Returns the pool containing single-qubit Pauli Xs.
Parameters
n: Number of qubits
Returns
pool: pool containing Xs only
ADAPT.Hamiltonians.get_unweighted_maxcut
— Methodget_unweighted_maxcut(g::Graphs.SimpleGraph)
Take a graph object and extract edges for MaxCut.
Parameters
g: graph instance.
Returns
edge_list: list of edges and weights equal to one.
ADAPT.Hamiltonians.get_weighted_maxcut
— Functionget_weighted_maxcut(g::Graphs.SimpleGraph, rng = _DEFAULT_RNG)
Take a graph object and extract edges and assign edge weights.
Parameters
g: graph instance.
rng: random number generator to generate weights.
Returns
edge_list: list of edges and weights.
ADAPT.Hamiltonians.hubbard_hamiltonian
— Methodhubbard_hamiltonian(L::Int, U, t; pbc=false)
Convenience constructor for a 1D nearest-neighbor Hubbard model with L sites.
ADAPT.Hamiltonians.hubbard_hamiltonian
— Methodhubbard_jw(graph::Array{T,2}, U, t)
A Hubbard Hamiltonian in the Jordan-Wigner basis.
Copied shamelessly from Diksha's ACSE repository.
Parameters
graph: an adjacency matrix identifying couplings. Must be symmetric.
U: Coulomb interaction for all sites
t: hopping energy for all couplings
Returns
PauliOperators.PauliSum: the Hamiltonian
ADAPT.Hamiltonians.maxcut_hamiltonian
— Methodmaxcut_hamiltonian(V::Int, Edges::Vector{Tuple{Int,Int,T}}) where T<:Real
A MaxCut Hamiltonian defined on a graph containing only Pauli ZZ terms.
Parameters
V: number of vertices.
Edges: list of edges, in the form of (first index, second index, weight).
Returns
H: MaxCut Hamiltonian
ADAPT.Hamiltonians.xyz_model
— Methodxyz_model(L::Int, Jx::Float, Jy::Float, Jz::Float, PBCs::Bool)
An XYZ Heisenberg Hamiltonian.
Parameters
L: system size.
Jx: coupling along X.
Jy: coupling along Y.
Jz: coupling along Z.
PBCs: periodic boundary conditions
Returns
PauliOperators.PauliSum: the Hamiltonian
ADAPT.Hamiltonians.MaxCut.random_regular_max_cut_hamiltonian
— Methodrandom_regular_max_cut_hamiltonian(n::Int, k::Int; rng = _DEFAULT_RNG, weighted = true)
Return a random Hamiltonian for a max cut problem on n qubits.
The corresponding graph is degree k. If an RNG is provided, this will be used to sample the graph and edge weights. If weighted is true, the edge weights will be randomly sampled from the uniform distribution U(0,1).
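For example (a sketch; fixing the RNG seed makes the sampled instance reproducible):

```julia
using ADAPT
using Random

rng = Xoshiro(1234)   # any AbstractRNG should do; seeded for reproducibility
# 3-regular weighted MaxCut Hamiltonian on 8 qubits, per the documented signature.
H = ADAPT.Hamiltonians.MaxCut.random_regular_max_cut_hamiltonian(8, 3; rng=rng, weighted=true)
```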
MyPauliOperators
These methods should not be considered part of "ADAPT", but rather, destined for the PauliOperators.jl
package. The only reason I document them here is that the doc builder is configured to throw an error if any doc strings aren't included in the documentation...
ADAPT.Basics.MyPauliOperators.cis!
— MethodTODO: VERY SPECIFICALLY ASSERT that pauli xz=00 is to be interpreted as I, pauli xz=10 is to be interpreted as X, pauli xz=01 is to be interpreted as Z, and pauli xz=11 is to be interpreted as Y, despite the last usually being interpreted as iY. Also clear this definition with Nick before putting it in his package...
ADAPT.Basics.MyPauliOperators.measure_commutator
— Methodmeasure_commutator(
A::AnyPauli,
B::AnyPauli,
Ψ::Union{SparseKetBasis,AbstractVector},
)
Calculate the expectation value of the commutator, ie. ⟨Ψ|[A,B]|Ψ⟩.
TODO: There could be a place for this in PauliOperators, but it would need to be carefully fleshed out type by type. A and B needn't be Hermitian in general (though I assume they are here), so my intuition is rather lacking.
Base.:*
— MethodCross-type multiplication. Best to discourage ever doing this operation. Needed for a lazy commutator, but not necessarily needed long-term. We'll return pauli sum for now.
Base.:*
— MethodOf course this one is missing... ^_^ Note strict typing in out, because Paulis themselves are strictly typed.
Base.adjoint
— MethodTODO: Consult with Nick before adding this definition to PauliOperators.
I hesitate for two reasons:
It is not "lazy". It allocates a new array. Not unprecedented but not ideal. Not sure the proper way to make it lazy.
Column vector adjoint should properly be a row vector, rather than reversed. Can't think of why we'd ever use ScaledPauliVector as a column vector, but its data type is so, properly.
But, this definition achieves desired polymorphism in evolving by ScaledPauliVector, so if Nick okays it, I'm happy with it. The alternative is a dedicated unevolve function with a tedious special case for unevolving ansatze whose generators are ScaledPauliVector...
Base.adjoint
— MethodTODO: This adjoint is not strictly "lazy". But I don't think anyone will care.