ADAPT

Documentation for ADAPT.

Core

ADAPT.AbstractAnsatzType
AbstractAnsatz{F,G}

An image of an ADAPT protocol in a frozen state.

The type is so named because the most basic possible such image consists of just the generators and parameters of the ansatz thus far selected, but richer variants of ADAPT will have richer states.

For example, a version of ADAPT which carries information on the inverse Hessian across ADAPT iterations would need to operate on an ansatz type which includes the inverse Hessian.

Nevertheless, every sub-type of AbstractAnsatz implements the AbstractVector interface, where elements are pairs (generator => parameter).

So, for example, an ansatz maintaining an inverse Hessian would need to override push!, insert!, etc. to ensure the dimension of the Hessian matches.

Type Parameters

  • F: the number type of the parameters (usually Float64)
  • G: the subtype of Generator

Implementation

Sub-types must implement the following methods:

  • __get__generators(::AbstractAnsatz{F,G})::Vector{G}
  • __get__parameters(::AbstractAnsatz{F,G})::Vector{F}
  • __get__optimized(::AbstractAnsatz{F,G})::Ref{Bool}
  • __get__converged(::AbstractAnsatz{F,G})::Ref{Bool}

Each of these is expected to simply retrieve an attribute of the struct. You can name the attributes whatever you'd like, but functionally, here's what they mean:

  • generators::Vector{G}: the sequence of generators

  • parameters::Vector{F}: the corresponding sequence of parameters

    Note that these vectors will be mutated and resized as needed.

  • optimized::Ref{Bool}: a flag indicating that the current parameters are optimal

  • converged::Ref{Bool}: a flag indicating that the current generators are optimal

    Note that these must be of type Ref, so their values can be toggled as needed.

In addition, there must be a compatible implementation for each of:

  • partial( index::Int, ansatz::AbstractAnsatz, observable::Observable, reference::QuantumState, )

  • calculate_score( ::AbstractAnsatz, ::AdaptProtocol, ::Generator, ::Observable, ::QuantumState, )::Score

  • adapt!( ::AbstractAnsatz, ::Trace, ::AdaptProtocol, ::GeneratorList, ::Observable, ::QuantumState, ::CallbackList, )

  • optimize!( ::AbstractAnsatz, ::Trace, ::OptimizationProtocol, ::Observable, ::QuantumState, ::CallbackList, )

That said, basic implementations of these methods are already defined for abstract ansatze, so you oughtn't need to worry about them most of the time.

Please see individual method documentation for details.
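The storage-and-accessor pattern described above can be sketched in isolation. The following is a self-contained mock in plain Julia (BasicAnsatz and its String "generators" are hypothetical stand-ins, not the real ADAPT types), illustrating the AbstractVector behavior over (generator => parameter) pairs and the Ref{Bool} flags:

```julia
# Self-contained mock (not the real ADAPT types): a minimal ansatz holding
# generators, parameters, and the two flags, behaving as an AbstractVector
# of (generator => parameter) pairs.
struct BasicAnsatz{F,G} <: AbstractVector{Pair{G,F}}
    generators::Vector{G}
    parameters::Vector{F}
    optimized::Ref{Bool}    # Ref fields let an immutable struct toggle flags
    converged::Ref{Bool}
end

# Initialize empty, with both flags lowered.
BasicAnsatz{F,G}() where {F,G} = BasicAnsatz(G[], F[], Ref(false), Ref(false))

# The AbstractVector interface: element i is generators[i] => parameters[i].
Base.size(a::BasicAnsatz) = size(a.generators)
Base.getindex(a::BasicAnsatz, i::Int) = a.generators[i] => a.parameters[i]
function Base.push!(a::BasicAnsatz, pair::Pair)
    push!(a.generators, first(pair))
    push!(a.parameters, last(pair))
    return a
end

ansatz = BasicAnsatz{Float64,String}()  # String stands in for a generator type
push!(ansatz, "XY" => 0.1)
ansatz.optimized[] = true               # toggled in place, via the Ref
```

A richer ansatz (e.g. one carrying an inverse Hessian) would extend push! and friends to keep its extra state in sync, exactly as the text above describes.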

source
ADAPT.AbstractCallbackType
AbstractCallback

A function to be called at each adapt iteration, or vqe iteration, or both.

Common Examples

  1. Tracers: update the running trace with information passed in data
  2. Printers: display the information passed in data to the screen or to a file
  3. Stoppers: flag the ADAPT state as converged, based on some condition

In particular, the standard way to converge an adapt run is to include a ScoreStopper. Otherwise, the run will keep adding parameters until every score is essentially zero.

More details can be found in the Callbacks module, where many standard callbacks are already implemented.

Implementation

Callbacks are implemented as callable objects, with two choices of method header (one for adaptations, one for optimization iterations).

  • (::AbstractCallback)( ::Data, ::AbstractAnsatz, ::Trace, ::AdaptProtocol, ::GeneratorList, ::Observable, ::QuantumState, )

  • (::AbstractCallback)( ::Data, ::AbstractAnsatz, ::Trace, ::OptimizationProtocol, ::Observable, ::QuantumState, )

If your callback is only meant for adaptations, simply do not implement the method for optimizations. (Behind the scenes, every AbstractCallback has default implementations for both methods, which just don't do anything.)

Precisely what is contained within data depends on the protocol. For example, the ScoreStopper expects to find the key :scores, whose value is a ScoreList, one score for each pool operator. Generally, the callback should assume data has whatever it needs; if it doesn't, that means the callback is incompatible with the given protocol. That said, see the Callbacks module for some standard choices.

The callback is free to mutate the ansatz. For example, the ScoreStopper signals a run should end by calling set_converged!. But, if the callback wants to signal the run should end in an UN-converged state, it should simply return true.
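The callable-object pattern can be sketched in isolation. Below is a plain-Julia mock (ThresholdStopper is hypothetical; the real callbacks receive the full argument lists documented below, and standard implementations such as ScoreStopper live in the Callbacks module):

```julia
# Mock of a "Stopper"-style callback: a callable struct that inspects
# data[:scores] and returns `true` to signal that the run should end.
# (Real ADAPT callbacks also receive the ansatz, trace, protocol, etc.)
struct ThresholdStopper
    threshold::Float64
end

function (cb::ThresholdStopper)(data::Dict{Symbol,<:Any})
    scores = get(data, :scores, Float64[])
    return all(s -> abs(s) < cb.threshold, scores)
end

stopper = ThresholdStopper(1e-3)
a = stopper(Dict{Symbol,Any}(:scores => [1e-5, 2e-6]))  # true: all scores tiny
b = stopper(Dict{Symbol,Any}(:scores => [0.5, 2e-6]))   # false: keep adapting
```

Because the struct is callable, adding a second method with the optimization-flavored argument list (or omitting it entirely) follows the same pattern.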

source
ADAPT.AbstractCallbackMethod
(::AbstractCallback)(
    ::Data,
    ::AbstractAnsatz,
    ::Trace,
    ::AdaptProtocol,
    ::GeneratorList,
    ::Observable,
    ::QuantumState,
)

Callback for adapt iterations, called immediately prior to the ansatz update.

Note that the ansatz is already updated in the optimization callback, but not in the adaptation callback.

Parameters

  • Almost all parameters for the adapt! method. See that method for details.
  • data: (replaces callbacks) additional calculations the ADAPT method has made. Keys depend on the protocol; see the Callbacks module for some standard choices.

Returns

  • true iff ADAPT should terminate, without updating ansatz
source
ADAPT.AbstractCallbackMethod
(::AbstractCallback)(
    ::Data,
    ::AbstractAnsatz,
    ::Trace,
    ::OptimizationProtocol,
    ::Observable,
    ::QuantumState,
)

Callback for optimization iterations, called AFTER ansatz update.

Note that the ansatz is already updated in optimization callback, but not in the adaptation callback.

Parameters

  • Almost all parameters for the optimize! method. See that method for details.
  • data: (replaces callbacks) additional calculations the optimization method has made. Keys depend on the protocol; see the Callbacks module for some standard choices.

Returns

  • true iff optimization should terminate
source
ADAPT.AdaptProtocolType
AdaptProtocol

A distinctive protocol for adding new parameters after optimizing an initial ansatz.

Implementation

Sub-types must implement the following method:

  • typeof_parameter(::AbstractAnsatz)::Type{<:Parameter}

In addition, new sub-types probably need to implement:

  • calculate_score( ::AbstractAnsatz, ::AdaptProtocol, ::Generator, ::Observable, ::QuantumState, )::Score

  • adapt!( ::AbstractAnsatz, ::Trace, ::AdaptProtocol, ::GeneratorList, ::Observable, ::QuantumState, ::CallbackList, )

Finally, new sub-types might be able to provide better implementations of:

  • calculate_scores( ansatz::AbstractAnsatz, ADAPT::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )

For the most part, sub-types should be singleton objects, ie. no attributes. Arbitrary hyperparameters like gradient tolerance should be delegated to callbacks as much as possible. It's okay to insist a particular callback always be included with your ADAPT protocol, so long as you are clear in the documentation. That said, this rule is a "style" guideline, not a contract.

source
ADAPT.DataType
Data

Semantic alias for trace-worthy information from a single adapt or vqe iteration.

You'll never actually have to deal with this object unless you are implementing your own protocol or callback.

Keys can in principle be any Symbol at all. You can design your own protocols to fill the data, and your own callbacks to use it. That said, see the Callbacks module for some standard choices.

source
ADAPT.EnergyType
Energy

Semantic alias for the expectation value of an observable.

source
ADAPT.GeneratorType
Generator

The Union of any type that could be used as a pool operator.

Implemented Types

Any type at all can be used as a generator if there is a compatible implementation of the methods listed in the Implementation section.

The following types have implementations fleshed out in this library already:

  • PauliOperators.Pauli: A single Pauli word

  • PauliOperators.ScaledPauli: A single Pauli word, alongside some scaling coefficient

  • PauliOperators.PauliSum: A Hermitian operator decomposed into the Pauli basis

  • PauliOperators.ScaledPauliVector: Same but with a different internal data structure

    For each of the above, the generator G generates the unitary exp(-iθG). Hermiticity in G is not enforced, so be careful when constructing your pool operators.
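The exp(-iθG) convention can be checked by hand with dense matrices. This sketch uses plain Julia plus LinearAlgebra (no PauliOperators dependency; the Pauli Y matrix is just a toy generator):

```julia
using LinearAlgebra

# A single Pauli word (Y) as a dense matrix, standing in for a generator G.
Y = ComplexF64[0 -im; im 0]
θ = 0.7

U = exp(-im * θ * Y)    # the unitary the generator produces

# For a Pauli word P (P² = I), exp(-iθP) = cos(θ)I - i·sin(θ)P,
# which makes a handy sanity check:
U_closed = cos(θ) * I(2) - im * sin(θ) * Y

err_closed  = norm(U - U_closed)     # ≈ 0
err_unitary = norm(U' * U - I(2))    # ≈ 0: the evolution is unitary
```

Note that if G is not Hermitian, exp(-iθG) is not unitary, which is why the warning above about constructing pool operators matters.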

Implementation

This list should be extended as new sub-types are implemented.

New sub-types probably need to implement:

  • evolve_state!( ::Generator, ::Parameter, ::QuantumState, )

In addition, new sub-types might be able to provide better implementations of:

  • calculate_scores( ansatz::AbstractAnsatz, adapt::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )
source
ADAPT.ObservableType
Observable

The Union of any type that could define a cost function.

The type is so named because the typical cost-function is the expectation value of a Hermitian operator, aka a quantum observable.

Implemented Types

Any type at all can be used as an observable if there is a compatible implementation of the methods listed in the Implementation section.

The following types have implementations fleshed out in this library already:

  • PauliOperators.Pauli: A single Pauli word

  • PauliOperators.ScaledPauli: A single Pauli word, alongside some scaling coefficient

  • PauliOperators.PauliSum: A Hermitian operator decomposed into the Pauli basis

  • PauliOperators.ScaledPauliVector: Same but with a different internal data structure

    For each of the above, the evaluation of H with respect to a quantum state |Ψ⟩ is ⟨Ψ|H|Ψ⟩.
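For the dense-statevector case, the ⟨Ψ|H|Ψ⟩ evaluation is just a quadratic form. A plain-Julia sketch with a toy 2×2 observable (no package dependencies; the matrices are illustrative):

```julia
using LinearAlgebra

H = ComplexF64[1 0; 0 -1]         # toy observable (Pauli Z)
ψ = normalize!(ComplexF64[3, 4])  # a dense statevector, one of the implemented types

# ⟨Ψ|H|Ψ⟩; `dot` conjugates its first argument, and the result is real
# for a Hermitian H, so taking real() just strips floating-point residue.
energy = real(dot(ψ, H * ψ))      # here |ψ₁|² - |ψ₂|² = 0.36 - 0.64
```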

Implementation

This list should be extended as new sub-types are implemented.

Sub-types must implement the following method:

  • typeof_energy(::Observable)::Type{<:Energy}

In addition, new sub-types probably need to implement:

  • evaluate( ::Observable, ::QuantumState, )::Energy

Finally, new sub-types might be able to provide better implementations of:

  • gradient!( ansatz::AbstractAnsatz, observable::Observable, reference::QuantumState, )

  • calculate_scores( ansatz::AbstractAnsatz, adapt::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )

source
ADAPT.OptimizationProtocolType
OptimizationProtocol

A distinctive protocol for refining parameters in an ansatz.

Implementation

Sub-types must implement the following method:

  • optimize!( ::AbstractAnsatz, ::Trace, ::OptimizationProtocol, ::Observable, ::QuantumState, ::CallbackList, )::Bool
source
ADAPT.QuantumStateType
QuantumState

The Union of any type that could define a quantum state.

Implemented Types

Any type at all can be used as a quantum state if there is a compatible implementation of the methods listed in the Implementation section.

The following types have implementations fleshed out in this library already:

  • Vector{<:Complex}: A dense statevector in the computational basis

  • PauliOperators.SparseKetBasis:

    A dict mapping individual kets (PauliOperators.KetBitString) to their coefficients.

Implementation

This list should be extended as new sub-types are implemented.

There must be a compatible implementation for each of:

  • evolve_state!( ::Generator, ::Parameter, ::QuantumState, )

  • evaluate( ::Observable, ::QuantumState, )::Energy

  • partial( index::Int, ansatz::AbstractAnsatz, observable::Observable, reference::QuantumState, )

  • calculate_scores( ansatz::AbstractAnsatz, adapt::AdaptProtocol, pool::GeneratorList, observable::Observable, reference::QuantumState, )

source
ADAPT.ScoreType
Score

Semantic alias for the importance-factor of a pool operator, eg. the expectation value of its commutator with an observable.

source
ADAPT.TraceType
Trace

Semantic alias for a compact record of the entire ADAPT run.

The adapt!, optimize!, and run! functions require a Trace object, which will be mutated throughout according to callbacks. Initialize an empty trace object by trace = Trace().

Keys can in principle be any Symbol at all. You can design your own protocols to fill the data, and your own callbacks to use it. That said, see the Callbacks module for some standard choices.

source
Base.MatrixMethod
Base.Matrix([F,] N::Int, ansatz::AbstractAnsatz)

Construct the unitary matrix representation of the action of an ansatz.

Parameters

  • F: float type; the resulting matrix will be of type Matrix{Complex{F}}
  • N: size of Hilbert space (ie. the number of rows in the matrix)
  • ansatz: the ansatz to be represented
source
Base.MatrixMethod
Base.Matrix([F,] N::Int, G::Generator, θ::Parameter)

Construct the unitary matrix representation of exp(-iθG).

Parameters

  • F: float type; the resulting matrix will be of type Matrix{Complex{F}}
  • N: size of Hilbert space (ie. the number of rows in the matrix)
  • G: the generator of the matrix
  • θ: a scalar coefficient multiplying the generator
source
ADAPT.adapt!Method
adapt!(
    ::AbstractAnsatz,
    ::Trace,
    ::AdaptProtocol,
    ::GeneratorList,
    ::Observable,
    ::QuantumState,
    ::CallbackList,
)

Update an ansatz with one or more new generators according to a given ADAPT protocol.

Typically, each call to this function will select a single generator whose score has the largest magnitude, but richer variants of ADAPT will have richer behavior.

For example, an implementation of Tetris ADAPT would add multiple generators, based on both the score and the "disjointness" of the generators (which is something that implementation would have to define).

Parameters

  • ansatz: the ADAPT state
  • trace: a history of the ADAPT run thus far
  • ADAPT: the ADAPT protocol
  • pool: the list of generators to consider adding to the ansatz
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on
  • callbacks: a list of functions to be called just prior to updating the ansatz

Returns

  • a Bool, indicating whether or not an adaptation was made

Implementation

Any implementation of this method must be careful to obey the following contract:

  1. If your ADAPT protocol decides the ansatz is already converged, call set_converged!(ansatz, true) and return false, without calling any callbacks.

  2. Fill up a data dict with the expensive calculations you have to make anyway. See the default implementation for a minimal selection of data to include.

  3. BEFORE you actually update the ansatz, call each callback in succession, passing it the data dict. If any callback returns true, return false without calling any more callbacks, and without updating the ansatz.

  4. After all callbacks have been completed, update the ansatz, call set_optimized!(ansatz, false), and return true.

Standard operating procedure is to let callbacks do all the updates to trace. Thus, implementations of this method should normally ignore trace entirely (except in passing it along to the callbacks). That said, this rule is a "style" guideline, not a contract.
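The contract above can be sketched as control flow. The following is a self-contained mock using plain dicts (mock_adapt! and its fields are purely illustrative; the real method dispatches on the documented ADAPT types, and the data dict of step 2 is assumed prefilled here):

```julia
# Mock of the adapt! contract: converged check, callbacks BEFORE the update,
# then the update itself (here: pick the pool index with the largest |score|).
function mock_adapt!(ansatz::Dict, data::Dict, callbacks)
    scores = data[:scores]
    if all(s -> abs(s) < 1e-12, scores)      # step 1: already converged
        ansatz[:converged] = true
        return false                         # ... and no callbacks are called
    end
    for cb in callbacks                      # step 3: callbacks come first
        cb(data, ansatz) && return false     # a `true` aborts without updating
    end
    push!(ansatz[:generators], argmax(abs.(scores)))  # step 4: now update
    ansatz[:optimized] = false
    return true
end

ansatz = Dict{Symbol,Any}(:generators => Int[], :optimized => true, :converged => false)
data = Dict{Symbol,Any}(:scores => [0.1, -0.9, 0.3])
made_adaptation = mock_adapt!(ansatz, data, [])   # true: pool operator 2 added
```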

source
ADAPT.anglesMethod
angles(::AbstractAnsatz)

Fetch all parameters in the ansatz as a vector.

source
ADAPT.bind!Method
bind!(::AbstractAnsatz, ::ParameterList)

Replace all parameters in the ansatz.

source
ADAPT.calculate_scoreMethod
calculate_score(
    ansatz::AbstractAnsatz,
    ADAPT::AdaptProtocol,
    generator::Generator,
    H::Observable,
    ψ0::QuantumState,
)

Calculate an "importance" score for a generator, with respect to a particular ADAPT state.

Parameters

  • ansatz: the ADAPT state
  • ADAPT: the ADAPT protocol. Different protocols may have different scoring strategies.
  • generator: the generator to be scored
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on

Returns

  • score: a scalar number, whose type is typeof_score(ADAPT)

Implementation

In addition to implementing this method (which is mandatory), strongly consider overriding calculate_scores as well, to take advantage of compact measurement protocols, or simply of the fact that you should only need to evolve your reference state once.

source
ADAPT.calculate_scoresMethod
calculate_scores(
    ansatz::AbstractAnsatz,
    ADAPT::AdaptProtocol,
    pool::GeneratorList,
    observable::Observable,
    reference::QuantumState,
)

Calculate a vector of scores for all generators in the pool.

Parameters

  • ansatz: the ADAPT state
  • ADAPT: the ADAPT protocol. Different protocols may have different scoring strategies.
  • pool: the list of generators to be scored
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on

Returns

  • scores: a vector whose elements are of type typeof_score(ADAPT). scores[i] is the score for the generator pool[i]
source
ADAPT.evaluateMethod
evaluate(
    ansatz::AbstractAnsatz,
    H::Observable,
    ψ0::QuantumState,
)

Evaluate a cost-function with respect to a particular ADAPT state.

Parameters

  • ansatz: the ADAPT state
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on

Returns

  • energy: a scalar number, whose type is typeof_energy(observable)
source
ADAPT.evaluateMethod
evaluate(
    H::Observable,
    Ψ::QuantumState,
)

Evaluate a cost-function with respect to a particular quantum state.

Parameters

  • H: the object defining the cost-function
  • Ψ: the quantum state

Returns

  • energy: a scalar number, whose type is typeof_energy(observable)

Implementation

Typically, the "cost-function" is the expectation value ⟨Ψ|H|Ψ⟩, but different Observable types could have different definitions.

source
ADAPT.evolve_state!Method
evolve_state!(
    ansatz::AbstractAnsatz,
    state::QuantumState,
)

Apply an ansatz to the given quantum state, mutating and returning the state.

By default, generators with a lower index are applied to the state earlier. This means that the equation for |Ψ⟩ would list out generators in reverse order. Specific implementations of ansatze may override this behavior.
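The ordering convention (lower index applied to the state earlier, hence listed last in the written-out formula for |Ψ⟩) can be illustrated with dense matrices. Plain-Julia sketch with toy generators:

```julia
using LinearAlgebra

X = ComplexF64[0 1; 1 0]
Z = ComplexF64[1 0; 0 -1]
U1 = exp(-im * 0.3 * X)   # generator at index 1: applied to the state first
U2 = exp(-im * 0.5 * Z)   # generator at index 2: applied second

ψ0 = ComplexF64[1, 0]
ψ = U2 * (U1 * ψ0)        # net action is U2·U1: generators reversed on paper
```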

source
ADAPT.evolve_state!Method
evolve_state!(
    G::Generator,
    θ::Parameter,
    ψ::QuantumState,
)

Rotate a quantum state ψ by an amount θ about the axis defined by G, mutating and returning ψ.

Implementation

Typically, the "rotation" is the unitary operator exp(-iθG), but different Generator types could have different effects.

source
ADAPT.evolve_stateMethod
evolve_state(
    ansatz::AbstractAnsatz,
    reference::QuantumState,
)

Calculate the quantum state resulting from applying an ansatz to a given reference state.

source
ADAPT.evolve_stateMethod
evolve_state(
    G::Generator,
    θ::Parameter,
    ψ::QuantumState,
)

Calculate the quantum state resulting from rotating ψ by an amount θ about the axis defined by G.

source
ADAPT.evolve_unitary!Method
evolve_unitary!(
    ansatz::AbstractAnsatz,
    unitary::AbstractMatrix{<:Complex},
)

Extend a unitary by applying each generator in the ansatz (on the left).

source
ADAPT.evolve_unitary!Method
evolve_unitary!(
    G::Generator,
    θ::Parameter,
    unitary::AbstractMatrix{<:Complex},
)

Extend a unitary by applying a single generator (on the left).

source
ADAPT.evolve_unitaryMethod
evolve_unitary(
    ansatz::AbstractAnsatz,
    unitary::AbstractMatrix{<:Complex},
)

Calculate the matrix extending the given unitary by an ansatz (on the left).

source
ADAPT.evolve_unitaryMethod
evolve_unitary(
    G::Generator,
    θ::Parameter,
    unitary::AbstractMatrix{<:Complex},
)

Calculate the matrix extending the given unitary by a single generator (on the left).

source
ADAPT.gradient!Method
gradient!(
    result::AbstractVector,
    ansatz::AbstractAnsatz,
    observable::Observable,
    reference::QuantumState,
)

Fill a vector of partial derivatives with respect to each parameter in the ansatz.

Parameters

  • result: vector which will contain the gradient after calling this function
  • ansatz: the ADAPT state
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on

Returns

  • result
source
ADAPT.gradientMethod
gradient(
    ansatz::AbstractAnsatz,
    observable::Observable,
    reference::QuantumState,
)

Construct a vector of partial derivatives with respect to each parameter in the ansatz.

Parameters

  • ansatz: the ADAPT state
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on

Returns

  • a vector whose elements are of type typeof_energy(observable).
source
ADAPT.is_convergedMethod
is_converged(::AbstractAnsatz)

Check whether the sequence of generators in this ansatz is flagged as optimal.

Note that this is a state variable in its own right; its value is independent of the actual generators themselves, but depends on all the protocols and callbacks which brought the ansatz to its current state.

source
ADAPT.is_optimizedMethod
is_optimized(::AbstractAnsatz)

Check whether the ansatz parameters are flagged as optimal.

Note that this is a state variable in its own right; its value is independent of the actual parameters themselves, but depends on all the protocols and callbacks which brought the ansatz to its current state.

source
ADAPT.make_costfunctionMethod
make_costfunction(
    ansatz::ADAPT.AbstractAnsatz,
    observable::ADAPT.Observable,
    reference::ADAPT.QuantumState,
)

Construct a single-parameter cost-function f(x), where x is a parameter vector.

Note that calling f does not change the state of the ansatz (although actually it does temporarily, so this function is not thread-safe).

Parameters

  • ansatz: the ADAPT state
  • observable: the object defining the cost-function
  • reference: an initial quantum state which the ansatz operates on

Returns

  • fn a callable function f(x) where x is a vector of angles compatible with ansatz
source
ADAPT.make_gradfunction!Method
make_gradfunction!(
    ansatz::ADAPT.AbstractAnsatz,
    observable::ADAPT.Observable,
    reference::ADAPT.QuantumState,
)

Construct a mutating gradient function g!(∇f, x), where x is a parameter vector.

Using this in place of make_gradfunction for optimization will tend to significantly reduce memory allocations.

Note that calling g! does not change the state of the ansatz (although actually it does temporarily, so this function is not thread-safe).

Parameters

  • ansatz: the ADAPT state
  • observable: the object defining the cost-function
  • reference: an initial quantum state which the ansatz operates on

Returns

  • g! a callable function g!(∇f,x)
    • ∇f and x are vectors of angles compatible with ansatz. The first argument ∇f is used to store the result; its initial values are ignored.
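The mutating-gradient pattern, and why it cuts allocations, can be seen with a toy cost function standing in for the real ansatz/observable pair (the names f and g! below are mocks, not the functions this constructor returns):

```julia
# Toy stand-in for make_gradfunction!'s return value: the cost is x ↦ Σ xᵢ²,
# so the gradient is 2x, written into ∇f in place.
f(x) = sum(abs2, x)

function g!(∇f, x)
    ∇f .= 2 .* x      # overwrite in place: initial values of ∇f are ignored
    return ∇f
end

x = [0.5, -1.0]
∇f = similar(x)       # allocate the gradient buffer once ...
g!(∇f, x)             # ... then reuse it on every optimizer iteration
```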
source
ADAPT.make_gradfunctionMethod
make_gradfunction(
    ansatz::ADAPT.AbstractAnsatz,
    observable::ADAPT.Observable,
    reference::ADAPT.QuantumState,
)

Construct a single-parameter gradient function g(x), where x is a parameter vector.

Note that calling g does not change the state of the ansatz (although actually it does temporarily, so this function is not thread-safe).

Parameters

  • ansatz: the ADAPT state
  • observable: the object defining the cost-function
  • reference: an initial quantum state which the ansatz operates on

Returns

  • gd a callable function gd(x) where x is a vector of angles compatible with ansatz
source
ADAPT.optimize!Method
optimize!(
    ansatz::AbstractAnsatz,
    trace::Trace,
    VQE::OptimizationProtocol,
    H::Observable,
    ψ0::QuantumState,
    callbacks::CallbackList,
)

Update the parameters of an ansatz according to a given optimization protocol.

Parameters

  • ansatz: the ADAPT state
  • trace: a history of the ADAPT run thus far
  • VQE: the optimization protocol (it doesn't have to be a VQE ^_^)
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on
  • callbacks: a list of functions to be called just prior to updating the ansatz

Implementation

Callbacks must be called in each "iteration". The optimization protocol is free to decide what an "iteration" is, but it should generally correspond to "any time the ansatz is changed". That's not a hard-and-fast rule, though - for example, it doesn't necessarily make sense to call the callbacks for each function evaluation in a linesearch.

Any implementation of this method must be careful to obey the following contract:

  1. In each iteration, update the ansatz parameters and do whatever calculations you need to do. Fill up a data dict with as much information as possible. See the Callbacks module for some standard choices.

  2. Call each callback in succession, passing it the data dict. If any callback returns true, terminate without calling any more callbacks, and discontinue the optimization.

  3. After calling all callbacks, check if the ansatz has been flagged as optimized. If so, discontinue the optimization.

  4. If the optimization protocol terminates successfully without interruption by callbacks, call set_optimized!(ansatz, true). Be careful to ensure the ansatz parameters actually are the ones found by the optimizer!

Standard operating procedure is to let callbacks do all the updates to trace. Thus, implementations of this method should normally ignore trace entirely (except in passing it along to the callbacks). That said, this rule is a "style" guideline, not a contract.

The return type of this method is intentionally unspecified, so that implementations can return something helpful for debugging, eg. an Optim result object. If the callbacks interrupt your optimization, it may be worthwhile to check if they flagged the ansatz as converged, and modify this return object accordingly if possible.

source
ADAPT.partialMethod
partial(
    index::Int,
    ansatz::AbstractAnsatz,
    observable::Observable,
    reference::QuantumState,
)

The partial derivative of a cost-function with respect to the i-th parameter in an ansatz.

Parameters

  • index: the index of the parameter to calculate within ansatz
  • ansatz: the ADAPT state
  • observable: the object defining the cost-function
  • reference: an initial quantum state which the ansatz operates on

Returns

  • a number of type typeof_energy(observable).

Implementation

Typically, generators apply a unitary rotation, so the partial consists of a partial evolution up to the indexed generator, then a "kick" from the generator itself, then a final evolution, and a braket with the observable. But, different ansatze may have a different procedure.
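For a one-parameter toy ansatz with a unitary rotation, this "kick" structure reduces to dE/dθ = 2·Im⟨φ|H G|φ⟩ with φ = exp(-iθG)|ψ0⟩, which a finite difference confirms (plain-Julia sketch; G and H are toy matrices):

```julia
using LinearAlgebra

G = ComplexF64[0 -im; im 0]   # Hermitian generator (Pauli Y)
H = ComplexF64[1 0; 0 -1]     # observable (Pauli Z)
ψ0 = ComplexF64[1, 0]
θ = 0.4

E(θ) = real(dot(exp(-im*θ*G) * ψ0, H * (exp(-im*θ*G) * ψ0)))

# Evolve, "kick" with G, braket with H: dE/dθ = 2·Im ⟨φ| H G |φ⟩
φ = exp(-im*θ*G) * ψ0
analytic = 2 * imag(dot(φ, H * (G * φ)))

finite_diff = (E(θ + 1e-6) - E(θ - 1e-6)) / 2e-6
```

For a many-parameter ansatz, the same idea gives the partial evolution / kick / final evolution procedure described above.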

source
ADAPT.run!Method
run!(
    ansatz::AbstractAnsatz,
    trace::Trace,
    ADAPT::AdaptProtocol,
    VQE::OptimizationProtocol,
    pool::GeneratorList,
    H::Observable,
    ψ0::QuantumState,
    callbacks::CallbackList,
)

Loop between optimization and adaptation until convergence.

The ansatz and trace are mutated throughout, so that if the user runs this method in a REPL, she can terminate it (eg. by Ctrl+C) after however long, and still have meaningful results.

Parameters

  • ansatz: the ADAPT state
  • trace: a history of the ADAPT run thus far
  • ADAPT: the ADAPT protocol
  • VQE: the optimization protocol (it doesn't have to be a VQE ^_^)
  • pool: the list of generators to consider adding to the ansatz
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on
  • callbacks: a list of functions to be called just prior to updating the ansatz

Returns

  • true iff the ansatz is converged, with respect to the given protocols and callbacks
source
ADAPT.set_converged!Method
set_converged!(::AbstractAnsatz, ::Bool)

Flag the sequence of generators in this ansatz as optimal (or not, according to the given Bool).

source
ADAPT.typeof_energyMethod
typeof_energy(::Observable)

The number type of a cost-function.

The method is so named because the typical cost-function is the expectation value of a Hamiltonian, aka an energy.

Implementation

Usually a sub-type of AbstractFloat, and probably just about always Float64.

source
ADAPT.typeof_parameterMethod
typeof_parameter(::AbstractAnsatz)

The number type of the variational parameters in this ansatz.

I think this will always be a sub-type of AbstractFloat, and almost always Float64.

source
ADAPT.typeof_scoreMethod
typeof_score(::AdaptProtocol)

The number type of the score for each pool operator.

Implementation

Usually a sub-type of AbstractFloat, and probably just about always Float64.

source
ADAPT.validateMethod
validate(
    ansatz::AbstractAnsatz,
    adapt::AdaptProtocol,
    vqe::OptimizationProtocol,
    pool::GeneratorList,
    observable::Observable,
    reference::QuantumState;
    kwargs...
)

Validate that ADAPT will work correctly with the given types.

It actually runs ADAPT, so ensure your pool and observable are as simple as the types allow.

The mandatory arguments are exactly those found in the run! method, except there is no trace.

Keyword Arguments

  • label: the name of the test-set (useful when validating more than one set of types).
  • tolerance: the default tolerance for numerical tests
  • evolution: special tolerance for the evolution test, or nothing to skip
  • evaluation: special tolerance for the evaluation test, or nothing to skip
  • gradient: special tolerance for the gradient test, or nothing to skip
  • scores: special tolerance for the scores test, or nothing to skip
source
ADAPT.validate_consistencyMethod
validate_consistency(
    ansatz::AbstractAnsatz,
    adapt::AdaptProtocol,
    pool::GeneratorList,
    observable::Observable,
    reference::QuantumState,
)

Check that every core ADAPT function is internally consistent (ie. different versions of the same function give consistent results).

source
ADAPT.validate_evaluationMethod
validate_evaluation(
    observable::Observable,
    reference::QuantumState;
    tolerance=1e-10,
)

Check that observable evaluation matches brute-force matrix-vector results.

The difference between core ADAPT and brute-force must have an absolute value within tolerance.

This function requires the following constructors to be defined:

  • Matrix(::Observable)
  • Vector(::QuantumState)
source
ADAPT.validate_evolutionMethod
validate_evolution(
    generator::Generator,
    angle::Parameter,
    reference::QuantumState;
    tolerance=1e-10,
)

Check that generator evolution matches brute-force matrix-vector results.

The difference vector between core ADAPT and brute-force must have a norm within tolerance.

This function requires the following constructors to be defined:

  • Matrix(::Generator)
  • Vector(::QuantumState)
source
ADAPT.validate_gradientMethod
validate_gradient(
    ansatz::AbstractAnsatz,
    observable::Observable,
    reference::QuantumState;
    tolerance=1e-10,
)

Check that the gradient function matches the finite difference.

The difference vector between core ADAPT and brute-force must have a norm within tolerance.

source
ADAPT.validate_runtimeMethod
validate_runtime(
    ansatz::AbstractAnsatz,
    adapt::AdaptProtocol,
    vqe::OptimizationProtocol,
    pool::GeneratorList,
    observable::Observable,
    reference::QuantumState;
    verbose=true,
)

Check that every core ADAPT function can run for the given types.

If verbose is true, this method also explicitly @time's everything, to catch any super-obvious memory leaks when called manually.

Note that this will run ADAPT for one iteration, so ensure your pool and observable are as simple as the types allow.

source
ADAPT.validate_scoresMethod
validate_scores(
    ansatz::AbstractAnsatz,
    adapt::AdaptProtocol,
    pool::GeneratorList,
    observable::Observable,
    reference::QuantumState;
    tolerance=1e-10,
)

Check that the score for each pool operator matches the partial for that pool operator when added to a candidate ansatz.

Of course this only makes sense when the score is the gradient, which depends on the ADAPT protocol. But this is a common-enough choice to justify a standard method. Other ADAPT protocols may override this method, if desired.

The difference vector between core ADAPT and brute-force must have a norm within tolerance.

source
ADAPT.@runtimeMacro
@runtime do_time, ex
@runtime(do_time, ex)

A macro to check that an expression evaluates without error, optionally including an explicit test for runtime.

source

Basics

ADAPT.Basics.AnsatzType
Ansatz{F<:Parameter,G<:Generator}(
    parameters::Vector{F},
    generators::Vector{G},
    optimized::Bool,
    converged::Bool,
)

A minimal ADAPT state.

Type Parameters

  • F: the number type for the parameters (usually Float64 is appropriate.)
  • G: the generator type. Any type will do, but it's best to be specific.

Parameter

  • parameters: list of current parameters
  • generators: list of current generators
  • optimized: whether the current parameters are flagged as optimal
  • converged: whether the current generators are flagged as converged
source
ADAPT.Basics.AnsatzMethod
Ansatz(F, G)

Convenience constructor for initializing an empty ansatz.

Parameters

  • the parameter type OR an instance of that type OR a vector whose elements are that type
  • the generator type OR an instance of that type OR a vector whose elements are that type

The easiest way to use this constructor is probably to prepare your generator pool first, then call Ansatz(Float64, pool). But please note, the ansatz is always initialized as empty, even though you've passed a list of generators in the constructor!

source
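As a hedged usage sketch (assuming the pool builders documented below, e.g. one_local_pool, and the AbstractVector interface described for AbstractAnsatz):

```julia
using ADAPT
using ADAPT.Basics: Ansatz
using ADAPT.Basics.Pools: one_local_pool

pool = one_local_pool(4)          # the pool fixes the generator type G
ansatz = Ansatz(Float64, pool)    # empty ansatz, despite receiving the pool
@assert isempty(ansatz)           # no (generator => parameter) pairs yet
```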
ADAPT.Basics.OptimOptimizerType
OptimOptimizer(method, options)

Parameters

  • method: an optimizer object from the Optim package
  • options: an options object from the Optim package

IMPORTANT: the callback attribute of options will be generated automatically whenever ADAPT.optimize! is called, to insert dynamic callbacks. If you provide your own callback in options, it will be ignored. Use the ADAPT.Callback framework to gain extra behavior throughout optimization. If this framework does not meet your needs, you'll need to implement your own OptimizationProtocol.

source
ADAPT.Basics.OptimOptimizerMethod
OptimOptimizer(method::Symbol; options...)

A convenience constructor to create OptimOptimizers without referring to Optim.

Parameters

  • method: a symbol-ization of the Optim method

Keyword Arguments

You can pass any keyword argument accepted either by your Optim method's constructor, or by that of Optim.Options. If you try to pass a callback keyword argument, it will be ignored (see above).

source
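A hedged sketch of this convenience constructor (assuming standard Optim method names like :BFGS, and standard Optim.Options keywords like g_tol and iterations):

```julia
using ADAPT
using ADAPT.Basics: OptimOptimizer

# :BFGS is resolved to Optim.BFGS(); g_tol and iterations are forwarded
# to Optim.Options. Any callback keyword would be ignored, as noted above.
vqe = OptimOptimizer(:BFGS; g_tol=1e-6, iterations=1000)
```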
ADAPT.Basics.VanillaADAPTType
VanillaADAPT

Score pool operators by their initial gradients if they were to be appended to the ansatz. Equivalently, score pool operators by the expectation value of the commutator of the pool operator with the observable.

This protocol is defined for when the pool operators and the observable are AbstractPaulis. Note that fermionic operators are perfectly well-represented with AbstractPaulis.

source
ADAPT.Basics.__make__costateMethod
__make__costate(G::ScaledPauliVector, x, Ψ)

Compute ∂/∂x exp(ixG) |ψ⟩.

Default implementation just applies -iG to Ψ then evolves. That's fine as long as the evolution is exact. But evolution is not exact if G is a ScaledPauliVector containing non-commuting terms. In such a case, the co-state must be more complicated.

source
ADAPT.partialMethod
partial(
    index::Int,
    ansatz::AbstractAnsatz,
    observable::Observable,
    reference::QuantumState,
)

The partial derivative of a cost-function with respect to the i-th parameter in an ansatz.

The ansatz is assumed to apply a unitary rotation exp(-iθG), where G is the (Hermitian) generator, and generators with a lower index are applied to the state earlier. Ansatz sub-types may change both behaviors.

Parameters

  • index: the index of the parameter to calculate within ansatz
  • ansatz: the ADAPT state
  • H: the object defining the cost-function
  • ψ0: an initial quantum state which the ansatz operates on

Returns

  • a number of type typeof_energy(observable).
source
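As a dense-matrix sanity check of this sign convention: for a single-parameter ansatz |ψ(θ)⟩ = exp(-iθG)|ψ₀⟩, the partial is i⟨ψ|[G,H]|ψ⟩, which should match a finite difference. (G = Z, H = X below are arbitrary illustrative choices, not package defaults.)

```julia
using LinearAlgebra

X = ComplexF64[0 1; 1 0]
Z = ComplexF64[1 0; 0 -1]
G, H = Z, X                          # illustrative generator and observable
ψ0 = ComplexF64[1, 1] / √2           # |+⟩ reference, not an eigenstate of G
θ = 0.3

ψ = exp(-im * θ * G) * ψ0            # matrix-exponential evolution
analytic = real(im * ψ' * (G * H - H * G) * ψ)   # i⟨ψ|[G,H]|ψ⟩

E(t) = real((exp(-im * t * G) * ψ0)' * H * (exp(-im * t * G) * ψ0))
h = 1e-6
@assert abs(analytic - (E(θ + h) - E(θ - h)) / 2h) < 1e-6
```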
ADAPT.Basics.CallbacksModule
Callbacks

A suite of basic callbacks for essential functionality.

Explanation

The final argument of the adapt!, optimize! and run! methods calls for a vector of Callbacks. These are callable objects extending behavior at each iteration or adaptation (or both; see the AbstractCallback type documentation for more details).

The callback is passed a data object (aka. a Dict where the keys are Symbols like :energy or :scores), in addition to the ADAPT state and all the quantum objects. Callbacks may be as simple as displaying the data, or as involved as carefully modifying the quantum objects to satisfy some constraint.

Each callback in this module can be categorized as one of the following:

  1. Tracers: update the running trace with information passed in data
  2. Printers: display the information passed in data to the screen or to a file
  3. Stoppers: flag the ADAPT state as converged, based on some condition

In particular, Stoppers are the primary means of establishing convergence in Vanilla ADAPT. They do this by flagging the ADAPT state as converged, which signals to the run! function that it can stop looping once this round is done. Alternatively, though none of the basic callbacks in this module do so, you may implement a callback that returns true based on some condition. This signals an instant termination, regardless of convergence.

Just to reiterate, Stoppers are the primary means of establishing convergence. If you don't include any callbacks, the run! call may not terminate this century!

Callback Order

Callback order matters. Using the callbacks in this module, I recommend the order listed above (Tracers, then Printers, then Stoppers).

The first callback in the list gets dibs on mutating the trace or the ADAPT state, which could change the behavior of subsequent callbacks. For example, the basic Printer inspects the trace to infer the current iteration, so it naturally follows the Tracer (although the Printer knows to skip this part if there is no Tracer). Some Stoppers (eg. SlowStopper, FloorStopper) inspect the trace to decide whether energy has converged, so the "latest" energy should already be logged. Therefore, these too should follow the Tracer.

Please note that, because the callbacks are called prior to actually updating the ansatz, the Tracer will usually log one last round of updates which are not actually reflected in the ansatz. The only time this does not happen is when convergence is flagged by the protocol itself rather than a Stopper callback (eg. all scores are essentially zero), which is probably never. ^_^ This behavior seems fine, even desirable, to me, but if you'd like to avoid it, you could implement a Stopper which explicitly terminates by returning true (rather than merely flagging the ansatz as converged, like the basic Stoppers), and list that Stopper prior to the Tracer.

Standard keys

The actual keys used in the data argument are determined by the protocol, so you may design custom callbacks to make use of the data in your custom protocols.

However, for the sake of modularity, it is worth keeping keys standardized when possible. Here is a list of recommended keys.

Reserved keys

  • :iteration: iteration count over all optimizations
  • :adaptation: the iteration at which each adaptation occurred

These keys are not part of data but are used in the running trace.

Standard keys for adapt!

  • :scores: vector of scores for each pool operator
  • :selected_index: index in the pool of the operator ADAPT plans to add
  • :selected_generator: the actual generator object ADAPT plans to add
  • :selected_parameter: the parameter ADAPT plans to attach to the new generator

Protocols which add multiple generators in a single adaptation may still use these same keys, replacing the values with vectors.

Standard keys for optimize!

  • :energy: the result of evaluating the observable. Required for some Stoppers
  • :g_norm: the norm of the gradient vector (typically ∞ norm, aka. largest element)
  • :elapsed_iterations: the number of iterations of the present optimization run
  • :elapsed_time: time elapsed since starting the present optimization run
  • :elapsed_f_calls: number of function calls since starting the present optimization run
  • :elapsed_g_calls: number of gradient calls since starting the present optimization run
source
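As a concrete (hypothetical) example of the recommended ordering, a minimal callback list might look like the following sketch:

```julia
using ADAPT
using ADAPT.Basics.Callbacks: Tracer, Printer, ParameterStopper

callbacks = [
    Tracer(:energy, :scores),   # 1. Tracers: log data into the running trace
    Printer(:energy),           # 2. Printers: display progress
    ParameterStopper(20),       # 3. Stoppers: flag convergence at 20 parameters
]
# This vector is passed as the final argument of adapt!, optimize!, or run!.
```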
ADAPT.Basics.Callbacks.FloorStopperType
FloorStopper(threshold::Energy, floor::Energy)

Converge once the energy has gotten close enough to some target value.

Called for adapt! only. Requires a preceding Tracer(:energy).

Parameters

  • threshold: maximum energy difference before convergence
  • floor: the target value
source
ADAPT.Basics.Callbacks.ParameterPrinterType
ParameterPrinter(; io=stdout, adapt=true, optimize=false, ncol=8)

Print the current ansatz parameters as neatly and compactly as I can think to.

Parameters

  • io: the IO stream to print to
  • adapt: print parameters at each adaptation
  • optimize: print parameters at each optimization iteration
  • ncol: number of parameters to print in one line, before starting another
source
ADAPT.Basics.Callbacks.ParameterStopperType
ParameterStopper(n::Int)

Converge once the ansatz reaches a certain number of parameters.

Called for adapt! only.

Parameters

  • n: the minimum number of parameters required for convergence
source
ADAPT.Basics.Callbacks.ParameterTracerType
ParameterTracer()

Add the ansatz parameters to the running trace, under the key :parameters.

Only compatible when following a Tracer including :selected_index. This is no great handicap since the principal point of this is to be able to reconstruct an ansatz, and you'll need the :selected_index for that also. ;)

Parameters are stored in a matrix. Each column is associated with an angle in the ansatz (vanilla protocol sets the first column as the first parameter added to the ansatz and the first one applied to the reference state). Each row gives the optimized parameters for the corresponding ADAPT iteration.

The adapt callback is responsible for adding a new row (vanilla protocol is to initialize with the previously optimized parameters), and for padding previous rows with zeros. The optimization callback is responsible for keeping the last row updated with the currently-best parameters for the current choice of generators.

Standard practice is to include the ParameterTracer AFTER the regular Tracer, but BEFORE any ADAPT convergence Stoppers. Thus, the parameter matrix INCLUDES columns for the last-selected parameter(s). Standard practice for reconstructing an optimized ansatz of a converged trace is to look at the PENULTIMATE row.

Please note that the default implementation of this callback is unsuitable (or at least the matrix requires some post-processing) if the AdaptProtocol reorders parameters, or even simply inserts new parameters anywhere other than the end, or even (currently) if parameters aren't initialized to zero, or even (currently) if it adds more than one parameter at once. (NOTE: These last two are easily adjusted but will require a more complex trace precondition.) If you need a parameter tracer for such protocols, you'll need to dispatch to your own method.

source
ADAPT.Basics.Callbacks.PrinterType
Printer([io::IO=stdout,] keys::Symbol...)

Print selected data keys at each iteration or adaptation.

The keys arguments are passed in the same way as Tracer; see that method for some examples. Unlike Tracer, the first argument can be an IO object, which determines where the printing is done. By default, it is the standard output stream, ie. your console, or a file if you are redirecting output via >. The io argument allows you to explicitly write to a file, via Julia's open function.

If a key is not present in data, it is ignored. Thus, the same list of keys can be used for calls from both adapt! and optimize!, so long as the keys used by each do not overlap (overlapping keys should be avoided)!

The keys :iteration and :adaptation are treated specially. These keys will not appear directly in data, and they should not be included in keys. If the trace contains these keys (ie. if a Tracer callback was also included), they are used as "section headers". Otherwise, they are skipped.

source
ADAPT.Basics.Callbacks.SerializerType
Serializer(; ansatz_file="", trace_file="", on_adapt=false, on_iterate=false)

Serialize the current state so that it can be resumed more easily.

Please note that robust serialization depends heavily on version control; if the definition of a serialized type has changed since it was serialized, it is very, very difficult to recover. Thus, serialization of this nature should be considered somewhat transient and unreliable. It's good for restarting when your supercomputer crashes unexpectedly mid-job, but not so good for long-term archival purposes.

Parameters

  • ansatz_file: file to save ansatz in ("" will skip saving ansatz)
  • trace_file: file to save trace in ("" will skip saving trace)
  • on_adapt: whether to serialize on adaptations
  • on_iterate: whether to serialize in every optimization iteration
source
ADAPT.Basics.Callbacks.SlowStopperType
SlowStopper(threshold::Energy, n::Int)

Converge if all energies in the past n iterations are within a certain range.

Called for adapt! only. Requires a preceding Tracer(:energy).

Parameters

  • threshold: maximum energy range before convergence

  • n: number of recent adaptations to check

    This function will not flag convergence before at least n adaptations have occurred.

source
ADAPT.Basics.Callbacks.TracerType
Tracer(keys::Symbol...)

Add selected data keys at each iteration or adaptation to the running trace.

Examples

Tracer(:energy)

Including this callback in a run! call will fill the trace argument with the energy at each optimization iteration, as well as noting in which iteration each adaptation occurred. I cannot think of a circumstance when you will not want to trace at least this much.

Tracer(:energy, :scores)

This example shows the syntax to keep track of multiple data keys: just list them out as successive arguments of the same Tracer. Do NOT include multiple instances of Tracer in the same run, or you will record twice as many iterations as actually occurred! The ParameterTracer is a distinct type and is safe to use with Tracer.

Other Notes

If a key is not present in data, it is ignored. Thus, the same list of keys can be used for calls from both adapt! and optimize!, so long as the keys used by each do not overlap (overlapping keys should be avoided)!

The keys :iteration and :adaptation are treated specially. These keys will not appear directly in data, and they should not be included in keys.

The :iteration value will simply increment with each call from optimize!. The :adaptation value will be set to the most recent :iteration value.

I highly recommend including at minimum Tracer(:energy) with every single ADAPT run you ever do.

source
ADAPT.Basics.OperatorsModule
Operators

A suite of common operators, especially useful for constructing operator pools.

TODO: I haven't decided yet whether observables should live here or not. If they do, I'll want to standardize the interface somehow. In particular, the interface with pyscf for molecules is rather hazy. I think we need a separate package which is a Julia wrapper for openfermion. Then observables will generally be input as qubit operators from that package, or perhaps we have a simple method that converts qubit operators to PauliSums, so we have better control over the arithmetic being performed. In any case, though I may evict them someday, standard lattice systems like Hubbard and Heisenberg, not requiring openfermion, may inhabit this module for the time being.

source
ADAPT.Basics.Operators.hubbard_hamiltonianMethod
hubbard_jw(graph::Array{T,2}, U, t)

A Hubbard Hamiltonian in the Jordan-Wigner basis.

Copied shamelessly from Diksha's ACSE repository.

Parameters

  • graph: an adjacency matrix identifying couplings. Must be symmetric.
  • U: Coulomb interaction for all sites
  • t: hopping energy for all couplings

Returns

  • PauliOperators.PauliSum: the Hamiltonian
source
ADAPT.Basics.Operators.qubitexcitationMethod
qubitexcitation(n::Int, i::Int, k::Int)
qubitexcitation(n::Int, i::Int, j::Int, k::Int, l::Int)

Qubit excitation operators as defined in Yordanov et al. 2021.

Note that Yordanov's unitaries are defined as exp(iθG) rather than exp(-iθG), so variational parameters will be off by a sign.

Parameters

  • n: total number of qubits
  • i,j,k,l: qubit indices as defined in Yordanov's paper.

Returns

  • PauliOperators.ScaledPauliVector: the qubit excitation operator

    Note that all Pauli terms in any single qubit excitation operator commute, so the ScaledPauliVector representation is "safe".

source
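To illustrate the structure (not the package's exact coefficients or conventions), the single-excitation flavor of these generators is, up to sign and ordering conventions, G = (X_iY_k − Y_iX_k)/2, which annihilates |00⟩ and |11⟩ and rotates within span{|01⟩, |10⟩}:

```julia
using LinearAlgebra

X = ComplexF64[0 1; 1 0]
Y = ComplexF64[0 -im; im 0]
G = (kron(X, Y) - kron(Y, X)) / 2   # illustrative single-excitation generator

# Basis order |00⟩,|01⟩,|10⟩,|11⟩ — G leaves |00⟩ untouched...
@assert norm(G * ComplexF64[1, 0, 0, 0]) ≈ 0
# ...while exp(-iθG) transfers amplitude from |01⟩ to |10⟩:
θ = 0.4
ψ = exp(-im * θ * G) * ComplexF64[0, 1, 0, 0]
@assert abs(ψ[3]) > 0
```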
ADAPT.Basics.Pools.fullpauliMethod
fullpauli(n::Int)

The pool of all (4^n) n-qubit Pauli operators.

Parameters

  • n: Number of qubits in the system

Returns

  • pool: the full pauli pool.
source
ADAPT.Basics.Pools.one_local_poolFunction
one_local_pool(n::Int64, axes=["I","X","Y","Z"])

Returns the one-local pool containing each one-local operator on n qubits.

Parameters

  • n: Number of qubits in the system

Returns

  • pool: the one-local pool.
source
ADAPT.Basics.Pools.oneandtwo_local_poolMethod
oneandtwo_local_pool(n::Int64)

Returns the union of the one-local and two-local pools on n qubits.

Parameters

  • n: Number of qubits in the system

Returns

  • pool: union of one-local and two-local pools.
source
ADAPT.Basics.Pools.qubitadaptpoolMethod
qubitadaptpool(n_system::Int)

Returns the qubit ADAPT pool on n_system qubits as defined in PRX QUANTUM 2, 020310 (2021). It is generated by taking each qubit-excitation-based operator and breaking it into individual Pauli terms.

Parameters

  • n_system: Number of qubits in the system

Returns

  • pool: the qubit-ADAPT pool.
source
ADAPT.Basics.Pools.qubitexcitationMethod
qubitexcitation(n::Int, i::Int, k::Int)
qubitexcitation(n::Int, i::Int, j::Int, k::Int, l::Int)

Qubit excitation operators as defined in Yordanov et al. 2021.

Note that Yordanov's unitaries are defined as exp(iθG) rather than exp(-iθG), so variational parameters will be off by a sign.

Parameters

  • n: total number of qubits
  • i,j,k,l: qubit indices as defined in Yordanov's paper.

Returns

  • PauliOperators.ScaledPauliVector: the qubit excitation operator

    Note that all Pauli terms in any single qubit excitation operator commute, so the ScaledPauliVector representation is "safe".

source
ADAPT.Basics.Pools.qubitexcitationpoolMethod
qubitexcitationpool(n_system::Int)

The number of single excitations is binomial(n, 2), and the number of double excitations is 3*binomial(n, 4).

Parameters

  • n_system: Number of qubits in the system

Returns

  • pool: the qubit-excitation-based pool as defined in Communications Physics 4, 1 (2021).
  • target_and_source: Dict mapping each pool operator to the target and source orbitals involved in the excitation.
source
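A quick sanity check of these counts (pure Julia, illustrative):

```julia
n = 6
nsingles = binomial(n, 2)       # one single excitation per qubit pair
ndoubles = 3 * binomial(n, 4)   # three pairings for each 4-qubit subset
poolsize = nsingles + ndoubles  # 15 + 45 = 60 operators for n = 6
```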
ADAPT.Basics.Pools.qubitexcitationpool_complementedMethod
qubitexcitationpool_complemented(n_system::Int)

Returns the complemented qubit excitation pool on n_system qubits, inspired from arXiv 2109.01318.

Parameters

  • n_system: Number of qubits in the system

Returns

  • pool: the complemented qubit-excitation-based pool.
  • target_and_source: Dict mapping each pool operator to the target and source orbitals involved in the excitation.
source
ADAPT.Basics.Pools.tile_operatorsMethod
tile_operators(L1::Int, L2::Int, chosen_operators::Vector{Vector{ScaledPauli{N}}}, PBCs)

Constructs the tiled operators for a system of L2 qubits, given a set of operators defined for a smaller problem instance on L1 qubits.

Parameters

  • L1: number of qubits for small problem instance
  • L2: number of qubits for large problem instance
  • chosen_operators: list of operators for small problem instance
  • PBCs: periodic boundary conditions

Returns

  • tiled_ops: tiled operators as a Vector{Vector{ScaledPauli}}
source
ADAPT.Basics.Pools.two_local_poolFunction
two_local_pool(n::Int64, axes=["X","Y","Z"])

Returns the two-local pool containing each two-local operator on n qubits.

Parameters

  • n: Number of qubits in the system

Returns

  • pool: the two-local pool.
source

Other Modules

Base.MatrixMethod
Matrix(infidelity)

Convert an infidelity to a matrix.

This implementation assumes:

  • The target state infidelity.Φ can be cast to a vector.
  • The reference state in evaluate(infidelity, reference) is always normalized.
source
ADAPT.Degenerate_ADAPT.DegenerateADAPTType
DegenerateADAPT

Score pool operators by their initial gradients if they were to be appended to the ansatz. Equivalently, score pool operators by the expectation value of the commutator of the pool operator with the observable. In the case where the largest scores (gradients) are degenerate between multiple pool operators, choose the operator to append to the ansatz randomly.

source
ADAPT.TETRIS_ADAPT.TETRISADAPTType
TETRISADAPT

Score pool operators by their initial gradients if they were to be appended to the ansatz. TETRIS-ADAPT is a modified version of ADAPT-VQE in which multiple operators with disjoint support are added to the ansatz at each iteration. They are chosen by selecting from operators ordered in decreasing magnitude of gradients.

source
ADAPT.ADAPT_QAOA.DiagonalQAOAAnsatzType
DiagonalQAOAAnsatz{F<:Parameter,G<:Generator}(
    observable::QAOAObservable,
    γ0::F,
    generators::Vector{G},
    β_parameters::Vector{F},
    γ_parameters::Vector{F},
    optimized::Bool,
    converged::Bool,
)

An ADAPT state suitable for ADAPT-QAOA. The standard ADAPT generators are interspersed with the observable itself.

Type Parameters

  • F: the number type for the parameters (usually Float64 is appropriate).
  • G: the generator type.

Parameter

  • observable: the observable, which is interspersed with generators when evolving
  • γ0: initial coefficient of the observable, whenever a new generator is added
  • generators: list of current generators (i.e. mixers)
  • β_parameters: list of current generator coefficients
  • γ_parameters: list of current observable coefficients
  • optimized: whether the current parameters are flagged as optimal
  • converged: whether the current generators are flagged as converged
source
ADAPT.ADAPT_QAOA.DiagonalQAOAAnsatzMethod
DiagonalQAOAAnsatz(γ0, pool, observable)

Convenience constructor for initializing an empty ansatz.

Parameters

  • γ0
  • pool
  • observable

Note that the observable must be a QAOAObservable.

source
ADAPT.ADAPT_QAOA.PlasticQAOAAnsatzType
PlasticQAOAAnsatz{F<:Parameter,G<:Generator}(
    observable::QAOAObservable,
    γ0::F,
    generators::Vector{G},
    β_parameters::Vector{F},
    γ_parameters::Vector{F},
    optimized::Bool,
    converged::Bool,
)

An ADAPT state suitable for ADAPT-QAOA. The standard ADAPT generators are interspersed with the observable itself.

The only difference between PlasticQAOAAnsatz and DiagonalQAOAAnsatz is that the latter initializes every new γ value to γ0, while the former initializes every new γ value to that of the previous layer, using γ0 only for the first round of optimization.

Type Parameters

  • F: the number type for the parameters (usually Float64 is appropriate).
  • G: the generator type.

Parameter

  • observable: the observable, which is interspersed with generators when evolving
  • γ0: initial coefficient of the observable, whenever a new generator is added
  • generators: list of current generators (i.e. mixers)
  • β_parameters: list of current generator coefficients
  • γ_parameters: list of current observable coefficients
  • optimized: whether the current parameters are flagged as optimal
  • converged: whether the current generators are flagged as converged
source
ADAPT.ADAPT_QAOA.PlasticQAOAAnsatzMethod
PlasticQAOAAnsatz(γ0, pool, observable)

Convenience constructor for initializing an empty ansatz.

Parameters

  • γ0
  • pool
  • observable

Note that the observable must be a QAOAObservable.

source
ADAPT.ADAPT_QAOA.QAOAAnsatzType
QAOAAnsatz{F<:Parameter,G<:Generator}(
    observable::G,
    γ0::F,
    generators::Vector{G},
    β_parameters::Vector{F},
    γ_parameters::Vector{F},
    optimized::Bool,
    converged::Bool,
)

An ADAPT state suitable for ADAPT-QAOA. The standard ADAPT generators are interspersed with the observable itself.

Type Parameters

  • F: the number type for the parameters (usually Float64 is appropriate).
  • G: the generator type. Uniquely for QAOA, G must ALSO be a valid Observable type.

Parameter

  • observable: the observable, which is interspersed with generators when evolving
  • γ0: initial coefficient of the observable, whenever a new generator is added
  • generators: list of current generators (i.e. mixers)
  • β_parameters: list of current generator coefficients
  • γ_parameters: list of current observable coefficients
  • optimized: whether the current parameters are flagged as optimal
  • converged: whether the current generators are flagged as converged
source
ADAPT.ADAPT_QAOA.QAOAAnsatzMethod
Ansatz(γ0, observable)

Convenience constructor for initializing an empty ansatz.

Parameters

  • γ0
  • observable

Note that, uniquely for QAOA, the observable and the pool operators must be of the same type.

source
ADAPT.ADAPT_QAOA.QAOAObservableType
QAOAObservable(spv::ScaledPauliVector)

Wrap a ScaledPauliVector observable in a view that assumes each element is diagonal, allowing for more memory-efficient state evolution.

The constructor throws an error if any element sp of spv has sp.pauli.x != 0.

source
ADAPT.ADAPT_QAOA.__make__costateMethod

Carbon copy of the usual costate function with Pauli operators.

The only difference is that we don't copy in the method defining special behavior for ScaledPauliVectors in this namespace, so those will be treated as though they consist only of commuting terms.

source
ADAPT.gradient!Method

Carbon copy of the usual gradient with Pauli operators.

The only difference is which __make__costate function is getting called.

source
ADAPT.ADAPT_QAOA.QAOApools.qaoa_double_opsMethod
qaoa_double_ops(n::Int64)

Returns the pool containing two-qubit Paulis respecting bit-flip symmetry.

Parameters

  • n: Number of qubits

Returns

  • pool: pool containing symmetric two-qubit Paulis
source
ADAPT.Hamiltonians.get_unweighted_maxcutMethod
get_unweighted_maxcut(g::Graphs.SimpleGraph)

Take a graph object and extract edges for MaxCut.

Parameters

  • g: graph instance.

Returns

  • edge_list: list of edges, with all weights set to one.
source
ADAPT.Hamiltonians.get_weighted_maxcutFunction
get_weighted_maxcut(g::Graphs.SimpleGraph, rng = _DEFAULT_RNG)

Take a graph object and extract edges and assign edge weights.

Parameters

  • g: graph instance.
  • rng: random number generator to generate weights.

Returns

  • edge_list: list of edges and weights.
source
ADAPT.Hamiltonians.hubbard_hamiltonianMethod
hubbard_jw(graph::Array{T,2}, U, t)

A Hubbard Hamiltonian in the Jordan-Wigner basis.

Copied shamelessly from Diksha's ACSE repository.

Parameters

  • graph: an adjacency matrix identifying couplings. Must be symmetric.
  • U: Coulomb interaction for all sites
  • t: hopping energy for all couplings

Returns

  • PauliOperators.PauliSum: the Hamiltonian
source
ADAPT.Hamiltonians.maxcut_hamiltonianMethod
maxcut_hamiltonian(V::Int, Edges::Vector{Tuple{Int,Int,T}}) where T<:Real

A MaxCut Hamiltonian defined on a graph containing only Pauli ZZ terms.

Parameters

  • V: number of vertices.
  • Edges: list of edges, in the form of (first index, second index, weight).

Returns

  • H: MaxCut Hamiltonian
source
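Since the Hamiltonian contains only ZZ terms, it is diagonal in the computational basis, and (up to sign and offset conventions) its extremal eigenvalue encodes the maximum cut. A brute-force classical check of the underlying cost function, with no Pauli machinery:

```julia
# Weight of the cut defined by a 0/1 assignment s of vertices to partitions.
function cut_value(edges, s)
    total = 0.0
    for (i, j, w) in edges
        s[i] != s[j] && (total += w)
    end
    return total
end

edges = [(1, 2, 1.0), (2, 3, 1.0), (1, 3, 1.0)]   # triangle, unit weights
V = 3
bits(b) = [(b >> (i - 1)) & 1 for i in 1:V]
best = maximum(cut_value(edges, bits(b)) for b in 0:2^V-1)
@assert best == 2.0    # the best cut of a triangle severs two edges
```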
ADAPT.Hamiltonians.xyz_modelMethod
xyz_model(L::Int, Jx::Float, Jy::Float, Jz::Float, PBCs::Bool)

An XYZ Heisenberg Hamiltonian.

Parameters

  • L: system size.
  • Jx: coupling along X.
  • Jy: coupling along Y.
  • Jz: coupling along Z.
  • PBCs: Periodic Boundary Conditions

Returns

  • PauliOperators.PauliSum: the Hamiltonian
source
ADAPT.Hamiltonians.MaxCut.random_regular_max_cut_hamiltonianMethod
random_regular_max_cut_hamiltonian(n::Int, k::Int; rng = _DEFAULT_RNG, weighted = true)

Return a random Hamiltonian for a max cut problem on n qubits.

The corresponding graph is degree k. If an RNG is provided, this will be used to sample the graph and edge weights. If weighted is true, the edge weights will be randomly sampled from the uniform distribution U(0,1).

source

MyPauliOperators

These methods should not be considered part of "ADAPT", but rather, destined for the PauliOperators.jl package. The only reason I document them here is that the doc builder is configured to throw an error if any doc strings aren't included in the documentation...

ADAPT.Basics.MyPauliOperators.cis!Method

TODO: VERY SPECIFICALLY ASSERT that pauli xz=00 is to be interpreted as I, pauli xz=10 is to be interpreted as X, pauli xz=01 is to be interpreted as Z, and pauli xz=11 is to be interpreted as Y, despite the last usually being interpreted as iY. Also clear this definition with Nick before putting it in his package...

source
ADAPT.Basics.MyPauliOperators.measure_commutatorMethod
measure_commutator(
    A::AnyPauli,
    B::AnyPauli,
    Ψ::Union{SparseKetBasis,AbstractVector},
)

Calculate the expectation value of the commutator, ie. ⟨Ψ|[A,B]|Ψ⟩.

TODO: There could be a place for this in PauliOperators, but it would need to be carefully fleshed out type by type. A and B needn't be Hermitian in general (though I assume they are here), so my intuition is rather lacking.

source
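A dense-matrix reference for what this computes, with hypothetical one-qubit operators: for A = X, B = Y, we have [X,Y] = 2iZ, so ⟨0|[A,B]|0⟩ = 2i.

```julia
using LinearAlgebra

X = ComplexF64[0 1; 1 0]
Y = ComplexF64[0 -im; im 0]
ψ = ComplexF64[1, 0]             # |0⟩
expval = ψ' * (X*Y - Y*X) * ψ    # brute-force ⟨Ψ|[A,B]|Ψ⟩
@assert expval ≈ 2im             # since [X,Y] = 2iZ and ⟨0|Z|0⟩ = 1
```

Note the result is purely imaginary here, consistent with the commutator of two Hermitian operators being anti-Hermitian.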
Base.:*Method

Cross-type multiplication. Best to discourage ever doing this operation. Needed for a lazy commutator, but not necessarily needed long-term. We'll return a PauliSum for now.

source
Base.:*Method

Of course this one is missing... ^_^ Note strict typing in out, because Paulis themselves are strictly typed.

source
Base.adjointMethod

TODO: Consult with Nick before adding this definition to PauliOperators.

I hesitate for two reasons:

  1. It is not "lazy". It allocates a new array. Not unprecedented but not ideal. Not sure the proper way to make it lazy.

  2. Column vector adjoint should properly be a row vector, rather than reversed. Can't think of why we'd ever use ScaledPauliVector as a column vector, but its data type is so, properly.

But, this definition achieves desired polymorphism in evolving by ScaledPauliVector, so if Nick okays it, I'm happy with it. The alternative is a dedicated unevolve function with a tedious special case for unevolving ansatze whose generators are ScaledPauliVector...

source
Base.adjointMethod

TODO: This adjoint is not strictly "lazy". But I don't think anyone will care.

source