Variational

Trainable Encoding

Alternates data-dependent and trainable rotation layers for task-specific, optimizable encoding.

Qubits

4

Depth

6

Total Gates

22

Simulability

Not simulable

Mathematical Formulation

|\psi(\mathbf{x}, \boldsymbol{\theta})\rangle = \prod_{l=1}^{L} \left[ U_{\text{ent}} \cdot \bigotimes_i R_t(\theta_{l,i})\, R_d(x_i) \right] |0\rangle^{\otimes n}

Description

Trainable encoding interleaves data-dependent rotation gates with trainable (learnable) rotation gates, producing an encoding that can be optimized for a specific downstream task. In each layer, every qubit receives a data rotation R_d(x_i) followed by a trainable rotation R_t(θ_{l,i}), where the θ_{l,i} are parameters learned through gradient-based optimization.

This architecture bridges the gap between fixed feature maps and fully parameterized quantum neural networks. The trainable parameters allow the encoding to adapt its feature representation to the specific classification or regression task, potentially discovering more effective data embeddings than hand-designed feature maps.

Multiple parameter initialization strategies are supported: Xavier, He, zeros, random, and small_random. The entanglement structure (linear, circular, full, or none) is applied after each data-trainable pair. The trainable parameters add expressibility beyond what data-dependent gates alone provide, but require an optimization budget to train.
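The listed initialization strategies can be sketched with NumPy as follows; the exact formulas used by encoding_atlas (in particular how fan-in is defined for Xavier and He) are assumptions here:

```python
import numpy as np

def init_params(n_layers, n_qubits, strategy="xavier", seed=0):
    """Sketch of the listed initialization strategies (formulas assumed)."""
    rng = np.random.default_rng(seed)
    shape = (n_layers, n_qubits)
    fan = n_qubits  # treat the qubit count as fan-in/fan-out (assumption)
    if strategy == "xavier":
        limit = np.sqrt(6.0 / (fan + fan))          # Xavier uniform bound
        return rng.uniform(-limit, limit, shape)
    if strategy == "he":
        return rng.normal(0.0, np.sqrt(2.0 / fan), shape)
    if strategy == "zeros":
        return np.zeros(shape)
    if strategy == "random":
        return rng.uniform(-np.pi, np.pi, shape)    # full rotation range
    if strategy == "small_random":
        return rng.normal(0.0, 0.1, shape)          # near-identity start
    raise ValueError(f"unknown strategy: {strategy}")

theta = init_params(2, 4, "xavier")  # (2 layers x 4 qubits) = 8 parameters
```

The 2 × 4 shape matches the 8 parameters listed in the property table; "zeros" and "small_random" start the circuit near the identity, which tends to give a gentler optimization landscape at the cost of initial expressibility.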

Circuit Diagram

Property Radar

Radar axes: Expressibility, Entanglement Capability, Trainability (0.79), Noise Resilience.

Properties

| Property | Value |
| --- | --- |
| Qubits | 4 |
| Circuit Depth | 6 |
| Total Gates | 22 |
| Single-Qubit Gates | 16 |
| Two-Qubit Gates | 6 |
| Parameters | 8 |
| Entangling | Yes |
| Simulability | Not simulable |

Resource Scaling

How resource requirements grow with the number of input features.

| Features | Qubits | Depth | Gates | 2Q Gates |
| --- | --- | --- | --- | --- |
| 2 | 2 | 6 | 10 | 2 |
| 4 | 4 | 6 | 22 | 6 |
| 8 | 8 | 6 | 46 | 14 |
| 16 | 16 | 6 | 94 | 30 |
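The gate counts in the table are consistent with simple closed forms for a 2-layer circuit with one linear CNOT chain per layer. This is an inference from the numbers, not a documented formula:

```python
def resource_scaling(n_features, n_layers=2):
    # One data rotation + one trainable rotation per qubit per layer,
    # plus an (n-1)-CNOT linear entangling chain per layer (assumed topology).
    n_qubits = n_features
    single_qubit = 2 * n_layers * n_qubits
    two_qubit = n_layers * (n_qubits - 1)
    return n_qubits, single_qubit + two_qubit, two_qubit

for n in (2, 4, 8, 16):
    qubits, gates, twoq = resource_scaling(n)
    print(f"{n} features: {qubits} qubits, {gates} gates, {twoq} two-qubit gates")
```

Total gates grow as 6n − 2 in the number of features n, i.e. linearly, while the reported depth stays fixed at 6.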

Code Examples

Trainable encoding with PennyLane using Xavier initialization.

```python
from encoding_atlas import TrainableEncoding
import pennylane as qml
import numpy as np

# Build a 4-feature, 2-layer trainable encoding with Xavier-initialized parameters.
enc = TrainableEncoding(n_features=4, n_layers=2, initialization="xavier")
dev = qml.device("default.qubit", wires=enc.n_qubits)

@qml.qnode(dev)
def circuit(x):
    enc.get_circuit(x, backend="pennylane")
    return qml.state()

x = np.array([0.1, 0.5, 1.2, 2.3])
state = circuit(x)
```

When to Use This Encoding

  • Task-specific quantum feature map optimization
  • Quantum neural networks (QNNs) with learnable encoding
  • Transfer learning in quantum ML (pre-train encoding, fine-tune classifier)
  • Encoding optimization for specific kernel alignment
  • Research into optimal encoding strategies

Pros & Cons

Advantages

  • Adapts encoding to specific tasks through trainable parameters
  • Multiple initialization strategies for different optimization landscapes
  • Good trainability with moderate layer count
  • Flexible entanglement topologies
  • Bridges fixed feature maps and fully variational circuits

Limitations

  • Requires optimization budget to train parameters
  • Risk of overfitting with too many trainable parameters
  • Must be enabled explicitly (flagged as requires_trainable=true in the selection guide)
  • Added circuit complexity from the dual rotation layers
  • Barren-plateau risk grows as the layer count exceeds ~8

References

  1. Benedetti, M., et al. (2019). Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4), 043001.
  2. Mitarai, K., et al. (2018). Quantum circuit learning. Physical Review A, 98(3), 032309.
  3. Schuld, M., & Killoran, N. (2019). Quantum machine learning in feature Hilbert spaces. Physical Review Letters, 122(4), 040504.