Trainable Encoding
Alternates data-dependent and trainable rotation layers for task-specific, optimizable encoding.
Qubits
4
Depth
6
Total Gates
22
Simulability
Not simulable
Mathematical Formulation
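No formula survived here, so the following is a hedged reconstruction from the description below: per layer l, each qubit receives a data rotation followed by a trainable rotation, and an entangling block E_l is applied after each pair (rotation axis and entangler depend on configuration).

```latex
U(\mathbf{x}, \boldsymbol{\theta})
  = \prod_{l=1}^{L} \Big[\, E_l \cdot \bigotimes_{i=1}^{n} R_t(\theta_{l,i})\, R_d(x_i) \,\Big],
\qquad
|\psi(\mathbf{x}, \boldsymbol{\theta})\rangle
  = U(\mathbf{x}, \boldsymbol{\theta})\, |0\rangle^{\otimes n}
```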
Description
Trainable encoding interleaves data-dependent rotation gates with trainable (learnable) rotation gates, creating an encoding that can be optimized for specific downstream tasks. Each layer applies a data rotation R_d(x_i) followed by a trainable rotation R_t(θ_i), where θ_i are parameters learned through gradient-based optimization.
This architecture bridges the gap between fixed feature maps and fully parameterized quantum neural networks. The trainable parameters allow the encoding to adapt its feature representation to the specific classification or regression task, potentially discovering more effective data embeddings than hand-designed feature maps.
Multiple parameter initialization strategies are supported: Xavier, He, zeros, random, and small_random. The entanglement structure (linear, circular, full, or none) is applied after each data-trainable pair. The trainable parameters add expressibility beyond what data-dependent gates alone provide, but require an optimization budget to train.
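The five initialization strategies named above can be sketched as follows. This is a hypothetical helper, not the atlas API; the library's actual fan-in conventions and scales may differ.

```python
import numpy as np

def init_params(n_layers, n_qubits, strategy="xavier", rng=None):
    """Sketch of the named initialization strategies for a (n_layers, n_qubits) parameter grid."""
    rng = np.random.default_rng() if rng is None else rng
    shape = (n_layers, n_qubits)
    fan = n_qubits  # rough fan-in/fan-out proxy for a rotation layer (assumption)
    if strategy == "xavier":
        # Glorot uniform: limit = sqrt(6 / (fan_in + fan_out))
        limit = np.sqrt(6.0 / (2 * fan))
        return rng.uniform(-limit, limit, shape)
    if strategy == "he":
        # He normal: std = sqrt(2 / fan_in)
        return rng.normal(0.0, np.sqrt(2.0 / fan), shape)
    if strategy == "zeros":
        return np.zeros(shape)
    if strategy == "random":
        # Uniform over the full rotation range
        return rng.uniform(0.0, 2 * np.pi, shape)
    if strategy == "small_random":
        # Near-zero start, often used to mitigate barren plateaus
        return rng.normal(0.0, 0.01, shape)
    raise ValueError(f"unknown strategy: {strategy}")
```

Zeros and small_random start the trainable layer near the identity, so the circuit initially behaves like a plain data-dependent feature map and drifts away only as training demands.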
Circuit Diagram
Property Radar
Properties
Resource Scaling
How resource requirements grow with the number of input features.
| Features | Qubits | Depth | Gates | 2Q Gates |
|---|---|---|---|---|
| 2 | 2 | 6 | 10 | 2 |
| 4 | 4 | 6 | 22 | 6 |
| 8 | 8 | 6 | 46 | 14 |
| 16 | 16 | 6 | 94 | 30 |
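The table's columns follow simple closed forms. The sketch below reproduces them under the assumption of two encoding layers, two rotations (data + trainable) per qubit per layer, and one linear-entanglement CNOT chain per layer.

```python
def resources(n_features, n_layers=2):
    """Gate counts matching the resource-scaling table (linear entanglement assumed)."""
    one_qubit = 2 * n_features * n_layers    # one data + one trainable rotation per qubit, per layer
    two_qubit = (n_features - 1) * n_layers  # CNOT chain over n-1 neighbor pairs, per layer
    return one_qubit + two_qubit, two_qubit

for n in (2, 4, 8, 16):
    total, twoq = resources(n)
    print(n, total, twoq)  # matches the Gates and 2Q Gates columns
```

Both counts grow linearly in the number of features, while depth stays constant at 6 (rotation, rotation, entangler, twice over).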
Code Examples
Trainable encoding with PennyLane using Xavier initialization.
```python
from encoding_atlas import TrainableEncoding
import pennylane as qml
import numpy as np

# 4-feature encoding, 2 layers, Xavier-initialized trainable parameters
enc = TrainableEncoding(n_features=4, n_layers=2, initialization="xavier")
dev = qml.device("default.qubit", wires=enc.n_qubits)

@qml.qnode(dev)
def circuit(x):
    enc.get_circuit(x, backend="pennylane")  # data + trainable rotation layers
    return qml.state()

x = np.array([0.1, 0.5, 1.2, 2.3])
state = circuit(x)
```
When to Use This Encoding
- Task-specific quantum feature map optimization
- Quantum neural networks (QNNs) with learnable encoding
- Transfer learning in quantum ML (pre-train encoding, fine-tune classifier)
- Encoding optimization for specific kernel alignment
- Research into optimal encoding strategies
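To illustrate why the trainable rotations admit gradient-based optimization, here is a self-contained single-qubit toy (NumPy only, not the atlas API): one data-trainable pair R_t(θ)R_d(x)|0⟩, with the exact gradient of ⟨Z⟩ obtained via the parameter-shift rule.

```python
import numpy as np

def ry(a):
    """Single-qubit Ry rotation matrix."""
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]])

def expect_z(x, theta):
    """One data-trainable pair: <Z> of R_t(theta) R_d(x) |0>, which equals cos(x + theta)."""
    psi = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return psi[0] ** 2 - psi[1] ** 2

def grad_theta(x, theta):
    """Parameter-shift rule: exact gradient of <Z> with respect to theta."""
    s = np.pi / 2
    return 0.5 * (expect_z(x, theta + s) - expect_z(x, theta - s))
```

The two shifted evaluations give exactly d/dθ cos(x + θ) = −sin(x + θ); the same rule extends gate-by-gate to the full multi-qubit encoding, which is what makes the θ_i trainable on hardware as well as in simulation.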
Pros & Cons
Advantages
- Adapts encoding to specific tasks through trainable parameters
- Multiple initialization strategies for different optimization landscapes
- Good trainability with moderate layer count
- Flexible entanglement topologies
- Bridges fixed feature maps and fully variational circuits
Limitations
- Requires optimization budget to train parameters
- Risk of overfitting with too many trainable parameters
- Requires explicit opt-in (requires_trainable=true in the guide)
- Added circuit complexity from dual rotation layers
- Barren plateau risk increases with layer count >8
References
- [1] Benedetti, M., et al. (2019). Parameterized quantum circuits as machine learning models. Quantum Science and Technology, 4(4), 043001.
- [2] Mitarai, K., et al. (2018). Quantum circuit learning. Physical Review A, 98(3), 032309.
- [3] Schuld, M., & Killoran, N. (2019). Quantum Machine Learning in Feature Hilbert Spaces. Physical Review Letters, 122(4), 040504.