Autoencoding Probabilistic Circuits (APC)¶
APC combines a tractable probabilistic-circuit encoder over joint variables \((X, Z)\) with a pluggable decoder and trains with a hybrid objective that weights reconstruction (`rec`), latent-KL (`kld`), and joint negative log-likelihood (`nll`) terms.
Reference¶
APC is described in:
Status Note¶
APC inference APIs remain available (encode/decode/sampling/likelihood). Latent-statistic extraction and KL-style training helpers are currently unsupported.
Main Components¶
Configuration¶
- class spflow.zoo.apc.config.ApcConfig(latent_dim, rec_loss='mse', n_bits=8, sample_tau=1.0, loss_weights=<factory>)[source]¶
Bases: `object`
Core APC model configuration.
- latent_dim¶
Dimensionality of the latent variable block `Z`.
- rec_loss¶
Reconstruction criterion used by `AutoencodingPC`.
- n_bits¶
Bit-depth used by reference-style image reconstruction scaling.
- sample_tau¶
Temperature for differentiable sampling (SIMPLE/Gumbel style paths).
- loss_weights¶
Weights for the `rec`, `kld`, and `nll` objective terms.
- loss_weights: ApcLossWeights¶
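The `sample_tau` temperature controls how sharp the relaxed sampling distribution is. As a rough, library-independent sketch of the Gumbel-softmax idea (the actual SIMPLE/Gumbel code paths in spflow may differ in detail):

```python
import math
import random

def gumbel_softmax(logits, tau, rng):
    """Relaxed one-hot draw: softmax((logits + Gumbel noise) / tau).

    Smaller tau pushes the sample toward a hard one-hot vector;
    larger tau smooths it toward the softmax of the logits.
    """
    gumbels = [-math.log(-math.log(rng.random())) for _ in logits]
    scaled = [(l + g) / tau for l, g in zip(logits, gumbels)]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Same noise, two temperatures: lower tau gives a more peaked sample.
soft = gumbel_softmax([2.0, 0.5, -1.0], tau=1.0, rng=random.Random(0))
sharp = gumbel_softmax([2.0, 0.5, -1.0], tau=0.1, rng=random.Random(0))
print(soft, sharp)
```

Both outputs are valid probability vectors; the `tau=0.1` draw concentrates almost all mass on one entry.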
- class spflow.zoo.apc.config.ApcLossWeights(rec=1.0, kld=1.0, nll=1.0)[source]¶
Bases: `object`
Weights for the APC training objective terms.
- rec¶
Weight for the reconstruction loss.
- kld¶
Weight for the latent KL term.
- nll¶
Weight for the joint negative log-likelihood term.
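Taken together, these weights scale the three objective terms into one scalar loss. A minimal sketch of that combination, using a stand-in dataclass that mirrors the documented fields (the exact reduction performed by `AutoencodingPC.loss_components` may differ):

```python
from dataclasses import dataclass

@dataclass
class ApcLossWeights:
    # Stand-in mirroring the documented fields; the real class
    # lives in spflow.zoo.apc.config.
    rec: float = 1.0
    kld: float = 1.0
    nll: float = 1.0

def total_loss(rec, kld, nll, w):
    """Weighted sum of reconstruction, latent-KL, and joint-NLL terms."""
    return w.rec * rec + w.kld * kld + w.nll * nll

w = ApcLossWeights(rec=1.0, kld=0.1, nll=1.0)
loss = total_loss(rec=0.5, kld=2.0, nll=3.0, w=w)
print(loss)  # 0.5 + 0.2 + 3.0 = 3.7
```

Down-weighting `kld` (as in the minimal example below, `kld=0.1`) trades latent regularization against reconstruction and likelihood fit.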
- class spflow.zoo.apc.config.ApcTrainConfig(epochs=1, batch_size=64, learning_rate=0.001, weight_decay=0.0, grad_clip_norm=None)[source]¶
Bases: `object`
Configuration for lightweight APC trainer helpers.
- epochs¶
Number of training epochs.
- batch_size¶
Batch size used for tensor-backed training/evaluation inputs.
- learning_rate¶
Optimizer learning rate when an optimizer is not provided.
- weight_decay¶
Adam weight decay when an optimizer is not provided.
- grad_clip_norm¶
Optional gradient clipping threshold (L2 norm).
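Clipping by global L2 norm rescales all gradients jointly when their combined norm exceeds the threshold. The trainer presumably delegates to a framework utility such as `torch.nn.utils.clip_grad_norm_`; this pure-Python sketch only illustrates the math:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Scale gradients so their joint L2 norm is at most max_norm.

    Gradients below the threshold are returned unchanged; above it,
    all entries are scaled by the same factor, preserving direction.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm <= max_norm:
        return list(grads)
    scale = max_norm / total_norm
    return [g * scale for g in grads]

grads = [3.0, 4.0]  # global norm 5.0
clipped = clip_by_global_norm(grads, max_norm=1.0)
print(clipped)  # [0.6, 0.8], norm 1.0
```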
Model¶
- class spflow.zoo.apc.model.AutoencodingPC(encoder, decoder, config)[source]¶
Bases: `Module`
APC model combining a tractable encoder and an optional decoder.
If `decoder` is `None`, decoding is delegated to the encoder's evidence-conditioned `decode` method.
- decode(z, *, mpe=False, tau=None)[source]¶
Decode latents into reconstructions/samples in data space.
- forward(x)[source]¶
Alias for `loss_components()` to integrate with training loops.
- joint_log_likelihood(x, z)[source]¶
Compute the encoder joint log-likelihood `log p(x, z)` per sample.
- Return type:
- log_likelihood_x(x)[source]¶
Compute the encoder marginal log-likelihood `log p(x)` per sample.
- Return type:
- reconstruct(x, *, mpe=False, tau=None)[source]¶
Reconstruct `x` by encoding to `z` and decoding back to data space.
- Return type:
Decoders¶
- class spflow.zoo.apc.decoders.MLPDecoder1D(latent_dim, output_dim, hidden_dims=(256, 256), out_activation='identity')[source]¶
Bases: `Module`
MLP decoder mapping latent vectors to flat feature vectors.
The module expects latent input shaped `(B, latent_dim)` (or reshape-compatible) and returns reconstructed vectors shaped `(B, output_dim)`.
- class spflow.zoo.apc.decoders.ConvDecoder2D(latent_dim, output_shape, base_channels=128, num_upsamples=2, out_activation='identity')[source]¶
Bases: `Module`
Convolutional decoder mapping latent vectors to image-shaped outputs.
The decoder projects `z` to a coarse feature map, upsamples through small convolutional blocks, and resizes to the exact configured output image size.
- __init__(latent_dim, output_shape, base_channels=128, num_upsamples=2, out_activation='identity')[source]¶
Initialize a convolutional image decoder.
- Parameters:
  - latent_dim (`int`) – Size of the latent representation.
  - output_shape (`tuple[int, int, int]`) – Target output shape `(channels, height, width)`.
  - base_channels (`int`) – Initial projected channel count.
  - num_upsamples (`int`) – Number of nearest-neighbor upsampling blocks.
  - out_activation (`Literal['identity', 'tanh', 'sigmoid']`) – Final output activation.
Trainer Helpers¶
- spflow.zoo.apc.train.train_apc_step(model, batch, optimizer, *, grad_clip_norm=None)[source]¶
Run a single APC optimization step.
- Parameters:
- Return type:
- Returns:
Detached tensor metrics produced by `model.loss_components`.
- spflow.zoo.apc.train.evaluate_apc(model, data, *, batch_size=256)[source]¶
Evaluate mean APC losses on a dataset/iterator.
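When data is consumed in batches, per-batch mean losses must be weighted by batch size to recover the dataset-level mean (the last batch is often smaller). Whether `evaluate_apc` performs exactly this weighting is an assumption here; the sketch below only illustrates the reduction:

```python
def mean_over_batches(batch_means, batch_sizes):
    """Combine per-batch mean losses into a dataset-level mean,
    weighting each batch by its number of samples."""
    total = sum(m * n for m, n in zip(batch_means, batch_sizes))
    count = sum(batch_sizes)
    return total / count

# Two batches: 256 samples with mean loss 1.0, then a short final
# batch of 64 samples with mean loss 2.0.
overall = mean_over_batches([1.0, 2.0], [256, 64])
print(overall)  # (256 + 128) / 320 = 1.2
```

An unweighted average of the two batch means would give 1.5 and over-count the short final batch.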
- spflow.zoo.apc.train.fit_apc(model, train_data, *, config, optimizer=None, val_data=None)[source]¶
Fit an APC model and return epoch-level metrics.
- Parameters:
  - model (`AutoencodingPC`) – APC model to train.
  - train_data (`Tensor | Iterable`) – Tensor dataset or iterable of batches.
  - config (`ApcTrainConfig`) – Training hyperparameters.
  - optimizer (`Optimizer | None`) – Optional optimizer override. Defaults to Adam.
  - val_data (`Tensor | Iterable | None`) – Optional validation data source.
- Return type:
- Returns:
Per-epoch metric dictionaries including train metrics and, when provided, validation metrics.
Minimal Example (Einet APC)¶
```python
import torch

from spflow.zoo.apc.config import ApcConfig, ApcLossWeights, ApcTrainConfig
from spflow.zoo.apc.decoders import MLPDecoder1D
from spflow.zoo.apc.encoders.einet_joint_encoder import EinetJointEncoder
from spflow.zoo.apc.model import AutoencodingPC
from spflow.zoo.apc.train import fit_apc

encoder = EinetJointEncoder(
    num_x_features=32,
    latent_dim=8,
    num_sums=8,
    num_leaves=8,
    depth=3,
    num_repetitions=1,
    layer_type="linsum",
    structure="top-down",
)
decoder = MLPDecoder1D(latent_dim=8, output_dim=32, hidden_dims=(128, 128))
cfg = ApcConfig(
    latent_dim=8,
    rec_loss="mse",
    sample_tau=1.0,
    loss_weights=ApcLossWeights(rec=1.0, kld=0.1, nll=1.0),
)
model = AutoencodingPC(encoder=encoder, decoder=decoder, config=cfg)
data = torch.randn(512, 32)
history = fit_apc(model, data, config=ApcTrainConfig(epochs=5, batch_size=64))
```
Conv-PC APC Note¶
`ConvPcJointEncoder` supports image-shaped inputs and latent fusion at a configurable hierarchy depth. In the current implementation, `latent_dim` must match the feature count at the selected latent fusion depth.