Examples

This section contains some usage examples for TorchJD.

  • Basic Usage provides a toy example that uses torchjd.backward to perform a step of Jacobian descent with the UPGrad aggregator (a minimal sketch of this pattern follows the list).

  • Instance-Wise Risk Minimization (IWRM) provides an example in which we minimize the vector of per-instance losses using stochastic sub-Jacobian descent (SSJD), and compare it to the usual minimization of the average loss, known as empirical risk minimization (ERM), with stochastic gradient descent (SGD) (see the second sketch below).

  • Multi-Task Learning (MTL) provides an example of multi-task learning in which Jacobian descent optimizes the vector of per-task losses of a multi-task model, using the dedicated backpropagation function mtl_backward (see the third sketch below).

  • PyTorch Lightning Integration shows how to combine TorchJD with PyTorch Lightning through an example implementation of a multi-task LightningModule optimized by Jacobian descent (see the final sketch below).
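
To give a feel for the basic pattern before diving into the linked pages, here is a minimal sketch of a single Jacobian descent step. It assumes a TorchJD version in which torchjd.backward accepts the list of losses and an aggregator keyword argument (check the Basic Usage page for the exact signature); the model, data, and losses are toy placeholders:

    import torch
    from torch.nn import Linear, MSELoss, ReLU, Sequential
    from torch.optim import SGD

    import torchjd
    from torchjd.aggregation import UPGrad

    # Toy setup: a small model whose output feeds two scalar losses.
    model = Sequential(Linear(10, 5), ReLU(), Linear(5, 2))
    optimizer = SGD(model.parameters(), lr=0.1)
    loss_fn = MSELoss()

    inputs = torch.randn(16, 10)   # placeholder batch
    targets = torch.randn(16, 2)   # placeholder targets

    output = model(inputs)
    loss1 = loss_fn(output[:, 0], targets[:, 0])
    loss2 = loss_fn(output[:, 1], targets[:, 1])

    optimizer.zero_grad()
    # One Jacobian descent step: backpropagate both losses jointly and
    # let UPGrad aggregate their gradients into a single update direction.
    torchjd.backward([loss1, loss2], aggregator=UPGrad())
    optimizer.step()
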
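For IWRM, the central idea is to keep one loss per training instance rather than averaging them. A sketch under the same assumptions about torchjd.backward, using reduction="none" to obtain the vector of per-instance losses:

    import torch
    from torch.nn import Linear, MSELoss
    from torch.optim import SGD

    import torchjd
    from torchjd.aggregation import UPGrad

    model = Linear(10, 1)
    optimizer = SGD(model.parameters(), lr=0.1)
    loss_fn = MSELoss(reduction="none")  # keep one loss per instance

    inputs = torch.randn(16, 10)   # placeholder batch of 16 instances
    targets = torch.randn(16, 1)

    # Vector of 16 per-instance losses instead of a single averaged loss.
    losses = loss_fn(model(inputs), targets).squeeze(-1)

    optimizer.zero_grad()
    # SSJD step: differentiate the loss vector and aggregate its Jacobian.
    torchjd.backward(losses, aggregator=UPGrad())
    optimizer.step()
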
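For multi-task learning, mtl_backward differentiates the per-task losses through the shared representation. The sketch below assumes a TorchJD version in which mtl_backward takes the losses, the shared features, and an aggregator, and infers the shared and task-specific parameters itself; the architecture and data are placeholders:

    import torch
    from torch.nn import Linear, MSELoss, ReLU, Sequential
    from torch.optim import SGD

    from torchjd import mtl_backward
    from torchjd.aggregation import UPGrad

    # Toy multi-task architecture: a shared trunk and two task heads.
    shared = Sequential(Linear(10, 8), ReLU())
    head1 = Linear(8, 1)
    head2 = Linear(8, 1)

    params = [*shared.parameters(), *head1.parameters(), *head2.parameters()]
    optimizer = SGD(params, lr=0.1)
    loss_fn = MSELoss()

    inputs = torch.randn(16, 10)   # placeholder batch
    target1 = torch.randn(16, 1)   # placeholder target for task 1
    target2 = torch.randn(16, 1)   # placeholder target for task 2

    features = shared(inputs)
    loss1 = loss_fn(head1(features), target1)
    loss2 = loss_fn(head2(features), target2)

    optimizer.zero_grad()
    # Aggregate the Jacobian of the per-task losses through the shared features.
    mtl_backward(losses=[loss1, loss2], features=features, aggregator=UPGrad())
    optimizer.step()
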
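Finally, a hedged sketch of the Lightning integration. Because TorchJD replaces the usual loss.backward() call, the LightningModule must use manual optimization; the module name, batch format (x, y1, y2), and hyperparameters here are hypothetical, and the mtl_backward call carries the same signature assumption as above:

    import torch
    from torch.nn import Linear, MSELoss, ReLU, Sequential
    from torch.optim import SGD
    from pytorch_lightning import LightningModule

    from torchjd import mtl_backward
    from torchjd.aggregation import UPGrad

    class MultiTaskModule(LightningModule):
        def __init__(self):
            super().__init__()
            # TorchJD replaces loss.backward(), so Lightning's automatic
            # optimization must be disabled.
            self.automatic_optimization = False
            self.shared = Sequential(Linear(10, 8), ReLU())
            self.head1 = Linear(8, 1)
            self.head2 = Linear(8, 1)
            self.loss_fn = MSELoss()

        def training_step(self, batch, batch_idx):
            x, y1, y2 = batch  # hypothetical batch format
            features = self.shared(x)
            loss1 = self.loss_fn(self.head1(features), y1)
            loss2 = self.loss_fn(self.head2(features), y2)

            opt = self.optimizers()
            opt.zero_grad()
            mtl_backward(
                losses=[loss1, loss2], features=features, aggregator=UPGrad()
            )
            opt.step()

        def configure_optimizers(self):
            return SGD(self.parameters(), lr=0.1)
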