TorchJD

TorchJD is a library enabling Jacobian descent with PyTorch, to train neural networks with multiple objectives. It is based on the theory from Jacobian Descent For Multi-Objective Optimization and several other related publications.

The main purpose is to jointly optimize multiple objectives without combining them into a single scalar loss. When the objectives conflict, this can be key to successful and stable optimization. To get started, check out our basic usage example.
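
As a rough illustration, the following is a minimal sketch of a single training step with two objectives. It assumes the `torchjd.backward` function and the `UPGrad` aggregator from `torchjd.aggregation`; the exact call signature may vary between TorchJD versions, so the basic usage example remains the reference.

```python
import torch
from torch.nn import Linear, MSELoss, ReLU, Sequential
from torch.optim import SGD

import torchjd
from torchjd.aggregation import UPGrad

# Toy model with a 2-dimensional output: one objective per output coordinate.
model = Sequential(Linear(10, 5), ReLU(), Linear(5, 2))
optimizer = SGD(model.parameters(), lr=0.1)
loss_fn = MSELoss()
aggregator = UPGrad()

inputs = torch.randn(16, 10)
targets = torch.randn(16, 2)

output = model(inputs)
loss_1 = loss_fn(output[:, 0], targets[:, 0])
loss_2 = loss_fn(output[:, 1], targets[:, 1])

optimizer.zero_grad()
# Instead of calling .backward() on a single combined scalar, torchjd.backward
# computes the Jacobian of the two losses and fills each parameter's .grad with
# the aggregated direction (signature assumed; it may differ between versions).
torchjd.backward([loss_1, loss_2], aggregator)
optimizer.step()
```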

Gradient descent relies on gradients to optimize a single objective. Jacobian descent takes this idea a step further, using the Jacobian to optimize multiple objectives. An important component of Jacobian descent is the aggregator, which maps the Jacobian to an optimization step. On the Aggregation page, we provide an overview of the various aggregators available in TorchJD, along with some of their key characteristics. A precise description of this formalism, along with the UPGrad aggregator, is available in Section 2 of Jacobian Descent For Multi-Objective Optimization.
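
To make the aggregator's role concrete, here is a small sketch of an aggregator applied directly to a Jacobian matrix, with one row per objective. It assumes that aggregators such as `UPGrad` can be called on a 2-dimensional tensor, as in the TorchJD documentation; the numerical values are arbitrary.

```python
from torch import tensor

from torchjd.aggregation import UPGrad

A = UPGrad()

# Hypothetical Jacobian: each row is the gradient of one objective.
# The two gradients conflict in their first coordinate and agree on the others.
J = tensor([[-4.0, 1.0, 1.0],
            [ 6.0, 1.0, 1.0]])

# UPGrad maps the Jacobian to a single update direction that conflicts with
# none of the rows, i.e. it does not increase any of the objectives locally.
update = A(J)
print(update)
```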

A straightforward application of Jacobian descent is multi-task learning, in which the vector of per-task losses has to be minimized. To start using TorchJD for multi-task learning, follow our MTL example.
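
As a rough sketch of what such a step can look like, assume a shared encoder with two task-specific heads. The call to `torchjd.mtl_backward` below is an assumption about its signature (the `losses`, `features` and `aggregator` keywords); it may differ between TorchJD versions, so refer to the MTL example for the authoritative usage.

```python
import torch
from torch.nn import Linear, MSELoss, ReLU, Sequential
from torch.optim import SGD

import torchjd
from torchjd.aggregation import UPGrad

# Hypothetical toy architecture: a shared encoder and two task-specific heads.
shared = Sequential(Linear(10, 5), ReLU())
head_1 = Linear(5, 1)
head_2 = Linear(5, 1)

params = [*shared.parameters(), *head_1.parameters(), *head_2.parameters()]
optimizer = SGD(params, lr=0.1)
loss_fn = MSELoss()
aggregator = UPGrad()

inputs = torch.randn(16, 10)
target_1 = torch.randn(16, 1)
target_2 = torch.randn(16, 1)

features = shared(inputs)
loss_1 = loss_fn(head_1(features), target_1)
loss_2 = loss_fn(head_2(features), target_2)

optimizer.zero_grad()
# mtl_backward aggregates the Jacobian of the per-task losses with respect to
# the shared parameters, while each head keeps the gradient of its own loss
# (keyword names assumed; check the TorchJD docs for your version).
torchjd.mtl_backward(losses=[loss_1, loss_2], features=features, aggregator=aggregator)
optimizer.step()
```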

TorchJD is open-source, under the MIT License. The source code is available on GitHub.