UPGrad

class torchjd.aggregation.UPGrad(pref_vector=None, norm_eps=0.0001, reg_eps=0.0001, solver='quadprog')[source]

Aggregator that projects each row of the input matrix onto the dual cone of the rows of that matrix, then combines the projections using the preference vector, as proposed in Jacobian Descent For Multi-Objective Optimization.

Parameters:
  • pref_vector (Tensor | None) – The preference vector used to combine the projected rows. If not provided, defaults to \(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\).

  • norm_eps (float) – A small value to avoid division by zero when normalizing.

  • reg_eps (float) – A small value added to the diagonal of the Gramian of the matrix. Due to numerical errors, the computed Gramian may not be exactly positive definite, which can make the optimization fail. Adding reg_eps to its diagonal ensures positive definiteness.

  • solver (Literal['quadprog']) – The solver used to solve the underlying quadratic optimization problem.

class torchjd.aggregation.UPGradWeighting(pref_vector=None, norm_eps=0.0001, reg_eps=0.0001, solver='quadprog')[source]

Weighting that computes the weights used by UPGrad.

Parameters:
  • pref_vector (Tensor | None) – The preference vector to use. If not provided, defaults to \(\begin{bmatrix} \frac{1}{m} & \dots & \frac{1}{m} \end{bmatrix}^T \in \mathbb{R}^m\).

  • norm_eps (float) – A small value to avoid division by zero when normalizing.

  • reg_eps (float) – A small value added to the diagonal of the Gramian of the matrix. Due to numerical errors, the computed Gramian may not be exactly positive definite, which can make the optimization fail. Adding reg_eps to its diagonal ensures positive definiteness.

  • solver (Literal['quadprog']) – The solver used to solve the underlying quadratic optimization problem.