mrpro.algorithms.optimizers.lbfgs

mrpro.algorithms.optimizers.lbfgs(f: Operator[Unpack[tuple[Tensor, ...]], tuple[Tensor]], initial_parameters: Sequence[Tensor], lr: float = 1.0, max_iter: int = 100, max_eval: int | None = 100, tolerance_grad: float = 1e-07, tolerance_change: float = 1e-09, history_size: int = 10, line_search_fn: None | Literal['strong_wolfe'] = 'strong_wolfe', callback: Callable[[OptimizerStatus], None] | None = None) → tuple[Tensor, ...]

Limited-memory BFGS (LBFGS) for non-linear minimization problems.

Parameters:
  • f – scalar function to be minimized

  • initial_parameters – Sequence (for example, a list) of parameters to be optimized. These parameters are not modified; the optimizer works on a copy and leaves the initial values untouched.

  • lr – learning rate

  • max_iter – maximum number of iterations

  • max_eval – maximum number of evaluations of f per optimization step

  • tolerance_grad – termination tolerance on first-order optimality

  • tolerance_change – termination tolerance on function value/parameter changes

  • history_size – update history size

  • line_search_fn – line search algorithm, either 'strong_wolfe' or None (meaning a constant step size)

  • callback – function called after each iteration. Note that the callback is not called within the line search of LBFGS. See the usage sketch below.

Returns:

tuple of optimized parameters
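Example:

A minimal usage sketch, not taken from this page: the Rosenbrock objective below is purely illustrative, and it assumes that mrpro.operators.Operator can be subclassed with a forward method returning a one-element tuple (matching the Operator[..., tuple[Tensor]] type in the signature above) and that the OptimizerStatus passed to the callback is printable. Treat it as a sketch under those assumptions, not a definitive recipe.

import torch

from mrpro.algorithms.optimizers import lbfgs
from mrpro.operators import Operator


class Rosenbrock(Operator[torch.Tensor, torch.Tensor, tuple[torch.Tensor]]):
    """Illustrative scalar objective: the 2D Rosenbrock function (hypothetical example operator)."""

    def __init__(self, a: float = 1.0, b: float = 100.0) -> None:
        super().__init__()
        self.a = a
        self.b = b

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> tuple[torch.Tensor]:
        # Scalar objective returned as a one-element tuple, as required by
        # the f parameter of lbfgs.
        return ((self.a - x1) ** 2 + self.b * (x2 - x1**2) ** 2,)


# Initial values; lbfgs leaves these untouched and optimizes a copy.
x1_0 = torch.tensor(-1.0)
x2_0 = torch.tensor(2.0)


def report(status) -> None:
    # Called once per iteration, but not within the line search.
    print(status)


x1_opt, x2_opt = lbfgs(
    Rosenbrock(),
    initial_parameters=[x1_0, x2_0],
    max_iter=50,
    callback=report,
)
# The Rosenbrock minimum is at (a, a**2) = (1, 1), so x1_opt and x2_opt
# should converge toward 1.0.

The return value is unpacked into as many tensors as were passed in initial_parameters, consistent with the tuple[Tensor, ...] return type.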