tick.solver.AGD

class tick.solver.AGD(step: float = None, tol: float = 1e-10, max_iter: int = 100, linesearch: bool = True, linesearch_step_increase: float = 2.0, linesearch_step_decrease: float = 0.5, verbose: bool = True, print_every: int = 10, record_every: int = 1)

Accelerated proximal gradient descent
For the minimization of objectives of the form

\[
f(w) + g(w),
\]

where \(f\) has a smooth gradient and \(g\) is prox-capable. Function \(f\) corresponds to the model.loss method of the model (passed to the solver with set_model) and \(g\) corresponds to the prox.value method of the prox (passed with the set_prox method).
One iteration of AGD is as follows:

\[
\begin{aligned}
w^{k+1} &\gets \mathrm{prox}_{\eta g}\big(z^k - \eta \nabla f(z^k)\big) \\
t_{k+1} &\gets \frac{1 + \sqrt{1 + 4 t_k^2}}{2} \\
z^{k+1} &\gets w^{k+1} + \frac{t_k - 1}{t_{k+1}} \big(w^{k+1} - w^k\big)
\end{aligned}
\]

where \(\nabla f(w)\) is the gradient of \(f\), given by the model.grad method, and \(\mathrm{prox}_{\eta g}\) is given by the prox.call method. The step-size \(\eta\) can be tuned with step. The iterations stop whenever the tolerance tol is achieved, or after max_iter iterations. The obtained solution \(w\) is returned by the solve method, and is also stored in the solution attribute of the solver.
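As an illustration, here is a minimal NumPy sketch of this accelerated iteration, assuming the caller supplies a gradient function grad_f and a proximal operator prox_g (both hypothetical placeholders standing in for model.grad and prox.call); the stopping criterion shown is a simple relative-change test, not necessarily the one used internally:

    import numpy as np

    def agd_sketch(grad_f, prox_g, x0, step, max_iter=100, tol=1e-10):
        # grad_f(z): gradient of the smooth part f at z
        # prox_g(v, s): proximal operator of s * g evaluated at v
        w = x0.copy()   # current iterate w^k
        z = x0.copy()   # extrapolated point z^k
        t = 1.0         # momentum scalar t_k
        for _ in range(max_iter):
            w_new = prox_g(z - step * grad_f(z), step)          # proximal gradient step
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # momentum update
            z = w_new + (t - 1.0) / t_new * (w_new - w)         # extrapolation
            if np.linalg.norm(w_new - w) <= tol * max(np.linalg.norm(w), 1.0):
                return w_new
            w, t = w_new, t_new
        return w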
Parameters

step : float, default=None
Step-size parameter, the most important parameter of the solver. Whenever possible, it can be tuned automatically as step = 1 / model.get_lip_best(). If linesearch=True, this is the first step-size used in the linesearch (it should then be taken too large); see the sketch after this parameter list.

tol : float, default=1e-10
The tolerance of the solver (iterations stop when the stopping criterion is below it).

max_iter : int, default=100
Maximum number of iterations of the solver.

linesearch : bool, default=True
If True, use backtracking linesearch to tune the step automatically.

verbose : bool, default=True
If True, the solver prints history information; otherwise nothing is displayed, but history is recorded anyway.

print_every : int, default=10
Print history information every time the iteration number is a multiple of print_every. Used only if verbose is True.

record_every : int, default=1
Record history information every time the iteration number is a multiple of record_every.

linesearch_step_increase : float, default=2.0
Factor of step increase when using linesearch.

linesearch_step_decrease : float, default=0.5
Factor of step decrease when using linesearch.
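A short sketch of the two step-size strategies described above, assuming a model object exposing get_lip_best() has already been built and fitted (as in the example after the attributes below):

    from tick.solver import AGD

    # Fixed step derived from the model's best Lipschitz constant, no linesearch
    solver_fixed = AGD(step=1.0 / model.get_lip_best(), linesearch=False)

    # Backtracking linesearch: step is only an initial (deliberately large) guess,
    # multiplied by linesearch_step_decrease whenever the backtracking test fails
    solver_ls = AGD(step=10.0, linesearch=True,
                    linesearch_step_increase=2.0, linesearch_step_decrease=0.5)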
Attributes

model : Model
The model used by the solver, passed with the set_model method.

prox : Prox
Proximal operator used by the solver, passed with the set_prox method.

solution : numpy.array, shape=(n_coeffs,)
Minimizer found by the solver.

history : dict-like
A dict-like object that contains the history of the solver along iterations. It should be accessed using the get_history method.

time_start : str
Start date of the call to solve().

time_elapsed : float
Duration of the call to solve(), in seconds.

time_end : str
End date of the call to solve().
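An end-to-end usage sketch with simulated least-squares data, assuming tick's ModelLinReg and ProxL1 (import paths may differ across tick versions); it illustrates the set_model / set_prox / solve workflow and the attributes listed above:

    import numpy as np
    from tick.linear_model import ModelLinReg
    from tick.prox import ProxL1
    from tick.solver import AGD

    # Simulated data, purely illustrative
    np.random.seed(0)
    n_samples, n_features = 200, 10
    X = np.random.randn(n_samples, n_features)
    w_true = np.random.randn(n_features)
    y = X.dot(w_true) + 0.1 * np.random.randn(n_samples)

    model = ModelLinReg(fit_intercept=False).fit(X, y)   # f: smooth least-squares loss
    prox = ProxL1(strength=1e-2)                         # g: L1 penalization

    solver = AGD(step=1.0 / model.get_lip_best(), linesearch=False,
                 max_iter=200, print_every=20)
    solver.set_model(model).set_prox(prox)

    w_hat = solver.solve()         # same array as solver.solution
    print(solver.time_elapsed)     # duration of the solve() call, in seconds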
References

A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences, 2009
__init__(step: float = None, tol: float = 1e-10, max_iter: int = 100, linesearch: bool = True, linesearch_step_increase: float = 2.0, linesearch_step_decrease: float = 0.5, verbose: bool = True, print_every: int = 10, record_every: int = 1)

Initialize self. See help(type(self)) for accurate signature.
get_history(key=None)

Returns history of the solver.

key : str, default=None
If None, all history is returned as a dict. If str, name of the history element to retrieve.

output : list or dict
If key is None, or key is not in the history, the output is a dict containing the history of all keys. If key is the name of an element in the history, the output is a list containing the history of this element.
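For example, after a call to solve() on the solver built above (the 'obj' key is an assumption; inspect the keys actually recorded by calling get_history() with no argument):

    history = solver.get_history()    # dict with the full recorded history
    print(sorted(history.keys()))     # which elements were recorded

    # Retrieve a single element, e.g. the objective values along iterations
    # ('obj' is assumed here; use one of the keys printed above if it differs)
    objectives = solver.get_history('obj')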
objective(coeffs, loss: float = None)

Compute the objective function.

coeffs : np.array, shape=(n_coeffs,)
Point where the objective is computed.

loss : float, default=None
Value of the loss, if it is already known (avoids recomputing it in some cases).

output : float
Value of the objective at the given coeffs.
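Continuing the sketch above, the objective \(f(w) + g(w)\) can be evaluated at the minimizer returned by solve():

    # Objective value (loss + penalization) at the solution found by the solver
    print(solver.objective(solver.solution))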
set_model(model: tick.base_model.model.Model)

Set model in the solver.

model : Model
The model to be used by the solver. It provides first-order information about the objective (loss, gradient, among other things).

output : Solver
The same instance with the given model.
set_prox(prox: tick.prox.base.prox.Prox)

Set proximal operator in the solver.

prox : Prox
The proximal operator of the penalization function.

output : Solver
The solver with the given prox.

Notes

In some solvers, set_model must be called before set_prox, otherwise an error might be raised.
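Both setters return the solver itself, so they can be chained; following the note above, set_model is called first (model and prox are assumed to be built as in the earlier sketch):

    from tick.solver import AGD

    # set_model before set_prox, as recommended above; both return the solver
    solver = AGD().set_model(model).set_prox(prox)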
solve(x0=None, step=None)

Launch the solver.

x0 : np.array, shape=(n_coeffs,), default=None
Starting point of the solver.

step : float, default=None
Step-size or learning rate for the solver. It can also be set through the step attribute.

output : np.array, shape=(n_coeffs,)
Obtained minimizer for the problem, same as the solution attribute.
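A warm-start sketch, again relying on the solver built earlier; the zero vector is just an illustrative starting point and model.n_coeffs is assumed to give the number of coefficients of the model:

    import numpy as np

    # Start from zeros and override the step-size for this call only
    w0 = np.zeros(model.n_coeffs)
    w_hat = solver.solve(x0=w0, step=1.0 / model.get_lip_best())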