tick.linear_model.LogisticRegression

Logistic regression learner, with several choices of penalization and solver.

Parameters
C : float, default=1e3
    Level of penalization.
penalty : {'l1', 'l2', 'elasticnet', 'tv', 'none', 'binarsity'}, default='l2'
    The penalization to use. Default is ridge ('l2') penalization.
solver : {'gd', 'agd', 'bfgs', 'svrg', 'sdca', 'sgd'}, default='svrg'
    The name of the solver to use.
fit_intercept : bool, default=True
    If True, an intercept is included in the model.
warm_start : bool, default=False
    If True, learning starts from the last reached solution.
step : float, default=None
    Initial step size used for learning. Used by the 'gd', 'agd', 'sgd' and 'svrg' solvers.
tol : float, default=1e-5
    The tolerance of the solver (iterations stop when the stopping criterion is below it). By default the solver does max_iter iterations.
max_iter : int, default=100
    Maximum number of iterations of the solver.
verbose : bool, default=False
    If True, the solver prints information at each iteration; otherwise it prints nothing (but still records information in its history).
print_every : int, default=10
    Print history information when n_iter (the iteration number) is a multiple of print_every.
record_every : int, default=10
    Record history information when n_iter (the iteration number) is a multiple of record_every.
Attributes

weights : numpy.array, shape=(n_features,)
    The learned weights of the model (not including the intercept).
intercept : float or None
    The learned intercept if fit_intercept=True, otherwise None.
classes : numpy.array, shape=(n_classes,)
    The class labels of the problem.
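For intuition about what this learner optimizes, here is a minimal NumPy sketch of L2-penalized logistic regression fitted with plain gradient descent (the strategy behind the 'gd' solver). This is an illustration, not tick's implementation: it assumes labels in {-1, +1}, assumes the penalization strength is 1/C, and the function name fit_logreg_gd is hypothetical.

```python
import numpy as np

def fit_logreg_gd(X, y, C=1e3, step=1e-1, tol=1e-5, max_iter=100,
                  fit_intercept=True):
    """Sketch: L2-penalized logistic regression by plain gradient descent.

    Labels y are assumed to be in {-1, +1}.  Minimizes
        (1/n) * sum_i log(1 + exp(-y_i * (x_i . w + b))) + (1/(2C)) * ||w||^2
    (penalization strength 1/C is an assumption made for this sketch).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(max_iter):
        # Margins z_i = y_i * (x_i . w + b); clip to avoid overflow in exp.
        z = np.clip(y * (X @ w + b), -500, 500)
        # Derivative of log(1 + exp(-z)) w.r.t. the decision value is -y / (1 + exp(z)).
        s = -y / (1.0 + np.exp(z))
        grad_w = X.T @ s / n + w / C
        grad_b = s.mean() if fit_intercept else 0.0
        # Stop when the gradient norm falls below the tolerance.
        if np.sqrt(np.sum(grad_w ** 2) + grad_b ** 2) < tol:
            break
        w -= step * grad_w
        b -= step * grad_b
    return w, b
```

In tick itself one would instead instantiate the learner and call its fit method; the sketch above only mirrors the roles of the C, step, tol, max_iter and fit_intercept parameters documented here.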