tick.linear_model.LogisticRegression(fit_intercept=True, penalty='l2', C=1000.0, solver='svrg', step=None, tol=1e-05, max_iter=100, verbose=False, warm_start=False, print_every=10, record_every=10, sdca_ridge_strength=0.001, elastic_net_ratio=0.95, random_state=None, blocks_start=None, blocks_length=None)

Logistic regression learner, with many choices of penalization and solvers.
Parameters

C : float, default=1e3
    Level of penalization.

penalty : {'l1', 'l2', 'elasticnet', 'tv', 'none', 'binarsity'}, default='l2'
    The penalization to use. Default is ridge penalization.

solver : {'gd', 'agd', 'bfgs', 'svrg', 'sdca', 'sgd'}, default='svrg'
    The name of the solver to use.

fit_intercept : bool, default=True
    If True, include an intercept in the model.

warm_start : bool, default=False
    If True, learning starts from the last reached solution.

step : float, default=None
    Initial step size used for learning. Used by the 'gd', 'agd', 'sgd' and 'svrg' solvers.

tol : float, default=1e-5
    The tolerance of the solver: iterations stop when the stopping criterion falls below it. By default the solver runs for max_iter iterations.

max_iter : int, default=100
    Maximum number of iterations of the solver.

verbose : bool, default=False
    If True, the solver prints progress information; otherwise it prints nothing (but still records information in its history).

print_every : int, default=10
    Print history information when n_iter (the iteration number) is a multiple of print_every.

record_every : int, default=10
    Record history information when n_iter (the iteration number) is a multiple of record_every.
Attributes

weights : numpy.array, shape=(n_features,)
    The learned weights of the model (not including the intercept).

intercept : float or None
    The intercept, if fit_intercept=True, otherwise None.

classes : numpy.array, shape=(n_classes,)
    The class labels of our problem.
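A minimal usage sketch (not part of the original reference): it assumes a small synthetic binary problem with labels encoded as +1/-1 and relies only on the documented defaults.

>>> import numpy as np
>>> from tick.linear_model import LogisticRegression
>>> np.random.seed(0)
>>> X = np.random.randn(200, 5)                            # 200 samples, 5 features
>>> w_true = np.array([1., -2., 0.5, 0., 3.])
>>> y = np.sign(X @ w_true + 0.1 * np.random.randn(200))   # labels assumed encoded as +1/-1
>>> learner = LogisticRegression(penalty='l2', C=1e3, solver='svrg')
>>> learner = learner.fit(X, y)                            # fit returns the learner itself
>>> learner.weights.shape
(5,)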
__init__(fit_intercept=True, penalty='l2', C=1000.0, solver='svrg', step=None, tol=1e-05, max_iter=100, verbose=False, warm_start=False, print_every=10, record_every=10, sdca_ridge_strength=0.001, elastic_net_ratio=0.95, random_state=None, blocks_start=None, blocks_length=None)

decision_function(X)

Predict scores for given samples.

The confidence score for a sample is the signed distance of that sample to the hyperplane.

Parameters

X : np.ndarray or scipy.sparse.csr_matrix, shape=(n_samples, n_features)
    Samples.

Returns

output : np.array, shape=(n_samples,)
    Confidence scores.
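A short illustrative sketch (synthetic data; +/-1 label encoding assumed):

>>> import numpy as np
>>> from tick.linear_model import LogisticRegression
>>> X = np.random.randn(50, 3)
>>> y = np.sign(np.random.randn(50))              # labels assumed encoded as +1/-1
>>> learner = LogisticRegression().fit(X, y)
>>> scores = learner.decision_function(X)         # signed distance of each sample to the hyperplane
>>> scores.shape
(50,)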
fit(X: object, y: numpy.array)

Fit the model according to the given training data.

Parameters

X : np.ndarray or scipy.sparse.csr_matrix, shape=(n_samples, n_features)
    Training vector, where n_samples is the number of samples and n_features is the number of features.

y : np.array, shape=(n_samples,)
    Target vector relative to X.

Returns

self : LearnerGLM
    The fitted instance of the model.
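An illustrative sketch of fit, here with a scipy.sparse input as allowed by the signature (synthetic data; +/-1 label encoding assumed):

>>> import numpy as np
>>> import scipy.sparse as sps
>>> from tick.linear_model import LogisticRegression
>>> X = sps.csr_matrix(np.random.randn(100, 4))   # sparse features are accepted as well
>>> y = np.sign(np.random.randn(100))             # labels assumed encoded as +1/-1
>>> learner = LogisticRegression(max_iter=50)
>>> learner = learner.fit(X, y)                   # returns the fitted learner itself
>>> learner.intercept is not None                 # fit_intercept=True by default
True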
get_params()

Get parameters for this estimator.

Returns

params : dict
    Parameter names mapped to their values.
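For illustration (a hedged sketch, not from the original reference):

>>> from tick.linear_model import LogisticRegression
>>> params = LogisticRegression(C=10.0, penalty='l1').get_params()
>>> isinstance(params, dict)                      # parameter names mapped to their values
True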
predict(X)

Predict class for given samples.

Parameters

X : np.ndarray or scipy.sparse.csr_matrix, shape=(n_samples, n_features)
    Samples.

Returns

output : np.array, shape=(n_samples,)
    Returns predicted values.
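An illustrative sketch (synthetic data; +/-1 label encoding assumed); predicted labels are drawn from the classes attribute:

>>> import numpy as np
>>> from tick.linear_model import LogisticRegression
>>> X = np.random.randn(60, 3)
>>> y = np.sign(X[:, 0] + 0.1 * np.random.randn(60))   # labels assumed encoded as +1/-1
>>> learner = LogisticRegression().fit(X, y)
>>> y_pred = learner.predict(X)                   # one predicted class label per sample
>>> set(np.unique(y_pred)) <= set(learner.classes)
True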
predict_proba(X)

Probability estimates.

The returned estimates for all classes are ordered by the label of classes.

Parameters

X : np.ndarray or scipy.sparse.csr_matrix, shape=(n_samples, n_features)
    Input features matrix.

Returns

output : np.ndarray, shape=(n_samples, 2)
    Returns the probability of the sample for each class in the model, in the same order as in self.classes.
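An illustrative sketch (synthetic data; +/-1 label encoding assumed); each row is a probability distribution over the two classes:

>>> import numpy as np
>>> from tick.linear_model import LogisticRegression
>>> X = np.random.randn(80, 4)
>>> y = np.sign(X[:, 0] + X[:, 1] + 0.1 * np.random.randn(80))   # labels assumed encoded as +1/-1
>>> learner = LogisticRegression().fit(X, y)
>>> proba = learner.predict_proba(X)              # columns follow the order of learner.classes
>>> proba.shape
(80, 2)
>>> bool(np.allclose(proba.sum(axis=1), 1.0))     # each row sums to one
True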
set_params(**kwargs)

Set the parameters for this learner.

Parameters

**kwargs
    Named arguments to update in the learner.

Returns

output : LearnerGLM
    self with updated parameters.
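A hedged sketch; it assumes C is an updatable parameter and relies only on the documented fact that set_params returns the learner itself:

>>> from tick.linear_model import LogisticRegression
>>> learner = LogisticRegression(C=1e3)
>>> same = learner.set_params(C=10.0)             # 'C' assumed to be an updatable parameter
>>> same is learner                               # set_params returns the learner itself
True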