tick.survival.ModelSCCS

class ModelSCCS(n_intervals: int, n_lags: numpy.ndarray)[source]
    Discrete-time Self Controlled Case Series (SCCS) likelihood. This class
    provides first order information (gradient and loss) for this model.
    Parameters
    ----------
    n_intervals : int
        Number of time intervals observed for each sample.
    n_lags : numpy.ndarray, shape=(n_features,), dtype="uint64"
        Number of lags per feature. The model will regress labels on the
        last observed values of the features over the corresponding n_lags
        time intervals. n_lags values must be between 0 and n_intervals - 1.
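As a small illustration of these constraints, the following numpy sketch builds and validates an n_lags array (the variable names are ours, not part of the tick API):

```python
import numpy as np

n_intervals = 10
# One lag count per feature; dtype must be "uint64" as documented.
n_lags = np.array([0, 2, 5], dtype="uint64")

# Each value must lie between 0 and n_intervals - 1.
assert n_lags.min() >= 0
assert n_lags.max() <= n_intervals - 1
```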
    Attributes
    ----------
    features : list of numpy.ndarray or list of scipy.sparse.csr_matrix,
        list of length n_cases, each element of shape=(n_intervals, n_features)
        The list of features matrices.
    labels : list of numpy.ndarray,
        list of length n_cases, each element of shape=(n_intervals,)
        The list of labels vectors.
    censoring : numpy.ndarray, shape=(n_cases,), dtype="uint64"
        The censoring data. This array should contain integers in
        [1, n_intervals]. If the value c for sample i is equal to
        n_intervals, then there is no censoring for sample i. If
        censoring[i] = c < n_intervals, then the observation of sample i is
        stopped at interval c, that is, at row c - 1 of the corresponding
        matrix. The last n_intervals - c rows are then set to 0.
    n_cases : int (read-only)
        Number of samples.
    n_features : int (read-only)
        Number of features.
    n_coeffs : int (read-only)
        Total number of coefficients of the model.
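The censoring convention above can be illustrated with plain numpy (a sketch of the documented semantics, not tick code): a sample censored at interval c keeps rows 0..c-1, and its last n_intervals - c rows are zeroed.

```python
import numpy as np

n_intervals, n_features = 5, 2
rng = np.random.default_rng(0)
x = rng.normal(size=(n_intervals, n_features))

c = 3  # censoring value in [1, n_intervals]
x_censored = x.copy()
x_censored[c:] = 0.0  # row c - 1 is the last observed row

# Rows before c are untouched, rows from c on are zeroed.
assert np.all(x_censored[:c] == x[:c])
assert np.all(x_censored[c:] == 0.0)
```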
__init__(n_intervals: int, n_lags: numpy.ndarray)[source]
    Initialize self. See help(type(self)) for accurate signature.
fit(features, labels, censoring=None)[source]
    Set the data into the model object.

    Parameters
    ----------
    features : list of {2d array, csr matrix containing float64} of shape (n_intervals, n_features)
        The features matrices.
    labels : list of {1d array, csr matrix} of shape (n_intervals,)
        The labels vectors.
    censoring : 1d array of shape (n_cases,)
        The censoring vector.

    Returns
    -------
    output : ModelSCCS
        The current instance with given data.
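The data layout fit expects can be sketched with numpy alone (the arrays below are simulated placeholders; only the shapes and dtypes follow the documentation):

```python
import numpy as np

n_cases, n_intervals, n_features = 4, 10, 3
rng = np.random.default_rng(42)

# One (n_intervals, n_features) matrix per case.
features = [rng.integers(0, 2, size=(n_intervals, n_features)).astype("float64")
            for _ in range(n_cases)]
# One (n_intervals,) labels vector per case.
labels = [rng.integers(0, 2, size=n_intervals).astype("float64")
          for _ in range(n_cases)]
# One censoring value in [1, n_intervals] per case.
censoring = rng.integers(1, n_intervals + 1, size=n_cases).astype("uint64")

assert len(features) == len(labels) == censoring.shape[0] == n_cases
# A fitted model would then be obtained via
# model = ModelSCCS(n_intervals, n_lags).fit(features, labels, censoring)
```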
get_lip_best() → float
    Returns the best Lipschitz constant, using all samples. Warning: this
    might take some time, since it requires an SVD computation.

    Returns
    -------
    output : float
        The best Lipschitz constant.
get_lip_max() → float
    Returns the maximum Lipschitz constant of individual losses. This is
    particularly useful for step-size tuning of some solvers.

    Returns
    -------
    output : float
        The maximum Lipschitz constant.
get_lip_mean() → float
    Returns the average Lipschitz constant of individual losses. This is
    particularly useful for step-size tuning of some solvers.

    Returns
    -------
    output : float
        The average Lipschitz constant.
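The usual way to use such a constant, sketched here on a toy quadratic whose gradient has a known Lipschitz constant L (this is a generic illustration of Lipschitz-based step-size tuning, not tick code), is to take gradient steps of size 1/L:

```python
import numpy as np

# Toy objective f(w) = 0.5 * w^T A w; its gradient A w is Lipschitz with
# constant L equal to the largest eigenvalue of A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
L = np.linalg.eigvalsh(A).max()

w = np.array([1.0, -1.0])
step = 1.0 / L  # the step a solver would derive from e.g. get_lip_max()
for _ in range(200):
    w = w - step * (A @ w)

# With step 1/L, gradient descent converges to the minimizer w = 0.
assert np.allclose(w, 0.0, atol=1e-6)
```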
grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → numpy.ndarray
    Computes the gradient of the model at coeffs.

    Parameters
    ----------
    coeffs : numpy.ndarray
        Vector where the gradient is computed.
    out : numpy.ndarray or None
        If None, a new vector containing the gradient is returned;
        otherwise, the result is saved in out and returned.

    Returns
    -------
    output : numpy.ndarray
        The gradient of the model at coeffs.

    Notes
    -----
    The fit method must be called to give data to the model before using
    grad. An error is raised otherwise.
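The out parameter follows the usual numpy in-place convention; a minimal stand-in (a hypothetical toy_grad for the gradient of 0.5 * ||w||^2, not the SCCS gradient) shows the two call patterns:

```python
import numpy as np

def toy_grad(coeffs, out=None):
    """Gradient of 0.5 * ||coeffs||^2, mimicking the out= convention."""
    if out is None:
        out = np.empty_like(coeffs)
    np.copyto(out, coeffs)  # the gradient of 0.5 * ||w||^2 is w itself
    return out

w = np.array([1.0, 2.0, 3.0])
g1 = toy_grad(w)           # a fresh array is allocated and returned
buf = np.zeros(3)
g2 = toy_grad(w, out=buf)  # the result is written into buf and returned
assert g2 is buf
assert np.array_equal(g1, g2)
```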
loss(coeffs: numpy.ndarray) → float
    Computes the value of the goodness-of-fit at coeffs.

    Parameters
    ----------
    coeffs : numpy.ndarray
        The loss is computed at this point.

    Returns
    -------
    output : float
        The value of the loss.

    Notes
    -----
    The fit method must be called to give data to the model before using
    loss. An error is raised otherwise.
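Since the model exposes both loss and grad, a standard sanity check is to compare grad against central finite differences of loss. The sketch below does this on a stand-in quadratic (hypothetical toy_loss/toy_grad, not the tick methods):

```python
import numpy as np

def toy_loss(w):
    return 0.5 * float(w @ w)

def toy_grad(w):
    return w.copy()

w = np.array([0.5, -1.5, 2.0])
eps = 1e-6
# Central finite difference along each coordinate direction.
num_grad = np.array([
    (toy_loss(w + eps * e) - toy_loss(w - eps * e)) / (2 * eps)
    for e in np.eye(len(w))
])
assert np.allclose(num_grad, toy_grad(w), atol=1e-6)
```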
loss_and_grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → tuple
    Computes the value and the gradient of the function at coeffs.

    Parameters
    ----------
    coeffs : numpy.ndarray
        Vector where the loss and gradient are computed.
    out : numpy.ndarray or None
        If None, a new vector containing the gradient is returned;
        otherwise, the result is saved in out and returned.

    Returns
    -------
    loss : float
        The value of the loss.
    grad : numpy.ndarray
        The gradient of the model at coeffs.

    Notes
    -----
    The fit method must be called to give data to the model before using
    loss_and_grad. An error is raised otherwise.
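Computing the loss and the gradient in one call is useful in solvers that need both at every iterate, for instance in a backtracking line search. A sketch with a stand-in toy_loss_and_grad (a toy quadratic, not the tick method):

```python
import numpy as np

def toy_loss_and_grad(w):
    """Value and gradient of 0.5 * ||w||^2 in a single pass."""
    return 0.5 * float(w @ w), w.copy()

w = np.array([2.0, -2.0])
loss, grad = toy_loss_and_grad(w)

# Backtracking line search: halve the step until the loss decreases.
step = 1.0
while toy_loss_and_grad(w - step * grad)[0] > loss:
    step *= 0.5
w_next = w - step * grad
assert toy_loss_and_grad(w_next)[0] < loss
```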