tick.survival.ModelSCCS

class tick.survival.ModelSCCS(n_intervals: int, n_lags: numpy.ndarray)[source]

Discrete-time Self-Controlled Case Series (SCCS) likelihood. This model provides first-order information (loss and gradient).

Parameters

n_intervals : int

Number of time intervals observed for each sample.

n_lags : numpy.ndarray, shape=(n_features,), dtype="uint64"

Number of lags per feature. The model regresses labels on the last observed values of each feature over the corresponding n_lags time intervals. Each value in n_lags must be between 0 and n_intervals - 1.
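For intuition, the lagged regression can be sketched for a single feature with plain numpy. This is an illustrative reconstruction of the lag convention described above, not tick's internal code; in particular, padding the first intervals with zeros is an assumption.

```python
import numpy as np

def lagged_design(x, n_lag):
    """Build the (n_intervals, n_lag + 1) matrix whose column k holds the
    feature value observed k intervals earlier (zero before the start)."""
    n_intervals = x.shape[0]
    out = np.zeros((n_intervals, n_lag + 1))
    for k in range(n_lag + 1):
        # Column k is x shifted down by k intervals.
        out[k:, k] = x[:n_intervals - k]
    return out

x = np.array([1.0, 2.0, 3.0, 4.0])
design = lagged_design(x, 2)   # shape (4, 3): current value plus 2 lags
```

Under this convention each feature j contributes n_lags[j] + 1 columns, which is consistent with n_coeffs below being the total number of coefficients.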

Attributes

features : list of numpy.ndarray or list of scipy.sparse.csr_matrix

List of length n_cases whose elements have shape (n_intervals, n_features). The list of feature matrices.

labels : list of numpy.ndarray

List of length n_cases whose elements have shape (n_intervals,). The list of label vectors.

censoring : numpy.ndarray, shape=(n_cases,), dtype="uint64"

The censoring data. This array should contain integers in [1, n_intervals]. If the i-th value equals n_intervals, then sample i is not censored. If it equals c < n_intervals, then the observation of sample i stops at interval c, that is, at row c - 1 of the corresponding feature matrix; the last n_intervals - c rows are then set to 0.
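The zeroing convention just described can be reproduced in a few lines of numpy (a sketch of the documented convention, not tick internals):

```python
import numpy as np

def apply_censoring(feat, c):
    """Zero rows c onward: the sample is observed only on intervals 0..c-1."""
    feat = feat.copy()
    feat[c:, :] = 0.0
    return feat

feat = np.ones((5, 2))              # n_intervals = 5, n_features = 2
censored = apply_censoring(feat, 3) # censoring value c = 3
# Rows 0..2 are kept, rows 3 and 4 are set to 0.
```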

n_cases : int (read-only)

Number of samples

n_features : int (read-only)

Number of features

n_coeffs : int (read-only)

Total number of coefficients of the model

__init__(n_intervals: int, n_lags: numpy.ndarray)[source]

Instantiate the model; see the class parameters above for n_intervals and n_lags.

fit(features, labels, censoring=None)[source]

Set the data into the model object.

Parameters

features : list of {2d numpy.ndarray, scipy.sparse.csr_matrix} containing float64, each of shape (n_intervals, n_features)

The list of feature matrices

labels : list of {1d numpy.ndarray, scipy.sparse.csr_matrix}, each of shape (n_intervals,)

The list of label vectors

censoring : 1d array of shape (n_cases,)

The censoring vector

Returns

output : ModelSCCS

The current instance with given data
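The data layout fit expects can be prepared with synthetic arrays as follows; shapes and dtypes follow the docstrings above, and the commented call at the end follows the documented signature (it requires tick to be installed, so it is shown but not executed here):

```python
import numpy as np

n_cases, n_intervals, n_features = 4, 10, 3
rng = np.random.default_rng(0)

# One (n_intervals, n_features) feature matrix per case.
features = [rng.normal(size=(n_intervals, n_features)) for _ in range(n_cases)]

# One (n_intervals,) label vector per case.
labels = [rng.integers(0, 2, size=n_intervals).astype(np.float64)
          for _ in range(n_cases)]

# Integers in [1, n_intervals]; the value n_intervals means "not censored".
censoring = np.array([10, 7, 10, 5], dtype="uint64")

# Documented usage:
# from tick.survival import ModelSCCS
# model = ModelSCCS(n_intervals=n_intervals,
#                   n_lags=np.array([1, 0, 2], dtype="uint64"))
# model.fit(features, labels, censoring)
```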

get_lip_best() → float

Returns the best Lipschitz constant, using all samples.

Warning: this might take some time, since it requires an SVD computation.

Returns

output : float

The best Lipschitz constant
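For intuition on why an SVD appears here: for a least-squares loss f(w) = ||Xw - y||^2 / (2n), the gradient's Lipschitz constant is sigma_max(X)^2 / n, the squared largest singular value over n. The SCCS likelihood is not least squares, so this is an illustrative analogue of the SVD-based mechanism, not tick's exact formula:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
n = X.shape[0]

# grad f(w) = X.T @ (X @ w - y) / n has Hessian X.T @ X / n, whose
# largest eigenvalue is sigma_max(X)**2 / n.
sigma_max = np.linalg.svd(X, compute_uv=False)[0]
lip = sigma_max ** 2 / n

# Cross-check via the eigenvalue route.
lip_eig = np.linalg.eigvalsh(X.T @ X / n).max()
```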

get_lip_max() → float

Returns the maximum Lipschitz constant of individual losses. This is particularly useful for step-size tuning of some solvers.

Returns

output : float

The maximum Lipschitz constant

get_lip_mean() → float

Returns the average Lipschitz constant of individual losses. This is particularly useful for step-size tuning of some solvers.

Returns

output : float

The average Lipschitz constant
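The max/mean distinction between these two methods can be sketched with per-sample constants for the least-squares analogue, where f_i(w) = (x_i . w - y_i)^2 / 2 has gradient Lipschitz constant ||x_i||^2 (again an illustrative assumption, not tick's SCCS formula):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))

# Per-sample constants for f_i(w) = 0.5 * (x_i @ w - y_i) ** 2.
per_sample = (X ** 2).sum(axis=1)

lip_max = per_sample.max()    # aggregation used by get_lip_max
lip_mean = per_sample.mean()  # aggregation used by get_lip_mean
```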

grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → numpy.ndarray

Computes the gradient of the model at coeffs

Parameters

coeffs : numpy.ndarray

Vector where the gradient is computed

out : numpy.ndarray or None

If None, a new vector containing the gradient is returned; otherwise, the result is saved in out and returned

Returns

output : numpy.ndarray

The gradient of the model at coeffs

Notes

The fit method must be called to pass data to the model before using grad; an error is raised otherwise.
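A standard way to sanity-check any loss/grad pair exposing this interface is a central finite difference. The sketch below uses a simple quadratic stand-in with the same interface, not the SCCS loss itself:

```python
import numpy as np

# Stand-in model: loss(w) = 0.5 * w.T @ A @ w, grad(w) = A @ w.
A = np.diag([1.0, 2.0, 3.0])

def loss(w):
    return 0.5 * w @ A @ w

def grad(w):
    return A @ w

w = np.array([0.3, -1.2, 0.7])
eps = 1e-6

# Central finite difference along each coordinate axis.
fd = np.array([
    (loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
    for e in np.eye(w.size)
])
# fd should agree with grad(w) up to floating-point error.
```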

loss(coeffs: numpy.ndarray) → float

Computes the value of the goodness-of-fit at coeffs

Parameters

coeffs : numpy.ndarray

The loss is computed at this point

Returns

output : float

The value of the loss

Notes

The fit method must be called to pass data to the model before using loss; an error is raised otherwise.

loss_and_grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → tuple

Computes the value and the gradient of the function at coeffs

Parameters

coeffs : numpy.ndarray

Vector where the loss and gradient are computed

out : numpy.ndarray or None

If None, a new vector containing the gradient is returned; otherwise, the result is saved in out and returned

Returns

loss : float

The value of the loss

grad : numpy.ndarray

The gradient of the model at coeffs

Notes

The fit method must be called to pass data to the model before using loss_and_grad; an error is raised otherwise.
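Solvers typically preallocate the gradient buffer once and reuse it across iterations through the out argument. The pattern is sketched with a hypothetical stand-in function of the same signature (not tick code):

```python
import numpy as np

A = np.diag([2.0, 1.0])

def loss_and_grad(coeffs, out=None):
    """Same contract as ModelSCCS.loss_and_grad: write into out if given."""
    g = A @ coeffs
    if out is None:
        out = g
    else:
        out[:] = g
    return 0.5 * coeffs @ A @ coeffs, out

w = np.array([1.0, -2.0])
buf = np.empty_like(w)           # allocated once, outside any solver loop
val, g = loss_and_grad(w, out=buf)
# g is buf itself: no fresh allocation per call.
```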