tick.hawkes.ModelHawkesSumExpKernLeastSq

class tick.hawkes.ModelHawkesSumExpKernLeastSq(decays: numpy.ndarray, n_baselines=1, period_length=None, approx: int = 0, n_threads: int = 1)

Hawkes process model for sum-exponential kernels with fixed and given decays. It is fitted by minimizing the least-squares loss:

\[\sum_{i=1}^{D} \left( \int_0^T \lambda_i(t)^2 dt - 2 \int_0^T \lambda_i(t) dN_i(t) \right)\]

where \(\lambda_i\) is the intensity:

\[\forall i \in [1 \dots D], \quad \lambda_i(t) = \mu_i(t) + \sum_{j=1}^D \sum_{t_k^j < t} \phi_{ij}(t - t_k^j)\]

where

  • \(D\) is the number of nodes

  • \(\mu_i(t)\) are the baseline intensities

  • \(\phi_{ij}\) are the kernels

  • \(t_k^j\) are the timestamps of all events of node \(j\)

and with a sum-exponential parametrisation of the kernels

\[\phi_{ij}(t) = \sum_{u=1}^{U} \alpha^u_{ij} \beta^u \exp (- \beta^u t) 1_{t > 0}\]

In our implementation we denote:

  • Integer \(D\) by the attribute n_nodes

  • Integer \(U\) by the attribute n_decays

  • Vector \(\beta \in \mathbb{R}^{U}\) by the parameter decays. This parameter is given to the model

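As an illustration (not part of the original documentation), the decays vector \(\beta\) is the only mandatory constructor argument; \(U\) is available immediately as n_decays, while \(D\) (n_nodes) becomes known once data has been given. A minimal sketch, assuming the usual tick import path:

    import numpy as np
    from tick.hawkes import ModelHawkesSumExpKernLeastSq

    # U = 3 decays beta^u, shared by every kernel phi_ij
    decays = np.array([0.5, 2.0, 8.0])
    model = ModelHawkesSumExpKernLeastSq(decays=decays)
    # model.n_decays == 3; model.n_nodes is only known after fit() has seen data
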
Parameters

decays : numpy.ndarray, shape=(n_decays, )

An array giving the decays \(\beta^u\) of the sum-exponential kernels.

n_baselines : int, default=1

In this model the baseline is assumed to be either constant or piecewise constant. If n_baselines > 1, the piecewise constant setting is enabled; in this case \(\mu_i(t)\) is periodic and piecewise constant on intervals of length period_length / n_baselines.

period_length : float, default=None

In the piecewise constant setting, this is the period of the piecewise constant baseline function (see the construction sketch after this parameter list).

approx : int, default=0 (read-only)

Level of approximation used for computing exponential functions

  • if 0: no approximation

  • if 1: a fast approximated exponential function is used

n_threads : int, default=1 (read-only)

Number of threads used for parallel computation.

  • if int <= 0: the number of threads available on the CPU

  • otherwise the desired number of threads

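For example (an illustrative sketch, not taken from the original text; the period of 24.0 is an assumed time unit), a baseline that repeats with period 24.0 and is constant on 24 intervals of length 1.0 could be declared as:

    import numpy as np
    from tick.hawkes import ModelHawkesSumExpKernLeastSq

    decays = np.array([0.5, 2.0, 8.0])
    # mu_i(t) is periodic with period 24.0 and constant on each of the 24 sub-intervals
    model = ModelHawkesSumExpKernLeastSq(decays=decays,
                                         n_baselines=24,
                                         period_length=24.0)
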
Attributes

n_nodes : int (read-only)

Number of components, or dimension of the Hawkes model

n_decays : int (read-only)

Number of decays used in the sum-exponential kernel

baseline_intervals : np.ndarray, shape=(n_baselines,)

Start time of each interval on which the baseline is constant.

data : list of numpy.array (read-only)

The events given to the model through the fit method. Note that data given through incremental_fit is not stored.

__init__(decays: numpy.ndarray, n_baselines=1, period_length=None, approx: int = 0, n_threads: int = 1)

Initialize self. See help(type(self)) for accurate signature.

fit(events, end_times=None)

Set the corresponding realization(s) of the process.

Parameters

events : list of list of np.ndarray

List of Hawkes process realizations. Each realization is a list of n_nodes arrays, one per component: events[i][j] contains a one-dimensional numpy.ndarray of the timestamps of the events of component j in realization i. If only one realization is given, it will be wrapped into a list (a layout sketch follows this method's parameters).

end_times : np.ndarray or float, default = None

List of end times of all Hawkes processes given to the model. If None, it will be set to each realization's latest time. If only one realization is provided, a float can be given.

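A hedged sketch of the expected events layout, using two hand-made realizations of a two-node process (all timestamps below are made up for illustration):

    import numpy as np
    from tick.hawkes import ModelHawkesSumExpKernLeastSq

    model = ModelHawkesSumExpKernLeastSq(decays=np.array([1.0, 5.0]))

    # Two realizations; each one is a list of per-node timestamp arrays (sorted floats)
    events = [
        [np.array([0.3, 1.2, 4.5]), np.array([0.9, 2.7])],       # realization 0
        [np.array([0.1, 3.3]),      np.array([1.5, 2.2, 4.0])],  # realization 1
    ]
    model.fit(events, end_times=np.array([5.0, 5.0]))
    # model.n_nodes is now 2
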
grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → numpy.ndarray

Computes the gradient of the model at coeffs

Parameters

coeffs : numpy.ndarray

Vector where gradient is computed

out : numpy.ndarray or None

If None a new vector containing the gradient is returned, otherwise, the result is saved in out and returned

Returns

output : numpy.ndarray

The gradient of the model at coeffs

Notes

The fit method must be called to give data to the model, before using grad. An error is raised otherwise.

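For instance (a self-contained sketch, assuming the model exposes tick's usual n_coeffs attribute), a pre-allocated buffer can be reused across calls:

    import numpy as np
    from tick.hawkes import ModelHawkesSumExpKernLeastSq

    model = ModelHawkesSumExpKernLeastSq(decays=np.array([1.0, 5.0]))
    model.fit([np.array([0.3, 1.2, 4.5]), np.array([0.9, 2.7])])  # one two-node realization

    coeffs = np.ones(model.n_coeffs)   # one value per model coefficient
    out = np.empty(model.n_coeffs)
    g = model.grad(coeffs, out=out)    # the result is written into out and returned
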
hessian(x)

Return model’s hessian

Parameters

x : np.ndarray, shape=(n_coeffs,)

Value at which the hessian is computed

Notes

For ModelHawkesSumExpKernLeastSq the value of the hessian does not depend on the point at which it is computed.

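Continuing the sketch above (same fitted model), a hedged illustration; since the least-squares objective is quadratic in the coefficients, the result does not depend on x:

    H = model.hessian(np.ones(model.n_coeffs))
    # The same hessian would be returned for any other coefficient vector
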
incremental_fit(events, end_time=None)

Incrementally fit model with data by adding one Hawkes realization.

Parameters

events : list of np.ndarray

The events of each component of the realization. Namely events[j] contains a one-dimensional np.ndarray of the events’ timestamps of component j

end_time : float, default=None

End time of the realization. If None, it will be set to realization’s latest time.

Notes

Data is not stored, so this might be useful if the list of all realizations does not fit in memory.

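A sketch of the streaming use case; stream_of_realizations is a hypothetical iterator standing in for data that does not fit in memory:

    import numpy as np
    from tick.hawkes import ModelHawkesSumExpKernLeastSq

    model = ModelHawkesSumExpKernLeastSq(decays=np.array([1.0, 5.0]))
    for realization, end_time in stream_of_realizations():  # hypothetical data source
        # realization: list of per-node timestamp arrays for a single Hawkes realization
        model.incremental_fit(realization, end_time=end_time)
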
loss(coeffs: numpy.ndarray) → float

Computes the value of the goodness-of-fit at coeffs

Parameters

coeffs : numpy.ndarray

The loss is computed at this point

Returns

output : float

The value of the loss

Notes

The fit method must be called to give data to the model, before using loss. An error is raised otherwise.

loss_and_grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → tuple

Computes the value and the gradient of the function at coeffs

Parameters

coeffs : numpy.ndarray

Vector where the loss and gradient are computed

out : numpy.ndarray or None

If None a new vector containing the gradient is returned, otherwise, the result is saved in out and returned

Returns

loss : float

The value of the loss

grad : numpy.ndarray

The gradient of the model at coeffs

Notes

The fit method must be called to give data to the model, before using loss_and_grad. An error is raised otherwise.
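
A final hedged sketch: loss_and_grad inside a hand-written projected gradient loop (step size, iteration count, and the non-negativity projection are arbitrary choices made for illustration):

    import numpy as np
    from tick.hawkes import ModelHawkesSumExpKernLeastSq

    model = ModelHawkesSumExpKernLeastSq(decays=np.array([1.0, 5.0]))
    model.fit([np.array([0.3, 1.2, 4.5]), np.array([0.9, 2.7])])  # one two-node realization

    coeffs = np.zeros(model.n_coeffs)
    for _ in range(200):
        obj, gradient = model.loss_and_grad(coeffs)
        coeffs = np.maximum(coeffs - 1e-2 * gradient, 0.0)  # keep coefficients non-negative

    print(model.loss(coeffs))  # goodness-of-fit after the crude descent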