tick.hawkes.ModelHawkesExpKernLeastSq

class tick.hawkes.ModelHawkesExpKernLeastSq(decays: numpy.ndarray, approx: int = 0, n_threads: int = 1)[source]

Hawkes process model for exponential kernels with fixed and given decays. It is fitted with the least-squares loss:

\[\sum_{i=1}^{D} \left( \int_0^T \lambda_i(t)^2 dt - 2 \int_0^T \lambda_i(t) dN_i(t) \right)\]

where \(\lambda_i\) is the intensity:

\[\forall i \in [1 \dots D], \quad \lambda_i(t) = \mu_i + \sum_{j=1}^D \sum_{t_k^j < t} \phi_{ij}(t - t_k^j)\]

where

  • \(D\) is the number of nodes

  • \(\mu_i\) are the baseline intensities

  • \(\phi_{ij}\) are the kernels

  • \(t_k^j\) are the timestamps of all events of node \(j\)

and with an exponential parametrisation of the kernels

\[\phi_{ij}(t) = \alpha_{ij} \beta_{ij} \exp (- \beta_{ij} t) 1_{t > 0}\]

In our implementation we denote:

  • Integer \(D\) by the attribute n_nodes

  • Matrix \(B = (\beta_{ij})_{ij} \in \mathbb{R}^{D \times D}\) by the parameter decays. This parameter is given to the model
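
For concreteness, the parametrisation above can be evaluated directly. The following sketch (with purely illustrative values for \(\mu_i\), \(\alpha_{ij}\) and \(\beta_{ij}\), not taken from any fitted model) computes the kernel \(\phi_{ij}\) and the intensity of a single node at one time point:

    import numpy as np

    # Illustrative kernel weight (alpha) and decay (beta); assumptions, not fitted values
    alpha, beta = 0.5, 2.0

    def phi(t):
        # Exponential kernel: alpha * beta * exp(-beta * t) for t > 0, zero otherwise
        return alpha * beta * np.exp(-beta * t) * (t > 0)

    # Intensity of one node at time t, given its baseline mu and its past event timestamps
    mu = 0.7
    past_events = np.array([0.5, 1.2, 2.8])
    t = 3.0
    intensity = mu + phi(t - past_events).sum()
    print(intensity)  # baseline plus summed kernel contributions of past events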

Parameters

decays : float or numpy.ndarray, shape=(n_nodes, n_nodes)

Either a float giving the decay of all exponential kernels or a (n_nodes, n_nodes) numpy.ndarray giving the decays of the exponential kernels for all pairs of nodes.

approx : int, default=0 (read-only)

Level of approximation used for computing exponential functions

  • if 0: no approximation

  • if 1: a fast approximated exponential function is used

n_threads : int, default=1

Number of threads used for parallel computation.

  • if int <= 0: the number of threads available on the CPU

  • otherwise the desired number of threads
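
As a minimal constructor sketch (the decay values below are arbitrary), decays can be either a single float shared by all kernels or a full (n_nodes, n_nodes) matrix:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    # One common decay for every exponential kernel
    model_scalar = ModelHawkesExpKernLeastSq(decays=3.0)

    # One decay per pair of nodes, here for a 2-node model, computed on 2 threads
    decays = np.array([[1.0, 2.0],
                       [2.0, 1.0]])
    model_matrix = ModelHawkesExpKernLeastSq(decays=decays, n_threads=2)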

Attributes

n_nodes : int (read-only)

Number of components, or dimension of the Hawkes model

data : list of numpy.array (read-only)

The events given to the model through the fit method. Note that data given through incremental_fit is not stored

__init__(decays: numpy.ndarray, approx: int = 0, n_threads: int = 1)[source]

Initialize self. See help(type(self)) for accurate signature.

fit(events, end_times=None)

Set the corresponding realization(s) of the process.

Parameters

events : list of list of np.ndarray

List of Hawkes process realizations. Each realization is a list of length n_nodes, with one array per component. Namely, events[i][j] contains a one-dimensional numpy.ndarray of the timestamps of the events of component j in realization i. If only one realization is given, it will be wrapped into a list.

end_times : np.ndarray or float, default = None

List of end times of all Hawkes processes that will be given to the model. If None, it is set to each realization’s latest event time. If only one realization is provided, a float can be given.
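
A minimal sketch of fit on a single hand-written two-node realization (timestamps are illustrative only); several realizations are passed as a list of such realizations with one end time each:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    # One realization of a 2-node process: one sorted timestamp array per component
    realization = [
        np.array([0.31, 0.93, 1.29, 2.32, 4.25]),
        np.array([0.12, 1.19, 2.12, 2.41, 3.77]),
    ]

    # Single realization: it is wrapped into a list internally, end_times can be a float
    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit(realization, end_times=5.0)
    print(model.n_nodes)  # 2

    # Several realizations: list of realizations, one end time per realization
    model_multi = ModelHawkesExpKernLeastSq(decays=2.0)
    model_multi.fit([realization, realization], end_times=np.array([5.0, 5.0]))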

grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → numpy.ndarray

Computes the gradient of the model at coeffs

Parameters

coeffs : numpy.ndarray

Vector at which the gradient is computed

out : numpy.ndarray or None

If None, a new vector containing the gradient is returned; otherwise, the result is saved in out and returned

Returns

output : numpy.ndarray

The gradient of the model at coeffs

Notes

The fit method must be called to give data to the model before using grad. An error is raised otherwise.
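
A minimal sketch of grad, assuming the coefficient vector stacks the baselines \(\mu_i\) followed by the flattened kernel weights \(\alpha_{ij}\), i.e. n_nodes + n_nodes² entries (this layout is an assumption, not stated above):

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    events = [np.array([0.31, 0.93, 1.29]), np.array([0.12, 1.19, 2.41])]
    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit(events, end_times=3.0)  # grad raises an error if fit was not called first

    # Assumed layout: [mu_0, mu_1, alpha_00, alpha_01, alpha_10, alpha_11]
    coeffs = np.array([0.2, 0.2, 0.1, 0.0, 0.0, 0.1])

    g = model.grad(coeffs)       # a new gradient vector is allocated and returned
    out = np.empty_like(coeffs)
    model.grad(coeffs, out=out)  # the gradient is written into the preallocated buffer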

hessian(x)

Return model’s hessian

Parameters

x : np.ndarray, shape=(n_coeffs,)

Value at which the hessian is computed

Notes

For ModelHawkesExpKernLeastSq the value of the hessian does not depend on the value at which it is computed.
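
A short sketch of hessian; the documentation above does not specify the type of the returned matrix, so only its shape is inspected, and, per the note, the same matrix is obtained whatever x is:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    events = [np.array([0.31, 0.93, 1.29]), np.array([0.12, 1.19, 2.41])]
    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit(events, end_times=3.0)

    # Same assumed coefficient layout as in the grad sketch above
    x = np.array([0.2, 0.2, 0.1, 0.0, 0.0, 0.1])
    hess = model.hessian(x)
    print(hess.shape)  # (n_coeffs, n_coeffs); independent of x for this model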

incremental_fit(events, end_time=None)[source]

Incrementally fit model with data by adding one Hawkes realization.

Parameters

events : list of np.ndarray

The events of each component of the realization. Namely events[j] contains a one-dimensional np.ndarray of the events’ timestamps of component j

end_time : float, default=None

End time of the realization. If None, it will be set to the realization’s latest event time.

Notes

Data is not stored, so this might be useful if the list of all realizations does not fit in memory
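
A sketch of incremental_fit when realizations are streamed one at a time, e.g. because the full list of realizations does not fit in memory (the generator below is a hypothetical stand-in for any data source):

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    def stream_realizations():
        # Hypothetical stand-in yielding (events, end_time) pairs one at a time
        for _ in range(3):
            yield [np.array([0.31, 0.93, 1.29]), np.array([0.12, 1.19, 2.41])], 3.0

    model = ModelHawkesExpKernLeastSq(decays=2.0)
    for events, end_time in stream_realizations():
        model.incremental_fit(events, end_time=end_time)  # folded into the model, not stored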

loss(coeffs: numpy.ndarray) → float

Computes the value of the goodness-of-fit at coeffs

Parameters

coeffs : numpy.ndarray

The loss is computed at this point

Returns

output : float

The value of the loss

Notes

The fit method must be called to give data to the model before using loss. An error is raised otherwise.
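
A minimal sketch of loss, with the same assumed coefficient layout as in the grad sketch above (baselines followed by flattened kernel weights):

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    events = [np.array([0.31, 0.93, 1.29]), np.array([0.12, 1.19, 2.41])]
    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit(events, end_times=3.0)  # loss raises an error if fit was not called first

    coeffs = np.array([0.2, 0.2, 0.1, 0.0, 0.0, 0.1])
    print(model.loss(coeffs))         # scalar goodness-of-fit value at coeffs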

loss_and_grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → tuple

Computes the value and the gradient of the function at coeffs

Parameters

coeffs : numpy.ndarray

Vector at which the loss and gradient are computed

out : numpy.ndarray or None

If None, a new vector containing the gradient is returned; otherwise, the result is saved in out and returned

Returns

loss : float

The value of the loss

grad : numpy.ndarray

The gradient of the model at coeffs

Notes

The fit method must be called to give data to the model before using loss_and_grad. An error is raised otherwise.
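
A sketch of loss_and_grad, the natural call when both quantities are needed at the same point, e.g. inside one iteration of a first-order solver (coefficient layout assumed as above):

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    events = [np.array([0.31, 0.93, 1.29]), np.array([0.12, 1.19, 2.41])]
    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit(events, end_times=3.0)

    coeffs = np.array([0.2, 0.2, 0.1, 0.0, 0.0, 0.1])
    out = np.empty_like(coeffs)
    loss_value, grad_value = model.loss_and_grad(coeffs, out=out)  # gradient saved in out
    print(loss_value)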