class tick.hawkes.ModelHawkesExpKernLeastSq(decays: numpy.ndarray, approx: int = 0, n_threads: int = 1)[source]¶

Hawkes process model for exponential kernels with fixed and given decays. It is modeled with least squares loss:

\[\sum_{i=1}^{D} \left( \int_0^T \lambda_i(t)^2 \, dt - 2 \int_0^T \lambda_i(t) \, dN_i(t) \right)\]

where \(\lambda_i\) is the intensity:

\[\forall i \in [1 \dots D], \quad \lambda_i(t) = \mu_i + \sum_{j=1}^D \sum_{t_k^j < t} \phi_{ij}(t - t_k^j)\]
where
\(D\) is the number of nodes
\(\mu_i\) are the baseline intensities
\(\phi_{ij}\) are the kernels
\(t_k^j\) are the timestamps of all events of node \(j\)
and with an exponential parametrisation of the kernels:

\[\phi_{ij}(t) = \alpha_{ij} \beta_{ij} \exp \left( - \beta_{ij} t \right) 1_{t > 0}\]
In our implementation we denote:
Integer \(D\) by the attribute n_nodes
Matrix \(B = (\beta_{ij})_{ij} \in \mathbb{R}^{D \times D}\) by the parameter decays. This parameter is given to the model.
Parameters

decays : float or numpy.ndarray, shape=(n_nodes, n_nodes)
Either a float giving the decay of all exponential kernels or a (n_nodes, n_nodes) numpy.ndarray giving the decays of the exponential kernels for all pairs of nodes (a construction sketch follows the attribute list below).
approx : int, default=0 (read-only)
Level of approximation used for computing exponential functions:
if 0: no approximation
if 1: a fast approximated exponential function is used
n_threads : int, default=1
Number of threads used for parallel computation:
if int <= 0: the number of threads available on the CPU
otherwise: the desired number of threads
Attributes

n_nodes : int (read-only)
Number of components, or dimension of the Hawkes model
data : list of numpy.array (read-only)
The events given to the model through the fit method. Note that data given through incremental_fit is not stored.
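As a minimal construction sketch (node count and numeric values below are arbitrary), decays can be given either as a single float shared by all kernels or as a full (n_nodes, n_nodes) matrix:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    # Same decay for every kernel: a single float is enough.
    model_scalar = ModelHawkesExpKernLeastSq(decays=3.0)

    # One decay per (i, j) pair of nodes: an (n_nodes, n_nodes) array.
    decays = np.array([[2.0, 1.0],
                       [3.0, 4.0]])
    # n_threads <= 0 uses all threads available on the CPU.
    model_matrix = ModelHawkesExpKernLeastSq(decays=decays, n_threads=-1)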
__init__(decays: numpy.ndarray, approx: int = 0, n_threads: int = 1)[source]¶
Initialize self. See help(type(self)) for accurate signature.
fit(events, end_times=None)¶
Set the corresponding realization(s) of the process.
events : list of list of np.ndarray
List of Hawkes process realizations. Each realization is a list of n_nodes arrays, one per component of the Hawkes process. Namely, events[i][j] contains a one-dimensional numpy.array of the events’ timestamps of component j of realization i. If only one realization is given, it will be wrapped into a list.
end_times : np.ndarray or float, default=None
List of end times of all Hawkes processes that will be given to the model. If None, it will be set to each realization’s latest time. If only one realization is provided, a float can be given.
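A minimal usage sketch (the timestamps are made up) showing the expected events layout for a 2-node process, for one and for several realizations:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    # One realization of a 2-node process: one sorted timestamp array per node.
    realization = [
        np.array([0.31, 0.93, 1.29, 2.32, 4.25]),  # events of node 0
        np.array([0.12, 1.19, 2.12, 2.41, 3.35]),  # events of node 1
    ]

    model = ModelHawkesExpKernLeastSq(decays=2.0)
    # A single realization is wrapped into a list internally.
    model.fit(realization, end_times=5.0)

    # Several realizations: a list of realizations, one end time per realization.
    model_multi = ModelHawkesExpKernLeastSq(decays=2.0)
    model_multi.fit([realization, realization], end_times=np.array([5.0, 5.0]))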
grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → numpy.ndarray¶
Computes the gradient of the model at coeffs
coeffs : numpy.ndarray
Vector where gradient is computed
out : numpy.ndarray or None
If None, a new vector containing the gradient is returned; otherwise, the result is saved in out and returned
output : numpy.ndarray
The gradient of the model at coeffs
Notes
The fit method must be called to give data to the model before using grad. An error is raised otherwise.
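An illustrative sketch, assuming (as in other tick Hawkes models) that the model exposes n_coeffs and that the coefficient vector stacks the baselines followed by the flattened adjacency; the evaluation point is arbitrary:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit([np.array([0.3, 0.9, 2.1]), np.array([0.5, 1.7])])

    coeffs = 0.1 * np.ones(model.n_coeffs)  # arbitrary positive point (assumed layout)

    g = model.grad(coeffs)                  # allocates and returns a new gradient vector
    buffer = np.empty(model.n_coeffs)
    model.grad(coeffs, out=buffer)          # writes the gradient into buffer and returns it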
hessian(x)¶
Return model’s hessian
x : np.ndarray, shape=(n_coeffs,)
Value at which the hessian is computed
Notes
For ModelHawkesExpKernLeastSq the value of the hessian does not depend on the point at which it is computed.
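A short sketch of the remark above: since the hessian is constant for this model, the evaluation point does not matter (the returned object may be sparse; only its shape is inspected here):

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit([np.array([0.3, 0.9, 2.1]), np.array([0.5, 1.7])])

    # Any evaluation point gives the same hessian for a least-squares model.
    H = model.hessian(np.zeros(model.n_coeffs))
    print(H.shape)  # (n_coeffs, n_coeffs)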
incremental_fit(events, end_time=None)[source]¶
Incrementally fit the model by adding one Hawkes realization.
events : list of np.ndarray
The events of each component of the realization. Namely, events[j] contains a one-dimensional np.ndarray of the events’ timestamps of component j
end_time : float, default=None
End time of the realization. If None, it will be set to the realization’s latest time.
Notes
Data is not stored, so this might be useful if the list of all realizations does not fit in memory
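A sketch of streaming several synthetic realizations one at a time (the random timestamps are only placeholders):

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    model = ModelHawkesExpKernLeastSq(decays=2.0)

    # Feed realizations one by one; the raw events are not kept in memory.
    for seed in range(3):
        rng = np.random.RandomState(seed)
        realization = [np.sort(rng.uniform(0, 10, size=20)),  # node 0
                       np.sort(rng.uniform(0, 10, size=15))]  # node 1
        model.incremental_fit(realization, end_time=10.0)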
loss(coeffs: numpy.ndarray) → float¶
Computes the value of the goodness-of-fit at coeffs
coeffs : numpy.ndarray
The loss is computed at this point
output : float
The value of the loss
Notes
The fit method must be called to give data to the model before using loss. An error is raised otherwise.
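For illustration, the loss can be evaluated at any point of the right dimension once data has been given; the coefficient layout assumption is the same as in the grad sketch above:

    import numpy as np
    from tick.hawkes import ModelHawkesExpKernLeastSq

    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit([np.array([0.3, 0.9, 2.1]), np.array([0.5, 1.7])])

    coeffs = 0.1 * np.ones(model.n_coeffs)
    print(model.loss(coeffs))  # scalar goodness-of-fit at this point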
loss_and_grad(coeffs: numpy.ndarray, out: numpy.ndarray = None) → tuple¶
Computes the value and the gradient of the function at coeffs
coeffs : numpy.ndarray
Vector where the loss and gradient are computed
out : numpy.ndarray or None
If None, a new vector containing the gradient is returned; otherwise, the result is saved in out and returned
loss : float
The value of the loss
grad : numpy.ndarray
The gradient of the model at coeffs
Notes
The fit method must be called to give data to the model before using loss_and_grad. An error is raised otherwise.
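Because loss_and_grad returns a (loss, gradient) pair, it can be handed directly to an external optimizer. A sketch with scipy.optimize.minimize; the non-negativity bounds and the coefficient layout are assumptions for illustration, not part of this API:

    import numpy as np
    from scipy.optimize import minimize
    from tick.hawkes import ModelHawkesExpKernLeastSq

    model = ModelHawkesExpKernLeastSq(decays=2.0)
    model.fit([np.array([0.3, 0.9, 2.1]), np.array([0.5, 1.7])])

    # jac=True tells minimize that the objective returns (value, gradient).
    x0 = 0.1 * np.ones(model.n_coeffs)
    res = minimize(model.loss_and_grad, x0, jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * model.n_coeffs)
    print(res.x[:model.n_nodes])  # leading entries: estimated baselines (assumed layout)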