tick.hawkes.HawkesExpKern

class tick.hawkes.HawkesExpKern(decays, gofit='least-squares', penalty='l2', C=1000.0, solver='agd', step=None, tol=1e-05, max_iter=100, verbose=False, print_every=10, record_every=10, elastic_net_ratio=0.95, random_state=None)[source]

Hawkes process learner for exponential kernels with fixed, user-given decays, offering several choices of penalization and solver.

Hawkes processes are point processes defined by the intensity:

\[\forall i \in [1 \dots D], \quad \lambda_i(t) = \mu_i + \sum_{j=1}^D \sum_{t_k^j < t} \phi_{ij}(t - t_k^j)\]

where

  • \(D\) is the number of nodes

  • \(\mu_i\) are the baseline intensities

  • \(\phi_{ij}\) are the kernels

  • \(t_k^j\) are the timestamps of all events of node \(j\)

and with an exponential parametrisation of the kernels

\[\phi_{ij}(t) = \alpha^{ij} \beta^{ij} \exp (- \beta^{ij} t) 1_{t > 0}\]

In our implementation we denote:

  • Integer \(D\) by the attribute n_nodes

  • Vector \(\mu \in \mathbb{R}^{D}\) by the attribute baseline

  • Matrix \(A = (\alpha^{ij})_{ij} \in \mathbb{R}^{D \times D}\) by the attribute adjacency

  • Matrix \(B = (\beta^{ij})_{ij} \in \mathbb{R}^{D \times D}\) by the parameter decays. This parameter is given to the model and is not learned
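As a concrete illustration of the intensity defined above, here is a minimal NumPy sketch that evaluates \(\lambda_i(t)\) for each node (illustrative only, not tick's implementation; all variable names and values are made up):

```python
import numpy as np

def intensity(t, baseline, adjacency, decays, events):
    """Evaluate lambda_i(t) for each node i of a Hawkes process with
    exponential kernels phi_ij(s) = a_ij * b_ij * exp(-b_ij * s)."""
    D = len(baseline)
    lam = np.array(baseline, dtype=float)
    for i in range(D):
        for j in range(D):
            past = events[j][events[j] < t]   # timestamps t_k^j < t
            s = t - past                      # elapsed times
            lam[i] += np.sum(adjacency[i, j] * decays[i, j]
                             * np.exp(-decays[i, j] * s))
    return lam

# Toy example: two nodes, a single event on node 0 at t = 1.0
baseline = np.array([0.5, 0.3])
adjacency = np.array([[0.2, 0.0],
                      [0.4, 0.1]])
decays = np.full((2, 2), 2.0)
events = [np.array([1.0]), np.array([])]
lam = intensity(2.0, baseline, adjacency, decays, events)
```

Here `lam[0]` equals \(0.5 + 0.2 \cdot 2 \, e^{-2}\), since only the event on node 0 contributes excitation at \(t = 2\).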

Parameters:

decays : float or np.ndarray, shape=(n_nodes, n_nodes)

The decays used in the exponential kernels. If a float is given, the decay matrix is filled with this value, so all kernels share the same decay.

gofit : {‘least-squares’, ‘likelihood’}, default=’least-squares’

Goodness-of-fit criterion used to fit the model

C : float, default=1e3

Level of penalization

penalty : {‘l1’, ‘l2’, ‘elasticnet’, ‘nuclear’, ‘none’}, default=’l2’

The penalization to use. Default is ridge penalization. If ‘nuclear’ is chosen, it is applied to the adjacency matrix.
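The nuclear norm of a matrix is the sum of its singular values; a minimal NumPy sketch of the quantity penalized on the adjacency matrix (illustrative only, not tick's internal code):

```python
import numpy as np

def nuclear_norm(A):
    """Sum of the singular values of A: the quantity penalized
    when penalty='nuclear' is applied to the adjacency matrix."""
    return np.linalg.svd(A, compute_uv=False).sum()

# For a diagonal matrix the singular values are the absolute
# diagonal entries, so the nuclear norm of diag(3, 4) is 7.
A = np.array([[3.0, 0.0],
              [0.0, 4.0]])
```

Penalizing the nuclear norm encourages a low-rank adjacency matrix, i.e. a small number of underlying interaction patterns between nodes.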

solver : {‘gd’, ‘agd’, ‘bfgs’, ‘svrg’}, default=’agd’

The name of the solver to use

step : float, default=None

Initial step size used for learning. Used in the ‘gd’, ‘agd’ and ‘svrg’ solvers

tol : float, default=1e-5

The tolerance of the solver (iterations stop when the stopping criterion falls below it). If it is not reached, the solver performs max_iter iterations

max_iter : int, default=100

Maximum number of iterations of the solver

verbose : bool, default=False

If True, the solver prints progress information; otherwise it prints nothing (but still records information in history)

print_every : int, default=10

Print history information when n_iter (iteration number) is a multiple of print_every

record_every : int, default=10

Record history information when n_iter (iteration number) is a multiple of record_every

elastic_net_ratio : float, default=0.95

Elastic net mixing ratio, with 0 <= ratio <= 1.

  • For ratio = 0 this is ridge (L2 squared) regularization.

  • For ratio = 1 this is lasso (L1) regularization.

  • For 0 < ratio < 1, the regularization is a linear combination of L1 and L2.

Used only with the ‘elasticnet’ penalty
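The mixing described above can be sketched with the standard elastic-net formula (a sketch of the usual definition; tick's exact scaling of the penalty by C is an assumption here):

```python
import numpy as np

def elastic_net_penalty(w, C, ratio):
    """Standard elastic-net penalty: a convex mix of lasso (L1)
    and ridge (squared L2) terms, scaled by 1 / C."""
    l1 = np.abs(w).sum()           # lasso part
    l2 = 0.5 * np.dot(w, w)       # ridge part
    return (ratio * l1 + (1.0 - ratio) * l2) / C

w = np.array([1.0, -2.0])
# ratio = 1 -> pure L1:            (|1| + |-2|) / C = 3 / C
# ratio = 0 -> pure squared L2:    0.5 * (1 + 4) / C = 2.5 / C
```

With the default ratio of 0.95, the penalty is dominated by the L1 term, so it behaves mostly like lasso while retaining a small amount of ridge smoothing.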

random_state : int seed, or None (default)

The seed that will be used by stochastic solvers. If None, a random seed is used (based on the timestamp and other system entropy). Used in the ‘svrg’ solver

Attributes:

n_nodes : int

Number of nodes / components in the Hawkes model

baseline : np.array, shape=(n_nodes,)

Inferred baseline of each component’s intensity

adjacency : np.ndarray, shape=(n_nodes, n_nodes)

Inferred adjacency matrix

coeffs : np.array, shape=(n_nodes * n_nodes + n_nodes, )

Raw coefficients of the model: the concatenation of self.baseline and the flattened self.adjacency
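A sketch of how such a flat coefficient vector can be decomposed, assuming the first n_nodes entries hold the baseline and the remaining n_nodes * n_nodes entries hold the adjacency in row-major order (this layout is inferred from the shape given above; the values are made up):

```python
import numpy as np

n_nodes = 2
# Flat vector of length n_nodes * n_nodes + n_nodes = 6
coeffs = np.array([0.5, 0.3,            # baseline mu_1, mu_2
                   0.2, 0.0, 0.4, 0.1])  # adjacency, row-major

baseline = coeffs[:n_nodes]
adjacency = coeffs[n_nodes:].reshape(n_nodes, n_nodes)
```

After this split, `adjacency[i, j]` is the inferred \(\alpha^{ij}\), i.e. the influence of node \(j\) on node \(i\).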
