Real-world event sequences are often generated by different underlying mechanisms and thus exhibit clustering structures. Nonetheless, most existing temporal point processes (TPPs) model and predict event sequences independently, ignoring the inherent clustering structures among them. In this study, we design a novel semi-transductive temporal point process (ST-TPP) and learn it with a Gromov-Wasserstein barycentric (GWB) regularizer in the maximum likelihood estimation (MLE) framework. In particular, given a set of event sequences, our method learns a neural TPP together with cluster centers of the sequences. When computing the intensity function of an event sequence, the proposed neural TPP jointly encodes the sequence's history and the cluster center derived from other, similar sequences, leading to a semi-transductive modeling scheme. In the learning phase, besides maximizing the likelihood of the event sequences, we leverage data-centric and knowledge-based kernel matrices to regularize the sequence embeddings and derive the cluster centers, yielding the proposed GWB regularizer. Experiments on various datasets demonstrate that the semi-transductive modeling scheme of ST-TPP provides a novel way to share information across sequences, resulting in clustered sequence embeddings and competitive predictive performance.
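To make the semi-transductive idea concrete, the sketch below shows an intensity function that depends jointly on a sequence's own history embedding and on a cluster center the sequence is softly assigned to. This is a minimal illustrative toy, not the authors' actual architecture: the encoder, the dimensions, the softmax assignment, and all names (`encode_history`, `semi_transductive_intensity`, etc.) are hypothetical stand-ins, and the GWB regularizer is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative only, not from the paper).
d = 8   # embedding dimension
K = 3   # number of cluster centers

# In the paper these would be learned; here they are random placeholders.
centers = rng.normal(size=(K, d))          # cluster centers of sequences
W_enc = rng.normal(size=(d, 4)) * 0.1      # fixed random "encoder" projection

def encode_history(event_times):
    """Toy stand-in for a neural sequence encoder: summarize
    inter-event gaps into a d-dimensional embedding."""
    gaps = np.diff(event_times, prepend=0.0)
    feats = np.array([gaps.mean(), gaps.std(), len(gaps), event_times[-1]])
    return np.tanh(W_enc @ feats)

def semi_transductive_intensity(event_times, w_hist, w_cent, bias):
    """Sketch of the semi-transductive scheme: the intensity is computed
    from the sequence's history embedding AND a cluster center mixed from
    other (similar) sequences, so information is shared across sequences."""
    h = encode_history(event_times)
    # Soft assignment of this sequence to the cluster centers.
    sim = centers @ h
    p = np.exp(sim - sim.max())
    p /= p.sum()
    c = p @ centers                        # softly mixed cluster center
    # Softplus keeps the intensity strictly positive.
    z = w_hist @ h + w_cent @ c + bias
    return np.log1p(np.exp(z))
```

A purely inductive TPP would use only `h` here; the extra dependence on `c`, which aggregates information from other sequences' cluster structure, is what makes the scheme semi-transductive.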