Add and Thin: Diffusion for Temporal Point Processes
This page links to additional material for our paper
Add and Thin: Diffusion for Temporal Point Processes
by David Lüdke, Marin Biloš, Oleksandr Shchur, Marten Lienen, Stephan Günnemann
Published at the Conference on Neural Information Processing Systems (NeurIPS) 2023
Links
[Paper | Poster (t.b.d.) | GitHub]
Abstract
Autoregressive neural networks within the temporal point process (TPP) framework have become the standard for modeling continuous-time event data. Even though these models can expressively capture event sequences in a one-step-ahead fashion, they are inherently limited for long-term forecasting applications due to the accumulation of errors caused by their sequential nature. To overcome these limitations, we derive ADD-THIN, a principled probabilistic denoising diffusion model for TPPs that operates on entire event sequences. Unlike existing diffusion approaches, ADD-THIN naturally handles data with discrete and continuous components. In experiments on synthetic and real-world datasets, our model matches the state-of-the-art TPP models in density estimation and strongly outperforms them in forecasting.
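For intuition, here is a minimal sketch of the two operations the name refers to: a forward noising step that "thins" an observed event sequence (independently dropping each event with some probability) and "adds" events drawn from a homogeneous Poisson process on the same interval. The function name, keep probability, and noise rate below are illustrative assumptions, not the exact parameterization or noise schedule from the paper; the learned model denoises entire event sequences by reversing this kind of corruption.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_and_thin_noise_step(events, keep_prob, noise_rate, t_max):
    """One hypothetical forward noising step on an event sequence in [0, t_max].

    'Thin': keep each observed event with probability `keep_prob`.
    'Add': superpose noise events from a homogeneous Poisson process
    with rate `noise_rate`. Values are illustrative, not the paper's schedule.
    """
    # Thin: keep each original event independently with probability keep_prob.
    kept = events[rng.random(events.shape[0]) < keep_prob]
    # Add: sample the number of noise events, then place them uniformly on [0, t_max].
    n_noise = rng.poisson(noise_rate * t_max)
    noise = rng.uniform(0.0, t_max, size=n_noise)
    # Return the superposition as a sorted event sequence.
    return np.sort(np.concatenate([kept, noise]))

# Example: corrupt a toy event sequence on the interval [0, 10].
x0 = np.sort(rng.uniform(0.0, 10.0, size=20))
x1 = add_and_thin_noise_step(x0, keep_prob=0.8, noise_rate=0.5, t_max=10.0)
```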
Cite
Please cite our paper if you use the method in your own work:
@inproceedings{luedke2023add,
  title     = {Add and Thin: Diffusion for Temporal Point Processes},
  author    = {David L{\"u}dke and Marin Bilo{\v{s}} and Oleksandr Shchur and Marten Lienen and Stephan G{\"u}nnemann},
  booktitle = {Thirty-seventh Conference on Neural Information Processing Systems},
  year      = {2023},
  url       = {https://openreview.net/forum?id=tn9Dldam9L}
}