Learning-augmented dynamic algorithms: going beyond fine-grained lower bounds using predictions
Dynamic algorithms maintain a solution to a computational problem on constantly changing input data, without recomputing from scratch after every update. One of the difficulties that dynamic algorithms have to deal with is that future updates are unknown. Given the recent advances in learning-augmented algorithms, can we use black-box predictors, which give us approximate knowledge about the yet unseen part of the update sequence, to speed up dynamic algorithms?
In this talk I will discuss what fine-grained conditional lower bounds tell us about the possibility of such speedups. I will then give one example of a learning-augmented dynamic algorithm, specifically for the transitive closure problem.
The talk is based on a joint paper with Jan van den Brand, Sebastian Forster, and Yasamin Nazari.