We recently explored how we can take multiple seen image-to-image translators and reuse them to infer other unseen translations, in an approach we call…
Rotating networks to prevent catastrophic forgetting
In contrast to humans, neural networks tend to quickly forget previous tasks when trained on a new one (without revisiting data from previous tasks).
Generative adversarial networks and image-to-image translation
Yet another post about generative adversarial networks (GANs), pix2pix and CycleGAN. You can already find lots of websites with great introductions to GANs (such as here,…
Deep network compression and adaptation
Network compression and adaptation are generally treated as two independent problems, with compressed networks evaluated on the same dataset, and domain-adapted or task-adapted networks…