Frame predictions extrapolated from UCF101 (model comparison)

Michael Mathieu, Camille Couprie, Yann LeCun

This page shows video predictions obtained with the method presented in the paper Deep multi-scale video prediction beyond mean square error. The two frames with a red border are the predictions; the others are the input (real) frames. The second prediction is obtained by feeding the first prediction back to the model as an input.
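As a rough illustration of this recursive two-step prediction, here is a minimal sketch. The function name predict_next_frame, the number of input frames, and the tensor shapes are assumptions for illustration, not the paper's actual code.

import numpy as np

def predict_two_frames(predict_next_frame, input_frames):
    """Sketch of two-step prediction.

    input_frames: array of shape (4, H, W, 3) holding the real input frames.
    predict_next_frame: stand-in for a trained generator mapping a stack of
    frames to the next frame of shape (H, W, 3).
    """
    # First prediction, computed from the real input frames only.
    pred1 = predict_next_frame(input_frames)
    # Second prediction: drop the oldest frame and append the first
    # prediction, i.e. the model's own output is fed back as an input.
    shifted = np.concatenate([input_frames[1:], pred1[None]], axis=0)
    pred2 = predict_next_frame(shifted)
    return pred1, pred2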

Each example compares the outputs of several models trained with different objectives; the models are explained in detail in the paper.

The main results can be found here.

[Video comparisons omitted. Each example shows the predictions of the following six models:]

Adversarial+GDL

Adversarial

Gradient Difference Loss (GDL)

L2

L2 single scale

L1
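For reference, a rough sketch of the per-frame image losses behind these labels (the Lp terms and the gradient difference term). The exact multi-scale and adversarial formulations are given in the paper; the function names and the default alpha = 1 below are assumptions for illustration.

import numpy as np

def lp_loss(pred, target, p=2):
    # Plain Lp distance between the predicted and ground-truth frames
    # (p=2 for the L2 models, p=1 for the L1 model).
    return np.sum(np.abs(pred - target) ** p)

def gdl_loss(pred, target, alpha=1):
    # Gradient difference loss: penalize the mismatch between the image
    # gradients of the prediction and of the ground truth, which favors
    # sharper edges than a pixel-wise L2 loss alone.
    dy_pred = np.abs(np.diff(pred, axis=0))
    dx_pred = np.abs(np.diff(pred, axis=1))
    dy_true = np.abs(np.diff(target, axis=0))
    dx_true = np.abs(np.diff(target, axis=1))
    return (np.sum(np.abs(dy_pred - dy_true) ** alpha)
            + np.sum(np.abs(dx_pred - dx_true) ** alpha))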