Leaderboard

On this page we keep track of both published and unpublished results achieved on the CORe50 benchmark, so that different Continuous Learning strategies can be easily compared.

For each Continuous Learning scenario you can find a sortable table (click on a column header to change the ordering). The first results column reports the average accuracy over the 10 runs after the last training batch with the Mid-CaffeNet. The second column tells us how much the accuracy can vary depending on the batch order (standard deviation over the 10 runs). Finally, the third column reports the asymptotic memory overhead we can expect from the strategy.
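To make the protocol concrete, here is a minimal sketch of how the first two columns can be computed from per-run results (the accuracy values below are made up for illustration):

```python
import numpy as np

# Hypothetical final-batch test accuracies (%), one per run.
# CORe50 repeats each experiment over 10 runs with different batch orders.
run_accuracies = np.array([54.2, 61.3, 58.7, 60.1, 55.9,
                           62.0, 57.4, 59.8, 56.5, 60.6])

avg_acc = run_accuracies.mean()   # "Avg. Acc. %" column
std_acc = run_accuracies.std()    # "Std. dev. Acc. %" column

print(f"Avg. Acc.: {avg_acc:.2f}%   Std. dev.: {std_acc:.2f}%")
```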

The best-performing CL strategies to date for each scenario are highlighted in cyan.

New Instances (NI)

Strategy          Avg. Acc. %   Std. dev. Acc. %   Mem. Overhead   Ref.
Cumulative [1]    65.15%        0.66%              O(#batch)       [1]
LwF* [3]          59.42%        2.71%              O(1)            -
EWC* [2]          57.40%        3.80%              O(1)            -
Naive [1]         54.69%        6.18%              -               [1]

* Early unpublished results based on our current re-implementation; they are subject to change.

New Classes (NC)

Strategy          Avg. Acc. %   Std. dev. Acc. %   Mem. Overhead   Ref.
Cumulative [1]    64.65%        1.04%              O(#batch)       [1]
iCaRL* [4]        43.62%        0.66%              O(1)            -
CWR [1]           42.32%        1.09%              O(1)            [1]
LwF* [3]          27.60%        1.70%              O(1)            -
EWC* [2]          26.22%        1.18%              O(1)            -
Naive [1]         10.75%        0.84%              -               [1]

* Early unpublished results based on our current re-implementation; they are subject to change.

New Instances and Classes (NIC)

Strategy          Avg. Acc. %   Std. dev. Acc. %   Mem. Overhead   Ref.
Cumulative [1]    64.13%        0.88%              O(#batch)       [1]
CWR [1]           29.56%        0.00%              O(1)            [1]
LwF* [3]          28.94%        4.30%              O(1)            -
EWC* [2]          28.31%        4.30%              O(1)            -
Naive [1]         19.39%        2.90%              -               [1]

* Early unpublished results based on our current re-implementation; they are subject to change.


In the near future we plan to add custom implementations of additional strategies such as iCaRL, Synaptic Intelligence and GEM. We are also working on a better (tunable) evaluation metric that returns a single score while taking into account all the important properties of a CL strategy (such as minimal memory overhead), not just the accuracy after the last batch.
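As a rough illustration of the kind of metric we have in mind, the sketch below combines the final accuracy with a memory-efficiency term through a single tunable weight; the function name, the overhead ratio and the weighting scheme are our own assumptions here, not the final metric:

```python
def cl_score(avg_acc, mem_overhead_ratio, alpha=0.8):
    """Hypothetical tunable CL score (not the official metric).

    avg_acc            -- avg. accuracy % after the last training batch
    mem_overhead_ratio -- extra memory as a fraction of what a cumulative
                          strategy would need (0.0 = constant overhead,
                          1.0 = storing all past batches)
    alpha              -- tunable weight trading accuracy for efficiency
    """
    efficiency = (1.0 - mem_overhead_ratio) * 100.0
    return alpha * avg_acc + (1.0 - alpha) * efficiency

# E.g., a strategy with 42.32% accuracy and constant memory overhead:
print(f"{cl_score(42.32, mem_overhead_ratio=0.0):.2f}")  # 53.86
```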

If you want to add your own strategy, don't hesitate to write us an e-mail with your results or just open a PR on GitHub! :-)

References

[1] Vincenzo Lomonaco and Davide Maltoni. "CORe50: a new Dataset and Benchmark for Continuous Object Recognition". Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:17-26, 2017.
[2] James Kirkpatrick et al. "Overcoming catastrophic forgetting in neural networks". Proceedings of the National Academy of Sciences, 2017, 201611835.
[3] Zhizhong Li and Derek Hoiem. "Learning without Forgetting". European Conference on Computer Vision (ECCV), Springer International Publishing, 2016.
[4] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov and Christoph H. Lampert. "iCaRL: Incremental Classifier and Representation Learning". arXiv preprint arXiv:1611.07725, 2016.