Andris Ambainis, Yuval Filmus and François Le Gall

STOC 2015

In 1990, Coppersmith and Winograd gave an $O(n^{2.376})$ algorithm for matrix multiplication. Their algorithm relies on an identity now known as the Coppersmith–Winograd identity. Analyzing the identity as-is with Strassen’s laser method and an ingenious construction, Coppersmith and Winograd obtained an $O(n^{2.388})$ algorithm; the improved $O(n^{2.376})$ bound comes from applying the same analysis to the tensor square of the basic identity.
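As an illustration (not taken from the paper), the same phenomenon of a bilinear identity driving a matrix-multiplication exponent already appears in Strassen’s algorithm: multiplying $2\times 2$ blocks with 7 products instead of 8 yields exponent $\log_2 7 \approx 2.807$. A minimal recursive sketch:

```python
import numpy as np

def strassen_multiply(A, B):
    """One level-by-level Strassen recursion for square matrices whose
    size is a power of two.

    Uses 7 block multiplications instead of 8, giving running time
    O(n^{log2 7}) = O(n^{2.807...}); the Coppersmith-Winograd identity
    plays an analogous (far more intricate) role for the 2.376-type
    bounds discussed above.
    """
    n = A.shape[0]
    if n == 1:
        return A * B  # 1x1 base case
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven products
    M1 = strassen_multiply(A11 + A22, B11 + B22)
    M2 = strassen_multiply(A21 + A22, B11)
    M3 = strassen_multiply(A11, B12 - B22)
    M4 = strassen_multiply(A22, B21 - B11)
    M5 = strassen_multiply(A11 + A12, B22)
    M6 = strassen_multiply(A21 - A11, B11 + B12)
    M7 = strassen_multiply(A12 - A22, B21 + B22)
    # Reassemble the four blocks of the product
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The fast algorithms discussed in the paper arise from analyzing far richer identities (with the laser method) rather than a direct block recursion, but the exponent-from-identity principle is the same.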

Recently there has been a surge of activity in the area. Stothers, Vassilevska Williams, and Le Gall studied higher and higher tensor powers of the basic identity, culminating in Le Gall’s $O(n^{2.3728639})$ algorithm. How far can this approach go?

We describe a framework, the *laser method with merging*, which encompasses all the algorithms just described and is at once more general and amenable to analysis. We show that, for the exact identity used in the state-of-the-art algorithms, taking the $N$th tensor power for any $N$ cannot yield an algorithm with running time $O(n^{2.3725})$.

@inproceedings{AFLG2015,

author = {Ambainis, Andris and Filmus, Yuval and Le Gall, Fran\c{c}ois},

title = {Fast matrix multiplication: limitations of the {C}oppersmith--{W}inograd method},

booktitle = {47th Annual {ACM} Symposium on Theory of Computing ({STOC} 2015)},

year = {2015},

pages = {585--593}

}
