Decade Analysis

This analysis presents the average annualized improvement rates for problem sizes of 1 thousand, 1 million, and 1 billion and contrasts them with the average improvement rate of hardware as measured by the SPECInt benchmark.\cite{CompArch}
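As a rough illustration of what "average annualized improvement rate" could mean here, the sketch below treats the rate as a compound annual growth rate of speedup between two measured runtimes at a fixed problem size. This formulation, the function name, and the inputs are assumptions for illustration, not the methodology stated in the source.

```python
def annualized_improvement_rate(time_old, time_new, years):
    """Hypothetical compound-annual-growth-rate formulation: the yearly
    rate r such that (1 + r)**years equals the total speedup."""
    total_speedup = time_old / time_new
    return total_speedup ** (1.0 / years) - 1.0

# Example: a 10x runtime reduction over 10 years is roughly a 26% per-year improvement.
print(f"{annualized_improvement_rate(10.0, 1.0, 10):.1%}")
```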

As these graphs show, there are two large clusters of algorithm families, along with some intermediate values. The first cluster, representing just under half of the families, shows little to no improvement even for large problem sizes. These algorithm families may be ones that have received little attention, ones that have already achieved mathematically optimal implementations (and thus cannot improve further), ones that remain intractable for problems of this size, or something else. In any case, these problems have experienced little algorithmic speedup, and thus improvements from other sources, such as hardware or approximate/heuristic approaches, would be the most important drivers of progress for these algorithms.

[Figure: Fig2.png]

The second cluster of algorithms, consisting of 14% of the families, has improvement rates greater than 1,000% per year. These are algorithms that benefited from an exponential speed-up, for example when the initial algorithm had exponential time complexity but later improvements made the problem solvable in polynomial time. As this high improvement rate makes clear, early implementations of these algorithms would have been impossibly slow for even moderately sized problems, but algorithmic improvement has made larger problem sizes feasible. For these families, algorithm improvement has far outstripped improvements in computer hardware.
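To see why an exponential-to-polynomial transition can dwarf hardware gains, the following sketch compares operation counts for a hypothetical O(2^n) algorithm and a hypothetical O(n^3) replacement. The specific complexity classes are illustrative assumptions, not families drawn from the data.

```python
def exp_vs_poly_ratio(n):
    """Ratio of operation counts: a hypothetical 2**n exponential algorithm
    versus a hypothetical n**3 polynomial replacement at problem size n."""
    return 2.0 ** n / n ** 3

# Even at modest n, the ratio is astronomically larger than any hardware speedup.
for n in (30, 50, 100):
    print(f"n={n}: exponential/polynomial ratio ≈ {exp_vs_poly_ratio(n):.2e}")
```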

This figure also shows how large an effect problem size has on the improvement rate. In particular, for n=1 thousand, only 18% of families had improvement rates faster than hardware, whereas 82% had slower rates. But for n=1 million and n=1 billion, 30% and 43% of families, respectively, improved faster than hardware. Correspondingly, the median algorithm family improved 6% per year for n=1 thousand, 15% per year for n=1 million, and 28% per year for n=1 billion. At a problem size of n=1.06 trillion, the median algorithm family improved faster than hardware performance.
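The growing advantage at larger n can be made concrete with a hypothetical complexity-class improvement: replacing an O(n^2) algorithm with an O(n log n) one yields a speedup factor that itself grows with n, which is why the same algorithmic change looks faster relative to hardware at bigger problem sizes. The complexities below are illustrative assumptions, not taken from the dataset.

```python
import math

def speedup_n2_to_nlogn(n):
    """Speedup from a hypothetical O(n^2) -> O(n log n) improvement,
    ignoring constant factors: grows roughly as n / log2(n)."""
    return (n ** 2) / (n * math.log2(n))

# The same algorithmic change yields a much larger gain at bigger problem sizes.
for n in (10**3, 10**6, 10**9):
    print(f"n={n:.0e}: speedup ≈ {speedup_n2_to_nlogn(n):.0f}x")
```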