The algorithms considered of high importance in the 1970s [in Knuth] aren't necessarily the ones of high importance today.
Curious about this, please share just one example.
These days, there are a lot more kinds of trees (and/or more tree algorithms) than there were in the original Knuth v2. (I can't comment on the revised edition, or v3, alas.) When I was looking at Sedgewick's class, I did a lot of "how come they never mentioned this back in MY data structures class? Oh. It wasn't invented yet." :-(
Also, algorithms tend to be described as performing relative to the number of things being operated on:
t = k1 * f(n) + k2

To a large extent, f(n) describes how good your algorithm is, and is of much interest to Computer Scientists ("f(n) = k3 * log2(n)" is SO much better than "f(n) = k3 * n^3"). The k's come largely from how fast your computer is and how well you code the algorithm (of much interest to Computer Engineers and coders). As computers have gotten faster and n's have gotten larger, algorithms are considered, described, and implemented in ways that they wouldn't have been in the '70s (commonly using recursion and dynamic allocation, for example).
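To make the point concrete, here's a minimal sketch (the constant k3 and the sample n values are made up for illustration) showing how quickly a log2(n) cost falls behind an n^3 cost as n grows:

```python
import math

k3 = 1000.0  # hypothetical constant; even a big one can't save a bad f(n)

for n in (10, 1_000, 1_000_000):
    log_cost = k3 * math.log2(n)   # f(n) = k3 * log2(n)
    cube_cost = n ** 3             # f(n) = n^3, with no constant at all
    print(f"n={n:>9}:  k3*log2(n) = {log_cost:12.1f}   n^3 = {cube_cost:.3g}")
```

Even with the cubic algorithm given a free constant of 1 and the logarithmic one handicapped by k3 = 1000, the cubic cost dominates by many orders of magnitude once n is large, which is why Computer Scientists care so much about f(n).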