The Art of Computer Programming

The Art of Computer Programming (TAOCP) is a comprehensive monograph written by the computer scientist Donald Knuth that covers many kinds of programming algorithms and their analysis.

Knuth began the project, originally conceived as a single book with twelve chapters, in 1962. The first three volumes of what was then expected to be a seven-volume set were published in 1968, 1969, and 1973. Work began in earnest on Volume 4 in 1973, but was suspended in 1977 for work on typesetting. Writing of the final copy of Volume 4A began in longhand in 2001, and the first online pre-fascicle, 2A, appeared later in 2001.[1] The first published installment of Volume 4 appeared in paperback as Fascicle 2 in 2005. The hardback Volume 4A, combining Volume 4, Fascicles 0–4, was published in 2011. Volume 4, Fascicle 6 (“Satisfiability”) was released in December 2015; Volume 4, Fascicle 5 (“Mathematical Preliminaries Redux; Backtracking; Dancing Links”) was released in November 2019.

Fascicles 5 and 6 are expected to make up the first two-thirds of Volume 4B. Knuth has not announced any estimated date of release for Volume 4B, although his method used for Volume 4A was to release the hardback volume sometime after the release of the paperback fascicles contained in it. Near-term publisher estimates put the release date at May or June 2019, which turned out to be incorrect.

History

After winning a Westinghouse Talent Search scholarship, Knuth enrolled at the Case Institute of Technology (now Case Western Reserve University), where his performance was so outstanding that the faculty voted to award him a master of science upon his completion of the bachelor’s degree. During his summer vacations, Knuth was hired by the Burroughs Corporation to write compilers, earning more in his summer months than full professors did for an entire year.[4] Such exploits made Knuth a topic of discussion among the mathematics department, which included Richard S. Varga.

In January 1962, when he was a graduate student in the mathematics department at Caltech, Knuth was approached by Addison-Wesley to write a book about compiler design, and he proposed a larger scope. He came up with a list of twelve chapter titles the same day. In the summer of 1962 he worked on a FORTRAN compiler for UNIVAC. During this time, he also came up with a mathematical analysis of linear probing, which convinced him to present the material with a quantitative approach. After receiving his PhD in June 1963, he began working on his manuscript, of which he finished his first draft in June 1965, at 3,000 handwritten pages.[5] He had assumed that about five handwritten pages would translate into one printed page, but his publisher said instead that about 1½ handwritten pages translated into one printed page. This meant he had approximately 2,000 printed pages of material, which closely matches the size of the first three published volumes. The publisher was nervous about accepting such a project from a graduate student. At this point, Knuth received support from Richard S. Varga, who was the scientific adviser to the publisher. Varga was visiting Olga Taussky-Todd and John Todd at Caltech. With Varga’s enthusiastic endorsement, the publisher accepted Knuth’s expanded plans. In its expanded version, the book would be published in seven volumes, each with just one or two chapters.[6] Due to the growth in the material, the plan for Volume 4 has since expanded to include Volumes 4A, 4B, 4C, 4D, and possibly more.

In 1976, Knuth prepared a second edition of Volume 2, requiring it to be typeset again, but the style of type used in the first edition (called hot type) was no longer available. In 1977, he decided to spend some time creating something more suitable. Eight years later, he returned with TeX, which is currently used for all volumes.

The offer of a so-called Knuth reward check worth “one hexadecimal dollar” (100 hexadecimal cents, which is $2.56 in decimal) for any errors found, and the correction of these errors in subsequent printings, has contributed to the highly polished and still-authoritative nature of the work, long after its first publication. Another characteristic of the volumes is the variation in the difficulty of the exercises. Knuth even has a numerical difficulty scale for rating those exercises, ranging from 0 to 50, where 0 is trivial and 50 is an open question in contemporary research.[7]
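
For readers unused to hexadecimal, the figure in parentheses follows from a simple base conversion:

$$100_{16}\ \text{cents} = 1 \times 16^2\ \text{cents} = 256\ \text{cents} = \$2.56.$$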

Assembly language in the book

All examples in the books use a language called “MIX assembly language”, which runs on the hypothetical MIX computer. The MIX computer is currently being replaced by the MMIX computer, a RISC version. Software such as GNU MDK exists to provide emulation of the MIX architecture. Knuth considers the use of assembly language necessary so that the speed and memory usage of algorithms can be judged.
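
To give a flavor of what such programs look like, the following is a minimal, hypothetical sketch in MIXAL (the MIX assembly language). It is not taken from the books; the labels, addresses, and constants are chosen arbitrarily for illustration. It loads one constant, adds a second, stores the sum, and halts.

* Hypothetical MIXAL sketch (not from TAOCP).
* LDA loads the word at A into register rA, ADD adds the word at B,
* STA stores rA back into memory at C, and HLT stops the machine.
        ORIG    3000
START   LDA     A
        ADD     B
        STA     C
        HLT
* Constant data words follow the executable instructions.
A       CON     20
B       CON     22
C       CON     0
        END     START

With GNU MDK, a file along these lines should be assemblable with mixasm and runnable in the mixvm simulator.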

Volumes

Completed

  • Volume 1 – Fundamental Algorithms
    • Chapter 1 – Basic concepts
    • Chapter 2 – Information structures
  • Volume 2 – Seminumerical Algorithms
    • Chapter 3 – Random numbers
    • Chapter 4 – Arithmetic
  • Volume 3 – Sorting and Searching
    • Chapter 5 – Sorting
    • Chapter 6 – Searching
  • Volume 4A – Combinatorial Algorithms
    • Chapter 7 – Combinatorial searching (part 1)

Planned

  • Volume 4B… – Combinatorial Algorithms (chapters 7 & 8 released in several subvolumes)
    • Chapter 7 – Combinatorial searching (continued)
    • Chapter 8 – Recursion
  • Volume 5 – Syntactic Algorithms (as of 2017, estimated for release in 2025)
    • Chapter 9 – Lexical scanning (also includes string search and data compression)
    • Chapter 10 – Parsing techniques
  • Volume 6 – The Theory of Context-Free Languages
  • Volume 7 – Compiler Techniques

Chapter outlines

Completed

Volume 1 – Fundamental Algorithms

  • Chapter 1 – Basic concepts
    • 1.1. Algorithms
    • 1.2. Mathematical Preliminaries
      • 1.2.1. Mathematical Induction
      • 1.2.2. Numbers, Powers, and Logarithms
      • 1.2.3. Sums and Products
      • 1.2.4. Integer Functions and Elementary Number Theory
      • 1.2.5. Permutations and Factorials
      • 1.2.6. Binomial Coefficients
      • 1.2.7. Harmonic Numbers
      • 1.2.8. Fibonacci Numbers
      • 1.2.9. Generating Functions
      • 1.2.10. Analysis of an Algorithm
      • 1.2.11. Asymptotic Representations
        • 1.2.11.1. The O-notation
        • 1.2.11.2. Euler’s summation formula
        • 1.2.11.3. Some asymptotic calculations
    • 1.3. MMIX (MIX in the hardback copy but updated by Fascicle 1)
      • 1.3.1. Description of MMIX
      • 1.3.2. The MMIX Assembly Language
      • 1.3.3. Applications to Permutations
    • 1.4. Some Fundamental Programming Techniques
      • 1.4.1. Subroutines
      • 1.4.2. Coroutines
      • 1.4.3. Interpretive Routines
        • 1.4.3.1. A MIX simulator
        • 1.4.3.2. Trace routines
      • 1.4.4. Input and Output
      • 1.4.5. History and Bibliography
  • Chapter 2 – Information Structures
    • 2.1. Introduction
    • 2.2. Linear Lists
      • 2.2.1. Stacks, Queues, and Deques
      • 2.2.2. Sequential Allocation
      • 2.2.3. Linked Allocation
      • 2.2.4. Circular Lists
      • 2.2.5. Doubly Linked Lists
      • 2.2.6. Arrays and Orthogonal Lists
    • 2.3. Trees
      • 2.3.1. Traversing Binary Trees
      • 2.3.2. Binary Tree Representation of Trees
      • 2.3.3. Other Representations of Trees
      • 2.3.4. Basic Mathematical Properties of Trees
        • 2.3.4.1. Free trees
        • 2.3.4.2. Oriented trees
        • 2.3.4.3. The “infinity lemma”
        • 2.3.4.4. Enumeration of trees
        • 2.3.4.5. Path length
        • 2.3.4.6. History and bibliography
      • 2.3.5. Lists and Garbage Collection
    • 2.4. Multilinked Structures
    • 2.5. Dynamic Storage Allocation
    • 2.6. History and Bibliography

Volume 2 – Seminumerical Algorithms

  • Chapter 3 – Random Numbers
    • 3.1. Introduction
    • 3.2. Generating Uniform Random Numbers
      • 3.2.1. The Linear Congruential Method
        • 3.2.1.1. Choice of modulus
        • 3.2.1.2. Choice of multiplier
        • 3.2.1.3. Potency
      • 3.2.2. Other Methods
    • 3.3. Statistical Tests
      • 3.3.1. General Test Procedures for Studying Random Data
      • 3.3.2. Empirical Tests
      • 3.3.3. Theoretical Tests
      • 3.3.4. The Spectral Test
    • 3.4. Other Types of Random Quantities
      • 3.4.1. Numerical Distributions
      • 3.4.2. Random Sampling and Shuffling
    • 3.5. What Is a Random Sequence?
    • 3.6. Summary
  • Chapter 4 – Arithmetic
    • 4.1. Positional Number Systems
    • 4.2. Floating Point Arithmetic
      • 4.2.1. Single-Precision Calculations
      • 4.2.2. Accuracy of Floating Point Arithmetic
      • 4.2.3. Double-Precision Calculations
      • 4.2.4. Distribution of Floating Point Numbers
    • 4.3. Multiple Precision Arithmetic
      • 4.3.1. The Classical Algorithms
      • 4.3.2. Modular Arithmetic
      • 4.3.3. How Fast Can We Multiply?
    • 4.4. Radix Conversion
    • 4.5. Rational Arithmetic
      • 4.5.1. Fractions
      • 4.5.2. The Greatest Common Divisor
      • 4.5.3. Analysis of Euclid’s Algorithm
      • 4.5.4. Factoring into Primes
    • 4.6. Polynomial Arithmetic
      • 4.6.1. Division of Polynomials
      • 4.6.2. Factorization of Polynomials
      • 4.6.3. Evaluation of Powers
      • 4.6.4. Evaluation of Polynomials
    • 4.7. Manipulation of Power Series

Volume 3 – Sorting and Searching

  • Chapter 5 – Sorting
    • 5.1. Combinatorial Properties of Permutations
      • 5.1.1. Inversions
      • 5.1.2. Permutations of a Multiset
      • 5.1.3. Runs
      • 5.1.4. Tableaux and Involutions
    • 5.2. Internal sorting
      • 5.2.1. Sorting by Insertion
      • 5.2.2. Sorting by Exchanging
      • 5.2.3. Sorting by Selection
      • 5.2.4. Sorting by Merging
      • 5.2.5. Sorting by Distribution
    • 5.3. Optimum Sorting
      • 5.3.1. Minimum-Comparison Sorting
      • 5.3.2. Minimum-Comparison Merging
      • 5.3.3. Minimum-Comparison Selection
      • 5.3.4. Networks for Sorting
    • 5.4. External Sorting
      • 5.4.1. Multiway Merging and Replacement Selection
      • 5.4.2. The Polyphase Merge
      • 5.4.3. The Cascade Merge
      • 5.4.4. Reading Tape Backwards
      • 5.4.5. The Oscillating Sort
      • 5.4.6. Practical Considerations for Tape Merging
      • 5.4.7. External Radix Sorting
      • 5.4.8. Two-Tape Sorting
      • 5.4.9. Disks and Drums
    • 5.5. Summary, History, and Bibliography
  • Chapter 6 – Searching
    • 6.1. Sequential Searching
    • 6.2. Searching by Comparison of Keys
      • 6.2.1. Searching an Ordered Table
      • 6.2.2. Binary Tree Searching
      • 6.2.3. Balanced Trees
      • 6.2.4. Multiway Trees
    • 6.3. Digital Searching
    • 6.4. Hashing
    • 6.5. Retrieval on Secondary Keys

Volume 4A – Combinatorial Algorithms, Part 1

  • Chapter 7 – Combinatorial Searching
    • 7.1. Zeros and Ones
      • 7.1.1. Boolean Basics
      • 7.1.2. Boolean Evaluation
      • 7.1.3. Bitwise Tricks and Techniques
      • 7.1.4. Binary Decision Diagrams
    • 7.2. Generating All Possibilities
      • 7.2.1. Generating Basic Combinatorial Patterns
        • 7.2.1.1. Generating all n-tuples
        • 7.2.1.2. Generating all permutations
        • 7.2.1.3. Generating all combinations
        • 7.2.1.4. Generating all partitions
        • 7.2.1.5. Generating all set partitions
        • 7.2.1.6. Generating all trees
        • 7.2.1.7. History and further references

Planned

Volumes 4B, 4C, 4D – Combinatorial Algorithms

  • Chapter 7 – Combinatorial Searching (continued)
    • 7.2. Generating all possibilities (continued)
      • 7.2.2. Backtrack programming (published in Fascicle 5)
        • 7.2.2.1. Dancing links (published in Fascicle 5)
        • 7.2.2.2. Satisfiability (published in Fascicle 6)
        • 7.2.2.3. Constraint satisfaction
        • 7.2.2.4. Hamiltonian paths (online draft in pre-fascicle 8A)
        • 7.2.2.5. Cliques
        • 7.2.2.6. Covers (Vertex cover, Set cover problem, Exact cover, Clique cover)
        • 7.2.2.7. Squares
        • 7.2.2.8. A potpourri of puzzles (online draft in pre-fascicle 9B)
        • 7.2.2.9. Estimating backtrack costs (chapter 6 of “Selected Papers on Analysis of Algorithms”, and pre-fascicle 5b in Section 7.2.2 under the heading “Running time estimates”)
      • 7.2.3. Generating inequivalent patterns (includes discussion of Pólya enumeration theorem)
    • 7.3. Shortest paths
    • 7.4. Graph algorithms
      • 7.4.1. Components and traversal
      • 7.4.2. Special classes of graphs
      • 7.4.3. Expander graphs
      • 7.4.4. Random graphs
    • 7.5. Network algorithms
      • 7.5.1. Distinct representatives
      • 7.5.2. The assignment problem
      • 7.5.3. Network flows
      • 7.5.4. Optimum subtrees
      • 7.5.5. Optimum matching
      • 7.5.6. Optimum orderings
    • 7.6. Independence theory
      • 7.6.1. Independence structures
      • 7.6.2. Efficient matroid algorithms
    • 7.7. Discrete dynamic programming (see also Transfer-matrix method)
    • 7.8. Branch-and-bound techniques
    • 7.9. Herculean tasks (aka NP-hard problems)
    • 7.10. Near-optimization
  • Chapter 8 – Recursion (chapter 22 of “Selected Papers on Analysis of Algorithms”)

Volume 5 – Syntactic Algorithms

As of 2017, estimated for release in 2025.

  • Chapter 9 – Lexical scanning (also includes string search and data compression)
  • Chapter 10 – Parsing techniques