Elements of Computing Systems Chapter 12
We began by reading through some of Murray’s commits, paying particular attention to those which highlighted discrepancies between the book’s advice and the actual behaviour of the reference compiler. (For example: the book says `push constant 1; neg` but the compiler does `push constant 0; not`; the book says to add address and index, but the compiler adds index and address; the book says to set up `that` before calculating the value in an array assignment, but the compiler calculates the value first and stores it temporarily.) The general feeling was that it would’ve taken less work to write a “correct” compiler if we’d been given tests for the behaviour of its emitted code (cf. the `vm-translator` tests) instead of having to make it match the output of the reference compiler.
Suitably overawed by the magnitude of Murray’s achievement, we agreed to merge his commits into the `compilation-engine` branch and then land the whole compilation engine onto `master`.
Before starting on the exercises, we discussed a couple of points of confusion in the chapter. Tom confessed to not fully understanding this paragraph on page 251:

> This [recursive division] algorithm may be considered suboptimal since each multiplication operation also requires O(n) addition and subtraction operations. However, careful inspection reveals that the product `2 * q * y` can be computed without any multiplication. Instead, we can rely on the value of this product in the previous recursion level, and use addition to establish its current value.

Clarification was not immediately forthcoming.
Tom was also unclear about the intended initial state of the heap in the “Improved Memory Allocation Algorithm” on page 254. A moment’s discussion indicated that page 255 clarifies this with `freeList = heapBase; freeList.length = heapLength; freeList.next = null`, i.e. the initial heap is one huge segment filling the available memory.
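That initial state and a first-fit `alloc` can be modelled in a few lines of Python. (This is a toy of ours, not the book’s exact bookkeeping: the real algorithm keeps each segment’s length and next pointer inside the heap itself, and `deAlloc` recovers the block size from a header rather than taking it as a parameter.)

```python
# Toy model of the improved allocator's free list. Each free segment is
# [base, length]; the heap starts as one huge segment filling all of the
# available memory, as on page 255. Standard Hack heap bounds assumed.
HEAP_BASE, HEAP_LENGTH = 2048, 14336
free_list = [[HEAP_BASE, HEAP_LENGTH]]

def alloc(size):
    """First fit: carve `size` words off the first big-enough segment."""
    for seg in free_list:
        if seg[1] >= size:
            block = seg[0]
            seg[0] += size   # shrink the segment from the front
            seg[1] -= size
            return block
    raise MemoryError("no free segment is large enough")

def de_alloc(block, size):
    """Append the freed block to the free list (no coalescing here)."""
    free_list.append([block, size])
```

Unlike the basic algorithm, a freed block really can be handed out again by a later `alloc`, which is the whole point of the upgrade.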
We briefly discussed the best way to approach the Chapter 12 project: did we want to implement the entire Jack OS? If not, which parts, and in what order? The group was generally most interested in the memory allocator, which was the first part of the project anyway, so we decided to follow instructions and see how far through the project we got.
We refreshed our understanding of the VM emulator by mucking about with it on the command line. Leo eventually tired of this, and retaliated by writing a makefile to compile the skeleton `Memory.jack` into `Memory.vm` and run `MemoryTest.tst` through the emulator. This gave us a meaningful failure message and allowed us to proceed with the implementation of the `Memory` class.
We found it easy to implement the “Basic Memory Allocation Algorithm” (pages 253–254) by just incrementing the free pointer in `alloc` and doing nothing in `deAlloc`. We considered upgrading to the “Improved Memory Allocation Algorithm” (which actually frees memory when it’s no longer in use), but there wasn’t enough collective enthusiasm to implement something more complicated before it was required.
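In Python terms (a sketch of ours; the heap bounds are the standard Hack values), the basic algorithm amounts to a bump allocator:

```python
# The basic algorithm: a single "free" pointer that only ever moves
# forward, so nothing is ever reclaimed. Hack heap bounds assumed.
HEAP_BASE = 2048
HEAP_END = 16384      # one past the last heap address

free = HEAP_BASE      # next unallocated address

def alloc(size):
    """Hand out the next `size` words and bump the free pointer."""
    global free
    block = free
    free = free + size
    if free > HEAP_END:
        raise MemoryError("heap exhausted")
    return block

def de_alloc(block):
    """The basic algorithm: do nothing."""
    pass

a = alloc(10)   # a == 2048
b = alloc(5)    # b == 2058
```

Every allocation succeeds until the free pointer walks off the end of the heap, at which point no amount of `de_alloc`ing helps.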
It was also straightforward to write `poke` and `peek` by following the “hacking trick” on page 270, namely assigning zero to an `Array` variable and then using array access to read and write memory directly. This provided a glimpse into Jack’s laid-back typing discipline: method calls rely on the type of a variable to decide what code to execute, but other operations (e.g. assignment) don’t care at all.
After some housekeeping to avoid duplication in the test directories and makefile, we added the skeleton `Array` implementation and quickly fleshed it out.

This again involved appreciating Jack’s indifference towards types. As page 265 points out, `Array.new` is unusual in being a `function` rather than a `constructor`; it doesn’t automatically allocate an `Array` instance, it just returns the `int` address that it gets from `Memory.alloc`, which the caller then treats as a legitimate `Array`. In particular this means that the “instance method” `dispose` is really called directly on an `int`, so `Memory.deAlloc(this)` is the appropriate implementation. (We think. It’s hard to tell whether this really works, because our `Memory.deAlloc` doesn’t do anything.)
As we began running short of time, we decided to pick on `divide` as the most potentially interesting part of the `Math` class, particularly given the unanswered question about it that had come up earlier.
That question was how to implement this pseudocode…
```
divide(x, y):
  // Integer part of x/y, where x >= 0 and y > 0
  if y > x return 0
  q = divide(x, 2 * y)
  if (x - 2 * q * y) < y
    return 2 * q
  else
    return 2 * q + 1
```
…without performing the multiplication `2 * q * y` each time `divide` is recursively called. The book suggests this can be done with addition (see above) but doesn’t elaborate further.
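Transcribed directly into Python (our sketch, not the book’s code), the pseudocode computes correct quotients, but each recursion level still pays for that multiplication:

```python
def divide(x, y):
    """Integer part of x / y, where x >= 0 and y > 0
    (the book's recursive algorithm, multiplication and all)."""
    if y > x:
        return 0
    q = divide(x, 2 * y)       # quotient for a doubled divisor
    if (x - 2 * q * y) < y:    # the product we'd like to avoid
        return 2 * q
    else:
        return 2 * q + 1
```

For instance, `divide(95, 3)` returns 31, matching the whiteboard trace below.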
We decided to investigate by drawing on the whiteboard a kind of stack trace of the values of `x`, `y`, `q`, `2 * q * y`, `x - (2 * q * y) < y` and `divide`’s result when calculating `divide(95, 3)`:
| `x` | `y` | `q` | `2 * q * y` | `x - (2 * q * y) < y` | `divide(x, y)` |
|----:|----:|----:|------------:|:---------------------:|---------------:|
| 95 | 3 | 15 | 90 | no | 31 |
| 95 | 6 | 7 | 84 | no | 15 |
| 95 | 12 | 3 | 72 | no | 7 |
| 95 | 24 | 1 | 48 | no | 3 |
| 95 | 48 | 0 | 0 | no | 1 |
| 95 | 96 | — | — | — | 0 |
This made it much clearer to us that `2 * q * y` at a given level of recursion is the sum of `y` and `2 * q * y` at the next level down whenever that level returns `2 * q + 1` (e.g. 84 = 12 + 72), and is unchanged when it returns `2 * q`. That suggested an alternative pseudocode implementation which threads the running product through the return value:
```
divide(x, y):
  // Returns both x/y and the product (x/y) * y, without multiplying
  if y > x return [0, 0]
  q, qy2 = divide(x, 2 * y)     // invariant: qy2 = q * (2 * y)
  if (x - qy2) < y
    return [2 * q, qy2]         // q doubles, so 2 * q * y = qy2 already
  else
    return [2 * q + 1, qy2 + y] // adding 1 to q adds y to the product
```
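A Python sketch of this idea (ours, under the assumption that the returned pair is always the quotient together with quotient times divisor): doubling the quotient leaves the product reusable as-is, and the odd case needs only a single addition.

```python
def divide(x, y):
    """Return (x // y, (x // y) * y) for x >= 0, y > 0, with no
    multiplication: the running product rides along in the return
    value and is maintained by addition alone."""
    if y > x:
        return 0, 0
    q, qy2 = divide(x, 2 * y)   # invariant: qy2 == q * (2 * y)
    if (x - qy2) < y:
        return 2 * q, qy2
    else:
        return 2 * q + 1, qy2 + y

divide(95, 3)   # (31, 93)
```

The first element is the quotient; the second is exactly the `2 * q * y` that a caller one level up would otherwise have to multiply out.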
By this time we were tired, triumphant, and half an hour over time, so we decided to stop for the night rather than implement this in Jack. As we left, we expressed reservations about the efficiency of the memory allocations and deallocations required to support the multiple return values in this new `divide`.
- We noticed that some of our copies of the book say “Nand to Tetris Companion” on their cover, while others don’t. Perhaps a measure to placate angry readers who expected the book to ever mention Tetris?
- Kevin mentioned that the link on page 278, http://www.idc.ac.il/tecs, is out of date; the book’s web site is now at http://www.nand2tetris.org/.
- Kevin also noticed that the book’s downloadable source code does not appear to include the source of their Jack compiler.
- Leo taught (some of) us about phony targets in makefiles.
- Leo has started using a nice new colour scheme.
- Chris P emailed the list with another write-up.
Thanks to Leo and Geckoboard for hosting, and thanks to Murray’s heroic efforts for finally propelling us beyond Chapter 11.