iCE2Tetris
Afternoon! Some of you will remember the heady days of 2014 when this club got together to work through the NAND to Tetris course. I still think it was one of my favourite club activities we ever did and I have a lot of fond memories of working through the book in the old Geckoboard office.
Late last year I decided to dip my toes into the FPGA world and bought myself an "iCEBreaker" board, a fairly cheap, entry-level FPGA with the advantage of being supported by an entirely open source toolchain. That means not having to deal with the massive, horrible IDEs from FPGA vendors, which had always put me off. I basically had zero experience with electronics, but I'd been fascinated by FPGAs and wanted to learn more.
So naturally, when I received my iCEBreaker in February just a few weeks before lockdown began, I started learning enough about it that I could eventually aim to replicate the NAND to Tetris project on it. Just as we in the club developed a simulated computer out of CPU, RAM, ALU etc, I would do the same, except on real hardware. I hooked up the FPGA to a VGA display, and for input I implemented the PS/2 protocol and found an old keyboard online.
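For anyone who hasn't touched VGA before: the standard 640x480 at 60Hz mode is driven by a pixel clock of roughly 25MHz, with horizontal and vertical counters deciding when to assert the sync pulses. Here's a toy sketch of the timing arithmetic in Rust (my own illustration of the standard timings, not the actual ice2tetris Verilog):

```rust
// Standard 640x480@60Hz timing: visible area, front porch, sync pulse, back porch.
const H_VISIBLE: u32 = 640;
const H_FRONT: u32 = 16;
const H_SYNC: u32 = 96;
const H_BACK: u32 = 48;
const H_TOTAL: u32 = H_VISIBLE + H_FRONT + H_SYNC + H_BACK; // 800 clocks per line

const V_VISIBLE: u32 = 480;
const V_FRONT: u32 = 10;
const V_SYNC: u32 = 2;
const V_BACK: u32 = 33;
const V_TOTAL: u32 = V_VISIBLE + V_FRONT + V_SYNC + V_BACK; // 525 lines per frame

// The hsync pulse is asserted only during the sync portion of each line.
fn hsync(x: u32) -> bool {
    x >= H_VISIBLE + H_FRONT && x < H_VISIBLE + H_FRONT + H_SYNC
}

fn main() {
    assert_eq!(H_TOTAL, 800);
    assert_eq!(V_TOTAL, 525);
    assert!(!hsync(0) && hsync(660) && !hsync(799));
    // 800 * 525 * 60Hz comes out at about 25.2MHz, hence the 25MHz pixel clock.
    println!("ok");
}
```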
After months of hard work, I have now open sourced my project, which can be found here: https://github.com/leocassarani/ice2tetris. The repository also includes a C++ simulator (in the `sim` directory) which allowed me to try out the hardware design without running it on real hardware -- this involved (for example) taking key presses on my Mac and translating them into PS/2 signals so the simulated hardware could interpret them, and doing the same for the VGA display output.

I ended up implementing a much more sophisticated CPU than what's in the book, with various optimisations that take advantage of the way the "Hack" architecture works. Running on a 25MHz clock (and on actual hardware!), this is a lot faster than the simulators we were using back in the club days. The game of Pong that ships with the book is so fast that it's hard to see the ball (though I got very good at it after test-playing it for months). I remember that towards the end of the book, @h-lame had tried to write a Mandelbrot implementation but we found that the simulator was too slow to render it properly; if you still have the source code, let me know, because it would be really cool to try it out on real hardware.
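The PS/2 device-to-host protocol is pleasingly simple: each byte goes out as an 11-bit frame of one start bit (0), eight data bits LSB-first, one odd-parity bit, and one stop bit (1). A minimal sketch of that framing in Rust (my own illustration, not the actual simulator code):

```rust
// Serialise a scancode byte into the 11-bit PS/2 device-to-host frame.
fn ps2_frame(scancode: u8) -> [u8; 11] {
    let mut bits = [0u8; 11];
    bits[0] = 0; // start bit is always 0
    for i in 0..8 {
        bits[1 + i] = (scancode >> i) & 1; // data bits, least significant first
    }
    // Odd parity: the data bits plus the parity bit must contain an odd number of 1s.
    bits[9] = ((scancode.count_ones() + 1) % 2) as u8;
    bits[10] = 1; // stop bit is always 1
    bits
}

fn main() {
    // 0x1C is the PS/2 set-2 "make" code for the A key.
    let frame = ps2_frame(0x1C);
    assert_eq!(frame[0], 0);
    assert_eq!(frame[10], 1);
    // 0x1C = 0b0001_1100, sent LSB-first:
    assert_eq!(&frame[1..9], &[0, 0, 1, 1, 1, 0, 0, 0]);
    // Three 1 bits in the data, so the odd-parity bit is 0.
    assert_eq!(frame[9], 0);
    println!("ok");
}
```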
At the same time, I was unhappy with the fact that in the book (spoiler alert), no one ever actually gets to play Tetris!! So I dusted off my knowledge of the "Jack" language and built the most full-featured version of Tetris I could fit within the punishingly low 32,768-instruction limit of the Hack architecture: https://github.com/leocassarani/Tetris.jack
Initially I was using the compiler, "VM translator" and assembler that we built in Ruby as part of the club, but I pretty quickly found that I needed tooling that could optimise my code to run faster and fit into a smaller binary if I wanted to compile large programs like Tetris with it. So naturally I had to write my own Jack optimising compiler in Rust, which encapsulates all the different stages that we wrote separately as part of the book (Jack to VM, VM to assembly, assembly to binary): https://github.com/leocassarani/jackc
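To give a flavour of what "optimising" can mean at the VM level, here's a toy constant-folding pass over Hack VM commands. This is entirely my own invented example, not jackc's actual code, but it's the kind of rewrite that shrinks the binary: two constant pushes followed by an `add` collapse into a single push.

```rust
// Toy peephole pass: fold `push constant a / push constant b / add`
// into `push constant (a+b)`.
fn fold_constant_adds(cmds: &[&str]) -> Vec<String> {
    let mut out: Vec<String> = Vec::new();
    for &cmd in cmds {
        if cmd == "add" && out.len() >= 2 {
            let parse = |s: &str| -> Option<i64> {
                s.strip_prefix("push constant ")?.parse().ok()
            };
            if let (Some(a), Some(b)) =
                (parse(&out[out.len() - 2]), parse(&out[out.len() - 1]))
            {
                // Replace the two pushes and the add with one folded push.
                out.pop();
                out.pop();
                out.push(format!("push constant {}", a + b));
                continue;
            }
        }
        out.push(cmd.to_string());
    }
    out
}

fn main() {
    let program = ["push constant 2", "push constant 3", "add", "pop local 0"];
    let optimised = fold_constant_adds(&program);
    assert_eq!(optimised, vec!["push constant 5", "pop local 0"]);
    println!("ok");
}
```

Each folded triple saves both VM commands and, downstream, a surprising number of Hack instructions, which matters when you only have 32,768 of them.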
Finally, when you're spending weeks working on a large project in Jack, it can get very frustrating to be editing these programs without help from your editor, so naturally I ended up writing a syntax highlighting plugin for Vim: https://github.com/leocassarani/jack.vim
-- Leo Cassarani, 13:24, Wednesday, 14th October 2020
That gif was taken while running in the simulator (the code in the sim directory of the ice2tetris repo), which means that, as I said, my key presses were being reinterpreted as PS/2 signals, the VGA output from the simulated hardware was being interpreted by my SDL implementation, and the ROM (the Tetris compiled binary) was being fed to the CPU by a simulated flash chip (over an SPI interface, which I'd never heard about until I started this).
That part was really fun too, but I wish I'd known about simulating a Verilog design before spending three months doing it all "by hand" on real hardware, which is notoriously hard to debug.
It did help me find a bunch of bugs, though, and when I later went to add some more optimisations to the CPU, it was good to be able to test them in the simulator first.
There's a whole other thing where if you pass a certain flag into the simulator you can get a (huge) trace file showing the value (1 or 0) of each signal inside the computer at any one time, really useful for debugging.
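As an illustration of the idea (the actual ice2tetris trace format may well differ), one line per clock cycle with each signal's 0/1 value makes the output easy to grep and diff:

```rust
// Format one trace line: a right-aligned cycle counter followed by name=value
// pairs for each signal sampled that cycle.
fn trace_line(cycle: u64, signals: &[(&str, bool)]) -> String {
    let values: Vec<String> = signals
        .iter()
        .map(|(name, v)| format!("{}={}", name, *v as u8))
        .collect();
    format!("{:>8} {}", cycle, values.join(" "))
}

fn main() {
    let line = trace_line(42, &[("clk", true), ("reset", false), ("load", true)]);
    assert_eq!(line, "      42 clk=1 reset=0 load=1");
    println!("ok");
}
```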
-- Leo Cassarani, 16:38, Wednesday, 14th October 2020
Fans of #rust may be interested in how I was able to define an `asm!` macro (not to be confused with the one by the same name that ships with Rust) so I could have an easier way to generate snippets of Hack assembly code. For example, this is the code that is inserted into any program that uses equality comparison (==): https://github.com/leocassarani/jackc/blob/30a98b6e5439d30b5b1dd9d7ba3434a165a5d1c8/src/vm/translator.rs#L294-L315

```rust
fn eq() -> Vec<Instruction> {
    vec![
        asm!(("EQ")),
        asm!(@"R14"),
        asm!(M = D),
        asm!(@"SP"),
        asm!(AM = M - 1),
        asm!(D = M),
        asm!(A = A - 1),
        asm!(D = M - D),
        asm!(M = 0),
        asm!(@"END_EQ"),
        asm!(D;JNE),
        asm!(@"SP"),
        asm!(A = M - 1),
        asm!(M = -1),
        asm!(("END_EQ")),
        asm!(@"R14"),
        asm!(A = M),
        asm!(0;JMP),
    ]
}
```

It's really impressive how much that looks like Hack assembly (if anyone remembers what that was like). For example, defining a new label is done with `(LABEL)`, and in my macro you can just do `asm!(("LABEL"))`. A and C instructions (`@42`, and `M = D` or `0;JMP` respectively) can also be represented in exactly the same format. The macro definition is here if anyone's curious: https://github.com/leocassarani/jackc/blob/30a98b6e5439d30b5b1dd9d7ba3434a165a5d1c8/src/asm/instruction.rs#L4-L26
(When we did it in Ruby we just did it all with string interpolation: https://github.com/computationclub/vm-translator/blob/9198f319d6d2557856aa52c18092f10f371f768c/lib/code_writer.rb#L47-L66)

```ruby
output.puts <<-EOF
  @SP
  // SP--
  MD=M-1
  // Load M[SP]
  A=M
  D=M
  // Load M[SP-1]
  A=A-1
  // Subtract M[SP-1] from M[SP]
  D=M-D
  // If the result satisfies command then jump to (TRUE)
  @#{true_label}
  D;J#{command.upcase}
  // Load M[SP]
  @SP
  A=M-1
  // M[SP-1] = 0
  M=0
EOF
```
-- Leo Cassarani, 18:34, Wednesday, 14th October 2020
Hmmm
Do we remember it looking like the Mandelbrot set at all?
ha! that... that might be it