The New Turing Omnibus, Chapter 12: Error-Correcting Codes
Leo volunteered to shepherd this meeting and we began by discussing the applications of error-correcting codes using the example from the book of communications with spacecraft.
We discussed the constraints of the problem and how that might affect any potential solutions:
- We only want to send a message once, as re-sending can take a huge amount of time;
- With that in mind, we need not only to detect errors but also to correct them;
- We want to tolerate interference anywhere in the message (e.g. cosmic radiation flipping random bits).
Leo then explained the Hadamard matrix and how to build successive matrices from previous ones using the Kronecker product:
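The whiteboard construction isn't captured here, but Sylvester's construction (where each matrix is the Kronecker product of H1 with the previous matrix) can be sketched in a few lines of Ruby:

```ruby
# Sylvester's construction: H(n+1) = [[Hn, Hn], [Hn, -Hn]]
def next_hadamard(h)
  h.map { |row| row + row } + h.map { |row| row + row.map { |x| -x } }
end

h1 = [[1, 1], [1, -1]]
h2 = next_hadamard(h1)
# h2 == [[1,  1,  1,  1],
#        [1, -1,  1, -1],
#        [1,  1, -1, -1],
#        [1, -1, -1,  1]]
```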
Leo then related the Hadamard matrix to our problem as it is particularly well-suited for generating a language of code words for use in error correction.
Intuitively, we want each word in our language to be as distinct as possible so that we can easily identify them even if they are corrupted.
e.g. if our code words were like so:
000001 = 0
000010 = 1
then on receiving the message 100000, we can't easily identify the original word (it is the same distance from both code words).
However, if our code words were like so:
000000 = 0
111111 = 1
then our message 100000 is much more likely to be 0 than 1, as far fewer bits need to be flipped to produce it from 000000.
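Counting the positions in which two words differ (the Hamming distance, which we used later in the meeting) makes this concrete. A sketch in Ruby, not code from the meeting:

```ruby
# Hamming distance: the number of positions at which two
# equal-length words differ
def hamming_distance(a, b)
  a.chars.zip(b.chars).count { |x, y| x != y }
end

received = "100000"

# With the poorly-separated code words, the received message is
# equally far from both, so decoding is ambiguous:
hamming_distance(received, "000001") # => 2
hamming_distance(received, "000010") # => 2

# With the well-separated code words it is unambiguous:
hamming_distance(received, "000000") # => 1
hamming_distance(received, "111111") # => 5
```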
Designing such a language is actually quite difficult, but the Hadamard matrix is ideal for the task as each of its rows differs from the others as much as possible.
To better explore how successive Hadamard matrices change (and see the differences between rows), Leo demonstrated his browser-based visualisation:
We then used the H3 Hadamard matrix to encode a message (the number 5), flipped one bit and attempted to recover it.
Using the Hamming distance, we counted the number of errors between our corrupted message and each row to determine the correct code:
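The whiteboard isn't reproduced here, but the same decoding can be sketched in Ruby (an illustrative reconstruction, not the meeting's exact working):

```ruby
# Build the 8x8 Hadamard matrix H3 by repeatedly applying
# Sylvester's construction to H1 = [[1, 1], [1, -1]].
def next_hadamard(h)
  h.map { |row| row + row } + h.map { |row| row + row.map { |x| -x } }
end

h3 = next_hadamard(next_hadamard([[1, 1], [1, -1]]))

# Use the rows as code words, mapping -1 to 0.
code_words = h3.map { |row| row.map { |x| x == 1 ? 1 : 0 } }

# Encode the message 5 as row 5, then flip one bit
# to simulate interference.
corrupted = code_words[5].dup
corrupted[2] = 1 - corrupted[2]

# Decode by choosing the row with the smallest Hamming distance.
distances = code_words.map { |word| word.zip(corrupted).count { |a, b| a != b } }
decoded = distances.index(distances.min)
# decoded == 5: the single error is corrected
```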
We discussed the relationship between the size of the matrix and how many errors various Hadamard matrices can tolerate (as beyond a certain point there will be multiple possible words with the same likelihood):
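Concretely, any two distinct rows of an n-by-n Sylvester Hadamard matrix are orthogonal, so they differ in exactly n/2 positions; a code with minimum distance d can correct up to (d - 1) / 2 errors (rounded down), which for H3 (n = 8, d = 4) is a single error. A quick check of the minimum distance (a sketch, assuming the Sylvester construction):

```ruby
# Sylvester's construction, as before.
def next_hadamard(h)
  h.map { |row| row + row } + h.map { |row| row + row.map { |x| -x } }
end

h3 = next_hadamard(next_hadamard([[1, 1], [1, -1]]))

# Minimum Hamming distance between any two distinct rows of H3:
min_distance = h3.combination(2).map { |a, b|
  a.zip(b).count { |x, y| x != y }
}.min
# min_distance == 4, so this 8-bit code corrects (4 - 1) / 2 = 1 error
```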
With this understood, we attempted the first problem given in the chapter, which was to generate a matrix for 6-bit words. We couldn't use a Hadamard matrix for this, as Hadamard matrices only produce words whose length is a power of 2, so we took to the whiteboard instead:
Joel proposed a solution and wagered that there were no further words:
In order to test this, we decided to write a program in Ruby to solve the problem for us. James proposed a way for us to do this:
An early version of our Ruby program to generate solutions for the 6-bit problem:
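The program itself isn't reproduced in these notes, but a greedy brute-force sketch of the idea (assuming the goal was a set of 6-bit words that pairwise differ in at least 3 positions, enough to correct a single error) might look like:

```ruby
# Greedy search: consider every 6-bit word in numeric order and keep
# it if it differs from every word kept so far in at least
# MIN_DISTANCE positions.
MIN_DISTANCE = 3

# Hamming distance between two words stored as integers: XOR them
# and count the 1 bits in the result.
def distance(a, b)
  (a ^ b).to_s(2).count("1")
end

words = []
(0...2**6).each do |candidate|
  words << candidate if words.all? { |w| distance(w, candidate) >= MIN_DISTANCE }
end

puts words.map { |w| w.to_s(2).rjust(6, "0") }
puts "#{words.length} words found"
```

Because the search tries all 64 candidates, the resulting set is maximal: no single further 6-bit word can be added without violating the minimum distance.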
- Leo's visualisation of the Hadamard matrix
- Leo's Reed-Muller encoder/decoder
- Leo's noisy: "a command-line utility that injects a customisable amount of random noise into a stream of bytes coming from standard input"
- James' noise-proxy: "a TCP proxy with configurable noise in both directions"
- Kevin's noisy_proxy
- Tom's implementation of Hadamard matrices in Ruby
>> matrix = hadamard_one = [[1, 1], [1, -1]]
=> [
[1, 1],
[1, -1]
]
>> matrix = matrix.map { |row| row.map { |cell| hadamard_one.map { |one_row| one_row.map { |one_cell| one_cell * cell } } } }.flat_map { |row| row.transpose.map(&:flatten) }
=> [
[1, 1, 1, 1],
[1, -1, 1, -1],
[1, 1, -1, -1],
[1, -1, -1, 1]
]
>> matrix = matrix.map { |row| row.map { |cell| hadamard_one.map { |one_row| one_row.map { |one_cell| one_cell * cell } } } }.flat_map { |row| row.transpose.map(&:flatten) }
=> [
[1, 1, 1, 1, 1, 1, 1, 1],
[1, -1, 1, -1, 1, -1, 1, -1],
[1, 1, -1, -1, 1, 1, -1, -1],
[1, -1, -1, 1, 1, -1, -1, 1],
[1, 1, 1, 1, -1, -1, -1, -1],
[1, -1, 1, -1, -1, 1, -1, 1],
[1, 1, -1, -1, -1, -1, 1, 1],
[1, -1, -1, 1, -1, 1, 1, -1]
]
>> matrix = matrix.map { |row| row.map { |cell| hadamard_one.map { |one_row| one_row.map { |one_cell| one_cell * cell } } } }.flat_map { |row| row.transpose.map(&:flatten) }
=> [
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
[1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1],
[1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1, 1, -1, -1, 1],
[1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
[1, -1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1],
[1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1, 1, 1],
[1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, -1, 1, 1, -1],
[1, 1, 1, 1, 1, 1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1],
[1, -1, 1, -1, 1, -1, 1, -1, -1, 1, -1, 1, -1, 1, -1, 1],
[1, 1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1],
[1, -1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1],
[1, 1, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 1, 1],
[1, -1, 1, -1, -1, 1, -1, 1, -1, 1, -1, 1, 1, -1, 1, -1],
[1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1, 1, 1, -1, -1],
[1, -1, -1, 1, -1, 1, 1, -1, -1, 1, 1, -1, 1, -1, -1, 1]
]
Thanks to Leo and Geckoboard for both hosting and shepherding the meeting, to everyone who brought snacks and to James C, Tom and Kevin for the software they wrote before the meeting.