## Coding Theory (Part 2 of 3) – Perfect Error Correction

### Introduction

Welcome to the second installment of this three-part series on coding theory. If you have not had the opportunity to read the first piece, it is highly recommended that you do before continuing on. It is available here: http://hideoushumpbackfreak.com/post/2016/07/30/Coding-Theory-(Part-1-of-3)-Coding-Theory-Defined

It’s rare to find concepts that are simple yet powerful at the same time. However, Hamming’s contributions to coding theory fit the bill. This post begins with a brief introduction to Hamming and a short history lesson before diving into Hamming distance and Perfect Codes. Additionally, it delves into a few simple math concepts requisite for understanding the final post. These concepts all come together in the final installment, which provides examples of how to generate and decode the most powerful and efficient error correcting codes in use today.

### Richard Hamming

Richard Hamming was an American mathematician who lived from 1915 through 1998. Early in his career, he programmed IBM calculating machines for the Manhattan Project. Concerned about the pernicious effect he might be having on humanity, he abandoned the Manhattan Project to work for Bell Laboratories in 1946. Hamming’s tenure at Bell Laboratories was illustrious. His contributions during that time include Hamming codes, the Hamming matrix, the Hamming window, Hamming numbers, the Hamming bound, and Hamming distance. These discoveries had irrevocable implications on the fields of computer science and telecommunications. After leaving Bell Laboratories in 1976, Hamming went into academia until his death in 1998.

### The Inception of Error Correcting Codes

The world of computation was very different back in 1947. At that time, producing modest (by today’s standards) calculations could take days. Just like today, machines of yore operated on bit strings with parity bits to ensure data fidelity. However, upon detecting erroneous data, the machines had no choice but to halt computation and return an error result. Imagine the frustration of being 47 hours into a 48-hour program and having it error out due to an anomaly introduced by noise. This is the dilemma Richard Hamming faced.

In 1950, Hamming published a paper that would serve as the basis for modern coding theory. He postulated that it was possible to not only detect, but also correct errors in bit strings by calculating the number of bits disparate between valid codewords and the erroneous one. This quantity came to be known as Hamming distance.

### Hamming Distance

The Hamming distance between two codewords is simply the number of bit positions in which the two bit strings differ, as demonstrated in figure one. Typically, Hamming distance is denoted by the function d(x,y), where x and y are codewords. This concept seems incredibly mundane on the surface, but it marked the inception of a whole new paradigm in error correcting codes: specifically, nearest neighbor error correction.
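As a quick sketch of the idea (the function name and the use of Python strings for codewords are illustrative choices, not from the post), the distance is just a count of differing positions:

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    if len(x) != len(y):
        raise ValueError("codewords must be the same length")
    return sum(a != b for a, b in zip(x, y))

# Positions 1 and 3 differ, so d(10110, 11100) = 2
print(hamming_distance("10110", "11100"))  # 2
```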

Nearest neighbor error correction involves first defining codewords, typically denoted as C, that are known to both the source and sink. Any received codeword not contained in C is obviously the result of noise. Upon identifying an erroneous codeword, nearest neighbor decoding calculates the Hamming distance between it and every codeword contained in C. The codeword with the smallest Hamming distance has a high probability of being correct. See figure two.
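A minimal sketch of that decoding rule, under the same illustrative conventions (the function name and the example code C are mine):

```python
def nearest_neighbor(received: str, codewords: list[str]) -> str:
    """Decode by choosing the codeword with the smallest Hamming distance."""
    return min(codewords, key=lambda c: sum(a != b for a, b in zip(received, c)))

C = ["0000", "0111", "1011"]
# "0101" is not in C, so it must be the result of noise; decode to the
# nearest codeword ("0111", at distance one)
print(nearest_neighbor("0101", C))  # 0111
```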

The quality of error correction is heavily dependent on choosing efficient codewords. d(C) denotes the minimum Hamming distance: that is, the smallest Hamming distance between any two codewords contained within C. If a code has a minimum Hamming distance of one (d(C) = 1), then nearest neighbor error correction is futile. If it has a large minimum Hamming distance, such as ten (d(C) = 10), then error correction is powerful.

Hamming represented the relationship between minimum Hamming distance and the quality of error correction with two concise inequalities. A particular code can detect a maximum of k errors in a codeword if d(C) >= k + 1 and correct a maximum of k errors if d(C) >= 2k + 1. For example, a code with d(C) = 10 can detect a maximum of nine errors and correct a maximum of four, as demonstrated in figure three.
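Solving d(C) >= k + 1 and d(C) >= 2k + 1 for k gives the maximum detectable and correctable error counts directly (the function names here are illustrative):

```python
def max_detectable(d: int) -> int:
    # d(C) >= k + 1  =>  k = d(C) - 1
    return d - 1

def max_correctable(d: int) -> int:
    # d(C) >= 2k + 1  =>  k = (d(C) - 1) // 2
    return (d - 1) // 2

# A code with minimum distance 10 detects up to 9 errors, corrects up to 4
print(max_detectable(10), max_correctable(10))  # 9 4
```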

An important fact to note is that the inequalities above represent the maximum bounds of error detection and correction. It is possible to create a code whose practical performance falls short of these bounds, and in reality it’s difficult to create a code that achieves them. There are special codes, known as Perfect Codes, that meet this criterion as well as demonstrate some other desirable traits.

### Perfect Codes

Generating an efficient code is a formidable task because it involves three competing principles, as shown in figure four. First, short codewords reduce the size of data transmissions. Second, as shown in the previous section, the greater the minimum Hamming distance, the greater the code’s ability to detect and correct errors. However, there are only a limited number of codewords of a specified length that also maintain a specified minimum Hamming distance.

The Hamming bound equation demonstrates these competing principles concisely. The equation is shown in figure five, where |C| is the upper bound on the number of codewords, n is the length of the codewords, and k is the maximum number of errors the code is capable of correcting. Any code that achieves this upper bound is known as a Perfect Code. As a side note, Richard Hamming developed a perfect code known today as Hamming codes.
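Assuming the standard binary form of the bound, |C| <= 2^n / (C(n,0) + C(n,1) + … + C(n,k)), it can be sketched in a few lines (the function name is mine):

```python
from math import comb

def hamming_bound(n: int, k: int) -> int:
    """Upper bound on |C| for length-n binary codewords correcting up to k errors."""
    return 2**n // sum(comb(n, i) for i in range(k + 1))

# The [7,4] Hamming code (n=7, correcting k=1 error) may have at most
# 2^7 / (1 + 7) = 16 codewords -- a bound it achieves, making it perfect
print(hamming_bound(7, 1))  # 16
```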

### Conclusion

This concludes the second installment of this three-part series on coding theory. Richard Hamming created error correcting codes in the 1950s that addressed the problem of brittle computations, and they still permeate modern computer science. The concept of Hamming distance gave rise to nearest neighbor error correction. The quality of error correction is constrained by the Hamming bound, an equation that expresses the three competing goals of an effective code.

Make sure to check back for the final installment of this series. To date, the posts have covered mostly supporting concepts. However, the concluding piece agglomerates all ideas into a cohesive whole with an example. As always, thank you for reading and feel free to contact me with questions or comments.

## Coding Theory (Part 1 of 3) – Coding Theory Defined

Coding theory stands as a cornerstone for most of computer science. However, many programmers today have, at best, a limited understanding of the field. This three-part series of blog posts describes what coding theory is and delves into Richard Hamming’s contributions. Although derived in the 1950s, Hamming’s ideas are so visionary that they still permeate modern coding applications. If a person truly comprehends Hamming’s work, they can fully appreciate coding theory and its significance to computer science.

This first installment of the series defines coding theory, error detecting codes, and error correcting codes. These are all important supporting concepts required to fully appreciate future articles. Although this is aimed at the novice, it will provide a good review for the more seasoned computer scientist.

### Coding Theory Defined

Computer systems store information as a series of bits. Coding theory is the study of encoding, transmitting, and decoding said information in a reliable manner. More succinctly: moving bits with fidelity. This appears elementary at first glance. What’s difficult about transferring ones and zeros across some communications medium? As figure one illustrates, the answer is the noise introduced by the communications channel.

Recall that computer systems store data as strings of bits. Each bit has two possible values. These values are often represented as 1/0, true/false, on/off, or even high/low. Regardless of the nomenclature used to represent them, they are nothing more than the absence or presence of a voltage from a computer’s perspective. Noise, including everything from electrical interference to a scratched disk surface, can make these values ambiguous to a machine.

As a grossly simplified example, suppose a computer expects either a zero or five-volt signal. A zero-volt signal goes into one side of the channel and distortion causes 2.6 volts to come out the other side. Therein lies the ambiguity. The machine can only interpret the signal as a one and rely on coding techniques to sort it out.

One important point to remember is that coding theory is requisite due to shortcomings in modern computer hardware. If contemporary machines could transmit data reliably, coding theory would be superfluous. It’s not that building such equipment is impossible. The technology to build reliable machines exists. It’s just not practical. Such computers would be slow and exorbitantly expensive. Richard Hamming stated: "We use coding theory to escape the necessity of doing things right because it’s too expensive to do it right" (Source).

Coding theory addresses the inadequacies of machines by building fault tolerance directly into transmitted data in the form of error detecting and error correcting codes.

### Error Detecting Codes

Aptly named, error detecting codes enable receivers to determine if accepted information contains errors. This is possible by appending calculated verification data to the data source before transmission. The sender calculates verification data by running the source data through a deterministic algorithm which typically produces either a hash, checksum, or parity bit. Upon receipt, the receiver runs the same algorithm on the information received. If the data produced by the receiver matches the verification data, it’s safe to assume the accepted information is unadulterated. Figure two shows the process more concisely.
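A rough sketch of that round trip (the toy checksum algorithm and all names here are mine, chosen for brevity; real systems use far stronger algorithms):

```python
def checksum(data: bytes) -> int:
    # Toy deterministic algorithm: sum of the bytes modulo 256
    return sum(data) % 256

message = b"hello"
sent = (message, checksum(message))      # sender appends verification data

received, verification = sent            # transmission (no noise in this sketch)
ok = checksum(received) == verification  # receiver reruns the same algorithm
print(ok)  # True
```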

The concept of using codes for error detection is actually quite old. Scribes in antiquity would sum the number of words in paragraphs and pages and use those values to detect transcription errors. In that case, the original scroll is the source and the produced scroll is the sink. The scribe himself is the communication channel and source of noise. The algorithm used to generate the verification data is the process of counting the words. Obviously, error detecting is more complex in modern times, but the general principle remains unchanged.

Error detection goes beyond simply detecting errors introduced by noise; it can also detect information tampering by malicious third parties. Although fascinating, all of that minutiae is beyond the scope of this post. For brevity, this article explores the simplest type of error detecting codes: parity bits. This is the only error coding concept particularly germane to future installments in this series.

A parity bit (aka check bit) is a verification bit appended to the end of a codeword. The parity bit equals zero if there are an even number of ones and one if there are an odd number. Figure 3 illustrates this concept. As a side note, what is described above is technically an “even” parity bit. The bit value will be the opposite in the case of an odd parity bit. The remainder of this article assumes even parity bits.
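A minimal sketch of even parity generation (names are illustrative):

```python
def even_parity_bit(bits: str) -> str:
    """Zero if the number of ones is even, one if odd, making the total even."""
    return "1" if bits.count("1") % 2 else "0"

codeword = "1011010"                          # four ones -> parity bit is 0
print(codeword + even_parity_bit(codeword))   # transmit data plus check bit
```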

Parity bits can only detect an odd number of errors. Consider the seven-bit codewords above. The parity bit is only useful if there are one, three, five, or seven errors. If there are two, four, or six errors, the parity bit falsely indicates success. One method for mitigating this is to arrange the data into a matrix and generate parity bits in multiple dimensions, as shown in figure four.
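A sketch of the two-dimensional variant (the names and the sample rows are mine): each row gets a parity bit, and a final parity row covers the columns:

```python
def two_dimensional_parity(rows: list[str]) -> tuple[list[str], str]:
    """Append a parity bit to each row, plus a parity row over the columns."""
    parity = lambda bits: str(bits.count("1") % 2)
    with_row_parity = [r + parity(r) for r in rows]
    column_parity = "".join(
        parity("".join(r[i] for r in rows)) for i in range(len(rows[0]))
    )
    # The corner bit is the parity of the column-parity bits
    return with_row_parity, column_parity + parity(column_parity)

rows, parity_row = two_dimensional_parity(["1011", "0110", "1000"])
print(rows, parity_row)  # ['10111', '01100', '10001'] 01010
```

A single flipped bit now shows up in both a row parity and a column parity, which pinpoints its location.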

The example shown above has two dimensions; however, it’s possible to add parity bits in unlimited dimensions. While it’s fairly easy to imagine a matrix with three dimensions, it’s arduous to visualize a matrix with more than that. Regardless, it’s mathematically feasible. A future article in this series examines this in more detail.

Error detecting codes inform the receiver of errors during the transmission of information. Knowing there is an error, the receiver can simply request that the data be resent. Many systems work exactly like this. The next section explores how coding theory takes this one step further by not only detecting errors, but correcting them as well.

### Error Correcting Codes

The previous section describes how receivers request a resend upon detecting errors with error detecting codes. Unfortunately, there are applications where this isn’t an option. For instance, imagine trying to communicate with a satellite in deep space when the transmission process could take months. Another example is data stored on a disk that may degrade over time. It’s impossible to ask for a retransmission from the source because the source itself is corrupted. Yet another example is broadcast systems where there is no backchannel to facilitate resend requests. These are just a few examples. For such cases there are error correcting codes, which not only inform the receiver of errors but also contain enough information to fix them.

The simplest form of error correcting codes are repetition codes. As the name implies, the message is simply replicated multiple times. The decoder determines the correct bits by choosing the majority. Figure five illustrates the concept. The amount of duplication is implementation dependent; however, fewer than three copies makes majority voting impossible.
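A sketch of a three-fold repetition code with majority-vote decoding (function names are mine):

```python
from collections import Counter

def encode(bits: str, n: int = 3) -> str:
    """Repeat each bit n times."""
    return "".join(b * n for b in bits)

def decode(received: str, n: int = 3) -> str:
    """Majority vote within each group of n repeated bits."""
    groups = [received[i:i + n] for i in range(0, len(received), n)]
    return "".join(Counter(g).most_common(1)[0][0] for g in groups)

sent = encode("101")        # "111000111"
corrupted = "110000111"     # one bit flipped by noise
print(decode(corrupted))    # "101" -- the flip is outvoted
```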

There are more elegant and efficient error correction paradigms than repetition codes. However, repetition codes are still in use in some modern systems due to their ease of implementation. The main takeaway from this section is simply what error correcting codes are. Future installments examine them in greater detail.

### Conclusion

This concludes the first installment of this three-part series on coding theory. This article introduced coding theory, error detecting codes, and error correcting codes. In short, the concepts required to fully appreciate future posts. Future installments dig into details of coding theory and explore the works of Richard Hamming, who revolutionized the field in the 1950s.

Make sure to come back for the next article because that’s when things start to get exciting. The post digs into some fascinating math and the more ingenious methods used for error correction. As always, thank you for reading and feel free to contact me with questions or comments.

## Turing Machine Simulation in C#

### Introduction

Alan Turing’s idea of building a universal computing machine was truly revolutionary and it changed the face of the world. One extraordinarily simple machine capable of carrying out any computation seems impossible. This blog post is a laconic look at the history and operation of the Universal Turing Machine. The goal is to whet the appetite of the uninitiated and inspire them to seek out a deeper understanding of computation.

### History

In 1936, Alan Turing published his famous “On Computable Numbers” paper. It was not his intention to invent a computer. In fact, the machine outlined in the paper was not initially meant to be implemented mechanically. The automatic computing machine (now known as the Universal Turing Machine), as he termed it, was nothing more than a mathematical concept. Turing did all of his computation with a fountain pen and paper.

Turing’s true ambition was to solve the famous “Entscheidungsproblem”. In a nutshell, he used the concept of an automatic computing machine to prove that there are some sequences of numbers that are impossible to compute. His proof is closely tied to the notorious Halting Problem.

In Turing’s time, the word computer had a very different meaning. Computer was a job description. Mathematicians painstakingly wrote out precise directions for computers. This way, a person with little background in arithmetic could perform computation without really understanding the overall implications of what they were doing. These instructions are what inspired Turing and they are reflected in the implementation of his automatic computing machine.

What’s most amazing about Turing’s contemplation of the automatic computing machine is that more than eighty years later, we’ve yet to come up with a more powerful computational model. We have more efficient means of computation, but none of them can solve any problems that the original machine could not. Extraordinary to say the least!

### Universal Turing Machine

The Universal Turing Machine is elementary. If you’ve ever built a state machine, this will all look very familiar. The machine has only four components.

1. Paper Tape
2. Printer
3. Scanner
4. Transition Table

The paper tape is a theoretically infinite piece of paper sectioned off into squares. Each square is meant to hold a single character. The printer and scanner are mounted together on a head that moves as a unit. The printer can print a single character to the tape and the scanner can read a single character from the tape. The head can only move one square of tape at a time. The real magic happens in the transition table, or “table of states of mind” as Turing called it (remember, Turing’s idea of a computer was a human). The table is a list of instructions of the form: if the machine is in state A and the scanner reads character B, print character C, move the head in direction D, and place the machine in state E. A, B, C, D, and E are all variables.

In order to perform a computation, information goes on the tape, the machine gets an initial state, the transition table is loaded, and it’s “off to the races” following the instructions in the transition table. Once the machine reaches its final state, the tape holds the computed answer. That is all that is required to perform any computation. Such exquisite simplicity!

### Programmability

Turing’s was not the first computing machine; many had come before it. However, his machine had something truly exceptional: programmability. A physical machine could conceivably contain a single physical transition table. Turing created a physical transition table that could read any number of logical transition tables from the tape. This was truly the “Hello World” of computer programming.

Turing used something he coined “Standard Description” to create an initial transition table capable of reading any other transition table from the tape. Many consider this to be the world’s first programming language. Concepts extracted from his work such as symbol tables and m-functions are still in use today.

The notion of a system that can emulate any other computing machine is known as Turing completeness. The ramifications of this are seemingly endless. The whole of the computer industry is built on this concept. The sheer power and simplicity are mind boggling.

### Simulation

Now comes the fun part! Let’s build a Turing machine simulation in C#. Please take note that this is only a thought experiment to demonstrate the basic structure of a Turing machine. Some of the more convoluted details, such as e-squares/f-squares, encoding, and reading instructions from the tape, are left out for the sake of brevity. That being said, given an infinite amount of time, this simulation paired with a proper initial transition table could carry out any possible computation.

I’m only giving a short synopsis of the code below. Everything is written in a somewhat functional style with no mutation. I’m sure this could be up for debate, but if you choose to debate it you’re missing the point of the post… If you have any questions or comments, feel free to comment or contact me directly. I’d love to speak with you about it. The full code is available on github: https://github.com/dalealleshouse/TuringMachine

First, we have a Head that consists of the tape, printer, and scanner. The tape is just an IEnumerable of characters that automatically grows as needed (fun fact: Turing was also instrumental in the idea of recursive enumeration). Each character in the IEnumerable represents a single square on the paper tape. The class has methods that read from the tape (scanner), write to the tape (printer), and move the head position. One special thing to note is the override of the ToString method which will return the data on the tape in a human readable form.

using System;
using System.Collections.Generic;
using System.Linq;

namespace TuringMachine
{
    // Supporting types reconstructed here; the declarations were lost from the original listing
    public enum HeadDirection { Left, NoMove, Right }

    // Negative states are stop states
    public static class State { public const int Halt = -1; public const int Error = -2; }

    // Alias so the transition tables below can refer to Tape.Blank
    public static class Tape { public const char Blank = Head.Blank; }

    public class Head
    {
        public const char Blank = '_';

        public Head(IEnumerable<char> tape, int headPosition)
        {
            if (tape == null) throw new ArgumentNullException(nameof(tape));

            // Materialize the tape so it is safe to enumerate repeatedly
            var safeData = tape as char[] ?? tape.ToArray();

            Tape = safeData;
            HeadPosition = headPosition;
        }

        public IEnumerable<char> Tape { get; }

        public int HeadPosition { get; }

        // Scanner: read the character under the head
        public char Read() => Tape.ElementAt(HeadPosition);

        // Printer: write a character at the head, returning a new immutable Head
        public Head Write(char c) =>
            new Head(Tape.Select((t, i) => i == HeadPosition ? c : t), HeadPosition);

        // The tape grows automatically when the head moves past either end
        private Head MoveLeft() =>
            HeadPosition == 0
                ? new Head(new[] { Blank }.Concat(Tape), 0)
                : new Head(Tape, HeadPosition - 1);

        private Head MoveRight() =>
            HeadPosition == Tape.Count() - 1
                ? new Head(Tape.Concat(new[] { Blank }), HeadPosition + 1)
                : new Head(Tape, HeadPosition + 1);

        public Head Move(HeadDirection direction)
        {
            switch (direction)
            {
                case HeadDirection.Left: return MoveLeft();
                case HeadDirection.NoMove: return this;
                case HeadDirection.Right: return MoveRight();
                default:
                    throw new ArgumentOutOfRangeException(nameof(direction), direction, null);
            }
        }

        public override string ToString() =>
            $@"Tape: {Tape.Select(GetChar).Aggregate((agg, next) => agg + next)}";

        private string GetChar(char c, int index) =>
            index == HeadPosition ? $"({c})" : c.ToString();
    }
}


Next, let’s turn our attention to the transition table. Below is a POCO to hold transition data. A transition table is an IEnumerable of Transition objects. State is an integer value and HeadDirection is an enum composed of Left, NoMove, and Right.

namespace TuringMachine
{
    public class Transition
    {
        public Transition(
            int initialState, char read, char write, HeadDirection move, int nextState)
        {
            InitialState = initialState;
            Read = read;
            Write = write;
            Move = move;
            NextState = nextState;
        }

        public int InitialState { get; }

        public char Read { get; }

        public char Write { get; }

        public HeadDirection Move { get; }

        public int NextState { get; }
    }
}


The last piece of the puzzle is the machine itself. A machine is initialized with a state, a Head object, and a transition table. The interesting part of this class is the Step method. For our purposes, any state less than zero denotes a special stop state (error or halt). If the machine is in a stop state, it will no longer process transitions. If the machine is unable to find a suitable transition, it goes into an error state (note that each configuration still has at most one successor, so the machine remains deterministic; the transition table is simply allowed to be partial). If it finds a suitable transition, it applies the defined data and returns a reconfigured machine.

using System;
using System.Collections.Generic;
using System.Linq;

namespace TuringMachine
{
    public class Machine
    {
        public Machine(int state, Head head, IEnumerable<Transition> transitionTable)
        {
            if (head == null) throw new ArgumentNullException(nameof(head));
            if (transitionTable == null) throw new ArgumentNullException(nameof(transitionTable));

            State = state;
            Head = head;
            TransitionTable = transitionTable;
        }

        public int State { get; }

        public Head Head { get; }

        public IEnumerable<Transition> TransitionTable { get; }

        public Machine Step()
        {
            // Stop states (error or halt) are never stepped past
            if (State < 0) return this;

            return
                TransitionTable
                    .Where(t => t.InitialState == State && t.Read == Head.Read())
                    // No matching transition sends the machine into the error state
                    .DefaultIfEmpty(new Transition(
                        State, Head.Read(), Head.Read(), HeadDirection.NoMove,
                        TuringMachine.State.Error))
                    .Select(t => new Machine(
                        t.NextState, Head.Write(t.Write).Move(t.Move), TransitionTable))
                    .First();
        }

        public Machine Run()
        {
            var m = this;

            while (m.State >= 0)
                m = m.Step();

            return m;
        }
    }
}


That’s all we need! Let’s start off easy by declaring a transition table that will add two positive numbers. We’ll represent the numbers on the tape by a series of “1” characters separated by a blank. Our transition table is below.

using System.Collections.Generic;

namespace TuringMachine
{
    public static class TransitionTableGenerator
    {
        public static IEnumerable<Transition> Addition() => new[]
{
new Transition(0, Tape.Blank, Tape.Blank, HeadDirection.Right, 0),
new Transition(0, '1', '1', HeadDirection.Right, 1),
new Transition(1, Tape.Blank, '1', HeadDirection.Right, 2),
new Transition(1, '1', '1', HeadDirection.Right, 1),
new Transition(2, Tape.Blank, Tape.Blank, HeadDirection.Left, 3),
new Transition(2, '1', '1', HeadDirection.Right, 2),
new Transition(3, Tape.Blank, Tape.Blank, HeadDirection.Left, 3),
new Transition(3, '1', Tape.Blank, HeadDirection.Left, 4),
new Transition(4, Tape.Blank, Tape.Blank, HeadDirection.NoMove, State.Halt),
new Transition(4, '1', '1', HeadDirection.Left, 4)
};
}
}


The test below shows the machine in action adding 3 and 2. I highly recommend stepping through the code in order to really grok all of this. The answer is returned on the tape as a series of 5 “1” characters.

[TestMethod]
public void AddTwoNumbers()
{
    // 3 and 2 go on the tape as runs of ones separated by a blank
    const string expected = "Tape: (_)11111__";
    var sut = new TuringMachine.Machine(
        0,
        new TuringMachine.Head("111_11", 0),
        TuringMachine.TransitionTableGenerator.Addition());

    var result = sut.Run();

    Assert.AreEqual(expected, result.Head.ToString());
}


OK, that’s not exactly an impressive result. Let’s see if we can do something slightly more complicated. Using the same input, let’s multiply the two numbers. All we need is a different transition table.

using System.Collections.Generic;

namespace TuringMachine
{
    public static class TransitionTableGenerator
    {
        public static IEnumerable<Transition> Multiplication() => new[]
{
new Transition(0, Tape.Blank, Tape.Blank, HeadDirection.Right, 1),
new Transition(0, '1', Tape.Blank, HeadDirection.Right, 2),
new Transition(1, Tape.Blank, Tape.Blank, HeadDirection.Right, 14),
new Transition(1, '1', Tape.Blank, HeadDirection.Right, 2),
new Transition(2, Tape.Blank, Tape.Blank, HeadDirection.Right, 3),
new Transition(2, '1', '1', HeadDirection.Right, 2),
new Transition(3, Tape.Blank, Tape.Blank, HeadDirection.Left, 15),
new Transition(3, '1', Tape.Blank, HeadDirection.Right, 4),
new Transition(4, Tape.Blank, Tape.Blank, HeadDirection.Right, 5),
new Transition(4, '1', '1', HeadDirection.Right, 4),
new Transition(5, Tape.Blank, '1', HeadDirection.Left, 6),
new Transition(5, '1', '1', HeadDirection.Right, 5),
new Transition(6, Tape.Blank, Tape.Blank, HeadDirection.Left, 7),
new Transition(6, '1', '1', HeadDirection.Left, 6),
new Transition(7, Tape.Blank, '1', HeadDirection.Left, 9),
new Transition(7, '1', '1', HeadDirection.Left, 8),
new Transition(8, Tape.Blank, '1', HeadDirection.Right, 3),
new Transition(8, '1', '1', HeadDirection.Left, 8),
new Transition(9, Tape.Blank, Tape.Blank, HeadDirection.Left, 10),
new Transition(9, '1', '1', HeadDirection.Left, 9),
new Transition(10, Tape.Blank, Tape.Blank, HeadDirection.Right, 12),
new Transition(10, '1', '1', HeadDirection.Left, 11),
new Transition(11, Tape.Blank, Tape.Blank, HeadDirection.Right, 0),
new Transition(11, '1', '1', HeadDirection.Left, 11),
new Transition(12, Tape.Blank, Tape.Blank, HeadDirection.Right, 12),
new Transition(12, '1', Tape.Blank, HeadDirection.Right, 13),
new Transition(13, Tape.Blank, Tape.Blank, HeadDirection.NoMove, State.Halt),
new Transition(13, '1', Tape.Blank, HeadDirection.Right, 13),
new Transition(14, Tape.Blank, Tape.Blank, HeadDirection.NoMove, State.Halt),
new Transition(14, '1', Tape.Blank, HeadDirection.Right, 14),
new Transition(15, Tape.Blank, Tape.Blank, HeadDirection.Left, 16),
new Transition(15, '1', Tape.Blank, HeadDirection.Left, 15),
new Transition(16, Tape.Blank, Tape.Blank, HeadDirection.NoMove, State.Halt),
new Transition(16, '1', Tape.Blank, HeadDirection.Left, 16)

};
}
}


Below is the multiplication test. The answer is indicated by a string of 6 “1” characters on the tape.

[TestMethod]
public void MultiplyTwoNumbers()
{
    // Same "111_11" input; 3 * 2 yields six ones at the right of the tape
    const string expected = "Tape: ______(_)111111";
    var sut = new TuringMachine.Machine(
        0,
        new TuringMachine.Head("111_11", 0),
        TuringMachine.TransitionTableGenerator.Multiplication());

    var result = sut.Run();

    Assert.AreEqual(expected, result.Head.ToString());
}


### Conclusion

I hope you enjoyed this whirlwind tour of Turing machines as much as I enjoyed writing it. There are much deeper concepts just under the surface and I hope I’ve piqued your interest enough to make you hungry for more. My ambition is that this post sends you on a journey into the exciting world of computability. Don’t be satisfied with merely knowing enough to write programs; keep digging until you master the domain. I’ll leave you with a couple book recommendations that I’m sure you’ll find invigorating.

• Understanding Computation: From Simple Machines to Impossible Programs - Tom Stuart
• Introduction to Automata Theory, Languages, and Computation - John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman

If there is enough interest, I may do a follow up to this post on how to do pattern matching and some simple regular expression parsing using the Turing machine simulation.

As always, thanks for reading and I would love to hear from you.

## Easy Immutable Objects in C#

Aristotle said, “Change in all things is sweet.” With all due respect to the great philosopher, he obviously never had to write software…

Head on over to Stack Overflow and search for “Immutable Object Patterns in C#” and you will see that this is a point of contention for many developers. Creating immutable objects in C# is cumbersome at best. What if I told you there is a practically effortless way to accomplish this with about one quarter of the code used in typical examples? I knew that would get your attention! Hang on tight while I walk you through an exciting demonstration!

As a side note, I’m not going to try to sell anyone on immutability in this post. There are countless other articles out there with that purpose. From here forward, I’m going to assume the reader understands why and when they need immutable objects.

### The Problem

It’s a typical Friday night, when the phone rings. It’s NASA again (sigh…), and they need another brilliant celestial program in the key of C#. So much for relaxing, duty calls… After contemplating the specs, it’s obvious that an immutable planet class is a requirement. Let’s get started.

### Typical C# Solution

In order to create the planet object in C#, you must create a constructor that accepts all property values and then sets the corresponding getter-only properties in the constructor body. Getter-only auto-properties are a fairly recent addition to C#; in older versions, you also had to create backing fields for each property. Additionally, it’s wise to override the Equals method because, by default, C# uses reference equality. This could be a topic for debate and your mileage may vary; however, for the sake of argument, let’s just assume we want value equality for planet objects. Another debatable topic is whether to use a struct or a class. This example uses a class because C# always creates a default parameterless constructor for structs, and it doesn’t really make sense to have an instance of a planet that cannot change and is initialized without values. The result is the code below.

public class Planet
{
public Planet(
string name,
decimal massKg,
decimal equatorialDiameterKm,
decimal polarDiameterKm,
decimal equatorialCircumferenceKm,
decimal orbitalDistanceKm,
decimal orbitPeriodEarthDays,
decimal minSurfaceTemperatureCelsius,
decimal maxSurfaceTemperatureCelsius)
{
this.Name = name;
this.MassKg = massKg;
this.EquatorialDiameterKm = equatorialDiameterKm;
this.PolarDiameterKm = polarDiameterKm;
this.EquatorialCircumferenceKm = equatorialCircumferenceKm;
this.OrbitalDistanceKm = orbitalDistanceKm;
this.OrbitPeriodEarthDays = orbitPeriodEarthDays;
this.MinSurfaceTemperatureCelsius = minSurfaceTemperatureCelsius;
this.MaxSurfaceTemperatureCelsius = maxSurfaceTemperatureCelsius;
}

public string Name { get; }

public decimal MassKg { get; }

public decimal EquatorialDiameterKm { get; }

public decimal PolarDiameterKm { get; }

public decimal EquatorialCircumferenceKm { get; }

public decimal OrbitalDistanceKm { get; }

public decimal OrbitPeriodEarthDays { get; }

public decimal MinSurfaceTemperatureCelsius { get; }

public decimal MaxSurfaceTemperatureCelsius { get; }

public override bool Equals(object obj)
{
if (ReferenceEquals(null, obj))
{
return false;
}

if (ReferenceEquals(this, obj))
{
return true;
}

return obj.GetType() == this.GetType() && this.Equals((Planet)obj);
}

protected bool Equals(Planet other) =>
    string.Equals(this.Name, other.Name)
    && this.MassKg == other.MassKg
    && this.EquatorialDiameterKm == other.EquatorialDiameterKm
    && this.PolarDiameterKm == other.PolarDiameterKm
    && this.EquatorialCircumferenceKm == other.EquatorialCircumferenceKm
    && this.OrbitalDistanceKm == other.OrbitalDistanceKm
    && this.OrbitPeriodEarthDays == other.OrbitPeriodEarthDays
    && this.MinSurfaceTemperatureCelsius == other.MinSurfaceTemperatureCelsius
    && this.MaxSurfaceTemperatureCelsius == other.MaxSurfaceTemperatureCelsius;

public override int GetHashCode()
{
unchecked
{
var hashCode = (this.Name?.GetHashCode() ?? 0);
hashCode = (hashCode * 397) ^ this.MassKg.GetHashCode();
hashCode = (hashCode * 397) ^ this.EquatorialDiameterKm.GetHashCode();
hashCode = (hashCode * 397) ^ this.PolarDiameterKm.GetHashCode();
hashCode = (hashCode * 397) ^ this.EquatorialCircumferenceKm.GetHashCode();
hashCode = (hashCode * 397) ^ this.OrbitalDistanceKm.GetHashCode();
hashCode = (hashCode * 397) ^ this.OrbitPeriodEarthDays.GetHashCode();
hashCode = (hashCode * 397) ^ this.MinSurfaceTemperatureCelsius.GetHashCode();
hashCode = (hashCode * 397) ^ this.MaxSurfaceTemperatureCelsius.GetHashCode();
return hashCode;
}
}
}
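Before moving on, it’s worth confirming that the class really does get value equality. The sketch below trims Planet down to two properties for brevity (the mass figure is illustrative, not precise astronomical data), but the equality machinery is the same as above.

```csharp
using System;

public class Planet
{
    public Planet(string name, decimal massKg)
    {
        this.Name = name;
        this.MassKg = massKg;
    }

    public string Name { get; }

    public decimal MassKg { get; }

    public override bool Equals(object obj)
    {
        if (ReferenceEquals(null, obj)) return false;
        if (ReferenceEquals(this, obj)) return true;
        return obj.GetType() == this.GetType() && this.Equals((Planet)obj);
    }

    protected bool Equals(Planet other) =>
        string.Equals(this.Name, other.Name) && this.MassKg == other.MassKg;

    public override int GetHashCode()
    {
        unchecked
        {
            return ((this.Name?.GetHashCode() ?? 0) * 397) ^ this.MassKg.GetHashCode();
        }
    }

    public static void Main()
    {
        var a = new Planet("Mercury", 3.285e23m);
        var b = new Planet("Mercury", 3.285e23m);

        // Same values => equal, even though they are distinct instances.
        Console.WriteLine(a.Equals(b));           // True
        Console.WriteLine(ReferenceEquals(a, b)); // False
    }
}
```

Two distinct instances built from the same values compare equal, which is exactly what we wanted.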

Wow, that’s a lot of code. Looking ahead, let’s imagine what it will take to add a new property in the future. All the following changes are required:

1. Constructor arguments
2. Constructor body
3. Properties
4. Equals method
5. GetHashCode method

That’s a ton of changes with several opportunities for error. All that typing! My fingers are sore!

### A Better Way

Fortunately, there is a better way to create the planet class: use F#. I know what you’re thinking, “Wait, hold the presses! Look at the title of this blog post!” Hold on, allow me to elucidate. I’ll first put you at ease by telling you that you don’t even need to learn F# to use this solution. You just need to create a simple F# project and reference it from your C# project. Because F# is a .NET language, F# libraries are entirely accessible from C#.

F# has a construct known as a record type that mimics the behavior of the C# class shown above. The best part is that record types are effortless to define. Below is all the code required to create a planet type that behaves identically.

```fsharp
type Planet =
    { Name: string
      MassKg: decimal
      EquatorialDiameterKm: decimal
      PolarDiameterKm: decimal
      EquatorialCircumferenceKm: decimal
      OrbitalDistanceKm: decimal
      OrbitPeriodEarthDays: decimal
      MinSurfaceTemperatureCelsius: decimal
      MaxSurfaceTemperatureCelsius: decimal }
```

You don’t get much more concise and maintainable than that! Adding new properties is a relative breeze.

What I like most about this solution is that it acts as a “gateway drug”. There are many situations where F# offers significant advantages; however, many developers are intimidated by taking a big leap into a new language. How do you sell it to your boss? Where do you begin? This is a very pain-free way to get your F# “foot in the door”.

### Conclusion

Using an F# library for immutable objects reduces the required code from dozens of lines to a single type definition. Additionally, it eliminates an entire class of error-prone maintenance. With this new weapon in our arsenal, we can knock out that program for NASA and be done in time to curl up on the couch with a cigar and scotch before bed. Yay for humanity!

If you need more concrete examples, I encourage you to go have a look at the accompanying GitHub repository: https://github.com/dalealleshouse/ImmutableExample. Download the source code, and notice that there are four projects: a C# domain project, an F# domain project, a C# planet repository project, and a test project. Change the reference in the planet repository project back and forth between the C# and F# domain projects to verify that the two implementations behave identically.

I hope this solution saves you as much time and trouble as it has me. As always, thank you for reading and feel free to contact me with any questions.

## Recursion – Good, Bad, or Indifferent?

For developers using imperative languages (C, C++, C#, Java, Python, etc.), recursion can seem arcane. It’s just not something encountered regularly. However, recursion is a staple of functional programming. Given the functional programming renaissance, it may be time to “bone up”. Even if you have no interest in learning a functional language, functional concepts have many benefits that can be applied to imperative programming (e.g. LINQ; yeah, who doesn’t LOVE that?). This blog post is meant to give a very broad overview of recursion for developers who don’t use it much.

### WTF is Recursion?

In order to understand recursion, one must first understand recursion. Yeah, I know, that’s a really old joke. Moving on…

There are heaps of rather byzantine definitions floating around. However, for our purposes we are going to say that a recursive method is one that calls back to itself. This is best demonstrated via example. Consider the following function that calculates the greatest common divisor of two numbers.

```csharp
private int GreatestCommonDivisor(int x, int y)
{
    // Euclid's algorithm: gcd(x, y) = gcd(y, x % y), bottoming out when y is 0.
    return y == 0 ? x : GreatestCommonDivisor(y, x % y);
}
```

Make sure you fully grok that function before reading on. There is a lot happening in those few lines of code.
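If it helps the grokking, here is the same function with a sample call traced out. The inputs 48 and 18 are arbitrary, and the method is made public static here so it can be called directly.

```csharp
using System;

public static class Euclid
{
    // Euclid's algorithm: the GCD of x and y equals the GCD of y and x % y.
    public static int GreatestCommonDivisor(int x, int y)
    {
        return y == 0 ? x : GreatestCommonDivisor(y, x % y);
    }

    public static void Main()
    {
        // GreatestCommonDivisor(48, 18)
        //   -> GreatestCommonDivisor(18, 12)   // 48 % 18 == 12
        //   -> GreatestCommonDivisor(12, 6)    // 18 % 12 == 6
        //   -> GreatestCommonDivisor(6, 0)     // 12 % 6 == 0
        //   -> 6                               // y == 0, so return x
        Console.WriteLine(Euclid.GreatestCommonDivisor(48, 18)); // 6
    }
}
```

Each recursive call shrinks y until it hits zero, at which point x holds the answer.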

Most recursive functions can also be written iteratively. In other words, recursive functions can be rewritten using looping constructs (while, for, etc…). See the example below.

```csharp
private int GreatestCommonDivisor(int x, int y)
{
    while (y != 0)
    {
        var remainder = x % y;
        x = y;
        y = remainder;
    }

    return x;
}
```

The two functions above are functionally equivalent. So, why would one choose one over the other? Read on! More on this in the next section.

In imperative programming languages, recursive functions should be avoided in most cases (please, no hate mail about how this isn’t true 100% of the time). Recursive functions are less efficient than their iterative counterparts. Additionally, they are subject to the perils of stack overflows. Why is this true? I’m glad you asked. Let’s consider a greatly simplified version of what happens when a function is invoked.

1. function arguments, variables, and return address are pushed on the stack
2. control jumps to the function
3. function code runs
4. function results are copied into the return value
5. stack is rewound to its previous position (memory is reclaimed)
6. control jumps back to the caller

Oftentimes, all of the above consumes more resources than simple looping. Therefore, recursive method calls are less performant.

The bigger problem is that the call stack is allocated a relatively meager amount of memory (often around one megabyte per thread, though this varies wildly by platform). Exceeding this causes the dreaded stack overflow exception to rear its ugly head. Each function call eats up space on the stack like a donkey eating a waffle, and that space isn’t reclaimed until the function returns. As a total SWAG, I wouldn’t trust a recursion depth of more than about a thousand; of course, this number also depends on how much stack space each call consumes.
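To make the danger concrete, here is a toy function (not from anything above) whose only job is to recurse n levels deep. Each call holds a frame on the stack until the recursion bottoms out, so a large enough n exhausts the stack.

```csharp
using System;

public static class StackDemo
{
    // Returns n by recursing n levels deep. Every call pushes a new stack
    // frame, and none of them are reclaimed until the base case is reached.
    public static int Depth(int n)
    {
        return n == 0 ? 0 : 1 + Depth(n - 1);
    }

    public static void Main()
    {
        // A depth of 1,000 is comfortably within a default-sized stack.
        Console.WriteLine(StackDemo.Depth(1000)); // 1000

        // On a default-sized stack, something like the line below would
        // crash the process with a StackOverflowException:
        // StackDemo.Depth(10_000_000);
    }
}
```

The equivalent loop (`for (var i = 0; i < n; i++) …`) uses constant stack space no matter how large n gets.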

Methods that employ iteration do not suffer from any of the problems outlined in this section. In spite of that, there are still some good reasons to use recursion. Read on my friend!

### The Good

As demonstrated above, recursive functions are generally shorter and more elegant. In the words of the great Donald Knuth, “premature optimization is the root of all evil”. Unless your application is extremely performance critical, chances are you will never notice the few extra microseconds. The added benefit of terse and expressive code far outweighs the performance hit. Obviously, stack overflow exceptions aren’t going away. However, if the estimated recursion depth is modest there is no reason to avoid recursion. Put succinctly, if your use case is not performance critical and the estimated recursion depth is reasonable, it’s a good candidate for recursion.

Now wait a minute: I said that stack overflows aren’t going away, but I also said that recursion is a staple of functional programming. How can both of these statements be true? Three words: tail call optimization. Many functional languages employ this witchery. In a nutshell, here’s what happens: if a function’s return expression is nothing but a call to another function (a tail call), the current stack frame is reused instead of pushing a new one onto the call stack. That’s all there is to it! Wouldn’t it be wondrous if imperative languages did this? Maybe someday…
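To make the idea concrete, here is a sketch of the difference between a tail call and a non-tail call, using factorial as a stand-in example. Note that neither the C# compiler nor the CLR guarantees tail-call optimization, so this only illustrates the shape; a language with guaranteed tail calls (such as F# for tail-recursive functions) could run the second form in constant stack space.

```csharp
using System;

public static class TailDemo
{
    // NOT a tail call: the multiplication happens *after* the recursive
    // call returns, so the current frame must stay alive to finish it.
    public static long Factorial(long n)
    {
        return n <= 1 ? 1 : n * Factorial(n - 1);
    }

    // A tail call: the recursive call IS the entire return expression.
    // The running product is threaded through the accumulator, so a
    // tail-call-optimizing compiler can reuse the current frame.
    public static long FactorialAcc(long n, long acc = 1)
    {
        return n <= 1 ? acc : FactorialAcc(n - 1, acc * n);
    }

    public static void Main()
    {
        Console.WriteLine(TailDemo.Factorial(10));    // 3628800
        Console.WriteLine(TailDemo.FactorialAcc(10)); // 3628800
    }
}
```

The accumulator trick is the standard way to rewrite a recursive function into tail form, and it is exactly the style that functional languages reward.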