One of Mathematica's coolest list manipulation techniques is the Reap/Sow pattern. Using these functions together allows you to build up multiple collections at once in a decoupled manner.
To understand this idea and when it might make a difference, consider how you would sort a list of integers by divisor. What I mean is, given the list \[\{12,13,14,15,21,24,28,41,47,49,55\}\] sort it into the following lists based on divisor: \[\begin{align} 2 &\rightarrow \{12,14,24,28\} \\ 3 &\rightarrow \{12,15,21,24\} \\ 5 &\rightarrow \{15,55\} \end{align}\] (and here it's okay if one item shows up in two lists)
You might think to do it with a for-loop, storing each item in a named list depending on which divisor it matches. But what if you want a variable list of divisors? In that case, managing the result lists can get a little tricky.
The Reap/Sow pattern presents a different approach: when important values are computed inside a function they can be "sown" with Sow, which means they bubble up the call stack until a matching Reap expression is encountered. You can use a Reap expression to process or discard sown values as you see fit.
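As a minimal sketch of the mechanics, consider sowing a few values directly (the output is shown in a comment):
Reap[Sow[1]; Sow[2]; Sow[3]]
(* {3, {{1, 2, 3}}} -- the expression's result, followed by the sown values *)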
An implementation of the divisor sorting algorithm using Reap and Sow might look something like this:
SortByDivisors[numbers_, divisors_] := Module[{i, j, n, d},
  Reap[
    For[i = 1, i <= Length[numbers], i++,
      For[j = 1, j <= Length[divisors], j++,
        n = numbers[[i]];
        d = divisors[[j]];
        If[Mod[n, d] == 0, Sow[n, d]];  (* sow n, tagged with its divisor *)
      ];
    ],
    _, Rule][[2]]];  (* collect every tag; Rule builds divisor -> {multiples} *)
And it will return a collection of Rules, each of which has a divisor as the key and a corresponding list of multiples as the value.
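For example, running it on the list from above should reproduce the groupings we wanted (the ordering reflects the order in which each divisor is first sown):
SortByDivisors[{12, 13, 14, 15, 21, 24, 28, 41, 47, 49, 55}, {2, 3, 5}]
(* {2 -> {12, 14, 24, 28}, 3 -> {12, 15, 21, 24}, 5 -> {15, 55}} *)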
Sunday, March 31, 2013
Sunday, March 17, 2013
Employing "Map" to Make Mathematica More Elegant
One quick way to make your Mathematica code more elegant is to use the Map command in place of For loops when building up arrays. Map is not an outright replacement for For loops, but it can be very helpful when you are trying to transform one data set into another.
For a simple example, let's say that you need to compute the sine of a set of angles. If you were doing this in the style of a procedural programming language, your code might look like the following:
input = {0, Pi/3, 2 Pi/3, Pi};
output = ConstantArray[0, Length[input]];
For[i = 1, i <= Length[input], i++,
  output[[i]] = Sin[input[[i]]];
];
Now, that really doesn't look so bad. It's only a few lines of code. However, it's only one idea: the transformation of a single data set. It would be more elegant if we could express this one idea in a single -- readable -- line of code. Fortunately, this is exactly the task for which Map was intended:
output = Map[Sin, input];
This statement applies the Sin command to each element of the input list and captures the results in an output list in corresponding order. It is equivalent to the previous example in behavior (and generally at least as fast), yet it is superior in terms of clarity because it expresses the idea more compactly.
Of course, for a single loop, the payoff in clarity will be minimal. On the other hand, if you make it a habit to express transformations in this fashion, the benefits you reap from code simplicity will grow along with your project.
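As a small illustration of the habit, a follow-up transformation is just another Map -- here a sketch using a pure function to square each sine from the example above:
squared = Map[Sin[#]^2 &, input];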
Sunday, March 3, 2013
Using Linear Algebra to Teach Linear Algebra
Linear Algebra is supposed to be the study of linear transformations between vector spaces. However, it can be hard to tell that from the way Linear Algebra classes usually start -- i.e. a disconnected, unmotivated survey of row manipulation operations.
To be fair, this discussion isn't entirely unmotivated. It's usually presented in the context of Gaussian elimination for the purpose of solving a system of equations. While that's certainly not inaccurate, presenting the material only from that perspective unnecessarily narrows its scope in the mind of the student, making it harder to generalize later. The problem is three-fold:
- row manipulation is presented as something that is specifically "for" equation solving
- the row manipulation operations are presented as external algorithms
- the matrix concept is treated as a passive thing (a data structure), rather than an active thing (a transformation).
All three problems can be addressed at once by presenting the row operations themselves as matrices -- that is, as linear transformations.
For example, let's start with the following matrix:
\[\left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right)\]
Now suppose we want to interchange Row 1 with Row 2. We can do this by multiplying on the left using a special matrix designed for interchanging those rows:
\[
\left(\begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right) *
\left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right) =
\left(\begin{array}{ccc} d & e & f \\ a & b & c \\ g & h & i \end{array}\right)
\]
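We can check this in Mathematica (a quick sketch; a through i are left as symbols, and . is matrix multiplication):
m = {{a, b, c}, {d, e, f}, {g, h, i}};
swap12 = {{0, 1, 0}, {1, 0, 0}, {0, 0, 1}};
swap12 . m
(* {{d, e, f}, {a, b, c}, {g, h, i}} *)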
Another common row manipulation operation is to add a scalar multiple of one row to another. Let's say we want to triple Row 1 and add those values to Row 3. Again, we can achieve this via left multiplication with a special matrix designed for that purpose:
\[
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 3 & 0 & 1 \end{array}\right) *
\left(\begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right) =
\left(\begin{array}{ccc} a & b & c \\ d & e & f \\ 3a + g & 3b + h & 3c + i \end{array}\right)
\]
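And the same check for the second operation, reusing m from the sketch above:
addTriple = {{1, 0, 0}, {0, 1, 0}, {3, 0, 1}};
addTriple . m
(* {{a, b, c}, {d, e, f}, {3 a + g, 3 b + h, 3 c + i}} *)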
Two questions arise here: first, how are these special matrices constructed? And second, what is the advantage of doing any of this?
Constructing these matrices becomes obvious once we invoke one of the fundamental principles of Linear Algebra: the matrix representation of any linear transformation comes from applying that transformation to the identity matrix.
So, if you'll notice, our matrix for swapping rows 1 and 2 was constructed by simply swapping rows 1 and 2 of the identity matrix. Likewise, our matrix for adding the triple of row 1 to row 3 was constructed by tripling row 1 of the identity matrix and adding it to row 3 of the identity matrix.
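In code, the principle is literal -- a sketch that rebuilds both matrices from IdentityMatrix:
id = IdentityMatrix[3];
swap12 = id[[{2, 1, 3}]];     (* rows 1 and 2 of the identity, interchanged *)
addTriple = id;
addTriple[[3]] += 3 id[[1]];  (* triple of row 1 added to row 3 *)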
That also partially answers the question "What is the advantage?". As a pedagogical tool, this would provide an early opportunity to teach the core notions of Linear Algebra without bogging the student down in what is frequently perceived as accounting homework.
However, there is a further advantage in that tedious row manipulation algorithms can be represented compactly as products of their corresponding matrices. Not only does this allow for an early discussion of the composition of linear transformations, but taking a giant list of row operations and expressing it compactly as a single matrix is an excellent way to demonstrate that Linear Algebra is Powerful.
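As a sketch of that compactness, composing the two operations above is just a matrix product, and applying the product agrees with applying the operations one at a time:
combined = addTriple . swap12;
combined . m == addTriple . (swap12 . m)
(* True *)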