Sunday, July 31, 2011

Mathematical Knitting Network

More than 5,000 mathematicians come annually to the Joint Mathematics Meetings (JMM), held in January. In recent years, amidst the usual lectures, poster sessions, job interviews, and much else, these gatherings have also featured an evening event for those particularly interested in mathematical crafting.

Devoted to knitting, crocheting, beading, needlework, paper folding, and more, this informal session is organized by sarah-marie belcastro (Smith College) and Carolyn A. Yackel (Mercer University).

belcastro and Yackel edited and contributed to the book Making Mathematics with Needlework: Ten Papers and Ten Projects (A K Peters, 2007), which contains not only instructions for creating mathematical objects but also insights into the underlying mathematics.

The JMM knitting network event brings together a wide variety of people, both experts and beginners. Participants and projects can vary widely from year to year.

In the realm of counted cross stitch, Mary Day Shepherd (Northwest Missouri State University) creates painstakingly stitched symmetry patterns. For the type of cloth and technique that she uses, the fabric is a grid of squares, and one cross stitch covers one square of the fabric. The only possible subdivision of this square is a stitch that "covers" half a square on the diagonal, Shepherd says.

These features constrain the number of symmetry patterns that you can stitch. Of the 17 possible wallpaper patterns, for example, only 12 can be done in counted cross stitch.

The 12 wallpaper patterns that can be done in counted cross stitch needlework.
Courtesy of Mary D. Shepherd.

Shepherd has also worked on frieze and rosette symmetry patterns. Rosette patterns, for example, give a nice visualization of the symmetries of a square (technically, the group D4 and all its subgroups), she says. See Shepherd's article "Groups, Symmetry and Other Explorations with Cross Stitch" (Word document) and her chapter "Symmetry Patterns in Cross Stitch" in the book Making Mathematics with Needlework.

Rosette patterns for visualizing the symmetries of a square (the dihedral group of the square).
Courtesy of Mary D. Shepherd.

David Jacob "Jake" Wildstrom (University of Louisville) has a passion for crocheting in relief.

One of the few fractals that's amenable to crochet is the Sierpinski triangle. Wildstrom has turned this remarkable geometric figure into blankets, wispy shawls, and even a hat. His instructions for crocheting such figures can be found in the chapter "The Sierpinski Variations: Self-Similar Crochet" in Making Mathematics with Needlework. More information is available at Wildstrom's "crochetgeek" website.

Jake Wildstrom's relief crocheting has turned a fractal known as the Sierpinski triangle into a shawl.
Photo by I. Peterson.

Tom Hull (Western New England University) specializes in mathematical origami design.

One of Tom Hull's modular origami creations.
Photo by I. Peterson.

Laura M. Shea strings tiny crystal beads to form polyhedra or geometric tilings.

This beadwork bracelet, created by Laura Shea, is based on a triangular tiling.
Photo by I. Peterson.

Creating polyhedra with beads is an interesting way to learn the properties of regular and semi-regular solids, Shea says. In a bead polyhedron, each face becomes open space, each edge becomes one bead, and each vertex becomes a thread void. The resulting structure is light and open.

The "Plato Bead," created by Laura Shea, is a dodecahedron. A bead stands in for each of this polyhedron's 30 edges. Each of the 20 vertices becomes a void surrounded by three beads and thread. The 12 faces of the form become open spaces.
Courtesy of Laura Shea.

The Bridges website shows additional examples of Shea's work.

No mathematical crafts session of the knitting network at JMM would be complete without a Möbius strip—that mind-bending, one-sided, one-edged mathematical object.

Josh Holden (Rose-Hulman Institute of Technology) displays a Möbius band that he crocheted.
Photo by I. Peterson.

Carolyn Yackel works on her knitting during a JMM knitting network session.
Photo by I. Peterson.

Originally posted Jan. 27, 2007

References:

belcastro, s.-m., and C. Yackel. 2006. About knitting. Math Horizons 14(November):24-27.

Klarreich, E. 2006. Crafty geometry. Science News 170(Dec. 23 & 30):411-413.

Saturday, July 30, 2011

Divide-and-Conquer Multiplication

Most people know just one way to multiply two large numbers by hand. Typically, they learned it in elementary school. They're often surprised to find that there are a variety of ways to do multiplications, and each such algorithm has advantages and disadvantages. Moreover, grade-school multiplication can be far from the best method available in certain contexts.

Slight differences in the efficiency of multiplication algorithms can make a huge difference when calculators or computers do the work. Computers worldwide perform enormous numbers of multiplications each day. In most computers, each operation consumes mere microseconds, but multiplied by the number of computations performed, the differences in time taken can be significant. So, the general question of how quickly two n-bit numbers can be multiplied has not only great theoretical importance but also practical relevance.

Indeed, when it comes to multiplying two numbers, the best (or fastest) way to do it is often far from obvious.

One particularly intriguing and efficient multiplication algorithm was developed in the late 1950s by Anatolii Alexeevich Karatsuba, now at the Steklov Institute of Mathematics in Moscow.

Karatsuba's "divide-and-conquer" multiplication algorithm has its roots in a method that Carl Friedrich Gauss (1777–1855) introduced involving the multiplication of complex numbers.

A complex number is an expression of the form a + bi, where a and b are real numbers, and i has the property that i^2 = –1. The real number a is called the real part of the complex number, and the real number b is the imaginary part. When the imaginary part b is 0, the complex number is just the real number a.

Suppose you want to multiply two complex numbers, a + bi and c + di. To do so, you use the following rule: (a + bi)(c + di) = [ac – bd] + [ad + bc]i.

For example: (2 + 3i)(1 + 2i) = [2 – 6] + [4 + 3]i = (–4 + 7i).

Expressed in terms of a program, you would input a, b, c, and d and output ac – bd and ad + bc.

Computationally, multiplying two numbers is much more costly than adding them. Suppose then that multiplying two real numbers costs $1 and adding them costs a penny. To obtain ac – bd and ad + bc requires four multiplications and two additions, for a total of $4.02.

Is there a cheaper way to obtain the output from the input? The Gauss algorithm offers an alternative approach. Here's how the computation can be done for $3.05, with three multiplications and five additions.

x1 = a + b
x2 = c + d
x3 = x1 x2 = ac + ad + bc + bd
x4 = ac
x5 = bd
x6 = x4 – x5 = ac – bd
x7 = x3 – x4 – x5 = bc + ad

So, the Gauss algorithm saves one multiplication out of four.
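The seven steps above translate directly into code. Here's a minimal Python sketch (the function name is my own):

```python
def gauss_complex_multiply(a, b, c, d):
    """Compute (a + bi)(c + di) with 3 multiplications instead of 4."""
    x1 = a + b
    x2 = c + d
    x3 = x1 * x2          # ac + ad + bc + bd
    x4 = a * c
    x5 = b * d
    real = x4 - x5        # ac - bd
    imag = x3 - x4 - x5   # ad + bc
    return real, imag
```

Running it on the earlier example, (2 + 3i)(1 + 2i), returns (–4, 7), matching the rule-based calculation.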

Karatsuba's divide-and-conquer multiplication algorithm takes advantage of this saving.

Consider a multiplication algorithm that parallels the way multiplication of complex numbers works. Roughly speaking, the idea is to divide a given multiplication problem into smaller subproblems, solve them recursively, then glue the subproblem answers together to obtain the answer to the larger problem.

Suppose that you want to compute 12345678 * 21394276.

Break each number into two 4-digit parts.

a = 1234, b = 5678, c = 2139, d = 4276

Find the products bd, bc, ad, and ac.

bd = 5678 * 4276
bc = 5678 * 2139
ad = 1234 * 4276
ac = 1234 * 2139

Break each of the resulting numbers into 2-digit parts. Repeat the calculations with these parts. For example, break 1234 * 2139 into 12, 34, 21, and 39. Find the appropriate products.

12 * 21, 12 * 39, 34 * 21, 34 * 39

Repeat the same steps with these 2-digit numbers, breaking each one into 1-digit parts and finding the appropriate products. For example, 12 * 21 gives the multiplications 1 * 2, 1 * 1, 2 * 2, and 2 * 1.

In this divide-and-conquer algorithm, given two n-digit decimal numbers x and y, where x = a * 10^(n/2) + b and y = c * 10^(n/2) + d, you have xy = ac * 10^n + (ad + bc) * 10^(n/2) + bd.

So, for n = 2, with ac = 1 * 2 = 2, ad = 1 * 1 = 1, bc = 2 * 2 = 4, and bd = 2 * 1 = 2, you obtain:

12 * 21 = 2 * 10^2 + (1 + 4) * 10^1 + 2 = 252

Similarly, for 12 * 39, you get:

1 * 3 = 3, 1 * 9 = 9, 2 * 3 = 6, 2 * 9 = 18
3 * 10^2 + 9 * 10^1 + 6 * 10^1 + 18 * 1 = 468

You get 12 * 21 = 252, 12 * 39 = 468, 34 * 21 = 714, 34 * 39 = 1326.

This allows you to compute 1234 * 2139:
252 * 10^4 + 468 * 10^2 + 714 * 10^2 + 1326 * 1 = 2639526.

Similarly, you obtain:

1234 * 4276 = 5276584
5678 * 2139 = 12145242
5678 * 4276 = 24279128

Hence, 12345678 * 21394276
= 2639526 * 10^8 + 5276584 * 10^4 + 12145242 * 10^4 + 24279128 * 1
= 264126842539128.
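The recursion in the worked example can be sketched in a few lines of Python. This version still uses all four subproducts (Karatsuba's saving comes from eliminating one of them) and, to keep the sketch simple, assumes both inputs have n digits with n a power of two:

```python
def dc_multiply(x, y, n):
    """Divide-and-conquer multiplication of two n-digit numbers,
    n a power of 2, via four recursive subproducts."""
    if n == 1:
        return x * y
    half = n // 2
    p = 10 ** half
    a, b = divmod(x, p)   # x = a * 10^(n/2) + b
    c, d = divmod(y, p)   # y = c * 10^(n/2) + d
    ac = dc_multiply(a, c, half)
    ad = dc_multiply(a, d, half)
    bc = dc_multiply(b, c, half)
    bd = dc_multiply(b, d, half)
    return ac * 10**n + (ad + bc) * p + bd
```

For instance, dc_multiply(12345678, 21394276, 8) reproduces the answer computed by hand above, 264126842539128.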

Karatsuba's insight was to apply the Gauss method to this divide-conquer-and-glue approach, replacing some multiplications with extra additions. For large numbers, decimal or binary, Karatsuba's algorithm is remarkably efficient. It's the sort of thing that your computer might be doing behind the scenes to get an answer to you a split second faster.
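Combining the two ideas, here's one way the full algorithm might look in Python. This is a sketch rather than a production implementation (real libraries work in binary and fall back to grade-school multiplication below a size threshold):

```python
def karatsuba(x, y):
    """Multiply x and y using three recursive products instead of four."""
    if x < 10 or y < 10:
        return x * y                      # single-digit base case
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)                   # x = a * 10^half + b
    c, d = divmod(y, p)                   # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # Gauss trick: (a + b)(c + d) - ac - bd = ad + bc
    ad_plus_bc = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + ad_plus_bc * p + bd
```

Each level of the recursion makes three subcalls rather than four, which is what drops the overall cost from n^2 digit multiplications to about n^1.585.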

It's not the fastest known multiplication algorithm, but it's a significant improvement on grade-school multiplication. Indeed, properly implemented, Karatsuba multiplication beats grade-school multiplication even for 16-digit numbers and is way better for 32-digit numbers.

And we probably haven't heard the last word on multiplication. More innovations in computer arithmetic may yet lie ahead.

Originally posted Feb. 12, 2007.

Reference:

Karatsuba, A.A. 1995. The complexity of computations. Proceedings of the Steklov Institute of Mathematics 211:169-183.

Monday, July 4, 2011

Sliding Around

Try the following sliding-coin puzzle.

Place four coins on the bottom row of circles (G, D, E, and R), leaving the letters MARTIN exposed. Your challenge is to slide the coins along the paths joining the circles so that the four coins cover the top row of circles (M, T, I, and blank), exposing the letters GARDNER.

Sounds simple? The catch is that at no time are two coins allowed next to each other when a path joins the two circles. For example, you can't move the G coin initially because the only allowed move would put it next to the D coin. The D coin can move to the T circle but not to the A circle or the I circle. You also have to slide the coins one at a time, all the way from one circle to another. And, where paths cross, you can't switch from one path to another en route.

Robert A. Hearn invented this puzzle in honor of Martin Gardner, mathemagician extraordinaire.

In traditional sliding-coin puzzles, a set of coins laid out in a certain arrangement must be slid into a new configuration in the fewest possible moves. It's been proved for these types of puzzles that it's always computationally "easy" to determine whether a given puzzle is solvable.

Hearn's Martin Gardner coin-sliding puzzle, however, is a different sort of beast. It's more akin to sliding-block puzzles, such as the infamous 15 puzzle, than to traditional sliding-coin puzzles.

It happens to take 25 moves to solve the Martin Gardner puzzle. In exploring how hard it might be to solve such puzzles in general, Hearn discovered that there's a relatively simple way to generate arbitrarily difficult instances. Indeed, it's easy to make puzzles for which it's computationally very difficult to determine whether there's a solution at all.

More formally, it turns out that Hearn's new type of puzzle belongs to a category of computational problems described as PSPACE-complete. This means that you should expect that, as the number of circles and paths gets larger, the amount of computer time required to solve the puzzle increases exponentially in the worst case, even though the amount of memory required grows only polynomially with size.

Here's another puzzle for you to try.

Place four coins on the red circles. Slide the coins along the paths so that there is a coin on every starred circle. At no time may any two coins be adjacent—occupying two circles joined by a path. You should be able to solve this puzzle in 28 moves.

Hearn has tried a number of variations, such as using tokens or coins of different colors. "Then the start and end configurations have specific places for each color of token," Hearn says. "This makes more complex puzzles possible on smaller graphs." Mathematically, a graph is an array of vertices or nodes (circles in this case) connected by edges (paths).

In this puzzle, each of four tokens has a different color, initially placed as shown by the colored dots. Your goal is to move each token to a space containing a circle of the same color. Solving this particular puzzle requires 116 moves.

Hearn's Martin Gardner coin-sliding puzzle also inspired puzzle designer James Stephens, who runs the PuzzleBeast website, to create a computer-generated set of puzzles called the "Meandering Monk Maze."

Reference:

Hearn, R.A. 2005. The complexity of sliding-block puzzles and plank puzzles. In Tribute to a Mathemagician, B. Cipra, E.D. Demaine, M.L. Demaine, and T. Rodgers, eds. A K Peters.

Originally posted October 24, 2005

Illustrations and puzzles courtesy of Bob Hearn