Friday, April 22, 2016

regular polygons in rings, part two

In the last two posts, I was playing around with rings of polygons, relying on a formula developed in this post from last summer. I realize that the explanation offered in that original post was not so great, so I am going to try to better explain what the "regular polygon ring formula" k = 2n/(n - (2 + 2m)) means, and mention a few more interesting patterns.

The first thing to notice is that some regular polygons can be fitted together nicely in a ring where there is one edge (along the interior of the ring) between adjacent polygons, like these:

And some regular polygons can be fitted together skipping over two edges along the interior, like these:


However, some regular polygons cannot be put into such arrangements for a given skip count.



So, a reasonable question to ask is which regular polygons can be arranged in a ring by skipping over a fixed number of edges. More precisely: can k regular n-gons be placed in a ring formed by skipping over m edges (along the inside of the ring) between adjacent n-gons?

Before answering that, it is good to know a couple of things about angles in regular n-gons. The angle at each vertex of a regular n-gon is equal to (n-2)pi/n. A nice proof of this involves cutting the n-gon into triangles that all share one vertex of the original n-gon: the n-2 triangles contribute a total angle measure of (n-2)pi, which is then divided evenly among the n vertices of the original n-gon. Also, if you form a wedge in the n-gon, from its center to two adjacent vertices, the angle formed will be 2pi/n. These two angle facts will help us come up with formulas for which n-gons can form the rings described above.
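These two facts are easy to sanity-check numerically. A small Python sketch (the function names are my own):

```python
import math

def interior_angle(n):
    """Vertex angle of a regular n-gon: (n - 2) * pi / n."""
    return (n - 2) * math.pi / n

def central_angle(n):
    """Wedge from the center to two adjacent vertices: 2 * pi / n."""
    return 2 * math.pi / n

# A hexagon's interior angle is 120 degrees; its central wedge is 60.
print(math.degrees(interior_angle(6)))  # 120.0
print(math.degrees(central_angle(6)))   # 60.0
```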


Let's start with skipping just one edge (m = 1) when forming the ring, like for the pentagons shown below.


Consider k n-gons arranged in a ring so that 1 side is skipped between adjacent polygons.

If you connect the centers of the k n-gons, you obtain a bigger interior polygon, a regular k-gon.

On the one hand, since it is a regular k-gon, its interior angles are (k-2)pi/k (the first fact about angles in regular polygons mentioned above). On the other hand, if you look at the angle as it sits within each n-gon, you can see that the angle is formed by connecting the midpoints of two sides, one side apart, to the center. From this, you can see that this angle is also equal to 4pi/n (from the second fact about angles in regular polygons mentioned above).

Equating these two gives you the rule k = 2n/(n-4).



We want k and n to be integers, so we can graph the function k = 2n/(n-4) and see what works. If we choose n = 4, the formula is undefined, which corresponds in our geometric model to an infinite ring of squares:


The only actual solutions we get are for n equal to 5, 6, 8, and 12. After this, k approaches the limit of 2, which geometrically corresponds to the fact that you can't make a ring with just two polygons.
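If you'd rather enumerate than graph, a few lines of Python confirm this (a sketch; the variable names are my own):

```python
# Enumerate integer solutions of k = 2n/(n - 4), the "skip one edge" rule.
# n starts at 5, since n = 4 is the degenerate infinite ring of squares.
solutions = []
for n in range(5, 100):
    if (2 * n) % (n - 4) == 0:
        k = (2 * n) // (n - 4)
        if k >= 3:  # a ring needs at least 3 polygons
            solutions.append((n, k))
print(solutions)  # [(5, 10), (6, 6), (8, 4), (12, 3)]
```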

What if we skip over two edges?



Here again, you form a regular polygon by connecting the centers of the k n-gons.

Again, since it is a regular k-gon, it will have interior angles of (k-2)pi/k.

However, now looking at this angle within the n-gon you can see it is formed by connecting the midpoints of two sides, two sides apart, to the center. So the angle is also 6pi/n.

Equating (k-2)pi/k and 6pi/n  gives you the rule k = 2n/(n-6).





For the "skip 2 edges" case, if we choose n = 6, we get an infinite ring of hexagons, and our function  k = 2n/(n-6) is undefined.


We can find which polygons work by finding the integer solutions to our rule. There are six polygons that work following the "skip 2 edges" rule of ring construction, as shown on the graph below. We can stop at the 18-gon: with it we've hit the minimum value of k = 3.



In general, we can try a "skip m edges" rule. Following the same reasoning, the angles of the central polygon will be (2 + 2m)pi/n (see diagram below). As before, equating this with (k-2)pi/k, we obtain the relationship k = 2n/(n - (2 + 2m)).

This also covers the case m = 0, where we make a ring skipping zero edges fitting the regular n-gons snugly together, and find (as maybe you knew already) that this can only be done for the triangle, the square, and the hexagon.
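The general rule is easy to explore numerically. Here's a short Python sketch (the function name ring_sizes is my own) that lists the integer solutions for a given skip count m:

```python
def ring_sizes(m, n_max=60):
    """Integer solutions (n, k) of k = 2n/(n - (2 + 2m)) with k >= 3."""
    out = []
    for n in range(2 * m + 3, n_max):  # n = 2m + 2 makes the rule undefined
        denom = n - (2 + 2 * m)
        if (2 * n) % denom == 0:
            k = (2 * n) // denom
            if k >= 3:  # a ring needs at least 3 polygons
                out.append((n, k))
    return out

print(ring_sizes(0))  # [(3, 6), (4, 4), (6, 3)] -- triangle, square, hexagon
print(ring_sizes(1))  # [(5, 10), (6, 6), (8, 4), (12, 3)]
print(ring_sizes(2))  # [(7, 14), (8, 8), (9, 6), (10, 5), (12, 4), (18, 3)]
```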

Looking at the graphs and solutions for several values of m, it looks as if there are some linear relationships between n and k that hold true for all m.



Playing connect the dots, there are about six relationships that jump out right away: three lines sloping upwards, and three horizontal constant relationships (a fixed value for k), that slice through each m curve and give integer solutions.


For example, consider the line k = 2n; when substituted into the general formula k = 2n/(n - (2 + 2m)), this gives us n = 2m + 3. What does this mean? One way of putting it: for any number of skips m, we can always find an n-gon that can form a ring of 2n polygons. Or, put another way, for any odd n, you can place 2n n-gons in a ring, and to do this you will have to skip over m = (n-3)/2 sides. Six triangles can form a ring skipping no sides (m = 0), ten pentagons can form a ring skipping one side, and fourteen heptagons can form a ring skipping two sides.


Similarly, the line k = n connects with all the m curves when n = 2m + 4, so for any even n, you can place n n-gons in a ring (with m = (n - 4)/2 skips), as described in this post.
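Both of these linear families can be verified directly against the general formula; a quick Python check (the function name k_of is my own):

```python
def k_of(n, m):
    """The general ring formula k = 2n/(n - (2 + 2m))."""
    return 2 * n / (n - (2 + 2 * m))

for m in range(10):
    n_odd = 2 * m + 3   # odd n: a ring of k = 2n n-gons
    n_even = 2 * m + 4  # even n: a ring of k = n n-gons
    assert k_of(n_odd, m) == 2 * n_odd
    assert k_of(n_even, m) == n_even
print("both linear families hold for m = 0..9")
```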

The other relationships mentioned above (and there are more that were not mentioned) also reveal interesting patterns in how rings of regular polygons can be formed. Some questions you can look at would be: What values n allow you to create a ring of 3, 4, or 6 n-gons? Which "skip values" allow you to create a ring of 10 polygons?

Monday, April 18, 2016

rings of regular polygons in rings

As mentioned here and here, you can arrange regular polygons in regular rings.

If n is even, then n regular n-gons can be placed in a ring, with each n-gon centered at a vertex of another regular n-gon, and with m = (n - 4)/2 edges between adjacent n-gons.

For example, squares can form a ring around a square, skipping no edges, while hexagons can form a ring around a hexagon, skipping one edge along the interior of the ring between adjacent hexagons.


Things look a bit more interesting for octagons and decagons, which due to the skipping of edges form a star shape.


So if a regular n-gon can form a ring around another regular n-gon, what happens if you make rings out of those (i.e. make a ring out of the red rings above)?

For n = 4 or 6, you get a tessellation - the small squares or hexagons will neatly fit up against each other and tile the plane (if you keep going). For n = 8, the octagons form patterns like this.


If we only show the initial small octagons, we get the pattern below, where the small octagons do not overlap, but form a nice pattern with various star-shaped holes.


We can only do one ring of rings of decagons before they start to overlap. In the ring of rings of decagons below, some edges of the secondary decagons were left in, for aesthetic effect.


Thursday, April 14, 2016

tetradecagons and heptagons

Taking another look at regular polygons in rings, here are some regular heptagons (7 sides) centered at the vertices of a regular tetradecagon (14 sides), and some regular tetradecagons centered at the vertices of a regular heptagon. The smaller polygons form rings where there are a fixed number of edges skipped between each pair of adjacent polygons.





When placing the heptagons around the tetradecagon, there are two edges between adjacent heptagons, and when placing the tetradecagons around the heptagon, there are four edges between adjacent tetradecagons. In general, when placing polygons in rings like these, you can place k regular n-gons at the vertices of a regular k-gon, with adjacent n-gons sharing an edge and skipping over m edges in between, if k = 2n/(n - (2 + 2m)).

Tuesday, March 8, 2016

dividing polynomials: the backwards reverse tabular method

Among the Lakota people, the heyoka is a contrarian, jester, satirist or sacred clown. The heyoka speaks, moves and reacts in an opposite fashion to the people around them. - Wikipedia, Heyoka

A couple of earlier posts (this one and this one) describe how to use the grid method for dividing polynomials, and were intended to be helpful for people learning or teaching the topic. The current post shows some surprising things that happen when you mess around with the standard division algorithm, and is probably not quite as helpful. Specifically, if you divide in a "backwards" fashion, things can blow up spectacularly. If you are intrigued by the possibility of polynomial explosions, read on; for others: you've been warned.

When you divide a single-variable polynomial f by another polynomial g, the standard (Euclidean) algorithm requires f to have a higher degree than g, and when you start dividing, you take the term of the highest power of g and see what you should multiply it by to get the term with the highest power in f. This tells you the first term of your quotient q. In the example below, we look at the 3x term and ask how many times it goes into the 9x^3 term - the answer is 3x^2, which is the first term of our answer.


The rest of this example can be found in a more helpful post. The method illustrated above is sometimes called the "reverse tabular" method ("reverse" because the normal use of a table - filling in the interior given the top row and left column - is how you multiply polynomials).

The standard approach of starting with the highest degree terms ensures that if you end up with a remainder, its degree will be less than the degree of the divisor, but it also means that you can only divide when the degree of the divisor is less than that of the dividend. In the above example, 3x - 2 has degree 1, which is less than the dividend's degree of 3; for this division problem it turns out that you will get a degree 0 remainder (the remainder is 5).

But what if you did it backwards? Instead of starting with the highest degree terms in each polynomial, what if you picked each one up by the tail and started with the smallest degree terms? Flipping the polynomials around so they are backwards while using a grid makes it reasonable to call this the "backwards reverse tabular method."

The simple case - divisor is a factor

If you are in one of those lovely situations where the dividend is higher degree than the divisor, and there is no remainder (the divisor is a factor of the dividend), you will get the same answer as using the standard algorithm, but your table will look predictably different (the steps for the forwards direction are shown here).


Starting with the lowest term means that the table is filled in backwards, but otherwise nothing changes: you still get the same answer. This is not what happens in the case where standard division would give you a remainder.

The remainder case

In the standard "forwards" method, you work your way down and the terms that you are trying to clean up are of lesser and lesser degree. Anything leftover at the end (the remainder) will then necessarily be of lower degree than the highest term of the divisor.

However, following the backwards method, the terms you are trying to eliminate get bigger and bigger, and will just keep going if you can't tidy them up. Here's a division problem with a remainder, first forwards:


And now the same problem, backwards:


You'll find that you have to keep adding columns to your grid to keep up with the new terms that keep getting generated, so instead of a polynomial with a rational expression as a remainder we get...  an infinite series. This seems odd: how is that thing on the right hand side equal to the left hand side? Is the equality only true when the right hand side converges? All these worries go away if we treat these as formal power series.

Why would we ever divide polynomials this way?

Dividing the indivisible

Although this is giving us some alarming results, following the backwards method allows us to do something where the standard algorithm won't allow us to do anything. If the degree of the divisor is bigger than the degree of the dividend, the standard algorithm tells us to stop right away: nothing can be done. However, armed with our new backwards method we can start dividing (and, as we saw above, we might never stop).

Let's try an example where the numerator is just 1. In the first step below, we fill in what we want as our product, namely 1. However, to get this, we need to fill in a 1 on the top row (step 2), and multiplying this by the whole left column gives us extra terms that we now need to get rid of. In step 3, we see that we will need an x in the top row to eliminate the x that we introduced in the previous step.


And you will have to keep adding more columns to fill in new terms to eliminate the terms that you introduced in the previous two steps.


Those coefficients seem to be multiplying like rabbits. No kidding: it's the Fibonacci sequence. So we can say that the rational expression on the left produces the sequence of coefficients on the right; it is called the generating function for the sequence of coefficients of the resulting power series.
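This backwards division is easy to mechanize. Here is a Python sketch of the idea (the function name power_series_div is my own): at each step we solve for one coefficient of the quotient, starting from the constant term, and cancel what it introduces.

```python
def power_series_div(num, den, n_terms):
    """First n_terms coefficients of num(x)/den(x) as a formal power
    series, found by 'backwards' division: solve for the quotient one
    coefficient at a time, starting from the constant term."""
    assert den[0] != 0, "the divisor needs a nonzero constant term"
    rem = list(num) + [0] * (n_terms + len(den))
    q = []
    for i in range(n_terms):
        c = rem[i] / den[0]
        q.append(c)
        for j, d in enumerate(den):
            rem[i + j] -= c * d  # cancel, possibly creating higher terms
    return q

# 1 / (1 - x - x^2) produces the Fibonacci numbers
print(power_series_div([1], [1, -1, -1], 10))
# → [1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0, 55.0]
```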

Here's another neat one, in the same vein:



In this case the coefficients of the resulting power series are the triangular numbers. But there is something cool about the divisor as well: the divisor's coefficients are from the third row of Pascal's triangle (with alternating signs), and the triangular numbers are from the third column.



If you try this with the fourth row of Pascal's triangle, you'll find you get a power series whose coefficients are the fourth column (the tetrahedral numbers).
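The Pascal's triangle pattern can be checked the same way: divide 1 by (1 - x)^d, whose coefficients come from a row of Pascal's triangle with alternating signs, and the series coefficients come out as a column of the triangle. A Python sketch (the helper name is my own):

```python
from math import comb

def inverse_binomial_series(d, n_terms):
    """Coefficients of 1/(1 - x)^d by backwards division; the divisor's
    coefficients are row d of Pascal's triangle with alternating signs."""
    den = [(-1) ** j * comb(d, j) for j in range(d + 1)]  # (1 - x)^d
    rem = [1] + [0] * (n_terms + d)
    coeffs = []
    for i in range(n_terms):
        c = rem[i]  # den[0] == 1, so no division needed
        coeffs.append(c)
        for j, dj in enumerate(den):
            rem[i + j] -= c * dj
    return coeffs

print(inverse_binomial_series(3, 8))  # triangular numbers: 1, 3, 6, 10, ...
print(inverse_binomial_series(4, 8))  # tetrahedral numbers: 1, 4, 10, 20, ...
```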


Most of the time, you'll want to divide polynomials in the usual way, and get the usual answers; sometimes you can learn a bit by doing things backwards.

Thursday, February 18, 2016

triangle games: three jugs and the towers of hanoi

I was prompted to pull "Geometry Revisited" (GR) by Coxeter and Greitzer off the shelf after reading John Conway and Alex Ryba's "The Steiner-Lehmus Theorem" (from The Best Writing on Mathematics 2015, edited by Mircea Pitici).

GR includes a very short section on "trilinear coordinates," which were used in drawing up the triangle of triangles. As Coxeter and Greitzer explain,

As a welcome relief from the ordinary squared paper, used for plotting points with given Cartesian coordinates, one can sometimes buy "triangulated" paper, ruled with three systems of parallel lines, dividing the plane into a tessellation of small equilateral triangles. Such paper is convenient for plotting points that have given trilinear coordinates with respect to a (large) equilateral triangle.... These coordinates are ideal for representing any situation in which three variable quantities have a constant sum.

Representing shapes of triangles qualifies nicely as "a situation in which three variable quantities have a constant sum," as does the "three jug problem," which Coxeter and Greitzer use to show off the fun and usefulness of trilinears.

A Three Jug Problem

Suppose you have three ungraduated jugs that hold 8 pints, 5 pints, and 3 pints of liquid, respectively. You know the total volume that each jug can hold, but since they have no other measurements written on them, you can't use individual jugs to measure out quantities other than their full amount. Now suppose that the 8 pint jug is full and the others are empty; without losing any liquid, how can you measure out 4 pints?

So, this problem fits nicely into the trilinear coordinate setting: the total amount of liquid shared among the three jugs stays the same (8 pints), and each point (a, b, c) represents the amounts in the 8 pint, 5 pint, and 3 pint jugs respectively. However, because of the different sizes of the jugs, not all of the trilinear points are accessible: the point (0,0,8), for example, would represent all 8 pints in the 3 pint jug. So we draw solid lines on the triangle at (a, 5, c) and (a, b, 3) that cut off the 5 and 3 pint jugs, leaving us a parallelogram inside the triangle that represents all possible states of the jugs.

Our goal is to get one of the jugs to have just 4 pints in it. There are three solutions: (4,1,3), where there are 4 pints in the 8 pint jug, 1 in the 5 pint jug, and 3 in the 3 pint jug; (4,4,0), where there are 4 pints in the 8 pint jug and 4 in the 5 pint jug; and (1,4,3), where there is 1 pint in the 8 pint jug, 4 in the 5 pint jug, and 3 in the 3 pint jug.


A pour from one jug to another is shown by traveling from one edge of the parallelogram to another. So for example, starting with 8 pints in the large jug (8,0,0), we can pour 3 pints into the smallest jug and end up at (5,0,3).


Continuing along, bouncing from one side of the parallelogram to another, we have the first solution: (8,0,0), (5,0,3), (5,3,0), (2,3,3), (2,5,1), (7,0,1), (7,1,0), (4,1,3).


Starting off in the other direction, pouring from the 8 pint jug into the 5 pint jug leads to another solution: (8,0,0), (3,5,0), (3,2,3), (6,2,0), (6,0,2), (1,5,2), (1,4,3).
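These solutions can also be found mechanically: a breadth-first search over the reachable jug states finds a shortest sequence of pours. A Python sketch (the function and variable names are my own):

```python
from collections import deque

# Jug capacities (8, 5, 3); a pour empties the source or fills the
# target, so the 8 pint total is conserved at every state (a, b, c).
CAP = (8, 5, 3)

def pours(state):
    """All states reachable from `state` by a single pour."""
    for i in range(3):
        for j in range(3):
            if i != j and state[i] > 0 and state[j] < CAP[j]:
                amount = min(state[i], CAP[j] - state[j])
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                yield tuple(nxt)

def shortest_path(start=(8, 0, 0), target=4):
    """BFS for a shortest pour sequence putting `target` pints in a jug."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if target in s:
            path = []
            while s is not None:
                path.append(s)
                s = prev[s]
            return path[::-1]
        for t in pours(s):
            if t not in prev:
                prev[t] = s
                queue.append(t)

print(shortest_path())
# → [(8, 0, 0), (3, 5, 0), (3, 2, 3), (6, 2, 0), (6, 0, 2), (1, 5, 2), (1, 4, 3)]
```

The six-pour path BFS finds here is the second solution above, which is one move shorter than the first.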



The Towers of Hanoi

The Three Jug problem reminded me of the Towers of Hanoi puzzle, where disks are exchanged between three rods.

The Towers of Hanoi consists of three rods onto which disks can be slid. A stack of disks of different sizes sits on the first rod, stacked from largest to smallest so that they form a cone. Your goal is to move all the disks onto another rod, given that a disk can never sit on top of a smaller disk and disks can only be moved one at a time.

Just as with the volume of liquid shared among the jugs, we can think of total weight of the disks as remaining constant as they shift between the rods. Movement between the jugs was restricted to either a complete filling or complete emptying of a jug, which gave the solution on the triangle the effect of bouncing from one side to the other. That is not the same situation as in the Hanoi problem, where movement between the rods is restricted by the stacking rules - but this rule might also lead to interesting patterns.

There are a number of ways of solving the Towers of Hanoi, but here is the one I like: if the rods are labeled a, b, c, then to move the tower of disks from a to c, first move the tower without the last disk to b, move the last disk to c, and then move the rest of the tower back on top of the last disk, now on c. If you are not familiar with recursion, you may not be convinced that this is a solution. Rest assured that by changing the problem of moving n disks into a problem of moving n-1 disks (twice), the problem is solved: you just need to keep applying the solution to each of the smaller sets of disks until the problem vanishes completely.
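The recursive solution translates almost directly into code; here is one version in Python (the names are my own):

```python
def hanoi(n, source, target, spare, moves=None):
    """Recursive Towers of Hanoi: move n disks from source to target."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((n, source, target))           # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # re-stack on top of it
    return moves

print(hanoi(3, "a", "c", "b"))
# → [(1, 'a', 'c'), (2, 'a', 'b'), (1, 'c', 'b'), (3, 'a', 'c'),
#    (1, 'b', 'a'), (2, 'b', 'c'), (1, 'a', 'c')]
```

Each entry is (disk, from, to); in general an n disk tower takes 2^n - 1 moves.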

Let's imagine that for the n disk version, we have disks of weight 1, 2, 3, ..., n. For n = 3, we have disks of weight 1, 2, and 3, so we can play the game on a triangle with sides of length 6 (the total weight of all the disks).


Suppose that for rods a, b, c we use the coordinates (a, b, c), where the value of each letter represents the weight of the disks on that rod. Let's say we are trying to move the disks from a to c, which would require us to move from (6,0,0) to (0,0,6). If we follow the recursive solution above, we get a pattern like this:



If we had been playing with only two disks, we would have had a smaller triangle and a simpler solution. You can see that the two disk solution appears twice in the three disk solution, but rotated, and these two copies of the smaller solution are linked by a move of the largest disk. This is just what you would expect from our algorithm (move the n-1 stack onto the 'spare' rod, move the nth disk onto the target rod, then move the n-1 stack onto the target rod).
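The trilinear points themselves can be generated by running the recursive solver with rods numbered 0, 1, 2 and tracking the weight on each rod after every move; a Python sketch (the names are my own):

```python
def hanoi(n, src, dst, spare, moves):
    """Recursive Towers of Hanoi, recording moves as (disk, src, dst)."""
    if n:
        hanoi(n - 1, src, spare, dst, moves)
        moves.append((n, src, dst))
        hanoi(n - 1, spare, dst, src, moves)

def weight_path(n):
    """Trilinear points (a, b, c) visited, where disk i has weight i."""
    moves = []
    hanoi(n, 0, 2, 1, moves)
    state = [n * (n + 1) // 2, 0, 0]  # all the weight starts on rod a
    path = [tuple(state)]
    for disk, src, dst in moves:
        state[src] -= disk
        state[dst] += disk
        path.append(tuple(state))
    return path

print(weight_path(3))
# → [(6, 0, 0), (5, 0, 1), (3, 2, 1), (3, 3, 0),
#    (0, 3, 3), (1, 2, 3), (1, 0, 5), (0, 0, 6)]
```

Each coordinate sum stays at 6, so every point lands on the trilinear triangle.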


Here is the pattern up to n = 6.

Instead of measuring the weight of the disks on the triangle, we could have just used each axis to count the number of disks - so for the 3 disk problem, we would have had a triangle with sides of length 3. This means that each move will have the same length (length 1), and some points that were distinct in the triangles above will now coincide.

The patterns look different, but they still exhibit the same recursive structure - each solution contains two copies of the previous solution joined by the move of the last disk.

So, trilinear coordinates provide a fun way of displaying triangle families, a helpful way of representing the classic three jug problem, and a way of seeing the "recursiveness" of the Towers of Hanoi solution. By the way, when I was manually trying to draw out solutions to the jug and Hanoi problems, I found the triangular graph paper here helpful.

Note: Trilinear coordinates are sometimes called triangular coordinates, and sometimes barycentric - there are actually various types of these coordinate systems that are sometimes distinguished: see this note from the math forum.