
Constructing the Sierpinski triangle

Throughout my years playing around with fractals, the Sierpinski triangle has been a consistent staple. The triangle is named after Wacław Sierpiński, and as fractals are wont to do, the pattern appears in many places, so there are many different ways of constructing the triangle on a computer.

All of the methods are fundamentally iterative. The most obvious method is probably the triangle-in-triangle approach. We start with one triangle, and at every step we replace each triangle with 3 subtriangles:

[Image: siermathgb3.png]

This triangle-in-triangle method strikes me as a disguised Lindenmayer system. L-systems are iterative symbol-based replacement mechanisms. There are a variety of more explicit L-system constructions for the triangle, such as the 'arrowhead' L-system (also see my L-systems program):

[Image: sierlsys3.png]
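A minimal sketch of the arrowhead system (the rewrite rules A → B-A-B and B → A+B+A with 60° turns are the standard ones; the helper names here are mine):

arrowhead[n_] := Nest[StringReplace[#, {"A" -> "B-A-B", "B" -> "A+B+A"}] &, "A", n];

turtle[s_String] := Module[{p = {0., 0.}, a = 0., pts},
  pts = {p};
  Do[Switch[c,
    "A" | "B", p += {Cos[a], Sin[a]}; AppendTo[pts, p],
    "+", a += Pi/3,
    "-", a -= Pi/3],
   {c, Characters[s]}];
  pts];

draw[n_] := Graphics[Line[turtle[arrowhead[n]]]];
draw[6]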

There's the cellular automata approach, where the 'world' is a single array of bits and at each "instant" we alter each bit based on its own state and the state of its neighbors. If we plot the evolution of Rule 22 (and others), we get these patterns:

[Image: sierca.png]
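Mathematica has this machinery built in, so a sketch is one line (a single 1 seeds the first row):

draw[n_] := ArrayPlot[CellularAutomaton[22, {{1}, 0}, n]];
draw[128]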

There are bound to be many elementary number-theoretic constructions of the Sierpinski triangle given that it looks like a percolation pattern (as in the cellular automata above). The Wikipedia article mentions that it appears in Pascal's Triangle when differentiating between even and odd numbers. Sure enough:

[Image: sierpasc1.png]

If we look at these Pascal forms and reverse engineer the parity rules, we get Rule 22. Though it might depend on what exactly you're reverse engineering. We can generalize from even/odd to other moduli:

[Image: sierpasc7.png (Pascal's triangle mod 4)]
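A sketch of the generalization, with the modulus as a second argument:

draw[n_, m_] := ArrayPlot[Mod[Array[Binomial, {n, n}, 0], m]];
draw[64, 4]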

The Wikipedia article for Pascal's triangle mentions that we can construct a 'Pascal matrix' using the matrix exponential:

$$\exp\begin{pmatrix}0&0&0&0&0\\1&0&0&0&0\\0&2&0&0&0\\0&0&3&0&0\\0&0&0&\ddots&0\end{pmatrix}=\begin{pmatrix}1&0&0&0&0\\1&1&0&0&0\\1&2&1&0&0\\1&3&3&1&0\\1&4&6&4&\ddots\end{pmatrix}$$

"Ah, that makes sense." You say. Indeed, but what's cool is that we then have a pedantic way of specifying the Sierpinski triangle:

$$S \equiv \exp\begin{pmatrix}0&0&0&0&0\\1&0&0&0&0\\0&2&0&0&0\\0&0&3&0&0\\0&0&0&\ddots&0\end{pmatrix} \pmod{2}$$

This equation is in what's called "straight ballin'" form, and it gives us a fancy way of producing the triangle:

draw[n_] := ArrayPlot[Mod[MatrixExp[DiagonalMatrix[Range[n], -1]], 2]];

Heawt deaowg /drawl. It's not very performant though. The following is faster and arguably more elegant:

draw[n_] := ArrayPlot[Mod[Array[Binomial, {n, n}, 0], 2]];

Along these lines, it shouldn't be surprising that the Sierpinski pattern appears in other combinatorial expressions, such as the Stirling numbers:

[Image: hypernomial1.png]
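For instance, the same parity trick applied to Stirling numbers of the second kind (a sketch):

draw[n_] := ArrayPlot[Mod[Array[StirlingS2, {n, n}, 0], 2]];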

If we treat the rows produced by these combinatorial functions as arrays of bits, what sequence of numbers do the bits represent? There's a variety of ways to interpret this question, but here's one assortment:

Binomial: 1, 3, 5, 15, 17, 51, 85, 255, 257, …
StirlingS1: 1, 1, 3, 3, 5, 5, 15, 15, 17, …
StirlingS2: 1, 1, 3, 7, 13, 29, 55, 115, 209, …
Multinomial: 511341409273481321385257255…

The first, second, and fourth sequences are versions of each other, tautologically described in OEIS as A001317. The sequence for the Stirling numbers of the second kind doesn't seem to have any fame, but if you shift its bits around you can find A099901 and A099902.

The Wikipedia article for the Sierpinski triangle mentions its appearance in logic tables such as this one. If you stare blankly at that image long enough you'll notice it's a set-inclusion table. Take the subsets of a set and pair them against each other under set-inclusion (is subset A a subset of subset B?) and you will get that table.

Personally that's a more interesting interpretation than the binary logic one, though the apparent distinction between these subjects is likely just a matter of perspective. Another set-related Sierpinski pattern I found is set disjunction (when sets have no common elements):

[Image: issubarein1.png]

One thing I noticed is that these set patterns depend on the order in which you place the subsets. It has to be the same order that you would get if you were constructing the subsets iteratively. I also wasn't able to find a straightforward ranking function that would order the sets into this iterative sequence. Mathematica's Combinatorica package refers to it as the binary ordering. I think I'm starting to understand what Gandalf meant when he said

" The Sierpinski triangle cannot-be wrought without heed to the creeping tendrils of recursion. Even the binomial coefficient has factorials which are recursively defined. "

MathWorld mentions a broader context for why binary logic can be used in the construction of the Sierpinski triangle: namely, the Lucas correspondence theorem, which states that given two numbers written in a prime base p,

$$n = n_m p^m + \cdots + n_1 p^1 + n_0 p^0 \qquad (0 \le n_i < p)$$
$$k = k_m p^m + \cdots + k_1 p^1 + k_0 p^0 \qquad (0 \le k_i < p)$$

We can get their binomial coefficient modulo that prime by performing binomial coefficients digit-wise and multiplying the results.

$$\binom{n}{k} \equiv \prod_{i=0}^{m} \binom{n_i}{k_i} \pmod{p}$$

The binomial coefficient $\binom{n}{k}$ represents the number of k-element subsets of a set of n elements. If we're using zeros and ones, then:

$$\binom{0}{0}=1 \qquad \binom{0}{1}=0 \qquad \binom{1}{0}=1 \qquad \binom{1}{1}=1$$

The factorial definition is interesting in this case.

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

Notice that if we have $\binom{0}{1}$, we get the factorial of a negative number in the denominator. By sticking with the recursive definition of the factorial, the conclusion is that the denominator is some flavor of $\infty$, so you have $\frac{1}{\infty} = 0$. ($0!$ is defined as $1$.)

The binary operation I found in our little binary binomial table was NOTing n, ANDing the result with k, and then NOTing that: $\lnot(\lnot n \land k) = n \lor \lnot k$. Also notice it's equivalent to the greater-than-or-equal-to operation $n \ge k$.

If by some stroke of luck we happen to have the two numbers stored in binary on our computer, these operations can be performed atomically on the numbers as a whole. And since we're multiplying everything at the end, any presence of $\binom{0}{1}$ in the original numbers means the binomial coefficient is even, i.e. congruent to 0 mod 2. The only trick would be tracking whatever the most significant bit of either number was.

[Image: sierbin1.png]
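In Mathematica the whole digit-wise test collapses to a single BitAnd (a sketch; oddBinomialQ is my own name for it):

(* C(n, k) is odd exactly when k's binary digits are a subset of n's *)
oddBinomialQ[n_, k_] := BitAnd[k, BitNot[n]] == 0;

draw[n_] := ArrayPlot[Boole@Array[oddBinomialQ, {n, n}, 0]];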

There are a lot of related patterns:

[Image: sierbin2.png]

And look what I found!

$$2b \lor \lnot 2b = \text{true}$$

If we're looking for a one- or two-liner that stays a one- or two-liner in languages besides Mathematica, we'd have trouble doing better than the chaos game algorithm, which goes like this:

1 start at any point. call it p
2 pick one of the three vertices at random
3 find the point halfway between p and that vertex
4 call that point p and draw it
5 goto 2
[Image: sierchaos2.png]
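A minimal sketch of the game on an equilateral triangle (the vertex placement and names are mine):

vertices = N@Table[{Cos[2 Pi k/3 + Pi/2], Sin[2 Pi k/3 + Pi/2]}, {k, 0, 2}];
chaosGame[n_] := Graphics[{PointSize[Tiny],
   Point@NestList[(# + RandomChoice[vertices])/2 &, {0., 0.}, n]}];
chaosGame[100000]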

The chaos game doesn't render as crisply as a lot of the other methods, especially without transparency effects, but it has the advantage of being highly performant. It runs about one million points per second on my laptop. Mind you this is with Mathematica's RNG, which is not your everyday math.rand().

One thing I realized is that the randomness isn't actually a necessary aspect of the general algorithm. It's used as an approximating force (or perhaps something a bit more subtle than that). Otherwise with enough spacetime on your computer you can just perform all possible half-distancings:

[Image: sierfull5.png]
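A sketch of the exhaustive version, reusing the vertices list from above (3^n points after n steps):

step[pts_] := Flatten[Outer[(#1 + #2)/2 &, pts, vertices, 1], 1];
drawAll[n_] := Graphics[{PointSize[Tiny], Point@Nest[step, {{0., 0.}}, n]}];
drawAll[8]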

These images look basically the same. Not surprising since they're both point-based. But I gander the distinction between these two algorithms may have been more than just an issue of curiosity 20 years ago. I still remember my first computer, the alien-processored TI-85, chugging away furiously for a good half a minute before the triangle became clear.

Notice that this specific algorithm is actually just a minor modification of the triangle-in-triangle algorithm. The difference is that polygon vertices are here rendered as points. This modification is possible because of Mathematica's symbolic semantics. The symbol Polygon is meaningless until it's processed by the Graphics function. Until then, we can perform structural operations such as replacing it by the Point symbol. In fact the following is completely valid:

axiom = triangle[{{0, 0}, {1, Sqrt[3]}/2, {1, 0}}];

next[prev_] := prev /. triangle[{p1_, p2_, p3_}] :> {
     triangle[{p1, (p1 + p2)/2, (p1 + p3)/2}],
     triangle[{p2, (p2 + p3)/2, (p1 + p2)/2}],
     triangle[{p3, (p1 + p3)/2, (p2 + p3)/2}]};

draw[n_] := Graphics[Nest[next, N@axiom, n] /.  triangle :> Polygon ];

triangle here doesn't have any meaning, ever, until we replace it:

triangle[pts_] :> Line[RandomChoice[pts, RandomInteger[{2, 3}]]]

[Image: sierpure5.png]

Sidenote. What do you get when you methodically build a Lisp on top of symbolic replacement semantics? You get the Mathematica language, of which Mathematica and Mathics appear to be the only incarnations.

Let's say you forgot how to multiply matrices. Well, just type in some symbols and see the results empirically:

{{a, b}, {c, d}} . {{e, f}, {g, h}} // MatrixForm

$$\begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$$

If that's still confusing, you can use strings, colored text, graphics, images, etc. instead of symbols. In fact if you have a Tron zapper you can even zap your cat into Mathematica and have him fill up one of those matrix slots, for the advancement of science.

[Image: binomialcat3.png]

There's poor Mr. Scruples. Our neighbor will miss him.

The exponential identity for the Pascal matrix is not difficult to understand based on the series definition of the exponential function:

$$e^x = \frac{x^0}{0!} + \frac{x^1}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \cdots$$

You could work out the matrix arithmetic by hand, or you could do this:

power[n_, p_] := MatrixPower[
    DiagonalMatrix[ToString /@ Range[n], -1], p] // MatrixForm;

Grid[Partition[Table[power[6, p], {p, 1, 6}], 3]] /. 0 -> "\[CenterDot]"

[Output: the first six powers of the subdiagonal matrix, zeros drawn as center dots. Each power shifts the entries one diagonal lower, and the entries become products of consecutive integers, e.g. 3·4·5·6 in the fourth power.]

These are the first 6 powers of the subdiagonal matrix. You can see that the diagonal gets multiplied by successively shifted versions of itself, so the calculation ends up creating factorial products. For example, 3·4·5·6 (in the fourth power) can be written in terms of factorials as 6!/2!. If we factor in the denominator from the series for e, we have

$$\frac{6!}{4!\,2!}$$

From the factorial definition of the binomial coefficient:

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

We see that this particular slot in the matrix is $\binom{6}{4}$. The binomial coefficient itself is of course directly related to Pascal's triangle. Also notice that every power of the matrix has its numbers on a different diagonal, so when we sum up all the powers there is no interaction to account for. Every term in the series is a distinct diagonal of Pascal's triangle.

Powers of matrices have a well-known interpretation in terms of graph walks/probabilities. I didn't find anything interesting along this line though. What about graphs represented by the Sierpinski matrix itself?

[The 16×16 Sierpinski matrix: Pascal's triangle mod 2, with zeros drawn as center dots.]

Those were more interesting:

[Image: siertetrakawhata10.png]

Note this is a 3D graph layout. It has some pretty symmetries. I did some tiresome work trying to figure out what polyhedron it might be.

Tooltip[PolyhedronData[#], #] & /@ Select[
  PolyhedronData[], PolyhedronData[#, "VertexCount"] == 14 &]

After much time, I find. It's the tetrakis hexahedron:

[Image: siertetrakawhata5.png]

I'm certain it's this particular figure because we can just build a graph from its vertex data and then do a graph isomorphism check. And look, we can run this polyhedron grapherizer willy-nilly allabouts, like on the Archimedean solids:

[Image: siertetrakawhata4.png]
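The check itself is about one line (a sketch; g stands for the graph built from the Sierpinski matrix, and "SkeletonGraph" is the PolyhedronData property name in recent versions):

IsomorphicGraphQ[g, PolyhedronData["TetrakisHexahedron", "SkeletonGraph"]]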

Here are the first few powers of the Sierpinski matrix:

[Output: the first four powers of the 16×16 Sierpinski matrix. Each power has the same pattern of nonzero positions; the entries of the p-th power are powers of p (1, 2, 4, 8, 16 in the square; 1, 3, 9, 27, 81 in the cube; 1, 4, 16, 64, 256 in the fourth power).]

There are a lot of patterns here. For one, the powers of the Sierpinski matrix are Sierpinski matrices! This isn't necessarily interesting though. The powers of a triangular matrix are going to be triangular. But the numbers follow a curious sequence of powers. For example, in the third power we have the sequence {1, 3, 3, 9, 3, 9, 9, 27, 3, ... }. And this sequence occurs in every column and every row of the matrix, if you hop over the zeros. We can normalize the powers to find:

[Output: the matrix of exponents shared by every power of the Sierpinski matrix, running 0 through 4 in the 16×16 case.]

This is the sequence in terms of the exponent, and it applies to each power of the Sierpinski matrix, including the first power. For example, 3 to the power of each of {0, 1, 1, 2, 1, 2, 2, 3, 1, ...} is {1, 3, 3, 9, 3, 9, 9, 27, 3, ...}. This power sequence appears in OEIS as the number of ones in the binary representation of n, among other descriptions.
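That description is A000120, and it suggests a closed form for the powers (a sketch; if I've read the pattern right, the comparison below returns True):

sierPower[n_, p_] := Array[
   If[OddQ[Binomial[#1, #2]], p^DigitCount[#1 - #2, 2, 1], 0] &, {n, n}, 0];

sierPower[16, 3] == MatrixPower[Mod[Array[Binomial, {16, 16}, 0], 2], 3]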

Here is a totally practical application of all of this. A pretty array of buttons:

[Image: switchboard2.png]

The Towers of Hanoi is a variation on the sticks-in-holes game where instead of putting sticks in holes, you put holes around sticks. Thus the game is ultimately a quaint philosophical remark on the roles of the sexes. But for our purposes there is a claim on the internets that the states of the game form Sierpinski triangle-like graphs:

[Image: sierhanoi1.png]

Which, as you can see, is a lie if I've ever seen one (internets, you are now on notice). Then again, if you fiddle with the layout and you squint a bit, you can kinda see it, but it's the sort of Sierpinski triangle that Maddox would stamp a huge red F over. To be clear, each vertex represents a single state of the game, and vertices are connected if there is a legal move between those states.
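A rough sketch of such a construction (my own code, not the original; a state assigns each disk, 1 being the smallest, to a peg):

hanoiGraph[d_] := Module[{states = Tuples[Range[3], d], all, legalQ},
  (* blindly construct every conceivable move... *)
  all = Flatten[Table[{s, i, p}, {s, states}, {i, d}, {p, 3}], 2];
  (* ...then keep only the legal ones: no smaller disk on either peg *)
  legalQ[{s_, i_, p_}] := s[[i]] != p &&
    FreeQ[s[[;; i - 1]], s[[i]]] && FreeQ[s[[;; i - 1]], p];
  Graph[Union[Sort /@ (UndirectedEdge[#[[1]],
        ReplacePart[#[[1]], #[[2]] -> #[[3]]]] & /@ Select[all, legalQ])]]];

hanoiGraph[4]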

The nice thing about this algorithm is that at each step it just blindly constructs all possibilities, which is easy, and then afterwards removes the ones that aren't valid, which is also easy. Point being it works in broad strokes. And at the end of it we have a map to follow if we ever get stuck. You can do this sort of thing for all sorts of things, like say Rubik's cube. Though I don't know if the combinatorics are favorable in its case. The Towers of Hanoi can be played with more than three sticks:

hanoiGraph[state[{}, {}, {}, Range[4]]]

[Image: sierhanoi7.png]

"WHAT THE HELL IS THAT", you say. Indeed, it's messy because it's a low-D rendering. We can also play variations of the game that allow multiple holes of the same diameter, or variations where we adjust the rules a bit. In higher dimensions you can see the structure better:

[Image: sierhanoi3D6.png]

Although the 3-stick Hanoi graphs merely resemble Sierpinski graphs, it would be folly to ignore that resemblance given the thread of recursion that runs through both. We can create Sierpinski graphs easily, by once again reusing our polygon-in-polygon approach and this time replacing the Polygon[{p1, p2, p3}] expression with {p1 <-> p2, p2 <-> p3, p3 <-> p1}:

[Image: siergraph5.png]
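In code, the edge-emitting version might look like this (a sketch reusing axiom and next from the snippet above):

sierGraph[n_] := Graph[DeleteDuplicates[Sort /@ Flatten[
     Nest[next, axiom, n] /. triangle[{p1_, p2_, p3_}] :>
       {UndirectedEdge[p1, p2], UndirectedEdge[p2, p3], UndirectedEdge[p3, p1]}]]];

(* the vertices are the points themselves, so the geometric layout comes free *)
layout[g_] := Graph[g, VertexCoordinates -> VertexList[g]];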

There's the Sierpinski triangle I know and love; the graph of. You might think it doesn't look good. But you don't realize it's a Sierpinski triangle wearing a cape made of Sierpinski triangles. Not only does it not not look good, it looks completely badass. Because we're using the coordinates of the points as vertices, we can straightforwardly recover the regular Sierpinski layout:

[Image: siergraph9.png]

The point of spending 1 or 2 LOC's worth of developer time to convert our geometric Sierpinski triangle into a graph is so that we can ask questions about the graph. Like for example, what are its Hamiltonicness and Eulerity quotients? What is the average degree of the graph, in Celsius? In Kelvin? Frankly most of these questions are boring, and I don't really know anything about graphs. But here is a picture of the line graphs of the first few Sierpinski iterations:

[Image: siergraph11.png]

Also the minimal zig zag of the triangle, notable because it looks like a bunch of resistors (no doubt the inspiration for certain papers). And its minimal criss cross. I don't really see anything though. Do you see anything? I don't see anything. These graphs are just vertices and edges to me.

They do raise a question though. What game (or what anything) does the Sierpinski graph represent? I wasn't able to produce the Sierpinski triangle from any variation of the Hanoi game beyond the first couple of trivial iterations. In any case, through the extensive research I've done here I've found that layered graph layouts are pretty:

[Image: sierhanoilayered1.png]

Chaos

One of the nice things about the chaos game algorithm is that we can easily generalize it to more than three points. To begin with, we can place equiangular points on a circle using cos and sin (see also my screwing around with polygons).

[Image: sierderb5.png]

These are drawn with 10 million points. The last two are drawn with 50 million points. The key to the quality here is giving the points transparency so that varying degrees of overlap/nearness form different shades. Higher vertex counts clearly have some structure, but it becomes blurry for one reason or another. You might be able to pull out the structure better with a more methodical approach and some image trickery.

If you play around with pentagons in a vector editor (Mathematica itself has basic vector editing capabilities), you will find this figure:

[Image: sierderb51.png]

I've highlighted one of the inner pentagons. You can see that this figure reproduces the faded stellation pattern in the center of the chaos game rendition. So the chaos game algorithm remains consistent in this geometric fashion: At each vertex of the figure, attach a copy of the larger figure, but with sidelength one-half of the original (note the red edge in the above image).

This also explains why the 4-vertex rendering is a block. And since we now have the geometric rule, we can turn to an explicit geometric construction to see if we can make the structure of these chaos games clearer. After some hiccups, I was able to get something working:

[Image: sierring345672.png]

The snowflake has all sorts of symmetries, probably because 6=2×3. It even has 3D grids and cubes. It's an infinite cubic matryoshka snowflake. And there is a lot of amazing detail in these drawings.

At this point I should mention that all of the code snippets on this page are self-contained. If you have Mathematica you can copy-paste this and start producing these figures.

The chaos game has another generalization. Instead of moving halfway between the active point and the randomly-chosen vertex, we can move 1/3rd of the way, or 3/2 of the way, etc:

[Image: siermodratio1.png]
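A sketch of the generalized game, with the ratio as an argument:

chaosGame[n_, r_, verts_] := Graphics[{PointSize[Tiny],
   Point@NestList[# + r (RandomChoice[verts] - #) &, {0., 0.}, n]}];

chaosGame[100000, 1/3, N@Table[{Cos[2 Pi k/5], Sin[2 Pi k/5]}, {k, 0, 4}]]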

In the case where we're just adding the numbers, we get a normal n-directional random walk. Of course, the geometric approach has its own similar generalization:

[Image: siergeommodratio3.png]

One of the things you might try to do, if you're me, is adjust the ratio until the corners match up:

[Image: siergeommodratio2.png]

Look! There are Koch snowflake figures that form in the negative space. The boundary becomes snowflaked. A Koch snowflake can easily be made with an L-system construction:

[Image: sierkoch1.png (hexagonal / pentagonal)]

With some minor adjustments we get our pentagonal snowflake. If we do the same procedure for the hexagonal chaos game we get the familiar triangular snowflake. All of the geometries seem to create Koch snowflakes, which makes sense given that the indentations are triangles.

Of course, there are much more interesting generalizations we can come up with than simple ratios:

[Image: sierbizarroplot1.png]

Some of these drawings remind me of the kind of fractal scattering found in the more deterministic algorithms. I wonder what kind of relation there is. The best distance function I found was logarithm-based:

[Image: sierrander1x.png]

All of these images are from the same distance function. The 'holes' on the inward-folded leaves of this one are interesting. It's like a fractal Klein bottle thing goin on there. If my computer was worth more than my car, as it some day will be, I would burn a lot of lightning-sequestered power in my mad scientist laboratory in the process of rendering different distance functions. There's a lot of pretty pictures in these simple chaos games. As it stands all this lightning is going to waste.

The geometric approach, not one to have been served, decides to go Tron:

[Image: siertron12.png]

Or Asteroids. Same thing.

The most interesting place I've seen the chaos game is in genetics. The idea is that instead of randomly picking the vertex at each step, you let the letters of the genetic code pick for you. There are 4 letters in DNA: A, T, G, C. So you run a chaos game with 4 vertices. If some sequence of DNA is AAAATC, your active point will approach the point labeled A 4 times, then it will approach the T point, then the C point.
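A sketch of the idea (the corner assignment here is my own guess):

corners = {"A" -> {0., 0.}, "T" -> {1., 0.}, "G" -> {0., 1.}, "C" -> {1., 1.}};
cgr[seq_String] := FoldList[(#1 + (#2 /. corners))/2 &, {.5, .5}, Characters[seq]];
Graphics[{PointSize[Tiny], Point@cgr["AAAATC"]}]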

If the DNA sequence is completely random, you will just recreate our beautiful block, which I have named the Charcoal Diamond:

[Image: sierderb4.png]

But what you get is not random, as this chaos plot shows:

[Image: siergenetics7.png]

This is a chaos game plot of an arbitrarily-chosen 8 million basepair sequence from our chromosome X (for scale, a typical protein is encoded in only a few hundred basepairs). You might insensibly think this happens because the letters occur with different frequencies, but that's not the case. The following is a chaos game plot of a sequence that was randomly generated over the same statistical frequencies as the above sequence:

[Image: siergenetics8.png]

The letters do occur in different frequencies, but that doesn't make any interesting patterns. If you move the letters around you get a pattern related in this case to the fact that the frequencies are bilateral, but otherwise it's just a glorified chessboard. And look what happens when we do the same vertex movearounding for our genetic code:

[Image: siergenetics10.png]

The original paper does a good job explaining how the chaos plot is something of a "fractal subsequence histogram." Assume your active point is anywhere in the entire square, and the next move is toward the bottom-left corner. Because you move halfway toward that corner (instead of, say, only one third or one fifth of the way), you will land inside that corner's quadrant regardless of where your point was to begin with.

Furthermore, you can apply this argument to subquadrants. It's easy to see this if you "work backwards." The formula for going from the current active point toward the next vertex is

$$p_{i+1} = \tfrac{1}{2}(p_i + v)$$

By the DeLorean transform, we can go backwards like this:

$$p_{i-1} = 2p_i - v$$

So, reversing all the points in a particular subquadrant:

[Image: sierfluxcapacitor1.png]

You can see here that for all the points which were just ordered to move toward the little red dot in the bottom-left corner, those that landed in the gray square had to have come from the top-left quadrant of the main square (the region with gray dots). So for the points in that gray region, we not only know that they were ordered to move toward the bottom-left vertex in the last step, but also that they were ordered to move toward the top-left vertex in the step before that.

And so on. The points in this little gray region were ordered to move previously toward the bottom-left, and before that the top-left, and before that the bottom-right. All points in that square have that history. So going back to our genetic chaos plot:

[Image: siergenetics10.png]

What those big holes mean is that CG is a rare sequence. As we just saw, a point can only get to that big empty square by coming from the bottom-right quadrant and going toward the top-left vertex. And since that square is so empty, there are rarely any points that are available to go toward other subsquares, such as this one, and so on.

This accounts for the texture of the first chaos plot as well. It just looks more wacked out because the CG vertices are adjacent, so the empty squares touch each other and create those staggered serrations. A simple histogram confirms our suspicions of CG Paucity — a.k.a. biology's Dark Energy:

[Image: siergenhisto8.png]

If you sample subsequences instead of individual letters, and use those samples to simulate a genetic sequence, what's the smallest subsequence-sampling size you can get away with while still faithfully reproducing the texture of the chaos plot?

Asked differently, what length of subsequence is it that accounts for the texture of the chaos plot? Here is a graph of our DNA letters with pair-wise sequences labeled by probability:

[Image: siermark2.png]

This is a graph of what's called a Markov chain, but don't quote me on the formalities. (Mathematica 9 has built-in Markov whatitswhats, but I'm using version 8). The point is we can generate a sequence whose letter-to-letter statistics are the same as those of our original DNA by following the graph probaballistically:

[Image: siergenetics13.png]

You can see that, while similar, the fake plot immediately stands out as too Hollywood compared to the verisimilous beauty of the real data. The most notable distinction between them, besides the grain, is the dark diagonal that crosses A and T in the real plot, presumably because those two letters have a lot of interplay. That it's not replicated by our pseudosequence may mean there is a relatively large number of ATA, TAT subsequences.
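The Markovizer itself can be sketched in a couple of lines (letters standing for the original sequence as a list of one-character strings; sampling the raw successor list reproduces the empirical pair statistics):

nexts[x_] := nexts[x] = Cases[Partition[letters, 2, 1], {x, y_} :> y];
pseudo[n_, start_] := NestList[RandomChoice[nexts[#]] &, start, n];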

So it looks like subsequences of length 2 aren't sufficient. We could generalize our Markovizer, but what I think is actually interesting here is the grain. We can do some image processing to see if we can bring it out:

[Image: siergenetics14.png]

My intuition here is to subtract the Hollywood plot from the real plot (as images) in order to highlight artifacts that are due to longer subsequence patterns. The two horizontal streaks are at the one-third and two-thirds marks of the square as a whole, which I think implies a lot of AT/TA. Note that a point at 1/3 is halfway between zero and 2/3, and vice versa. {1/3, 2/3} is the fixed point of alternating T/A, so to speak. Here's a histogram of 3-sequences:

[Image: siergenhisto6.png]

Which actually just shows a lot of TTT and AAA. Longer subsequence statistics show a similar picture. And of course we can always just do this:

[Table: run-length counts per letter. Singles: A 1119676, T 1120986, G 882866, C 882887. Doubles: AA 316677, TT 322527, GG 195417, CC 199940. Triples: AAA 127806, TTT 130977, GGG 45560, CCC 46320. The G/C runs die off quickly while the A/T runs persist, the table ending with a single run of 48 consecutive As.]

The longest single-letter run length in this section of DNA is 48 As. The longest string of A-or-T is 222 basepairs long. Quite long, but the longest pairing is actually T/C which has a sequence of length 231. C/G's longest sequence is 34 basepairs long. I wonder what it is about CG. Maybe an unusually (un)useful amino acid or some hydrophobilia issue. I wonder too if these are blanket statistical patterns or if certain quirks are only present, say, in non-coding regions.

You might be wondering why we don't just ask a biologist about these mysteries. The reason is because you're inside a car right now, I'm driving, we're lost, both of us are tourists, and I'm one of those people that would sooner burn hours of gasoline/diesel than ask for directions. You also suspect I might be some kind of criminal, so you're afraid of bringing up the issue. All around it's pretty awkward in here.

We can do a lot better than these static diagrams by giving ourselves the ability to manually movearound the vertices to see if we can find interesting patterns:

[Image: siergenetics21.png]

And a tool that repeatedly applies the DeLorean transform to rebuild the sequence leading up to a region:

[Image: siergenetics24.png]

I'm not actually sure how legit the maths of the program are, but there it be. Let's return to our charcoal diamond, here rotated:

[Image: sierbycontradiction0.png]

Imagine we suddenly removed one vertex. That would mean that points can no longer land in that quadrant. Which would mean that no points could go from that quadrant to these subquadrants. Which would mean no points going to these subquadrants. And so on and so forth, until.

So that explains the holes in the Sierpinski triangle. I call this the "Sierpinski triangle by infinite quadrilateral descent" method of construction. It seems very natural to me, but it raises the question of what these regions in the various deterministic constructions have to do with each other:

[Image: sierbycontradictionmatrix1.png]

(To be clear, the chaos game is just an algorithmic tradeoff vs the geometric approach. It is not necessarily doing anything non-deterministic in the larger scheme.) In this case I think the parity/binary explanations are going to be the simplest, though I'm a math noob and I don't see an immediately obvious way of approaching this, if the question even makes sense in the way I seem to be implying. However with some inspiration we can find an iterative angle that seems to me like a kind of multiplication:

$$(1)\;\xrightarrow{\alpha\,\to\,\alpha\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)}\;\begin{pmatrix}\alpha&0\\\alpha&\alpha\end{pmatrix}=\begin{pmatrix}(1)&(0)\\(1)&(1)\end{pmatrix}=\begin{pmatrix}1&0\\1&1\end{pmatrix}\;\xrightarrow{\alpha\,\to\,\alpha\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)}\;\begin{pmatrix}\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)&\left(\begin{smallmatrix}0&0\\0&0\end{smallmatrix}\right)\\\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)&\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)\end{pmatrix}=\begin{pmatrix}1&0&0&0\\1&1&0&0\\1&0&1&0\\1&1&1&1\end{pmatrix}\;\xrightarrow{\alpha\,\to\,\alpha\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)}\;\cdots$$
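Incidentally, this block substitution is what Mathematica calls KroneckerProduct, so the iteration fits in a line (a sketch):

sier[n_] := Nest[KroneckerProduct[#, {{1, 0}, {1, 1}}] &, {{1}}, n];
ArrayPlot[sier[6]]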

So the Sierpinski triangle is the infinith power of $\left(\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right)$ under this 'multiplication.' Someone who knows enough group theory might recognize what's going on here. Unfortunately I don't, but one thing we can do is investigate this 'multiplication' in general. I was going to make a simple program to do that, but I got carried away and made this:

[Image: siermatrixrepl1.png]

I know what this image reminds you of. Those little candle chandeliers that you hit in Castlevania to make hearts and morning stars come out. I also found The Sierpinski Scream, a letter H that would definitely beat you up if it was human, the up arrow and its Hot Topic-donning offspring, pink infinities made of pink infinities, even the vaunted Sierpinski Chronobracket.

Essentially what we have here in these little matrices is a notation for specifying translations. It's yet another algorithm with different tradeoffs for doing more or less the same thing that our chaos game and geometric algorithms are doing. We can bring this characteristic out by allowing arbitrary rules:

[Image: sierchaching1.png]

Cha-ching baby. If Snoop Dogg ever used Mathematica, that's what square brackets in his custom font would look like. And I know what this one reminds you of. The folds of the brain. And check out the Black Riddler's Question Mark.

Inversion

What happens if we turn the Sierpinski triangle "inside out"? This is easy to answer because all we have to do is invert. You may be familiar with this plot of $\sin\frac{1}{x}$:

[Image: sineinverse12.png]

It's typically given as an example of a function that isn't differentiable at a point (at 0 in this case). It can be seen as a composition of two functions:

$$x \;\longrightarrow\; \frac{1}{x} \;\longrightarrow\; \sin\frac{1}{x}$$

The important function is the first one. It inverts the entire number line around 1, mapping $[1,\infty)$ to $(0,1]$ and vice versa. The reason the plot of $\sin\frac{1}{x}$ looks like that is because it's essentially the regular sine function with its values from 1 to infinity all crammed between 0 and 1. In this sense, $\frac{1}{x} \to f$ is like a logarithmic plot on hypercrack for $f$.

So to turn our Sierpinski triangle inside out, we can do the same thing. For each point, we invert its distance but keep it at the same angle, using $\frac{z}{|z|}$ to normalize the point:

$$z_{\mathrm{inv}} = \frac{1}{|z|}\cdot\frac{z}{|z|} = \frac{z}{|z|^2}$$
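As code, using the same pattern-replacement trick as elsewhere on this page (a sketch; the origin needs special handling, as discussed below):

invert[p_] := p/p.p; (* z/|z|^2, in coordinates *)
inverted[g_] := g /. Polygon[pts_] :> Polygon[invert /@ pts];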

[Image: sierpinskicrest.png]

The radius of inversion is right at the corners of the triangle, and I've left the uninverted triangle in the center. Here's what the first few construction steps of the triangle look like if we invert them:

[Image: sierepicminisauce1.png]

Notice that we're just inverting the endpoints of the lines, not the lines-as-curves. Visually this doesn't make a difference at higher iterations:

[Image: sierepicsauce9.png]

What about varying the radius of inversion? You first perform the same inversion as before, but with respect to the radius:

$$\frac{r}{|z|}$$

Normally this would be enough to get the inverted distance, but division is performed through the lens of the unit 1. When we perform that $\frac{r}{|z|}$, we "lose" the information about what the radius may have been. (This innocent-sounding thing strikes me as much more involved than it appears.) If you work out what an inversion should look like on the number line you'll find that you have to scale the result back by multiplying by the radius:

$$\frac{r}{|z|}\cdot r = \frac{r^2}{|z|}$$

It took me a while, but eventually I realized that the edges of the triangle were being mapped to curves, and that if you continued those curves they would form circles that intersected the origin, like this:

[Image: sierepicsauce7.png]

What this means is that by this inversion, infinite lines and circles that cross the origin are inverses of each other. This realization almost punched my brain in its face, but apparently this is well-known. In fact it's called inversive geometry. I found myself quite disappoint, however, that online descriptions presented our $r^2$ factor as part of the definition, rather than as the arithmetic consequence of the inversion operation. Son.

Let's not forget we have a bountiful cornucopia from which to invert:

[Image: sierinv5.png]

Those light red shades are Mathematica QQing about plotting points at infinity. I thought Mathematica can do anything??? The problem is that the inverse of the origin under this scheme is essentially "everything at infinity" (from division by 0) and algebraically this inverse doesn't even have any specific 'direction' like ± ∞ do. The easiest solution is to just leave points at (0, 0) alone or remove the incident lines.

Sidenote. Notice that in this program we aren't even touching our original uninverted geometric renderer, because we don't need to. Our original renderer returns a Graphics structure. This structure (which you might call an M-expression) is to us a set of straightforward vector graphics directives, but is to Mathematica meaningless until the frontend gets ahold of it. Until then (and even afterwards) we can perform the same kinds of structural slicing and dicing that we can perform on any other structure. In this case, replacing points by their inverses.

A more complete solution to our point at infinity/division by 0 problem is to put the inverse of (0, 0) not at infinity, but really far away. This doesn't come out of the algebra, but we can do it in a well-behaved way because we know which direction our lines are coming from, since we're defining things as polygons:

[Image: sierinv9.png]

What if, for no particular reason, we vary the exponents of the inversion formula?

$$z_{\mathrm{inv}} = \frac{z^{\text{(insert number here)}}}{|z|^{\text{(insert some other, not necessarily distinct, number here)}}} \qquad \text{(elementwise exponent)}$$

Most of the results of this were boring, but the one for $z^3/|z|^2$ was cool:

[Image: siercobra4.png]

See this one. One day you're going to be driving home from work. It's going to be dark. Pitch black. All a sudden out the corner your eye you're gonna see a flash in your rear view mirror. And when you look, you're gonna see that same Black Cobra Grill on my car speeding towards you at some unspeakable number of kilometers per hour. And then I'll disappear into the night. Like an episode off an MJ's Thriller×Knight Rider mashup.

If we mangle the formula every which way we can find a lot of interesting effects:

[Image: sierfishie3.png]

The self-crossings form hexagonal figures. And American iconography? Here's another nifty one:

[Image: sierstilettooftriangulardestruction2.png]

I call it the Sierpinski Stiletto of Triangular Destruction. Hell yea. Also pay heed to the Sierpinski Butterfly of Poisonous Death, lest yee regret it. We can also move the circle of inversion around. I was going to write a program to do only that, but before I realized it I had accidentally built this:

[Image: sieroops2.png]

Oops. This hilarious function doesn't allow anything inside the unit disk. It's just waiting for someone to make a Yakety Sax movie about shapes crashing into the circle and crawling around it.

Someone on the internet asked an interesting question: Are there "zoom out" fractals? We know that if we zoom in on the Sierpinski triangle, we'll continue seeing detail endlessly. But are there fractals that no matter how far you zoom out, you can't get out of them?

Of course there are. We can just take a quote-unquote "zoom in" fractal and place one of its points of detail right at the origin, and then invert the fractal. Because the inverse of the origin is some kind of crazy infinity, we know that no matter how far we zoom out, we won't reach the end of the fractal. This example is really a formality though. You have a lot of liberty to make things up in math.

Cornucopia.

[Image: siercandy8.png]

From what I can tell, one of the settings used to deal with division by 0 is the so-called Riemann sphere, which is where we take a space shuttle and use it to fly over and drop a cow on top of a biodome, and then have the cow indiscriminately fire laser beams at the grass inside and around the biodome. That's my intuitive understanding of it anyway.

[Image: understandingtheriemannsphere.gif]

(Note the cow cannot be spherical or it will roll off.) Personally I don't have any beef with Riemann or any of his manifolds, but for our purposes the Riemann sphere is inadequate since it maps our inverses vertically. One interesting consequence of this is that in the 2D cross section where the imaginary component is zero (essentially the 'Weierstrass circle'), it maps multiplicative inverses vertically and additive inverses horizontally. This all seems mathematically expedient, but it's otherwise boring.

The Riemann sphere does give one explanation though about 'why' our circles and lines are inverses. In the Riemann sphere, the inverse of a circle that crosses the origin is a circle that crosses the North Pole, and since the lasers are being shot from the North Pole, they're limited to tracing out a line as they follow the circle. I was going to make a simple 3D diagram demonstrating this, but I accidentally made this:

[Animation]

Oops. But since we now have this tool, let's see what other plots look like:

[Image: siercowsine3.png]

I call this one the Riemann cowsine, sort of like cosine but with "cow" instead of "co". This one is $\frac{1}{2}\sin(2x)$. If you've seen the tangent function, you know it has a lot of infinities, which means the cowtangent is going to have a lot of circles. I suppose the fact that the circles enter and exit the north pole without being deflected is a result of the asymptotic behavior of the function being the same going up as going down. Come to think of it, those circles are more like a continuous infinity symbol that goes in infinitely on itself. And look:

[Image: sierellipticcowve3.png]

This is one of those fangled elliptic curves. Apparently they do form pairs of circle things on the Riemann sphere. I thought that was just an old wives' tale. The nice thing about this program is that it works on Graphics structures, such as those returned by Plot. That means you can plug arbitrary 2D plots and graphics into this function and have them automatically Riemannized. Like say you're trying to educe from without your incorrigible students' crania some particular factoid:

[Image: sierellipticcowve1.png]

You've set this up with 2D graphics. But you can just plug the output of this into our Riemannizer to get this. In fact in Mathematica you can even copy/paste the 2D plot (itself an interactwithable vector object) like this. And you can vector-edit that plot in-place and when you re-evaluate the expression, the differences will appear in the Riemannization. Not bad for what essentially amounts to one line of code:

g /. Line[pts_] :> Line[toRiemann[pts]]

This is the power of Mathematica's macro-at-will symbolic semantics and well-curated architecture. Specifically in this case, it's the fact that the built-in plotting functions return the same laid-bare Graphics vector structures that your own versions of those functions would return. This Riemannizer only does a direct endpoint conversion of lines, but you can easily have it 3Dify whatever you want in a more thorough fashion.
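For reference, here's a sketch of one way toRiemann might be written: an inverse stereographic projection that lifts each plane point onto the unit sphere, with the north pole at (0, 0, 1):

toRiemann[pts_List] := Map[
   With[{n2 = #.#}, {2 #[[1]], 2 #[[2]], n2 - 1}/(n2 + 1)] &, pts];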

After I made my Cyclotron 4000 masterpiece, I considered what a version 2 might be. Now I know. With some adjustments to the contraption, we now have the Cycowtron 4800 Deluxe (pronounced psy-cow-tron forty-eight-hundred de-lux):

[Image: cycow2.png]

This thing's almost as curly as my hair. Note that in Mathematica these aren't static renderings. They're regular Graphics3D panes that you can spin and move around every which way. But let's not forget why we're here:

[Image: cowpinski4.png]

Look at the symmetry of this inversion:

[Image: cowpinskiinv1.png]

We have the original in the middle in purple, its Riemann mapping in blue, their inverses in red, and the cow in black and white. And except for the cow they all meet at the same three points. How gangster is that. (My friend's six-month old informs me that it's "substantially gangster" (paraphrasing)). But witness the scene of a zoom-out fractal:

[Image: cowpinskiinv7.png]

At the chasm of infinity, our cow glances past its precipice, stares down its abyss. You know that machine in the Hitchhiker's Guide that explodes your mind or whatever by showing you how pathetically insignificant you are compared to the universe? Well this is like a Windows 3.1 version of that. Our poor cow friend's soul is being wrung on the very clothesline of endlessness itself. I think this is the first time I'm happy I'm not a cow.

...is what I would have said if this was any cow but this one.

[Image: brahm2.png]

This cow does not cower. Infinity cannot bully this bull, cannot bloviate this bovine. By all appearances this cow is wearing infinity on its mane. Its horns are probably made of $\aleph_{\aleph_{\ddots}}$ down 4 or 5 levels, an immutability surpassed only by that of the tusks of the Alephant. Our cow isn't staring into infinity. It's looking down at infinity, observing infinity with detached understanding. If our cow were not so enlightened, and also had the facial muscles, it might betray the subtlest of smiles at infinity's infinity face, for infinity's turbid fractal whirlpools and vast lethargic swamps are but swathes of data like any other to this cow.

Long ago, having mastered the magisterial tetrafecta of science, mathematics, spirituality, and politics, our cow stepped hoof outside Farmer Joe's farm and set out on an adventure of like, just so much awesome. One of its side gigs these days is being the final observer of our domain, preventing our section of the Great Algorithm from backtracking by stellating through the cosmos our most entwined entwinements. I think this is the first time I'm jealous of a cow.

In any case, as you can see the Riemann sphere is pretty useless. But while we're on the subject of 3D let's see how our various approaches do here. Chaos game:

[Image: sier3Dchaos15.png]

This is using little spheres as the points. You could use pyramids or anything else instead. Even go back to nature and use actual points. It's a bit tricky to get decent images since the chaos game doesn't place points in a regular arrangement, so you need a large number of points. Each of these images uses 2 million spheres and takes about 10 minutes to render on my little laptop.
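For the record, the 3D game is the same one-liner with one more coordinate (a sketch on a tetrahedron, plain points instead of spheres):

tetra = N@PolyhedronData["Tetrahedron", "VertexCoordinates"];
Graphics3D[{PointSize[Tiny],
  Point@NestList[(# + RandomChoice[tetra])/2 &, {0., 0., 0.}, 200000]}]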

This top view shows one of the symmetries that appear in the 3D triangle. This side view shows another. And a top view of the 4-corner pyramid. These symmetries are interesting because they appear absolutely no different than 2D renditions (for example). At first this seems mysterious, since the symmetries appear from every which angle. But the reason it happens is because our distance function works on each coordinate independently:

$$p_{i+1} = \tfrac{1}{2}(p_i + v)$$

This formula is applied to the x y z coordinates separately. So we could chop off any one of the coordinates from all the points in the 3D Sierpinski triangle to get a regular 2D Sierpinski triangle. And more to the point, what we're really doing is geometric: finding the point halfway between two given points. This $\tfrac{1}{2}(p_i + v)$ formula is just a particular algebraic statement of that.

In other words, the geometry of the algorithm doesn't care about our coordinate system, so we're going to get the projected equivalent of a 2D rendering from any angle we pick, not just from the x y z cross sections of our computation (certainly there is a linear algebra term for this).

[Image: sier3Dchaos27.png]

To make clear what we're talking about, this is the chaos game on a prism, and the same thing from the same viewpoint, except with the 3D projection effect removed. As you can see, the 'hidden dimension' has no effect on what is seen. Au contraire messieur. If it did have an effect, that would be interesting.

Something I noticed though is that while we can remove a coordinate, we can't add a coordinate, in the sense that, for example, there's no way to combine independent x y streams to create a Sierpinski triangle. For our 2D Sierpinski triangle, there's something to the fact that a single point is specified by two coordinates instead of just one.

I think there may be an interesting statistical or information-theoretic interpretation to this. I'm not really familiar with either of these subjects though. Geometric approach:

[Image: sier3Dgeom17.png]

Behold the Lemon Lime Fortress. Throw in a few salt blocks, pour some Corona at the top, join the party at the base. To make our lives one notch easier, our code takes advantage of Mathematica's built-in transformation infrastructure, in this case the symbol Scale. It also pulls the geometry of things from our good friend Mr. PolyhedronData. The nice thing about having such a general setup is that we can readily apply this geometric fractalization on arbitrary shapes:

[Image: sier3Dgeom18.png]

Don't ask me what the hell that last shape is. I figure it just managed to stow away into PolyhedronData somehow, like the semiconscious pre-sentient kernel of a future Skynet. The faces of these shapes show very clearly that we get 2D slices for free, like in these perspectives from below (we aren't cheating here). The edges by themselves make pretty patterns:

[Image: sier3Dgeom11.png]

To make sure that after all this scrolling we're still on the same web page, this is our chaos game algorithm:

1 start at any point. call it p
2 pick a vertex at random
3 find the point halfway between p and that vertex
4 call that point p and draw it
5 goto 2

The only difference between 2D and 3D versions of this algorithm is having 3 coordinates instead of 2. Just as in 2D, we can alter step 3 in various ways. The simplest is to move not halfway towards the chosen vertex, but .25 or .7 of the way, etc:

[Image: sier3Dchaosdf1.png]

Those odd random walks are because the 4- and 5-pyramids have Mean[vertices] != {0, 0, 0}. One thing I noticed is that random walks resemble the outlines of continents. How curious. I wonder if it boils down to the self-similarity of the Brownian motion of water molecules, or something of the like. I.e. the idea that if our continents were surrounded by materials which did not move Brownianly, our coastlines would have different kinds of shapes. Remember that we can get creative with our distance function:

[Image: sier3Dchaosdf5.png]

Keep in mind that in Mathematica these are all interactive 3D panes. Since I associate these kinds of sparse fractal distributions with the distribution of matter through the scales of the cosmos, flying through these point structures engages my Björkian semi-spiritual naturalistic side. :) Your mileage may vary. Our old logarithmic distance function can be applied in 3D as well. For two points a and b, with d the Euclidean distance and w a specific number between 0 and 1 (though not necessarily), the distance function is:

$$(a+b)\log(d(a,b)+w)$$

[Image: sier3Drander8.png]

These pictures differ by w factor, viewpoint, or the set of vertices on which the game is being played. For most of these I'm using the vertices of regular polyhedra from PolyhedronData. Note that the vertices of the game are not necessarily in proportion to the figure itself.

At this point I should remention that all of the code snippets on this page are self-contained. If you have Mathematica you can copy-paste this and start producing these figures, which, I should also remention, are interactive 3D models. I'm a big fan of black ink on white paper, and these are like being able to change the perspective of a pure ink painting in real time. Teknikara no jutsu.

[Image: sier3Drander12.png]

Some of these are like alien Rorschach tests. Like what do you see in this one? I see a mosquito that can suck the lifeblood out of your soul. This one, however, is definitely from an as-yet unreleased Matrix film. And we also have the Minotaur's armor and his shield of Cancer. I'd recognize my buddy's armor in even the most obtuse alien Rorschachs. See also a stereographic projection of one of the rooms of Asterion's maze and an aspect, which needs no explanation.

The originals are 3D but this coloring is a 2D image process. It highlights components of the image based on their sizes. So if your image has 3 large blobs with dozens of tiny blobs all around, you can use, for example, # /. {1 -> Red, 2 -> Green, 3 -> Yellow, _ -> Pink} & to color the big blobs specific colors and all other blobs pink. Though in most of these images I only use one or two colors.
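A sketch of that kind of size-ranked coloring (img is assumed to be the binarized rendering):

comps = MorphologicalComponents[img];
bySize = SortBy[ComponentMeasurements[comps, "Count"], -Last[#] &];
ranks = MapIndexed[First[#1] -> First[#2] &, bySize]; (* label -> size rank *)
ArrayPlot[comps /. ranks,
 ColorRules -> {1 -> Red, 2 -> Green, 3 -> Yellow, 0 -> White, _ -> Pink}]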

[Animations]

This just needs some James Horner music. And science majors, witness a dangerous nuclear science experiment gone horribly awesome. These are animations on the w factor. For the source, you can just use the basic code. But if you intend to do more general experimentation, then something like my little MovieMaker utility will be useful. It's a quite general utility. Each of these movies took something on the order of 20 hours for my computer to make. That's why having a minimal-fuss setup is convenient.

As for the renderings and animations themselves, they're basically me chewing a few times on one of the leaves of one of the branches of a tree I happened to run up the side of like a monkey. There's a lot of trees in this jungle to monoperambulate.

What's great about these structures is that they are still fractals. They may look spazzy and some of them may remind you of Vash the Stampede's plant mode arm, but they possess self-similarity throughout. For example, why do the arms of the nest look like that?

[Image: sier3Dranderexp1.png]

It's because the nest as a whole looks like that. And notice that as the big bird flies in from below to explode into the nest, the little birds all around the nest follow along (because adults know best) and explode into their own little nests, and so on, producing the distinctive infinitary echela of simultaneously exploding dinosaur progeny. And notice that the big bird itself is a version of the entire figure. Now, as for the hat, who knows.

The chaos game is an algorithm that we use for the sake of computational convenience. The "real" algorithm doesn't randomly pick among the vertices, it takes every point toward every observer at each step. And it's actually easy to see how the self-similarity of the algorithm comes about. Look here at a house and an observer:

[Image: sierifsexp1.png]

If we run one step of the "real" algorithm, we get this. Something interesting here is that there is no difference between what the observer sees in either case. The little house is exactly blocking his or her view of the bigger house, like an inescapable mathematical version of a really tall person sitting in front of you at a theatre (formally we would say the houses are cosyzygous). If we start with two observers, we get this then this.

So it's clear that the scaled-skewed self-similarity is inevitable. What I find interesting is that for a given set of vertices and distance function, the resulting figure as a whole is also inevitable. You can start the chaos game at any point (or points, because e.g. the 1/2 factor effectively shrinks your whole house into a point) and you will end up with the same figure, just a different approximation of it.

Another way of thinking about it is that the resulting figure is precisely the figure that all observers "agree on":

[Image: sier3Dobs6.png]

Because running the full algorithm on the entire figure does nothing. I.e. the figure is the fixed point of the algorithm. This automagic consensusing bonks my head and seems to me to carry a particular philosophical undertone... over which I shalln't digress.

Mathematically, it appears our chaos game shennaneganery as a whole falls under the contraction mapping principle. Tersely complicated explanations of inconfusably simple things notwithstanding, I know me some topology but not enough to understand the bigger picture of what's going on.
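
For the record (my gloss, not the page's): the full algorithm is the map that replaces a set with the union of its shrunken copies, and on compact sets under the Hausdorff metric that map is a contraction, so it has exactly one fixed point, and every starting set converges to it:

$$
F(A) = \bigcup_{i=1}^{3} f_i(A), \qquad f_i(x) = \tfrac{1}{2}(x + v_i), \qquad d_H\!\left(F(A), F(B)\right) \le \tfrac{1}{2}\, d_H(A, B)
$$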

On the subject of hats, when going from 2D L-systems to 3D L-systems I had to put a hat on the turtle and also give it the ability to do backflips and taco rolls:

  1. turtle3Dwithramen.jpg
with ramen / without ramen

Even wearing Mugen's shoes. Wow. Unfortunately, as epic as this is, with our current technology we're limited to e.g. representing the turtle's hat with an abstraction called a "vector", which certainly doesn't connote the same social status or sophistication. Still it's enough for some 3D L-systems, such as this version of the arrowhead construction:

  1. sier3Dlsys2.jpg

Whoops. Accidentally X-rayed my heart. Or was that this one? In any case, this is the 3D Sierpinski arrowhead curve. It might not look very 3D, but technically it's 3D because it's made out of a tube instead of a line. All joking aside, try as I might I wasn't able to figure out the construction for the 3D arrowhead curve, sadface.

And though this is a crushing defeat, we here at the Sierpinski triangle page are stalwart folk for whom such failure is but a rare trigger of recidivistic saccades to our respective vices, for in the characteristic case we amene our fibrile egos by way of the platitudinous homily that what doesn't kill you makes you stronger. In the process of trying to figure out the 3D arrowhead I ended up making an easy-to-use flexible L-system program.

True story, when I woke up this morning I could have sworn my body was contorting into different LOGO curves, in the hope of trial-and-erroring the arrowhead construction. It was like that dream scene in Fight Club, except instead of a girl it was a LOGO curve. Definitely one of the more Freudiologically-awkward memories I'm going to have to carry around for the rest of my life.

  1. sier3Dlsys7.png

What makes this program great is that even just for 2D L-systems, the 3D perspective makes things more intuitive. The arrowhead problem also demanded debugging features such as keeping track of the turtle's orientation, a definite necessity because of the enormous degrees of freedom that geometric L-systems possess.

To give you an idea of this freedom, all of the items in this table are the same exact L-system at the same exact power. The only difference between them is the base angle specified. (By the way, notice Voltron. This is how you know L-systems are Turing complete.) If you take a couple of these to higher powers you get these images (11th and 13th iterations). It's interesting to wonder what some of these might look like at say the thousandth or billionth iteration. Or even, the millionth.

Sidenote. You may have noticed that I never really explained what L-systems are. In fact what I do and don't explain on this page is pretty much completely arbitrary, largely to annoy people who are already familiar with all of this stuff. "Why aren't you mentioning IFS" I hear them crying. Hilarious. But if you've used Mathematica you know that it's well-suited for replacement schemes such as L-systems in a way that is difficult to convey in the context of other languages. Take a look at a simple function definition in Mathematica:

    add[a_, b_] := a + b

What this is saying is: Whenever something matching the pattern add[a_, b_] is found, replace it by a + b. In other words, function application is a special case of pattern matching. Those _ characters are the analogue of the regex . character, the Kleene proton. So a_ means "match any single thing, and call it a". You can in fact do this, which will make the 'function' return 1 when it is passed any two things, as well as use more involved patterns.
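
Here are my own quick stand-ins for those two linked examples (the names are hypothetical):

    one[_, _] := 1                   (* matches any two arguments and ignores them *)
    one[Pi, "kitten"]                (* 1 *)
    root[x_ /; x >= 0] := Sqrt[x]    (* a more involved pattern: a condition *)
    root[-4]                         (* nothing matches, so it stays as root[-4] *)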

I point this out because it can be difficult to appreciate the fundamental straightforwardness of the Mathematica language, I think even for people who have used it for a while. And especially if you're coming to Mathematica from more mainstream languages where the idea of function application being a special case of something more general would be considered some kind of unreachable koan.

The arrowhead isn't the only L-system that can create the Sierpinski figure. More likely there are an infinite number of distinct L-systems that form the Sierpinski triangle in the limit. When we were fiddling with the Sierpinski triangle as a graph, you may have noticed that the zig zag and criss cross had recursive structure:

  1. sier3Dlsysresistorplot.png

We can find these paths for the 3D Sierpinski graph as well, though not necessarily. In fact all along we could have been grapherizing a lot of our stuff, even things like the different distance functions. My point here however is that we may be able to reverse-engineer an L-system from these structures. And it might not actually be hard at all. It does have the downside however of sounding really boring, so on to nonboringer pastures we skidaddle-prance.

Since cellular automata often have the 'world' array joined at the ends, it makes sense to think of their evolution as being on a cylinder:

  1. sier3Dca3.png

This is Rule 22 with two initial black squares. It's a cylindrical mapping of this. The sphere in the center is an homage to the Sega Saturn. Long live Sega Saturn, long live Dreamcast. Neo Geo forever. This is a different projection of the same thing, which might actually be easier to comprehend than the cylindrical projection.
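
The mapping itself is straightforward. A sketch of mine, not the page's source: a plain-list initial condition makes CellularAutomaton's world cyclic, and generation t gets wrapped onto a circle at height -t.

    n = 128; steps = 80;
    init = ReplacePart[ConstantArray[0, n], {1 -> 1, 5 -> 1}];  (* two black cells *)
    rows = CellularAutomaton[22, init, steps];                  (* cyclic world *)
    cell[t_, i_] := {Cos[2 Pi i/n], Sin[2 Pi i/n], -t};
    pts = Join @@ Table[cell[t, i],
        {t, 0, steps}, {i, Flatten[Position[rows[[t + 1]], 1]]}];
    Graphics3D[Point[pts], Boxed -> False]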

And a plot of a range-7 automaton, described in this paper, that was evolutionarily engineered to discriminate between majority-white and majority-black initial conditions. And a particle plot oNEKO!!! Ka-wa-ii. My hope is the image of this dark hieroglyphic cat infests your dreams with nightmares so mindbendingly horrid your perception of reality and fantasy becomes forever warped. Whoops did I say that out loud. See also my Cellular Automata program.

Of course, there are automata whose evolutions are properly three-dimensional, like these quadrilateral versions of Rule 22:

  1. sier3Dca2D3.png

An actual 3D automaton whose evolution would be 4-dimensional:

  1. sier3Dca3D2.png

And just so we're all clear, time isn't "the fourth dimension." That statement is the conceptual version of eating bagels without cream cheese, namely a manifestation of meaninglessness.

In rectangular 3D each cell is surrounded by $3^3 - 1 = 26$ cells, so the number of even just simple totalistic rules is very large, never mind starting configurations. This means that finding "interesting" rules and configurations can be a tricky artform. This is another place where I could use, say, a warehouse full of Alienware laptops.
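
If you want a starting point for digging, here's a sketch of mine; the rule number is an arbitrary pick, not a known-interesting one:

    (* totalistic rule, k = 2 colors, 3x3x3 neighborhood, one seed cell on background 0 *)
    evo = CellularAutomaton[{14, {2, 1}, {1, 1, 1}}, {{{{1}}}, 0}, 10];
    Image3D[evo[[-1]]]   (* view the final state; Image3D is Mathematica 9+ *)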

If you have Mathematica 9 (must be nice), its Image3D functionality is perfect for these 3Dified cellular automata. And speaking of grid thingies, let's not forget our unexpectedly-glorious matrix replacement scheme:

  1. sier3Dmatrixrepl11.png

This scheme clearly shows the projective character of these algorithms. Take for example this nifty 3D plus sign made of 3D plus signs, holy mathphobia inducer. It looks like a 2D fractal plus sign when viewed along each axis, but resembles various 2D constructions when viewed from mixed angles.

What's not obvious from these images is that the matrix controls at the top (the pink spheres) and the output figure share the same viewpoint (twirl one, the other two follow). In Mathematica this is as easy as wrapping a couple of things in Dynamic[ ], after which the system takes care of automatically updating things as necessary. It's pretty much the ideal of what event handling should be, at least for these kinds of applications. The underlying engineering for this on Mathematica's part must be very intricate.
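
The gist of the shared-viewpoint trick, as I understand it (a minimal sketch, not the program's actual code):

    vp = {1.3, -2.4, 2.};
    (* both panes read and write the same vp, so rotating one rotates the other *)
    {Graphics3D[Sphere[], ViewPoint -> Dynamic[vp]],
     Graphics3D[Cuboid[], ViewPoint -> Dynamic[vp]]}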

And speaking of intricate, this is probably the most complicated Mathematica program I've so far written, in part because I didn't run it through any last-phase refactoring. If you have the courage to fiddle with this program (and I encourage you to have this courage, as the program has a particular issue I couldn't solve), be prepared to suffer dearly for my laziness.

Give me a moment.

OK, it looks like we're in the inversion section. Where did all this 3D stuff come from? Holy cow. HOLY BRAHMAN DATA COW. Oh I think I know which voice it was. Irregardless, since a bunch of 3D things essentially just programmed themselves into existence while I wasn't looking, this means we can do 3D INVERSIONS!!!! Chaos game.

  1. sier3Dinv33.png

This. Four-headed tri-jawed infinity-mouthed Pac-man langolier. If the world ever decides to give me a nightmare, I hope it picks one of these adorable things to chase me through the dark recesses of my deranged mind. Geometric.

  1. sier3Dinv7.png

The ostensive architectonics, quite awesome. Cf. Dyson sphere. The code however is simple. Cobra.

  1. sier3Dinv22.png

And fishie! Logarithmic.

  1. sier3Dinv27.png

"Chaos game with logarithmic distance function" is a bit long. We need to give this specific kind of fractal a name. What about "Charlie render"? So I'd be like "here we have an inverted Charlie render at w factor .01" and people would nod comprehendingly while reading that, as if there were an established literature on Charlie renders.

You might object that the contours of this nomenclature don't quite align with the striking yet oft-hauntingly quiescent leylines of its intended referents, but you would be wrong — the matching is nigh onomatopoeial per my linguistic auteurity. Incidentally, you should see what my writing looks like when I really cut loose. Rejoice asplendent my sparing you that paragon 'cross the rubicon, padawan.

Since the originals have a lot of points close to 0, their inverses have a lot of points at very large distances. In this case I've decided to clamp the maximum distance of points to a short range (essentially putting them on a leash, like those ball & chain dogs in Mario Bros. 3). It's another way of dealing with infinities. I like this approach because it preserves the radial texture of the figure, snowglobe-like. Taking this to its conclusion, we normalize all points to the same distance:

  1. sier3Dnorm15.png

These two are the same, except the first one has an opaque sphere in the interior so that you can't see points beyond the horizon. The extra points in the second one are on the other side of the globe. These points are colored according to their original distance. And the unnormalized figure.
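
For reference, the inversion-plus-leash amounts to something like this (my sketch; rmax is the length of the leash):

    invert[p_, rmax_: 3.] := Min[1/Norm[p], rmax] Normalize[p]  (* p -> p/|p|^2, clamped *)
    normalize[p_] := Normalize[p]   (* the conclusion: every point at distance 1 *)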

Questions

How many points does the Sierpinski triangle have, besides infinity? Say at a given iteration?

  1. sierpointcount1.png

All additions are in powers of 3. So at a given iteration $n$ we have $\sum_{k=1}^{n} 3^k$ total points. There's all sorts of ways to find the closed form of this sum, not least of which is to use the internet. I'm a fan of the algebraic approach:

$$
\begin{aligned}
S &= 3^1 + 3^2 + 3^3 + \cdots + 3^n \\
-3S &= -3^2 - 3^3 - \cdots - 3^{n+1} \\
-2S &= 3^1 - 3^{n+1} \\
S &= \tfrac{1}{2}\left(3^{n+1} - 3\right) \\
S &= \tfrac{3}{2}\left(3^n - 1\right)
\end{aligned}
$$

The nice thing about this kind of manual deduction is that it gives us an excuse to plaster more math on our page, giving perusers who don't know any better the impression that we're really smart. This sum accounts for the additions. We also need to account for the first 3 points. For a given iteration, we have a total of $\tfrac{3}{2}(3^n - 1) + 3 = \tfrac{3}{2}(3^n + 1)$ points. The arithmetic works out better if we count the polygons instead of the points.
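
A quick sanity check of that closed form, for the skeptical:

    count[n_] := 3/2 (3^n + 1);
    Table[count[n], {n, 0, 5}]   (* {3, 6, 15, 42, 123, 366} *)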

If my web search kune do hasn't failed me, this would make most of our algorithms "geometric space and therefore time" (GSATT) algorithms. Actually I just made that up, I don't know what they're called. It's not really relevant for us since the geometricness also means we get a large number of points with few iterations.

What does the "integration" of the Sierpinski triangle look like? There's various ways to interpret this in 2D, but I'm curious about how the number of points of the triangle increases along a straight line, as if the triangle were a single-variable function:

  1. sierpointcount6.png

Hmm. I was hoping it would look something like the so-called Devil's Staircase, which is the same thing for the Cantor set. You can just feel the Staircase's ragged darkness filling you with joy. But this, this looks like the underside of a fluffy cloud. I think I will call it Lumpy Space Satan's Hairline. Not as dark and morally grimy a name as I was hoping to coin, but not bad either.
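
If you want to poke at the cloud yourself, the plot is roughly this (my sketch, reusing the deterministic step from earlier):

    verts = N@{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}};
    step[pts_] := Join @@ Table[((# + v)/2) & /@ pts, {v, verts}];
    pts = Nest[step, {{1/3, 1/3}}, 9];
    (* cumulative number of points to the left of x, swept across the triangle *)
    ListLinePlot[Accumulate[BinCounts[pts[[All, 1]], {0., 1., .002}]]]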

My original reason for inverting the Sierpinski triangle was to see how it might magnify the inner texture. I.e. turning the triangle inside out to make the inside more visible. "You could have explained that in the actual inversion section" you say. Indeed, but let's not harp on couldas and shouldas. The point is there is an intuition behind these things, and we can ask other questions in the same spirit. For example, what if we extend the 2D Sierpinski triangle into 3D, with each point a different z coordinate (depth) depending on its distance from the center of the triangle?

  1. siersonogram1.png

We get what we expect, a boomerang-looking thing. And look at this lovely demonic-looking Moiré pattern, surely the universe's recompense for that fluffy cloud nonsense above. We can also normalize the points so that all we see is the radial detail. That produces a coronet-looking thing, which can be unrolled:

  1. siersonogram12.png

What does a radial histogram of the Sierpinski triangle look like?

  1. sierradialhistogram1.png
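
If you want to reproduce it, my sketch: bin the distances from the centroid.

    verts = N@{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}};
    step[pts_] := Join @@ Table[((# + v)/2) & /@ pts, {v, verts}];
    pts = Nest[step, {{1/3, 1/3}}, 9];
    Histogram[Norm[# - Mean[pts]] & /@ pts, 100]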

What happens if we run the Game of Life on the Sierpinski triangle?

  1. sierlife1.gif

Basically nothing. The triangle does this and that, shoots a couple gliders, settles. Larger versions do more or less the same thing but take longer to settle. Not very interesting, but it raises the idea of using fractals as starting configurations. However, on the internet I found that lines produce Sierpinski triangles:

  1. sierlife6.png

I didn't even have to add the horns. This is one end of a line after some iterations. The pattern continues propagating forever and ever as long as there is line left and becomes more distinguished at larger scales. It appears to be driven entirely by the line itself. Consider the evolution of a line that is infinitely long, something you can actually witness in the Game of Life by connecting the edges of the board.

As the finite line splits, it leaves debris due to the circumstances of the ends. The pattern you end up with is a trace of the line's subdivisions. It's because the line splits cleanly and does so in a Sierpinski recursion that you end up with clear Sierpinski triangles at larger scales.
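
You can rerun the experiment directly (my sketch): CellularAutomaton knows Life as a named rule, and a plain-array initial condition makes the board toroidal, which is exactly the edge-connecting mentioned above.

    board = ConstantArray[0, {256, 256}];
    board[[128, 32 ;; 224]] = 1;   (* a long finite line; watch its ends *)
    life = CellularAutomaton["GameOfLife", board, 96];
    ArrayPlot[life[[-1]]]          (* set the whole row to 1 for the infinite case *)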

If you want to play with large Game of Life constructions, the easiest way is to export them as images and open them in a dedicated Game of Life program, as those can run the game at very high speeds.

What does a random walk on the Sierpinski graph look like?

  1. sierrandomwalk6.png

About what you would expect. I'll leave the stats to those whose laziness is bounded from above, instead of below. What does a "circle" look like on the Sierpinski graph?

  1. siergraphcircles1.png

This brings to light a more important question: What the hell is this? It's like the ugly duckling of radius 3 Sierpinski subgraphs. Just look at it. LOL. But OK, I mightn't myself be the most handsomest chap on the block, and graphs are people too after all.

There's a good chance that subgraph is hideous because it contains one of the 3 end vertices of the graph as a whole, though I'm too lazy to check this. Those vertices are in part pathological because they have degree 2, whereas all the other vertices have degree 4. But really I think the Sierpinski graph itself is contrived. At least, the finite version seems contrived to me.

Perhaps because the Sierpinski pattern might actually be a grid, in the sense that the empty space is an integral part of its characterization a la our infinite quadrilateral descent construction. If we base a graph on the pattern produced by the mod 2 binomial, we get this graph:

  1. siergraph20.png

Which looks like this in a tiered layout. Maybe the "real" Sierpinski graph is a binary tree of this sort, and it's only connected on all sides in the infinite case. And maybe right now I'm making mathematicians bash their heads against walls, which would be awesome.
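
In case you want to fiddle with such a graph, one plausible construction (mine; I don't know exactly how the page built its version):

    n = 32;
    cells = Position[Mod[Array[Binomial, {n, n}, 0], 2], 1];
    (* connect cells that touch, diagonals included (8-neighbor adjacency) *)
    edges = UndirectedEdge @@@
       Select[Subsets[cells, {2}], Max[Abs[Subtract @@ #]] == 1 &];
    Graph[edges]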

The binomial mod 2 construction was one of the approaches that went AWOL during our 3Dification blitz. Does it have a 3 dimensional version? Yes, the multinomial mod 2. The code is almost as pretty as it is for the 2D version:

    array[n_] := Mod[Array[Multinomial, n {1, 1, 1}, 0], 2];

    draw[n_] := Graphics3D[Cuboid /@ Position[array[n], 1],
       Lighting -> "Neutral", Boxed -> False];

Poor Boxed, always being set to False. What happens to our chaos game algorithm if we implement some notion of momentum for the active point?

  1. siermomentum17.png

I didn't find any interesting formulas, but still I managed to get a variety of figures by fiddling with numbers. Probably I would have to use math to find something more interesting.
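
For what it's worth, here's one way to bolt momentum on (my sketch; the page doesn't give its formula, so the decay factor mu is my own choice):

    verts = N@{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}};
    (* mu = 0 reduces to the plain chaos game; mu near 1 overshoots wildly *)
    run[mu_, steps_] := Module[{p = {.2, .3}, vel = {0., 0.}},
      Table[vel = mu vel + (RandomChoice[verts] - p)/2; p += vel, {steps}]];
    Graphics[Point[run[.9, 30000]]]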

The figures have precise symmetries (180 degree rotation), apparently because the particle eventually overshoots far enough that the randomness becomes a small jitter component (because the vertices of the game become very distant), so it accumulates a near-linear velocity/path on its way back. I'm not sure about this explanation though.

Even the ones that look like random walks are symmetric. They aren't standard random walks, rather the particle is overshooting back and forth. This raises the idea of symmetrizing random walks:

  1. sierrandwalk1.png

Awesome possum. What do you see here? When fiddling with momentum I found a simple variation on the logarithmic distance function:

  1. sierrandermod13.png

Or something of a generalization. I blindly parameterized several parts of the formula. Some of the parameters are sensitive, but in any case it's easy to find spiffy images. The hard part is deciding which of them to put here. 3D version:

  1. sier3Drandermod6.png

I love how some of these look like sketches. You'd expect to find this as an illustration in a wizard's journal, but it's actually from a Graphics3D pane in my Mathematica notebook. This opportune box, besides being the final confine of a truculent force, is the result of the clipping I use to keep the point from escaping. I set the clipping as a parameter because it can be used to effect.

This clipping restriction isn't always necessary, and it might not be necessary for all points within a given figure, which raises an interesting prospect: What if we try to identify the points that fly off into infinity and those that don't?

  1. sierranderparam6.png

Here the white points go off into infinity quickly. The black points don't, or at least they take a lot longer to escape. There are certainly patterns here, but they're much less pronounced and computationally harder to reveal than they are for the Mandelbrot set, which is the same idea for Julia iterations. But if you spin a few knobs you can find interesting figures irregardless. The different colors/shades are different escape speeds. It may not be immediately apparent, but these are fractals also.
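
In outline it's the classic escape-time loop (my code; I don't know the exact update rule behind these particular renders, so stepToward is an illustrative stand-in):

    verts = N@{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}};
    stepToward[p_, v_] := p + Log[Norm[p - v] + 1.*^-9] (v - p);
    escapeTime[p0_, max_: 40] := Module[{p = p0, n = 0},
      While[Norm[p] < 1.*^4 && n < max,
       p = stepToward[p, RandomChoice[verts]]; n++]; n];
    ArrayPlot[Table[escapeTime[{x, y}], {y, 2., -2., -.02}, {x, -2., 2., .02}]]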

A lot of fractals have scaled/skewed characteristics, including the Mandelbrot set. I wonder if there's a non-trivial chaos game that can create the Mandelbrot set. Since we're skittering around complex numbers, is there an interesting complex-valued version of the logarithmic chaos game?

  1. sierlogloglog2.png

I don't know. Strictly speaking there wouldn't be a difference, but if you put on a blindfold and chuck logarithms at Mathematica helter-skelter, pretty pictures eventually come out. So I guess the answer is some form of yes. Formally these might still be considered Julia sets.

I've mentioned before that there are a lot of crazy distance functions out there for us to use in our chaos game, and there isn't anything special about the logarithm function. What do plots using other functions look like?

  1. sierrandergen15.png

They look just as awesome, of course. Here we have plots using the sine, cosine, and, you guessed it, Ramanujan tau Dirichlet L-function. And this is using the same basic form as the logarithm version, without us even having to put on Loki's mask and get real buckwild. Speaking of masks.

Usually I don't pick favorites, but I like the cosine image (of course sin/cos are just offsets of each other) because it has an infinite number of folded sheet things that seem to have precise contours. I'll leave the 3Dification as an exercise, but not the how-fast-points-go-to-infinity plot.

The reason it's easy to get all these pictures without trying very hard is that the self-similarity is almost guaranteed by the chaos game algorithm. As we saw earlier, "move toward a point" amounts to the same thing as "make a resized copy of everything toward the perspective of that point."

This is a simplification, but the point is that you essentially get the skeleton of self-similarity for free, or perhaps something a bit more broad. And more abstractly, I think some remarks could be made about the real number system itself.

What does the Sierpinski triangle sound like? One easy interpretation is to consider the L-system construction for the triangle and convert different angles to different frequencies as the turtle makes the triangle:

  1. sierzrp5.png

    mp3  midi

It sounds totally lame. Not surprising since the L-system construction is simple. There is real power here though. This tonifier operates on coordinate lists of any kind, not just those produced by this particular L-system. And if you do things like layer different iterations on top of each other, you can get nifty chord thingies.
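
A minimal tonifier along these lines (my sketch, not the page's source): map the heading of each segment in a coordinate list to a semitone and play the sequence.

    (* assumes consecutive points are distinct *)
    tonify[pts_List] := Sound[
      SoundNote[Round[12 ArcTan[#[[1]], #[[2]]]/Pi], 0.12] & /@ Differences[pts]]
    (* it eats any coordinate list: turtle paths, chaos-game points, whatever *)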

A variation of this would be to determine the waveform directly from the L-system. In a past life I made such a program in C#/WPF. It was around 1200 lines of code. In Mathematica it would be around 30 lines of code, and maybe around 150 lines total with a solid UI around it. It would also be about a million times more powerful/general/flexible. There's a lot of reasons for this, none of which have to do with math.

Luckily for me I don't have to explain. The goddess of finishing projects has finally crawled out of her cave and seen fit to smite her lightning bolt through my ears and across my temporal lobe, for this tune has sated and sedated the voices and quelled their cantankerous echoes. And so ends part 1.

siermasterlock6.png

This page made while drinking Starbucks and listening to CoLD SToRAGE. If you know programming, consider contributing to Mathics. If you're having trouble with the code snippets, try clearing all variables or restarting the kernel. If you're losing the formatting of copy-pasted code, and that annoys you, right click -> insert code cell. For general Mathematica inquiries, visit Mathematica.SE or the Wolfram Community.

With the exception of Mr. Scruples who is under "CC BY NC SA", some of his companion source code stolen from StackExchange under "CC BY SA", and the geometry of our Brahman data cow whose license status is unknown, all images/animations/video/audio/source code on this page are in the public domain. It would be the best thing ever if you made money from my work. :D Also see this post by Vitaliy Kaurov for some fun info.

Finally, if you're interested in sending me a message, hit me up at the developer email on my little calculator app page. As I'm generally busy trolling Twitch chat I might not be able to get back to you, but if such possible asymmetry doesn't deter you, feel free!
