My Journey into Fractals

Hi, I’m Greg, and for the last two years I’ve been developing a 3D fractal exploration game that started as just a “what if” experiment.

I would describe myself as a technical artist, meaning I am bad at both arting and coding. I had some experience with shader programming, and I love unusual, experimental technological and artistic decisions.

One day I was looking at 3D fractals on Shadertoy and decided to write my own fractal renderer, but inside a game engine: it’s more convenient in the long run than Shadertoy, and how cool would it be to have fractals in a game engine? Since 2014 I have used the Urho3D game engine for my home projects. It’s an open-source engine with a deferred shading render path (among a couple of others). Deferred shading might not be as cool as it was in 2010, but I still love it, and it would let me light my fractal with hundreds of lights. Urho3D also has easy-to-set-up HDR auto-exposure and bloom, and shaders are written in plain HLSL or GLSL (I use OpenGL). Surely it was going to be a fun little weekend project!

[Image: Oh wow, it’s working.]
[Image: Z-clipping with regular polygonal geometry.]
[Image: Ugly, noisy normals and lighting.]

When I introduced lighting, it was super noisy: with an infinite number of tiny details, the aliasing on the normals was intense. I can’t have MSAA with deferred shading, but luckily there are a couple of tricks in raymarching to make sure you are not resolving details smaller than a pixel, which helps a lot. The picture below is much smoother.
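Schematically, the trick looks something like this (a minimal sketch, not my real shader; `sceneSDF` is a stand-in for the fractal distance estimator, and `pixelAngle` is roughly the vertical FOV divided by the screen height in pixels):

```glsl
float sceneSDF(vec3 p); // fractal distance estimator, defined elsewhere

// Stop marching once the distance estimate falls below the pixel footprint at
// that depth, so sub-pixel details are never resolved (and never alias).
float march(vec3 origin, vec3 dir, float pixelAngle)
{
    float t = 0.0;
    for (int i = 0; i < 256; i++)
    {
        float d = sceneSDF(origin + dir * t);
        // The footprint of a pixel grows linearly with travelled distance.
        if (d < t * pixelAngle)
            break;
        t += d;
    }
    return t;
}
```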

[Image: Bunny-shaped hallway (its location and formula are lost forever).]

I had several ideas for optimizing the fractal raymarching. It seemed wasteful to me that there is a ray for each pixel when neighboring rays basically trace the same path. The obvious solution would be to somehow combine their efforts into fewer rays and let them diverge only for the last bit of their travel. My first idea was to draw a grid of quads on screen, march the space in a vertex shader, then finish the job in a pixel shader. It was a stupid idea: I quickly realized it’s much easier to set up lower-resolution depth buffers than to fiddle with a polygonal grid. I just needed to edit renderpath.xml, no coding.

[Image: 1/64-resolution depth vs. full resolution.]

So I marched thick rays at a lower resolution, then read the result and continued the ray path at a higher resolution.
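One refinement pass looks roughly like this (a hedged sketch of the idea, not the engine’s actual code; `prevDepth` and `coneAngle` are assumed names):

```glsl
float sceneSDF(vec3 p); // fractal distance estimator, defined elsewhere

uniform sampler2D prevDepth; // ray distances written by the coarser pass
uniform float coneAngle;     // pixel footprint angle at THIS pass's resolution

float refineRay(vec3 origin, vec3 dir, vec2 uv)
{
    // Start where the thicker (coarser) ray stopped, not at the camera.
    // A robust version first backs off by the coarser pass's cone radius.
    float t = texture(prevDepth, uv).r;
    for (int i = 0; i < 32; i++) // far fewer steps are needed per pass
    {
        float d = sceneSDF(origin + dir * t);
        if (d < t * coneAngle) // tighter threshold than the previous pass
            break;
        t += d;
    }
    return t; // written out and consumed by the next, finer pass
}
```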

[Image: Thicker low-res rays reach infinity, so empty parts of the image are skipped at higher resolutions.]

I was sure this method was too obvious to have been invented by me, but I never knew the right words to google. Only 1.5 years later did I find out that the technique is called cone marching (referring to the fact that the rays get thicker with distance), and that it was described in a 2012 paper by the demo group Fulcrum.

That paper is still a great and very detailed description of this technique, especially the last part, where they talk about ways to squeeze out more performance and detail by making trade-offs. There are lots of ways to cut corners, and it really just comes down to which artifacts you find tolerable.

I ended up using four low-res passes (1/64, 1/32, 1/8, 1/2) and finally a full-res pass, which makes only 15 ray steps at most. On my GTX 960 it runs at 40–60 fps at 1080p. The bottleneck is, of course, pixel shader instructions for the fractal rendering, plus overdraw and G-buffer bandwidth for the deferred shading, which means it scales pretty badly with resolution. The opposite is also true: at lower resolutions like 720p or 540p you can run it on pretty old discrete GPUs.

There is still a lot of stuff to improve and try. I’m sure my setup is far from perfect, even though I have revisited and refined it several times. What surprised me the most is how much you can achieve by randomly swapping stuff around, adding “magic numbers”, and just trying things and observing the results, instead of figuring out the most mathematically correct and academically valid method.

Here is how everything looked after three weeks:

[Video: fractal with deferred shading, HDR and bloom.]

One thing that bothered me at this point was the overly high contrast on very small details. It shattered the image and made the shapes hard to “read”.


My plan for fixing this consisted of three items:

  1. Fake GI, to somehow get rid of the pitch-black form shadows (core shadows) right next to the light source.
  2. Fake cast shadows, because real ones are out of the question, and I actually never tried them (maybe someday).
  3. Light scattering, to further pull the shattered pieces together. And just like with the shadows, it had better be a fast approximation rather than brute force.
[Image: First attempt at shadowing: simple AO multiplied with the lighting result. It already makes the shapes easier to read.]

[Image: Half-Lambert diffuse model.]

For faking GI I tried the good ol’ half-Lambert. I ended up using a mix of the two: half-Lambert right next to a light source, gradually turning into regular Lambert with distance.
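In shader terms the blend looks something like this (a minimal sketch of the idea; the falloff shape and the names are assumptions, not my exact code):

```glsl
// Near the light, the wrapped half-Lambert term keeps core shadows from going
// pitch black; with distance it fades back to plain Lambert.
float fakeGIDiffuse(vec3 n, vec3 lightDir, float distToLight, float lightRange)
{
    float ndotl = dot(n, lightDir);
    float lambert = max(ndotl, 0.0);
    float halfLambert = ndotl * 0.5 + 0.5; // wrapped, never fully black
    halfLambert *= halfLambert;            // classic squared half-Lambert
    float k = clamp(distToLight / lightRange, 0.0, 1.0); // 0 near, 1 far
    return mix(halfLambert, lambert, k);
}
```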

Next, volumetric light scattering: it really ties the room together. At first I tried to fake it with simple math, because I’m the guy who always tries to fake everything with simple math.


It looked okay from a distance, but it didn’t behave properly when the camera flew inside the light volume. Also, there was no way to do spotlights.

Around that time I tried to color my fractal, and then didn’t try again for a year.

My simple math had failed, so I had to find a better way to do light scattering, and I found the In-Scattering Demo by Miles Macklin.

[Image: Aww yiss! The real math, done by someone who can actually math.]
[Image: Mmmm, and now I can do spotlights.]
[Image: And it handles the camera inside a light volume perfectly.]
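The heart of it is a closed-form integral: the in-scattered light from a point source along a ray has an arctangent antiderivative. Here is my transcription of that standard result (variable names are mine, and this ignores the demo’s spotlight and falloff details):

```glsl
// Integrates 1 / |origin + dir*t - lightPos|^2 for t in [0, rayLength].
// With q = origin - lightPos, the denominator is t^2 + 2*b*t + c, whose
// antiderivative is atan((t + b) / h) / h, where h^2 = c - b^2.
float inScatter(vec3 origin, vec3 dir, vec3 lightPos, float rayLength)
{
    vec3 q = origin - lightPos;
    float b = dot(dir, q);
    float c = dot(q, q);
    float h = sqrt(max(c - b * b, 1e-4)); // perpendicular distance to the light
    return (atan((rayLength + b) / h) - atan(b / h)) / h;
}
```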

I had no experience with C++ before, and at this point I had to perform two dives into the Urho3D source to hack my lighting, each time editing just a couple of lines. It was very scary.

I also modeled a submarine (rocket-powered), because volumetric spotlights and submarines are a perfect fit for each other.

[Images: Mandelbox formula.]

Now to the shadows.

Here is another hacky hack that actually kind of worked. For the fake shadows I first calculated bent normals. They are like regular normals but smoothed out, while the geometry stays the same (this is easy with raymarching and distance fields: it’s just like calculating regular normals, but with a larger sampling radius).
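For context, distance-field normals are usually central differences of the field, and the same gradient taken with a much larger epsilon gives the bent normal. A sketch, with `sceneSDF` again standing in for the distance estimator:

```glsl
float sceneSDF(vec3 p); // fractal distance estimator, defined elsewhere

// Central-difference gradient of the distance field. A tiny eps gives the
// regular shading normal; a large eps averages out small detail and points
// toward open space, i.e. a bent normal.
vec3 fieldGradient(vec3 p, float eps)
{
    vec2 e = vec2(eps, 0.0);
    return vec3(sceneSDF(p + e.xyy) - sceneSDF(p - e.xyy),
                sceneSDF(p + e.yxy) - sceneSDF(p - e.yxy),
                sceneSDF(p + e.yyx) - sceneSDF(p - e.yyx)) / (2.0 * eps);
}

// vec3 n     = normalize(fieldGradient(hitPos, smallEps)); // shading normal
// vec3 bentG = fieldGradient(hitPos, largeEps);            // bent, unnormalized
// vec3 bentN = normalize(bentG); // its pre-normalization length returns later
```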

[Image: Regular lighting, but with bent normals.]

I added a new G-buffer texture to store bent normals in.

[Image: Note the teapots: I calculate bent normals for polygonal geometry too.]

Then each light multiplies the lighting from the two normals together. Note how the small beads on the side of larger forms facing away from the light do not receive any light.


Here is an ON/OFF comparison. Not bad for one extra dot product.

[GIF: ON/OFF comparison of the bent-normal shadows.]
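That “one extra dot product” amounts to roughly this in the light shader (sketched from the description above, not pasted from the engine):

```glsl
// Regular diffuse, attenuated by how much the large-scale surface orientation
// (the bent normal) faces the light: beads tucked behind a big form go dark.
float diffuse = max(dot(n, lightDir), 0.0) * max(dot(bentN, lightDir), 0.0);
```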

There are problems, of course, mostly with polygonal models. They can receive these shadows, but they can’t cast them or have their own. Received shadows are accurate only when the model is close to the fractal. Pesky polygons, they are no match for superior distance fields! And there are places where this method just gives up completely and shows total rubbish.

But wait, have I told you that bent normals come with almost-free AO? That’s right! I already have the general direction to open space (which is essentially what a bent normal is), and I noticed that the length of the unnormalized bent normal already kind of looks like AO:

[Image: This AO is 100% free!]
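The intuition: the gradient of a clean distance field has length one, and the large-radius differences shrink wherever nearby geometry pulls the field down. So the occlusion term is just (again, a sketch):

```glsl
// Length of the unnormalized bent-normal gradient: ~1.0 in open space,
// smaller in creases and caves. Used directly as an occlusion factor.
float bentAO = clamp(length(bentG), 0.0, 1.0);
```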

I experimented a lot with different ways to calculate AO with as few distance-field evaluations as possible; my current method uses only one extra formula tap. I decided that it’s much cooler to use it for very large-scale AO, like dark caves you can walk into, instead of small-scale stuff like stones and cracks. Here is a GIF of me changing the scale of the AO:

[GIF: changing the scale of the AO.]

And here are more recent images of my AO. Not bad, considering it is super cheap, fully dynamic and not view-dependent (unlike SSAO).


Yeah, there are artifacts and places where it doesn’t work as well. You may spot some weirdness in the images above.

I also used bent normals for my Lambert/half-Lambert mixing. Now I would call it fake GI. It’s barely noticeable (as any proper GI should be), but it costs me nothing.

[GIF: Fake GI, also free.]

Next stop: sky and atmosphere.

I made a cubemap and used it for image-based ambient lighting. Again, instead of reading smart people’s papers, I just challenged myself to hack it with a single cubemap read. I tried many variants: normals and bent normals, different mip levels, AO dependency.


I used the same cubemap for the sky and the fog color: the closer the surface, the smaller the cubemap mip I sample. This trick can make very pleasant colorful fog and gradually blend it into the skybox. The skybox (cubemap) should not have small high-contrast details; it should be all washed out and blurry, which is fine for my moody, gloomy atmosphere with volumetric lights. It also lets me keep the cubemaps small (256 px).
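Sketched out, the fog read looks something like this (my reconstruction of the trick; the density curve and the names are assumptions):

```glsl
uniform samplerCube envMap; // the same cubemap that draws the skybox
uniform float maxMip;       // index of the blurriest mip level
uniform float fogDensity;   // tuning constant

// Near surfaces sample a blurry mip (soft colored fog); distant ones converge
// to mip 0, so the fog blends seamlessly into the sharp skybox.
vec3 applyFog(vec3 surfaceColor, vec3 viewDir, float dist)
{
    float fog = 1.0 - exp(-dist * fogDensity); // 0 up close, 1 at infinity
    vec3 fogColor = textureLod(envMap, viewDir, mix(maxMip, 0.0, fog)).rgb;
    return mix(surfaceColor, fogColor, fog);
}
```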


The abstract cubemaps were produced in Blender, then brought into the engine with 16-bit precision, which is really important if you want to work with high dynamic range:

[Images: examples of the abstract cubemaps.]

Coloring.

Back in 2013 I was building a prototype of a mobile tiny-planet suborbital flight simulator. It never got past the landscape rendering, but I really liked the texturing technique I came up with. It uses only two textures. The first is a 1024 px tiled detail texture with different forms of Voronoi pattern stored in different channels. The second is a color lookup texture, where the vertical axis represents altitude and the horizontal axis represents cavity, so you can paint a snow line, layered rock formations, and different colors of soil and vegetation. The altitude and cavity information is stored in vertex colors, and the detail texture just shifts those values: one Voronoi pattern shifts the lookup vertically, another horizontally.
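A sketch of that lookup (uniform names and the shift strength here are placeholders, not the original code):

```glsl
uniform sampler2D detailTex; // tiled Voronoi patterns, one per channel
uniform sampler2D colorLUT;  // x axis: cavity, y axis: altitude
uniform float tiling;        // detail texture tiling factor

// altitude/cavity come from vertex colors (or, later, from fractal formula
// values); the detail patterns nudge the lookup coordinates around.
vec3 terrainColor(vec2 uv, float altitude, float cavity)
{
    vec2 detail = texture(detailTex, uv * tiling).rg;
    vec2 lutUV = vec2(cavity   + (detail.r - 0.5) * 0.1,  // horizontal shift
                      altitude + (detail.g - 0.5) * 0.1); // vertical shift
    return texture(colorLUT, clamp(lutUV, 0.0, 1.0)).rgb;
}
```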

[Images: Thank you, Georgy Feodosevich, I really appreciate your contribution to computer graphics!]
[Image: Examples of color lookup textures.]

Now, my fractal coloring is basically the same, but instead of vertex colors I use various values pulled out of the fractal formula calculation. I had hoped to find a good unified way to color fractals, but gave up on that idea in favor of artistic freedom, so I basically write texturing code for each fractal.


Physics.

I calculate the physics on the GPU with a pixel shader, using float4 textures as input and output: positions and sphere radii go in, normals and distances to the surface come out. Sounds simple; I just had to bite my elbow and learn C++.
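The query shader boils down to something like this (a sketch of the scheme as described; the packing and names are assumptions, and `sceneSDF`/`fieldGradient` are the stand-ins from the earlier sketches):

```glsl
float sceneSDF(vec3 p);                 // distance estimator, defined elsewhere
vec3 fieldGradient(vec3 p, float eps);  // central-difference gradient

uniform sampler2D queryTex; // per texel: xyz = probe position, w = sphere radius

out vec4 result; // xyz = field normal, w = signed distance to the surface

void main()
{
    vec4 q = texelFetch(queryTex, ivec2(gl_FragCoord.xy), 0);
    float d = sceneSDF(q.xyz) - q.w;               // penetration when negative
    vec3 n = normalize(fieldGradient(q.xyz, q.w)); // push-out direction
    result = vec4(n, d); // read back on the CPU to resolve the contact
}
```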

The sphere is my only primitive for fractal collisions, and even sphere collisions are not very precise. But hey, I can finally interact with fractals! They feel almost solid, maybe a bit squishy. Look, doesn’t it look fun?

For shapes that are not very spherical, I just use a compound of several spheres:

Yeah, take that, stupid submarine!

Apart from being highly approximate, this collision check is not in sync with the regular physics. Bullet physics runs at a fixed rate on a separate thread, while the fractal collisions are updated along with the rendering; they can even run at different frame rates.

Knowing all that beforehand, I wasn’t sure how good my collisions were going to be or what kind of game I would be able to build with them. To compensate, I was thinking about a very slow and sluggish vehicular game, where the fractal acts like soft silt and your submarine just slows down and gets stuck in it. But as soon as I saw those bouncing teapots, I immediately wanted to be a first-person character and bunny-hop around the fractals. So I made a character controller:

It uses just two spheres: one for the body and one for the legs. I’m thinking about adding raycasts and spherecasts to my fractal collision system, but I’m too lazy to do it, so I will just try my best to avoid raycasts for as long as possible.

What kind of game am I going to make?

There is still a lot to talk about. I haven’t touched on many other physics and graphics challenges I faced, tools and engine support, fractal formulas and how I find new ones, fractal and environment animation, my struggles with audio, or what I eat for breakfast. But this post is getting way too long and I’m running out of time, so I will leave all that for next time and talk a bit about plans.

There is a lot of stuff I’m not sure about; the highly experimental nature of this project makes it difficult to imagine the end result. I do have a plan, and I know the direction I want to go, I just can’t predict where exactly it will get me. I’m discovering.

  1. I want to make a game about discovery, exploration and trailblazing, about adapting oneself and adapting the environment. I want it to be moody, atmospheric and inconceivable.
  2. I want to draw inspiration from the real world, from humanity’s endeavors to explore and develop challenging environments like the deep sea, the Arctic or space. I want fractals to be another such challenging environment, so much so that even transportation and navigation might be a challenge.
  3. I don’t want it to be just [familiar game genre] but in fractals. I want to explore new game mechanics that are only possible, or work really well, in huge generated distance-field-based environments.
[Images: Caspian Sea shelf oil production (1964); Kapitan Sorokin icebreaker (1987). TASS Archive.]
[Images: Trieste II (DSV-1) deep submergence vehicle; Kharkovchanka Soviet Antarctic exploration vehicles.]
[Images: Weather Man photo story by Evgenia Arbugaeva.]
[Images: man-made structures in a fractal universe.]

At this stage I’m not even sure whether I’m going to make a content-driven or a systems-driven game. Both appeal to me in different ways, and I might try to do a bit of each. For now I will just continue to experiment, and I will try not to invest too much time in specific features I might change later.

I will wrap up with a couple of videos. The first one is the trailer I made back in April:

And here is a compilation of my whole screenshots folder:

Thank you for scrolling all the way down here. If you want to follow my wanderings, find me on Twitter, YouTube or Itch.io.

