How to make dynamic point clouds in Unreal Engine | The Production Process of the Gardens of Kowloon

Production Overview

This project was a lot of firsts for me: first VR project, first Houdini-to-Unreal pipeline, and first Niagara simulation. It was also the largest photogrammetry scan I had ever attempted (over 5,000 photos!). I made a ton of mistakes, learned a lot, and hope to share some of that with anyone interested. For a brief overview, the steps went roughly like this (with a lot of backtracking and mixing of steps along the way as I learned better techniques):

  1. R&D
  2. Capturing photos
  3. Processing model in Reality Capture
  4. Cleanup and particle creation in Houdini
  5. Exporting from Houdini to Unreal Engine
  6. Creating dynamic point clouds with Niagara
  7. Final tweaks

Here’s a visual representation of how much time was spent on each portion (with a total of 50-60 hours of work).

1. R&D

This abstract style did not come from nowhere. My initial introduction to the art of point clouds was the Clove Cigarette music video done in Unreal Engine. I’ve also been mesmerized by RubenFro’s Unity projects with Lidar scans. These two formed the basis of my understanding of point cloud art and manipulation. I wanted to combine and evolve both techniques by creating a dynamic system that also gave three-dimensionality to each point/particle. While I knew Unity had both of these capabilities, I had zero experience with that software, and resolved to find a way within Unreal Engine. To my knowledge this specific effect hadn’t been done before.

This goal left me with two big unknowns: how to achieve point cloud movement in Unreal Engine and how to render point clouds as physical objects (rather than the default two dimensional circle).

After some searching online I gained confidence from the wonderful aesthetic and three-dimensionality of this video by Lino Thaesler. I also saw potential for movement with this video by Nacho Cossio. Bouncing between these inspirations and tests with the Lidar plugin in UE, I realized there were many limits to that system of point clouds. The plugin was easy to use, but offered limited manipulation. There must be another way, I told myself. And after a few more days of frustration I came across this game-changing video from Emi Unalan… dynamic movement, shader changes, AND three-dimensionality (my mind was blown!!). It wasn’t the exact effect I wanted, but these were definitely the missing pieces to the puzzle. A quick search in the comments led me to the biggest breakthrough in this project: the Houdini to Niagara plugin and pipeline.

There it was.

Niagara.

The one tool built for particles in Unreal Engine. How had I not considered it before? Well, likely because I had only used it once, two years ago, before deciding my time would be better spent learning Houdini’s particle system. It appeared that now was the time to truly learn this workflow.

In initial tests I tried simulating the particle movement in Houdini, exporting the alembic cache file into Niagara, and having that animation be triggered to create the illusion of a dynamic system. But I quickly found that the time-based alembic file was too intensive for any computer to run (there were hundreds of thousands of points!). So I switched to creating one initial “frame” of the particles in Houdini and extrapolating on it with Niagara’s own dynamics instead. With this rough framework in mind I went forward with production!
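To give a sense of why a per-frame cache blows up so quickly, here is a rough back-of-the-envelope estimate. Every number in it (point count, attribute count, frame rate, duration) is an assumption for illustration, not a measurement from this project.

```python
# Rough back-of-the-envelope estimate of a per-frame point cache size.
# All numbers below are assumptions for illustration, not measured values.

num_points = 300_000          # "hundreds of thousands of points"
floats_per_point = 10         # e.g. position (3), color (3), normal (3), life (1)
bytes_per_float = 4
fps = 30
seconds = 120                 # a couple of minutes of looped animation

bytes_per_frame = num_points * floats_per_point * bytes_per_float
total_bytes = bytes_per_frame * fps * seconds

print(f"per frame:   {bytes_per_frame / 1e6:.1f} MB")   # ~12 MB
print(f"whole cache: {total_bytes / 1e9:.1f} GB")        # ~43 GB
```

A single frame, by contrast, stays in the tens of megabytes, which is why handing Niagara one starting frame and letting it do the motion was so much lighter.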

2. Capturing photos

First, I did some quick test scans with the Polycam Lidar App on my phone to map out the area to capture. After bringing this rough model into UE and running around in it, I clarified which areas of the massive park I wanted to focus on.

A few days later, I walked through the garden early one morning (in the magic hour before sunrise, so there wouldn’t be as many people and to limit harsh shadows) and took 5,000+ 4K photos with a Sony A7S III. The whole scan took about 30 minutes of dedicated shooting and walking around. I quickly realized that shooting in the lackluster pre-dawn light made it incredibly hard to get clear, properly exposed images. To let me move quickly the shutter speed was a fast 1/200, and to capture detail throughout the whole image the aperture was stopped down to f/11, which meant the ISO had to be an incredibly high 51,200 for a remotely acceptable exposure. Unfortunately, this did lead to noisier images. The other mistake I made was to expose for the wider shots of the pathway without double-checking whether there would be enough detail underneath the shadows of the roof (which ended up quite dark).
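For anyone curious why the ISO landed that high, here is a quick sanity check using the standard exposure-value formula. The conclusion that pre-dawn light sits around EV 5-6 (at ISO 100) is my estimate, not something measured on the day.

```python
import math

# Exposure value implied by the chosen camera settings: EV = log2(N^2 / t).
aperture = 11          # f/11, stopped down for depth of field
shutter = 1 / 200      # fast enough to keep walking between shots

ev_at_settings = math.log2(aperture**2 / shutter)   # ~14.6 EV

# Raising ISO from 100 to 51,200 buys log2(51200 / 100) = 9 stops of sensitivity.
iso_stops = math.log2(51_200 / 100)

# So the scene only needs to supply ~14.6 - 9 = ~5.6 EV at ISO 100,
# which is roughly the brightness of dim pre-dawn light.
required_scene_ev_iso100 = ev_at_settings - iso_stops

print(f"EV of settings:      {ev_at_settings:.1f}")
print(f"ISO gain (stops):    {iso_stops:.1f}")
print(f"scene EV at ISO 100: {required_scene_ev_iso100:.1f}")
```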

In a future project, I’d like to experiment with manually adjusting the aperture as I walk around to brighten up the darker portions of spaces. I wonder how Reality Capture would handle that difference in levels, and whether it would improve its ability to align cameras for those spaces, or hamper it since there would be less consistency with the wider angles.

3. Processing model in Reality Capture

Break out your chocolate milk and vibey music playlists, because here comes the fun part: tending to your computer for several days as you politely remind it of the proper alignment of images!

Unfortunately my previous technique of dragging in the images and hitting “start” did not produce a good model with the base settings. And thus began the long journey of optimization and control points as I learned the ins and outs of this wonderful software.

The initial camera alignment pass produced over 1,000 separately aligned components, with the largest containing only 500-700 images. That is an absurdly large number of unconnected pieces; ideally you have only 2-5 components to stitch together. I attribute this largely to the underexposed areas beneath the buildings and the high ISO. Over 10+ hours of work I manually connected most of the components together with 30 control points, ending with one full component of 3,000 aligned cameras.

Some tricks I learned through this process:

  1. Before doing any manual work, hit “align images” multiple times in a row, as it will actually learn from previous alignments to improve subsequent ones. Do this until you stop seeing a difference in the number of aligned cameras.
  2. If you must resort to control points, don’t be full of yourself and assume tutorials aren’t necessary. Take a few minutes to surf YouTube. One big tip I learned: as you add control points, you can very quickly run through the images attached to a point by right-click-holding on the point and pressing the down arrow key. This keyboard shortcut genuinely saved me so much time.
  3. For better textures, change the texture/unwrap settings to maximal intensity and increase the resolution.

Once enough images were aligned, the generated model consisted of 341 MILLION triangles, which I simplified to 30 million before bringing it into Houdini.

I had an initial vision of the player walking around underneath a covered section of the path, which had a lovely roof and pillars on either side. In the initial Lidar scan these looked great, but once I went through the photogrammetry process, the closely spaced pillars did not come across clearly. Part of this, I believe, was due to the inherent differences between Lidar, which excels in smaller, darker spaces, and photogrammetry, which excels in larger spaces you can see from many angles. The covering and pillars of the pathway meant one could only take pictures from the pathway itself, and not with the traditional photogrammetry method of wide angles in a circular pattern. The underexposed nature of these areas likely did not help either. So I pivoted and pushed onward with a more limited map that followed the inner open pathways of the garden.

4. Cleanup and particle creation in Houdini

With the model exported, I brought it into Houdini to clean it up and distribute the particles/points. The key part of this process was using a ‘measure’ node to calculate the area of each polygon. The area is essentially a representation of the detail level of the model, and I used this value to automatically delete larger, low-detail sections and displace the particles in less detailed areas. This created a more randomized edge distribution for the particles.
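For anyone who wants a concrete sense of that idea, here is a minimal Python SOP sketch of the same logic. The area threshold and jitter amount are made-up values, and my actual network used a Measure node and an Attribute VOP rather than Python.

```python
# Python SOP sketch: cull low-detail (large-area) polygons and jitter the rest.
# Threshold and jitter values are illustrative, not the ones used in the project.
# (hou is available implicitly inside a Python SOP.)
import random

node = hou.pwd()
geo = node.geometry()

AREA_THRESHOLD = 0.01   # polygons larger than this are treated as "low detail"
JITTER = 0.02           # small random displacement for points on mid-sized polys

to_delete = []
for prim in geo.prims():
    area = prim.intrinsicValue("measuredarea")  # same value a Measure node reports
    if area > AREA_THRESHOLD:
        to_delete.append(prim)
    elif area > AREA_THRESHOLD * 0.5:
        # Displace the points of less detailed polygons to break up hard edges.
        for vertex in prim.vertices():
            pt = vertex.point()
            offset = hou.Vector3(random.uniform(-JITTER, JITTER),
                                 random.uniform(-JITTER, JITTER),
                                 random.uniform(-JITTER, JITTER))
            pt.setPosition(pt.position() + offset)

geo.deletePrims(to_delete)
```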

The two important final steps of this stage were exporting the points to Niagara and exporting a further simplified version of the model, which would later help build a collision mesh.

Full node tree
the inside of the AVOP node

5. Exporting from Houdini to Unreal Engine

Okay, now for the big step: getting to Unreal Engine! Most of this process and the subsequent workflow was informed by this fantastic tutorial series covering how to get a particle simulation from Houdini into Niagara. One major difference in my case was that I didn’t want to export multiple frames of a simulation, but rather a single frame to serve as the starting point for Niagara. On the Houdini Labs Niagara ROP node, simply change the setting to “Single Frame With ‘Time’ Attribute”. Later, in Niagara, we’ll do some tweaking to make this work.

6. Creating dynamic point clouds with Niagara

This was the most challenging and fun aspect of the project. For the basics of Niagara dynamics, I was helped a ton conceptually by this tutorial by PrismaticaDev. Beyond that I did a ton of experimenting. I won’t even try to go through every technical detail of what I changed and adjusted, but here’s a quick overview of the unique challenges and solutions of this project:

Making a single “frame” of input last indefinitely in Niagara

  • Initially I had wrongly assumed I needed to export multiple frames from Houdini to have the effect last that long (looped) in Niagara. After weeks of struggling with unnecessarily huge files I found the simple solution: change the loop behavior to infinite, increase the loop duration and point lifetime attribute to a high number (600 for me), and MOST IMPORTANTLY go to ‘particle state’ and uncheck all three values there. This allowed the particles to spawn from the single Houdini frame with its time attribute, and ultimately allowed for the kind of interactive dynamics the project needed.

Visual look (scale, render as sphere, material)

the incredibly simple material
  • Randomize the scale (this could be driven by a Houdini value, but I found it more flexible to multiply by a randomized float value within Niagara). Add a mesh renderer and set it to a sphere. Add a material override and create a custom material with the particle color fed into the color. Initially, I was split on whether the material should be emissive or not. I loved the physical shadows of a non-emissive look, but eventually decided it was too dark and required too much extra lighting, which took away from the aesthetic. As a compromise I gave it an emissive value of 0.5.

Dynamics (calculating player position and adding forces)

  • This was perhaps the most complicated portion of the process. I largely pulled from this tutorial by Andreas Glad to have the player interact with the particles. It required calculating the distance from each particle to the VRPawn, remapping that value to a 0-1 range, and multiplying it by a curl force strength. This created a slight disturbance of the points along the player’s path (a rough sketch of the math follows this list).
  • For the special areas where the particles spring up, I repeated the method above with a point force that has a larger upwards velocity, and also multiplied the strength by a box collision check that detected when the player entered a certain region around the area.
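Conceptually, the per-particle push boils down to distance falloff × noise vector × strength. Here is a small Python sketch of that math; the curl noise is replaced with a random-vector stand-in, and the radius and strength numbers are arbitrary, since in the actual project this all happens inside Niagara modules.

```python
# Sketch of the per-particle disturbance force: distance falloff * noise * strength.
# The "curl noise" here is a random-vector stand-in; Niagara's curl noise force
# does the real thing. Radius and strength values are arbitrary examples.
import math
import random

def remap_clamped(value, in_min, in_max, out_min, out_max):
    """Linearly remap value from [in_min, in_max] to [out_min, out_max], clamped."""
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

def fake_curl_noise(position):
    """Placeholder for a real curl noise lookup: a repeatable pseudo-random vector."""
    random.seed(hash(position))
    return tuple(random.uniform(-1.0, 1.0) for _ in range(3))

def disturbance_force(particle_pos, player_pos, radius=1.5, strength=50.0):
    dist = math.dist(particle_pos, player_pos)
    # Full strength at the player, fading to zero at `radius`.
    falloff = remap_clamped(dist, 0.0, radius, 1.0, 0.0)
    noise = fake_curl_noise(particle_pos)
    return tuple(n * falloff * strength for n in noise)

# Example: a particle half a meter from the player gets a partial push.
print(disturbance_force((0.5, 0.0, 0.0), (0.0, 0.0, 0.0)))
```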

Scaling size over time

  • The final touch of having the particles scale into the scene was done with this relatively simple scale-by-curve method (a conceptual sketch follows below).
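As a rough stand-in for what that curve does, here is a tiny sketch where a smoothstep-style ramp takes each particle’s scale from 0 to full size over its first few seconds. The ramp duration and curve shape are illustrative choices, not the project’s actual values.

```python
# Sketch: ramp particle scale from 0 to full size over the first few seconds,
# roughly what a scale-by-curve module does with particle age. The ramp
# duration and curve shape here are illustrative.

RAMP_SECONDS = 3.0

def smoothstep(t):
    """Ease-in/ease-out curve on [0, 1], clamped outside that range."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def particle_scale(age_seconds, base_scale=1.0):
    return base_scale * smoothstep(age_seconds / RAMP_SECONDS)

for age in (0.0, 1.0, 2.0, 3.0, 5.0):
    print(f"age {age:.1f}s -> scale {particle_scale(age):.2f}")
```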

7. Final tweaks

Other than the particle forces, the biggest challenge of this project was getting a collision mesh working properly. First, I exported a simplified mesh from Houdini, aligned the mesh to the points in UE, hid it from the renderer, and attempted to generate collision with a nav mesh volume. No matter how detailed I made the convex hull collision, it was never enough to capture all the twists and turns of the paths. After a few weeks of work I discovered that I could manually model the collision mesh in Blender (with walls to stop the player from walking too far) and set the ‘collision complexity’ to ‘use complex collision as simple’. The manual modeling might not be necessary if the Houdini mesh is simplified enough, but given the order in which I discovered these factors, it didn’t make sense to go back and test that.

After adding color grading, volumetric fog, lighting, sound effects, a smooth-movement VR add-on, and some other things I may be forgetting now, the project was finished!

I hope this was helpful to anyone attempting a similar effect. This was a long and exciting process for me. I’ve honestly wanted to attempt something like this for years, ever since the Clove Cigarette music video.

If you do create anything in the exciting point cloud art space or have any questions feel free to email me at igazmararian@gmail.com. Hope you have a wonderful rest of your day 🙂
