Jan 13, 2026
MUHAMMAD GHIFARY
For decades, the fundamental atom of computer graphics has been the triangle: the simplest 2D shape that can define a flat surface, requiring only three points (vertices) in 3D space. From the original Doom to the latest Unreal Engine 5 demos, we build 3D digital worlds by stitching together rigid polygons. But reality isn’t made of hard edges and vertices; it is fuzzy, complex, and volumetric.
In 2020, Neural Radiance Fields (NeRFs) promised a way out, using neural networks to ‘dream’ photorealistic scenes from a set of photographs (Mildenhall et al., 2020). But they came with a heavy cost: agonizingly slow rendering. (I previously wrote a technical article about NeRFs as well.)
Enter 3D Gaussian Splatting (3DGS). Released in 2023, this technique threw out the neural networks and revived an old 90s concept, point-based rasterization, supercharged with the same gradient-based optimization engine that powers deep learning (Kerbl et al., 2023). The result? Photorealism that rivals NeRFs, rendered at a blistering 100+ FPS on consumer hardware.
In this article, we’ll look under the hood to see how millions of fuzzy 3D ellipsoids, Gaussian ‘blobs’ instead of triangles, are rewriting the rules of 3D rendering. We’ll also explore how a full 3DGS pipeline can be implemented with JAX, an open-source framework developed by Google for high-performance numerical computing and large-scale machine learning.
To understand why 3DGS is revolutionary, we must look at how it defines space.
In a traditional triangle mesh, an object is a hollow shell. To render a cat, you stretch a “skin” of texture over a wireframe skeleton. This works great for solid surfaces like walls or cars, but it fails at complex, thin, or semi-transparent structures. Have you ever noticed how video game hair often looks like stiff strips of paper? That is the limitation of the triangle.
3DGS abandons the shell. Instead, it treats the world as a volumetric cloud.
Imagine a point cloud, but instead of tiny, single-pixel dots, every point is a 3D ellipsoid (a stretched sphere). We call these “Gaussians.” Each Gaussian is defined not just by a position, but by a set of learnable parameters that describe its existence in space.
If the Triangle is a piece of origami paper, the Gaussian is a soft, colored snowball.
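To make the “soft, colored snowball” concrete, here is a minimal JAX sketch of one Gaussian’s core learnable parameters: a mean (position), a per-axis scale, and a rotation quaternion, from which a positive semi-definite covariance Σ = R S Sᵀ Rᵀ is built. The function names are my own for illustration; a full 3DGS implementation also stores an opacity and view-dependent color coefficients per Gaussian.

```python
import jax.numpy as jnp

def quat_to_rot(q):
    # Normalized quaternion (w, x, y, z) -> 3x3 rotation matrix.
    w, x, y, z = q / jnp.linalg.norm(q)
    return jnp.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def covariance(scale, quat):
    # Sigma = R S S^T R^T: always a valid (positive semi-definite)
    # covariance, which is why 3DGS learns scale + rotation rather
    # than the covariance matrix directly.
    R = quat_to_rot(quat)
    S = jnp.diag(scale)
    return R @ S @ S.T @ R.T

def gaussian_density(x, mean, scale, quat):
    # Unnormalized falloff exp(-0.5 * d^T Sigma^-1 d): 1 at the
    # center, smoothly fading toward the ellipsoid's edge.
    d = x - mean
    sigma_inv = jnp.linalg.inv(covariance(scale, quat))
    return jnp.exp(-0.5 * d @ sigma_inv @ d)
```

Because every operation here is differentiable, JAX can backpropagate through the density (and ultimately through the rendered image) to optimize each Gaussian’s position, shape, and orientation.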

Figure 1: Illustration of multi-view scene reconstruction with 3D Gaussian Splatting, taken from (Choi et al., 2025)
3DGS enables a new class of product features that were previously impossible due to the quality-versus-speed trade-off. If a product needed photorealism, it used pre-rendered video (non-interactive); if it needed interactivity, it used polygon meshes (which often lack photorealism for complex materials). 3DGS bridges this gap.
Here are specific real-world product functionalities that 3DGS can enable:
1. Next-Gen E-Commerce: "The Unscannable Product" Viewer
Traditional photogrammetry (which produces meshes) fails drastically when scanning objects with transparency, refraction, thin structures, or high reflectivity, limiting 3D product views to matte objects like shoes or furniture. Because 3DGS captures view-dependent appearance directly, it can handle these previously “unscannable” products.
2. Real Estate & Tourism: The "Cinematic" Virtual Tour
Current virtual tours (like Matterport) often rely on 360° panoramas: you can jump from point A to point B, but you cannot “walk” smoothly between them, and mesh-based navigation often looks like a low-poly video game. 3DGS renders smooth, free-viewpoint movement at real-time frame rates.