Normal maps are one of the most powerful techniques in real-time rendering. They allow flat polygon surfaces to display complex surface detail — bumps, grooves, scratches, pores — without adding a single extra triangle to the mesh. A normal map is an RGB image where each pixel encodes the direction a surface faces at that point, and the renderer uses this information to calculate lighting as though the geometry were far more detailed than it actually is.
How Normal Maps Encode Direction
Every pixel in a normal map stores a 3D direction vector packed into three color channels. The Red channel encodes the X-axis (left-right tilt), the Green channel encodes the Y-axis (up-down tilt), and the Blue channel encodes the Z-axis (how much the surface points straight outward). A flat surface facing directly toward the camera has a normal of (0, 0, 1), which in RGB color space maps to (128, 128, 255) — that characteristic blue-purple color you see in every normal map.
The mapping formula is straightforward: each channel stores an 8-bit value from 0 to 255, remapped linearly to the range −1.0 to +1.0 (normal = value / 255 × 2 − 1). A value of 128 corresponds to approximately 0 (no tilt; the exact midpoint, 127.5, is not representable in 8 bits), 0 means fully tilted in the negative direction, and 255 means fully tilted positive. The renderer unpacks these values back to floating-point vectors and uses them in lighting calculations instead of the geometric normal.
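The pack/unpack step can be sketched in a few lines of NumPy. This is an illustrative implementation of the formula above, not taken from any particular renderer; the function names are ours:

```python
import numpy as np

def decode_normal(rgb):
    """Unpack an 8-bit RGB pixel into a unit normal vector."""
    n = np.asarray(rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
    # Renormalize to undo small quantization error from the 8-bit encoding.
    return n / np.linalg.norm(n)

def encode_normal(n):
    """Pack a unit normal vector back into 8-bit RGB."""
    n = np.asarray(n, dtype=np.float64)
    return np.round((n * 0.5 + 0.5) * 255.0).astype(np.uint8)

# The characteristic blue-purple pixel decodes to (almost exactly) (0, 0, 1).
flat = decode_normal((128, 128, 255))
```

Note the renormalization on decode: because 8-bit channels cannot store the exact midpoint, the unpacked vector is slightly off unit length, and shaders typically normalize it before lighting.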
Tangent Space vs. Object Space
Tangent-Space Normal Maps
Tangent-space normal maps are by far the most common type. They define surface perturbation relative to the polygon’s own orientation, using a coordinate system built from the surface tangent, bitangent, and geometric normal. Because the directions are relative, the same tangent-space normal map works on any surface orientation — it can be applied to a floor, a wall, or a curved object and the lighting responds as expected.
Tangent-space maps always appear predominantly blue because most pixels point roughly outward (Z-axis dominant). Small variations in red and green encode the fine surface details.
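The tangent-frame transform described above can be sketched as a small matrix multiply. This is a minimal illustration (function and parameter names are ours), assuming the tangent, bitangent, and normal are unit-length world-space vectors forming the columns of the TBN matrix:

```python
import numpy as np

def tangent_to_world(n_ts, tangent, bitangent, normal):
    """Rotate a tangent-space normal into world space via the TBN basis."""
    tbn = np.column_stack([tangent, bitangent, normal])
    n_world = tbn @ np.asarray(n_ts, dtype=np.float64)
    return n_world / np.linalg.norm(n_world)

# A "flat" tangent-space normal (0, 0, 1) always comes back as the surface's
# own geometric normal, whatever its orientation -- here, a wall facing +X.
n = tangent_to_world((0.0, 0.0, 1.0),
                     tangent=(0.0, 1.0, 0.0),
                     bitangent=(0.0, 0.0, 1.0),
                     normal=(1.0, 0.0, 0.0))
```

This is why the same map works on floors, walls, and curved objects: the stored directions are always interpreted relative to the local surface frame.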
Object-Space Normal Maps
Object-space normal maps store absolute directions relative to the object’s local coordinate system. They appear rainbow-colored because normals point in all directions. Object-space maps are simpler to render (no tangent frame computation) and avoid tangent-space artifacts at UV seams, but they are locked to a specific mesh. Deforming or reusing the mesh breaks the lighting. They are occasionally used for static architecture or terrain.
Normal Maps vs. Displacement Maps
Normal maps only affect lighting — they do not move geometry. This means silhouette edges remain perfectly smooth, and parallax effects are missing. A brick wall with a normal map looks bumpy from the front but has a flat profile at grazing angles. Displacement maps actually move vertices, creating real geometric depth with correct silhouettes and self-occlusion. The trade-off is cost: displacement requires dense tessellation and is expensive in real-time rendering.
For most real-time applications, normal maps provide the best quality-to-performance ratio. Use displacement only for close-up hero surfaces where silhouette accuracy matters, or in offline renderers like Cycles or Arnold where tessellation cost is acceptable.
Generating Normal Maps from Height Maps
A height map (grayscale, white = high, black = low) can be converted to a normal map by computing the surface gradient at each pixel. The algorithm samples neighboring pixels to determine slope in X and Y, then constructs a normal vector from those slopes. The Normal Map tool on Texturize does exactly this — upload any grayscale height map or texture, and it generates a tangent-space normal map with adjustable strength.
The strength parameter controls how steep the perceived bumps are. Lower strength values produce subtle surface variation (good for smooth materials like polished marble), while higher values create deep grooves and pronounced bumps (ideal for rough stone or bark).
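The gradient method and the strength parameter can be sketched together in NumPy. This is a simplified version of the general technique, not the exact algorithm the Texturize tool uses; the Y-axis sign follows the OpenGL convention and may need flipping for DirectX-style engines:

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a grayscale height map (floats in [0, 1]) to an 8-bit
    tangent-space normal map using central-difference gradients."""
    # Slope in Y and X at each pixel; np.gradient handles the borders.
    dy, dx = np.gradient(height.astype(np.float64))
    # Steeper slopes tilt the normal further from straight-out (0, 0, 1);
    # strength scales how pronounced the bumps appear.
    nx = -dx * strength
    ny = -dy * strength  # negate ny instead for DirectX-style green channel
    nz = np.ones_like(height, dtype=np.float64)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return np.round((n * 0.5 + 0.5) * 255.0).astype(np.uint8)

# A constant height map has zero slope everywhere, so every output pixel
# is the flat blue-purple color (128, 128, 255).
flat = height_to_normal(np.full((4, 4), 0.5))
```

Raising `strength` multiplies the X and Y components before normalization, which is exactly why higher values read as deeper grooves: the normals lean further away from the surface.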
Common Mistakes
- Wrong color space — Normal maps must be imported as linear (Non-Color) data, never sRGB. Incorrect color space produces washed-out or exaggerated lighting.
- Flipped green channel — OpenGL and DirectX use opposite Y-axis conventions. If your normal map looks inverted (bumps appear as dents), flip the green channel. Blender and most DCC tools use OpenGL convention; Unreal Engine uses DirectX.
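Converting between the two conventions is a one-channel inversion. A minimal sketch (the function name is ours), assuming an 8-bit RGB image array:

```python
import numpy as np

def flip_green(normal_map):
    """Convert a normal map between OpenGL and DirectX conventions
    by inverting the green (Y) channel of an 8-bit RGB image."""
    out = normal_map.copy()
    out[..., 1] = 255 - out[..., 1]
    return out
```

The operation is its own inverse: applying it twice returns the original map, so it is safe to try when you are unsure which convention a texture uses.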
- Baking at wrong resolution — Normal maps baked from high-poly to low-poly meshes need sufficient resolution to capture fine details. Under-resolved bakes produce blocky-looking normals.
- Overuse of strength — Cranking normal map strength too high makes surfaces look artificially rough and creates visible faceting under specular lighting.
Practical Tips
Combine a base texture from the Brick Generator or Cobblestone Generator with a derived normal map for a complete material setup. In Blender, connect the normal map through a Normal Map node set to Tangent Space. In Unreal Engine, connect it to the material’s Normal input after sampling with a Normal Map expression. Test your normal maps under multiple lighting angles — rotating a directional light around the surface reveals any baking errors or channel issues.