grieferz
PARDON MY FRENCH
Sunday, February 1, 2026
end times
Saturday, January 31, 2026
video games pt 2
license: public domain CC0
A learned 2D-to-3D shading system that could make billboards feel volumetric without actual geometry. This has some really compelling properties.
Why This Could Work Beautifully
The Core Insight
Billboards are "cheap" but look flat because they lack:
- Parallax (solved by 3D positioning, you already have this)
- View-dependent shading (this is what the CNN would fake)
- Silhouette variation (harder, but possible with alpha)
A CNN that takes [albedo, normal_map, light_dir, view_angle] and outputs "how this would look if it were 3D" is essentially learning view-dependent relighting on flat cards.
What Makes This Tractable
Traditional 3D Pipeline:
Millions of triangles → Rasterization → Shading → Pixels
Your Proposed Pipeline:
Hundreds of billboards → Style CNN per billboard → Composited scene
The CNN is still doing "shading," but that shading is learned from artistic examples rather than physically computed.
Conceptual Architecture
┌─────────────────────────────────────────────────────────────────────────────┐
│ BILLBOARD STYLIZATION ENGINE │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ASSET AUTHORING (Offline) │
│ ───────────────────────── │
│ Each billboard asset includes: │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Albedo │ │ Normal Map │ │ Height/Depth │ │
│ │ (RGBA) │ │ (tangent) │ │ (8-bit) │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │ │ │ │
│ └────────────────┼────────────────┘ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ Per-Billboard Runtime Input │ │
│ │ │ │
│ │ • Albedo texture [64×64×4] │ │
│ │ • Normal map [64×64×3] (baked from high-poly or hand-painted) │ │
│ │ • Light direction [3] (world space, from sun/dominant light) │ │
│ │ • View direction [3] (camera to billboard center) │ │
│ │ • Style embedding [100] (which artistic style to apply) │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ BILLBOARD SHADING CNN (runs per-billboard) │ │
│ │ │ │
│ │ ┌─────────┐ │ │
│ │ │ Encoder │◀── Concat(Albedo, Normal, Depth) │ │
│ │ └────┬────┘ │ │
│ │ │ │ │
│ │ ▼ │ │
│ │ ┌─────────┐ │ │
│ │ │ Light │◀── MLP(LightDir, ViewDir) → [256] embedding │ │
│ │ │ + View │ │ │
│ │ │ Cond. │◀── Style embedding [100] │ │
│ │ └────┬────┘ │ │
│ │ │ (AdaIN-style conditioning) │ │
│ │ ▼ │ │
│ │ ┌─────────┐ │ │
│ │ │ Decoder │──▶ Output [64×64×4] RGBA │ │
│ │ └─────────┘ (stylized, shaded, with updated alpha) │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ COMPOSITING │ │
│ │ │ │
│ │ • Sort billboards back-to-front (standard) │ │
│ │ • Alpha blend with depth test │ │
│ │ • Optional: soft particles, atmospheric fog │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
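To make the diagram above concrete, here is a minimal PyTorch sketch of the conditioned encoder/decoder. Everything in it is illustrative: the layer sizes, the class name, and the plain scale/shift (FiLM-style) modulation standing in for AdaIN conditioning. With the height channel included the input is 8 channels; dropping it gives the 7-channel variant used in the scaling analysis further down.

```python
# Illustrative sketch only -- not a tested architecture.
import torch
import torch.nn as nn

class BillboardShadingCNN(nn.Module):
    def __init__(self, style_dim=100, cond_dim=256):
        super().__init__()
        # Encoder over concat(albedo RGBA=4, normal=3, height=1) = 8 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(8, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        # Light dir (3) + view dir (3) + style embedding -> per-channel scale/shift.
        self.cond_mlp = nn.Sequential(
            nn.Linear(3 + 3 + style_dim, cond_dim), nn.ReLU(),
            nn.Linear(cond_dim, 64 * 2),
        )
        # Decoder back up to 64x64 RGBA.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(32, 4, 4, stride=2, padding=1),              # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, albedo, normal, height, light_dir, view_dir, style):
        x = torch.cat([albedo, normal, height], dim=1)             # [B, 8, 64, 64]
        feat = self.encoder(x)                                      # [B, 64, 16, 16]
        cond = self.cond_mlp(torch.cat([light_dir, view_dir, style], dim=1))
        scale, shift = cond.chunk(2, dim=1)                         # [B, 64] each
        feat = feat * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.decoder(feat)                                   # [B, 4, 64, 64] RGBA
```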
What The CNN Learns
If trained correctly, the CNN would learn to:
| Input Signal | Learned Behavior |
|---|---|
| Normal map + Light dir | Directional shading (lit side vs shadow side) |
| Normal map + View dir | Fresnel-like rim lighting, specular hints |
| Depth/height map | Ambient occlusion, contact shadows |
| Style embedding | "Paint this like Moebius" vs "Paint this like Ghibli" |
| Alpha channel | Soften silhouettes based on view angle |
The Silhouette Problem (And A Clever Solution)
The hardest part of making billboards look 3D is that their silhouette doesn't change with view angle. But you could:
Option A: Multiple billboard layers (parallax billboards)
Front layer ────────── (foreground details, high alpha cutout)
Mid layer ───────── (main body)
Back layer ──────── (background/shadow catcher)
Slight parallax offset based on depth map creates pseudo-3D
Option B: CNN predicts alpha erosion based on view angle
When viewed edge-on, the CNN learns to:
• Thin the silhouette
• Add rim lighting
• Soften alpha at edges
This fakes the "foreshortening" you'd get from real geometry
Option C: Learn to generate displacement for mesh billboards
Billboard has a simple quad mesh that gets vertex-displaced
based on CNN-predicted depth. Not flat anymore, but still
way cheaper than full 3D model.
Training Data Strategy
This is where it gets interesting. You'd need paired data:
Training Pair:
┌─────────────────────────────────────────────────────────────────┐
│ │
│ INPUT: TARGET: │
│ ┌─────────────────────┐ ┌─────────────────────┐ │
│ │ Flat-lit albedo │ │ Same object, │ │
│ │ + normal map │ ──▶ │ rendered in target │ │
│ │ + light/view dirs │ │ artistic style with │ │
│ │ │ │ correct 3D shading │ │
│ └─────────────────────┘ └─────────────────────┘ │
│ │
│ Source: 3D render of Source: Either 3D render with │
│ object with baked normals NPR shader, OR hand-painted │
│ artist reference │
└─────────────────────────────────────────────────────────────────┘
You could generate this synthetically:
- Take 3D models
- Render flat albedo + normal maps
- Render the same model with various NPR/toon shaders as targets
- Vary light and view direction
- Train CNN to map (flat + normals + light + view) → (shaded stylized)
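Assuming those synthetic pairs have been exported as tensors, a single training step might look like the sketch below. It presupposes a model with the signature from the architecture sketch above; the L1 loss split (RGB vs. alpha) and everything else here are placeholder choices.

```python
# Hedged sketch: assumes pre-rendered (inputs, stylized target) pairs already on disk.
import torch.nn.functional as F

def train_step(model, optimizer, batch):
    # batch: dict of tensors rendered offline (e.g. in Blender):
    #   albedo [B,4,64,64], normal [B,3,64,64], height [B,1,64,64]
    #   light_dir [B,3], view_dir [B,3], style [B,100]
    #   target [B,4,64,64]  (same object rendered with an NPR/toon shader)
    pred = model(batch["albedo"], batch["normal"], batch["height"],
                 batch["light_dir"], batch["view_dir"], batch["style"])
    # L1 on RGB, plus a separate alpha term so silhouettes are supervised too.
    loss = (F.l1_loss(pred[:, :3], batch["target"][:, :3])
            + F.l1_loss(pred[:, 3:], batch["target"][:, 3:]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```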
Performance Characteristics
SCALING ANALYSIS
════════════════
Assumption: 64×64 billboard textures, ~100 visible billboards
Per-billboard CNN inference:
Input: 64 × 64 × 7 channels = 28,672 floats
Output: 64 × 64 × 4 channels = 16,384 floats
Batched inference (100 billboards):
Combined input tensor: [100, 64, 64, 7]
Single CNN forward pass (batched)
Combined output tensor: [100, 64, 64, 4]
Estimated timing (RTX 3060, optimistic):
Batched 100× 64×64 inference: ~4-8ms
Compare to traditional rendering:
100 stylized 3D objects with full geometry: potentially much more expensive, depending on triangle count and shader complexity
SWEET SPOT:
• Many small objects (vegetation, particles, crowds, debris)
• Stylized/artistic rendering where "painterly" beats "accurate"
• Mobile/low-end where geometry is expensive
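The timing estimate above is easy to sanity-check on your own GPU. The snippet below times one batched forward pass over all visible billboards using a small stand-in conv stack (the full conditioned model would be somewhat slower); the shapes match the 7-channel analysis, and the numbers will vary with hardware and model size.

```python
# Rough batched-inference benchmark; stand-in model, illustrative only.
import time
import torch
import torch.nn as nn

# Stand-in for the shading CNN: anything mapping [B,7,64,64] -> [B,4,64,64].
model = nn.Sequential(
    nn.Conv2d(7, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 4, 3, padding=1), nn.Sigmoid(),
)

def benchmark_batched(model, num_billboards=100):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    # 7 input channels = albedo RGBA (4) + normal (3), as in the analysis above.
    x = torch.rand(num_billboards, 7, 64, 64, device=device)
    with torch.no_grad():
        model(x)                                   # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        out = model(x)                             # one pass for all billboards
        if device == "cuda":
            torch.cuda.synchronize()
    ms = (time.perf_counter() - t0) * 1000
    print(f"{num_billboards} billboards in one pass: {ms:.2f} ms, output {tuple(out.shape)}")

benchmark_batched(model)
```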
Game Genres This Would Suit
- Paper Mario / Parappa style — Intentionally flat characters in 3D world
- Diablo-like isometric ARPGs — Lots of small enemies, fixed-ish camera
- City builders / RTS — Hundreds of units, low camera angle
- Stylized horror — Junji Ito-style 2D characters in 3D environments
- Living illustration — "Playable storybook" aesthetic
- VR with intentional flatness — Characters that feel like paper cutouts but properly lit
What This Engine Would NOT Have
Traditional Engine → Billboard Stylization Engine
──────────────────────────────────────────────────
Skeletal meshes → Flipbook animations or sprite sheets
Normal mapping → Normal maps still used, but as CNN input
PBR materials → Style embeddings
Shadow maps → CNN learns to fake shadows
LOD meshes → Resolution scaling on billboard textures
Occlusion culling → Still works (billboard bounds)
Minimum Viable Experiment
PHASE 0: Proof of Concept
═════════════════════════
1. Single billboard asset:
• Hand-painted albedo (64×64)
• Normal map (from Blender bake or hand-painted)
2. Minimal CNN:
• Input: [albedo, normal, light_dir]
• Output: [shaded_albedo]
• Architecture: Tiny U-Net (~200K params)
• Trained on synthetic data (Blender renders)
3. Demo scene:
• One billboard
• Rotating light source
• Watch the shading respond
Success = "It looks like a 3D object even though it's flat"
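As a throwaway driver for that check, you could sweep the light direction in a circle and re-run the CNN each step; in a real demo the output would be uploaded as a texture every frame, but even printing summary stats shows whether the shading responds. The function below assumes the model signature from the architecture sketch and is purely illustrative.

```python
# Toy Phase 0 driver: rotate the light, re-run the CNN, watch the output change.
import math
import torch

def rotating_light_demo(model, albedo, normal, height, style, steps=8):
    view_dir = torch.tensor([[0.0, 0.0, 1.0]])   # camera facing the card head-on
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        light_dir = torch.tensor([[math.cos(angle), 0.5, math.sin(angle)]])
        light_dir = light_dir / light_dir.norm()
        with torch.no_grad():
            rgba = model(albedo, normal, height, light_dir, view_dir, style)
        print(f"step {i}: mean brightness {rgba[:, :3].mean().item():.3f}")
```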
Friday, January 30, 2026
video games
Hybrid AI‑Assisted Style Engine — Phase 0 Specification
A modular, Unreal‑integrated system for realtime per‑object neural stylization, style morphing, and expressive visual transformation.
TEXTUAL BLOCK DIAGRAM — FULL SYSTEM OVERVIEW
[ Offline Pipeline ]
OASP → MMS → Style Variants + Metadata
[ Runtime Pipeline ]
Level Loads → MSNS (One CNN per Level)
↓
SMS (Style Morphing System)
↓
RSTM (Runtime Style Transfer Module)
↓
G-Buffer Strategy (Reuse / Multi-Res / Sequential)
↓
Composite → Final Frame
[ Performance Layer ]
BATS (Benchmark & Auto-Tuning System)
1. High‑Level Goals
- G1: Realtime per‑object neural stylization (hero feature)
- G2: Style morphing driven by gameplay context
- G3: Multi‑style CNN per level
- G4: Scalable G‑buffer & parallelization strategy
- G5: Benchmark & Auto‑Tuning System (BATS)
- G6: Optional offline AI asset upgrade pipeline (OASP/MMS)
- G7: Clean Unreal integration
2. Phase 0 Scope
- Unreal Engine only
- 1–3 stylized objects
- One multi‑style CNN per level
- Half‑resolution NST
- Octagonal zone demo room
- Basic BATS instrumentation
3. The “Sculpture Hall” Demo (Hero Experience)
[ Octagonal Sculpture Platform ]
- One giant central statue (20 ft tall)
- 6–12 smaller statues arranged around it
- All statues have UStylizedObjectComponent
[ Surrounding 8 Zones ]
- Flat-fronted alcoves arranged around the octagon
- Each zone has a distinct theme:
Victorian, Gothic, Sci-Fi, Underwater,
Ink Sketch, Watercolor, Neon Glitch, Marble
- Each zone has a rectangular trigger volume
- Each zone has themed props + lighting
[ Player Movement ]
- FPS camera
- Entering a zone triggers a style morph
- All statues morph toward that zone’s style
Why this demo works
- Extremely readable
- Visually dramatic
- Perfect for per‑object stylization
- Perfect for SMS morphing
- Perfect for sequential G‑buffer reuse
- Zero physics mismatch (statues are static)
4. Offline Asset Stylization Pipeline (OASP)
(Optional for Phase 0 — “but wait, there’s more”)
[ Base Mesh ]
↓
[ Multi-View Rendering ]
↓
[ Multi-View Diffusion ]
↓
[ Texture Reconstruction ]
↓
[ MMS Mesh Refinement ]
↓
[ Exported Style Variants ]
Outputs:
- Mesh variants
- PBR textures
- Style metadata
5. Meshing Model Subsystem (MMS)
(Optional for Phase 0)
Modules:
- Displacement refinement
- Procedural geometry augmentation
- Neural mesh refinement (Phase 1+)
6. Multi‑Style Neural Shader (MSNS)
One CNN per level, containing all styles for that level
[ Level Loads ]
↓
[ Multi-Style CNN ]
↓
[ Style Library (Embeddings) ]
↓
[ Per-Object Style Embedding ]
Unreal Implementation
- CNN implemented as a Global Shader (FGlobalShader)
- Weights stored in .uasset
- Style embeddings passed as constant buffers
- Compute dispatch via Render Graph
7. Style Morphing System (SMS)
Realtime interpolation between styles
[ Zone Style Embedding ] ----\
→ [ Morph Interpolator ] → [ Current Style ]
[ Base Statue Style ] --------/
Driven by:
- Player entering a zone
- ZoneManager updating TargetStyleEmbedding
- Statues lerping toward the target
Unreal Implementation
- AZoneManager tracks the active zone
- AStyleZoneTrigger sets ZoneStyleId
- UStylizedObjectComponent lerps style embeddings
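As a language-agnostic sketch of the per-statue lerp (the exponential approach toward the target and the MorphSpeed semantics here are assumptions, not the actual Unreal code; in-engine this logic would live in UStylizedObjectComponent):

```python
# Illustrative morph logic; not the Unreal implementation.
import numpy as np

class StylizedObjectMorph:
    def __init__(self, base_style: np.ndarray, morph_speed: float = 2.0):
        self.current_style = base_style.copy()   # float32 embedding actually fed to the CNN
        self.target_style = base_style.copy()    # updated by the ZoneManager
        self.morph_speed = morph_speed           # higher = faster morph

    def set_target(self, zone_style: np.ndarray):
        self.target_style = zone_style

    def tick(self, dt: float) -> np.ndarray:
        # Frame-rate-independent exponential approach toward the target style.
        alpha = 1.0 - np.exp(-self.morph_speed * dt)
        self.current_style += alpha * (self.target_style - self.current_style)
        return self.current_style
```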
8. Runtime Style Transfer Module (RSTM)
For each stylized object:
Render into G-Buffer (Color, Normal, Depth)
↓
Run CNN compute shader (MSNS)
↓
Composite stylized output into scene color
Unreal Implementation
- Custom Render Graph pass
- FScreenPassTexture for G‑buffers
- RHICmdList.DispatchComputeShader for the CNN
- Composite via a fullscreen pixel shader
9. G‑Buffer Reuse & Parallelization Strategy
Parallelization Spectrum:
[ Serialized ] — [ Hybrid ] — [ Fully Parallel ]
Resolution Pyramid:
G0: 1/2 res
G1: 1/4 res
G2: 1/8 res
G3: 1/16 res
Phase 0 Strategy
- Serialized (1 G‑buffer)
- Half‑res only
- Sequential per‑object processing
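A schematic of that serialized loop, with hypothetical callbacks standing in for the render passes (the buffer sizes, names, and callbacks themselves are assumptions, not part of the spec):

```python
# Phase 0 "serialized" strategy, sketched: one shared half-res buffer, reused per object.
import numpy as np

HALF_RES = (540, 960)  # assuming a 1080p backbuffer

shared_gbuffer = {
    "color":  np.zeros((*HALF_RES, 4), dtype=np.float32),
    "normal": np.zeros((*HALF_RES, 3), dtype=np.float32),
    "depth":  np.zeros((*HALF_RES, 1), dtype=np.float32),
}

def stylize_frame(objects, render_object_gbuffer, run_cnn, composite):
    # The three callbacks are hypothetical stand-ins for the Render Graph passes.
    for obj in objects:                                # one object at a time
        render_object_gbuffer(obj, shared_gbuffer)     # overwrite the shared buffer
        stylized = run_cnn(shared_gbuffer, obj.current_style)
        composite(stylized, obj)                       # blend into scene color
```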
Unreal Implementation
- Allocate via GRenderTargetPool.FindFreeElement
- Reuse the same render target for all objects
- Optional: add G1/G2 later
10. Editor Integration & Tooling
[ UStylizedObjectComponent ]
- StyleId
- StyleResolutionOverride
- UpdateEveryNFrames
- MorphSpeed
[ UStyleLevelSettings ]
- StyleLibrary
- CNNAsset
- MaxParallelStylePasses
- MaxStyleResolution
- VRAMBudgetMB
[ ZoneManager + ZoneTriggers ]
- ZoneStyleId
- ZoneName
- ZoneColorTint
11. Hardware Requirements
30 FPS → GTX 1060 / RX 580
60 FPS → RTX 2060 / 3060
120 FPS → RTX 3080 / 4070
12. Benchmark & Auto‑Tuning System (BATS)
[ Benchmark Mode ]
↓
[ Data Collection ]
- CNN time
- G-buffer time
- Composite time
- VRAM usage
- Resolution tier usage
- Temporal stability
- Object count
[ Analysis Engine ]
↓
[ Auto-Tuning Report ]
- Suggested resolution tier
- Suggested parallelization
- Suggested update frequency
- Suggested object count
[ Apply Recommendations ]
Unreal Integration
- GPU timestamps (FGPUTiming)
- CPU trace events (TRACE_CPUPROFILER_EVENT_SCOPE)
- Unreal Insights for visualization
- Console command: StyleEngine.RunBenchmark 10
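A sketch of what the analysis step could do with the collected numbers; the 25% style budget, the thresholds, and the report fields are made-up placeholders, not part of the spec.

```python
# Illustrative BATS-style analysis; budget split and thresholds are placeholders.
def analyze_benchmark(samples, target_frame_ms=16.6, style_budget_frac=0.25):
    # samples: list of dicts with "cnn_ms", "gbuffer_ms", "composite_ms", "objects"
    avg = lambda key: sum(s[key] for s in samples) / len(samples)
    style_ms = avg("cnn_ms") + avg("gbuffer_ms") + avg("composite_ms")
    budget_ms = target_frame_ms * style_budget_frac
    return {
        "avg_style_ms": round(style_ms, 2),
        "budget_ms": round(budget_ms, 2),
        "suggested_resolution": "half" if style_ms <= budget_ms else "quarter",
        "suggested_update_every_n_frames": 1 if style_ms <= budget_ms else 2,
        "suggested_max_objects": int(avg("objects") * min(1.0, budget_ms / max(style_ms, 1e-3))),
    }
```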
13. Unreal‑Specific Implementation Guidance
Global Shader Setup
- FStyleCNNShader : public FGlobalShader
- Bind:
- InputColor
- InputNormal
- InputDepth
- StyleEmbedding
- CNN weights
Render Graph Integration
- Add pass after base lighting
- Use AddPass with compute dispatch
- Composite before post‑processing
Trigger Zones
- 8 UBoxComponent triggers
- Each with ZoneStyleId
- ZoneManager handles transitions
Statue Actors
- Static meshes
- No animation
- Perfect for stable G‑buffer inputs
14. Full System Flow
OFFLINE:
Base Mesh → Diffusion → MMS → Style Variants
RUNTIME:
Load Level → Load CNN → Load Style Library
↓
Player Enters Zone → ZoneManager Sets TargetStyle
↓
Statues Lerp Style Embeddings (SMS)
↓
For Each Statue:
Render to G-Buffer
Run CNN
Composite
↓
Final Frame
BENCHMARK:
Run BATS → Collect Stats → Generate Report → Apply Settings
15. Critical Risks & Mitigations
Physics Mismatch:
→ Use static statues only
Temporal Instability:
→ Use G-buffer inputs + optional temporal smoothing
VRAM Pressure:
→ Serialized G-buffer reuse
Style Interpolation Artifacts:
→ Train CNN with interpolation examples
Per-Object Overhead:
→ Limit to 1–3 statues in Phase 0
16. Applicability to SDF & Voxel Engines
SDF Engines:
- MSNS: Full compatibility
- SMS: Full compatibility
- RSTM: High (needs masks)
- OASP/MMS: Partial (needs SDF conversion)
Voxel Engines:
- MSNS: Full compatibility
- SMS: Full compatibility
- RSTM: High (per-chunk stylization)
- OASP/MMS: Limited (geometry dynamic)
17. Final Summary
This document defines a complete, Unreal‑focused Phase‑0 system for:
- Realtime per‑object neural stylization
- Style morphing driven by player movement
- Multi‑style CNN per level
- G‑buffer reuse & sequential processing
- Benchmarking & auto‑tuning
- A visually stunning “Sculpture Hall” demo
It is technically feasible, visually impressive, and architecturally clean.
Sunday, January 18, 2026
The Big Picture
Authoritarianism is not a single switch that flips. It’s a process — and so is resisting it.
Ordinary citizens have more power than they think when they:
Build community
Stay civically engaged
Protect independent information
Form coalitions
Push for institutional reform
Maintain psychological resilience
Learn from global movements
Democracy is not a static system; it’s a practice. And it’s one that ordinary people have repeatedly reclaimed throughout history.
8. Use Technology Thoughtfully, Not Fearfully
Technology can be used for surveillance — but it can also be used for:
Secure communication
Organizing
Education
Transparency
Citizen journalism
Crowdsourced accountability
The key is not to abandon technology, but to use it in ways that empower rather than isolate.
7. Connect With Global Democratic Movements
People around the world have faced similar challenges. There’s a huge body of knowledge on:
Nonviolent resistance
Digital rights
Anti-corruption strategies
Community organizing
Legal advocacy
International solidarity strengthens local movements and provides models that work.
6. Practice Psychological Resistance
Authoritarian systems rely on learned helplessness. Breaking that cycle is a political act.
Don’t normalize abuses
Don’t repeat propaganda, even to mock it
Don’t internalize the idea that “nothing can change”
Celebrate small wins
Support people who speak up
Hope is not naïve — it’s strategic.