Tuesday, March 10, 2026

so many options

 

🌐 1. Archetype‑Based ECS

(Unity DOTS, Bevy, Frostbite internal systems)

🧩 Real‑world examples

  • Unity DOTS demos (Megacity, Entity Component Samples): tens of thousands of entities, heavy physics, chunk‑optimized memory layout
  • Bevy engine games (Rust gamedev scene): hot‑loop systems, predictable scheduling, parallelism
  • Frostbite engine subsystems (Battlefield series): massive crowds, destructible environments, physics‑dense scenes

🧠 Why these games choose archetype ECS

  • They need maximum throughput
  • They have huge numbers of similar entities
  • They benefit from chunked SoA memory
  • They rely on tight, predictable pipelines

🧪 Typical workloads

  • 10k–100k entities
  • physics‑heavy
  • AI swarms
  • particle systems
  • large open worlds
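As a toy illustration of why this dialect is fast (Python standing in for what these engines do in C++/Rust; nothing here is a real engine API): an archetype chunk is a set of parallel struct‑of‑arrays columns that a system sweeps in one vectorized pass.

```python
import numpy as np

class Chunk:
    """Toy archetype chunk: entities sharing {Position, Velocity} as SoA columns."""
    def __init__(self, capacity: int):
        self.count = 0
        self.position = np.zeros((capacity, 3), dtype=np.float32)
        self.velocity = np.zeros((capacity, 3), dtype=np.float32)

    def spawn(self, pos, vel) -> int:
        i = self.count
        self.position[i], self.velocity[i] = pos, vel
        self.count += 1
        return i

def movement_system(chunk: Chunk, dt: float) -> None:
    # One vectorized pass over the packed columns -- the archetype "hot loop".
    n = chunk.count
    chunk.position[:n] += chunk.velocity[:n] * dt

chunk = Chunk(capacity=1024)
chunk.spawn([0.0, 0.0, 0.0], [1.0, 0.0, 0.0])
movement_system(chunk, dt=0.5)
```

Real archetype engines add chunk splitting, change versions, and parallel system scheduling on top of exactly this memory shape.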

🌿 2. Sparse‑Set ECS

(Flecs, EnTT, Svelto.ECS, many custom engines)

🧩 Real‑world examples

  • Flecs‑based games (indie & AA titles): flexible component sets, ergonomic queries
  • EnTT‑based engines (C++ gamedev): great for tools, editors, gameplay logic
  • Svelto.ECS (used in Unity‑based production games): hybrid ECS with strong separation of concerns

🧠 Why these games choose sparse‑set ECS

  • They want flexibility over raw speed
  • They mix gameplay logic + simulation
  • They need ergonomic iteration
  • They want easy debugging

🧪 Typical workloads

  • 1k–20k entities
  • mixed gameplay + simulation
  • tools, editors, UI
  • AI, inventory, quests

Sparse‑set ECS is the “default good choice” for most games.
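A minimal sparse‑set store, in the spirit of EnTT but not its actual API, keeps a packed dense array of components plus a sparse entity‑to‑index map, giving O(1) add/remove (via swap‑remove) and cache‑friendly iteration:

```python
class SparseSet:
    """Toy sparse-set component storage: packed dense arrays + sparse index map."""
    def __init__(self):
        self.dense_entities: list[int] = []   # packed entity ids
        self.dense_data: list[object] = []    # packed component values
        self.sparse: dict[int, int] = {}      # entity id -> dense index

    def add(self, entity: int, component) -> None:
        self.sparse[entity] = len(self.dense_entities)
        self.dense_entities.append(entity)
        self.dense_data.append(component)

    def remove(self, entity: int) -> None:
        i = self.sparse.pop(entity)
        last = len(self.dense_entities) - 1
        if i != last:
            # Swap-remove: move the last element into the hole to stay packed.
            self.dense_entities[i] = self.dense_entities[last]
            self.dense_data[i] = self.dense_data[last]
            self.sparse[self.dense_entities[i]] = i
        self.dense_entities.pop()
        self.dense_data.pop()

    def items(self):
        # Iteration walks the dense arrays, never the sparse map.
        return zip(self.dense_entities, self.dense_data)
```

The flexibility the table above mentions comes from each component type owning its own sparse set, so any entity can mix and match components without archetype moves.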


🌳 3. Hybrid Node/ECS

(Godot 4, Roblox, Unreal’s internal component model)

🧩 Real‑world examples

  • Godot 4: nodes for creators, ECS under the hood for performance
  • Roblox: hierarchical data model + component‑like behaviors
  • Unreal Engine (Actor + Component system): OO façade, data‑oriented internals

🧠 Why these games choose hybrid ECS

  • They need creator‑friendly ergonomics
  • They want OO‑style scripting
  • They hide ECS complexity behind nodes/actors
  • They support UGC or large teams

🧪 Typical workloads

  • scripting‑heavy
  • UGC platforms
  • tools + gameplay + simulation
  • moderate entity counts

This is the “best of both worlds” dialect for engines that must be ergonomic.
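One way to picture the hybrid dialect (a toy sketch, not Godot's or Unreal's real object model): a Node is just an ergonomic handle whose properties read and write centralized component storage, so scripts stay OO while systems iterate flat data.

```python
# Toy hybrid model: component data lives in flat per-component storage
# (ECS-style); Node is the thin OO facade that scripts see.
POSITIONS: dict[int, tuple[float, float]] = {}

class Node:
    _next_id = 0

    def __init__(self, name: str):
        self.name = name
        self.entity = Node._next_id
        Node._next_id += 1
        POSITIONS[self.entity] = (0.0, 0.0)

    @property
    def position(self) -> tuple[float, float]:
        return POSITIONS[self.entity]

    @position.setter
    def position(self, value: tuple[float, float]) -> None:
        POSITIONS[self.entity] = value

player = Node("Player")
player.position = (3.0, 4.0)   # OO-style scripting...

# ...while a "system" iterates the flat storage directly:
moved = {e: (x + 1.0, y) for e, (x, y) in POSITIONS.items()}
```

The façade costs an indirection per property access, which is why these engines keep hot loops on the flat-storage side.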


🏷️ 4. Tag‑Heavy / Minimal ECS

(Roguelikes, small engines, hobby engines, ECS‑lite frameworks)

🧩 Real‑world examples

  • RogueBasin ECS tutorials: simple tags for AI, FOV, inventory
  • Amethyst (early versions): tag‑driven systems
  • Many custom roguelike engines: minimal data, lots of markers

🧠 Why these games choose tag‑heavy ECS

  • They want simplicity
  • They don’t need high performance
  • They want clear, readable logic
  • They use ECS mainly for decoupling

🧪 Typical workloads

  • < 1k entities
  • turn‑based
  • grid‑based
  • simple AI
  • minimal physics

This is perfect for prototypes, small games, and teaching.
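A toy roguelike‑style sketch of this dialect: components are mostly empty marker tags, entities are plain ids, and a "system" is little more than a set intersection.

```python
# Tag-heavy ECS sketch: an entity is an int id, a tag is a set of ids.
blocks_movement: set[int] = set()
hostile: set[int] = set()
has_inventory: set[int] = set()

goblin, door, chest = 1, 2, 3
hostile.add(goblin)
blocks_movement.update({goblin, door})
has_inventory.add(chest)

# "Combat system" query: hostile things that physically occupy a tile.
attackable = hostile & blocks_movement

# "Loot system" query: things you can open but not fight.
lootable = has_inventory - hostile
```

There is no scheduler, no storage layer, and nothing to debug beyond plain sets, which is exactly the point for small turn‑based games.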


🧬 5. “OO façade → ECS IR” (your proposed dialect)

This is the one you’re designing:
OO‑looking code → ECS IR → SoA kernels, with an AI‑agent co‑designer.

🧩 Real‑world analogues

  • Unity DOTS with Roslyn analyzers: OO → ECS transformation
  • Svelto.ECS: strong separation of logic and data
  • Frostbite’s internal tools: codegen + data‑oriented pipelines
  • ISPC / Halide: high‑level code → optimized kernels

This is the “future dialect” — and your agent‑native approach makes it viable.

dumb all the way down

ai agents sure can be dumb as rocks in some ways.

yes. yes, i do.

"You’re asking exactly the right question at exactly the right moment."

 

for the 100th time.

Friday, March 6, 2026

I SWEAR IT'S NOT PORN, OK?

Battery

1 season • TVPG • 2016-2016

Yui Fujimaki, Hozumi Goda, Tasuku Hatanaka
From a best-selling novel series, a prodigious but arrogant pitcher moves to a new town. He soon meets a skilled catcher, the only one who can handle his powerful throws.

Anime • Inspiring • Competitive • Heartfelt

Wednesday, March 4, 2026

all is full of fail

https://news.ycombinator.com/item?id=47248868

human societies have lurched past the point of not going horribly, horribly wrong, and being pretty evil.

Sunday, March 1, 2026

"dp" baby!

license: public domain CC0


Medium‑format style pipeline for Canon Dual Pixel RAW

Spec & design document (Python implementation)


1. Goals and scope

Primary goal:
Given a Canon Dual Pixel RAW capture, automatically produce a medium‑format‑style SDR image that:

  • Preserves subject detail and avoids “AI mush”
  • Uses real Dual Pixel depth (not guessed) to drive optical‑like transforms
  • Emulates key MF traits: smoother DOF, micro‑contrast pop, gentle highlight rolloff, subtle subject/background separation

Non‑goals (for v1):

  • Perfect physical simulation of any specific MF body
  • Exact replication of Canon DPP behavior
  • Multi‑view consistency (single‑image only)

2. Inputs, outputs, and assumptions

2.1 Inputs

  1. Dual Pixel RAW file (e.g., .CR3 from EOS R5 with DPR enabled)
  2. Optional: AF metadata (focus point) if available.

2.2 Intermediate representations

  1. Base image

    • Linear or gamma‑encoded RGB, shape H x W x 3, float32 in [0,1].
  2. Depth/disparity map

    • From Dual Pixel data, shape H x W, float32.
    • Relative depth is sufficient; absolute units not required.

2.3 Outputs

  • Medium‑format‑style SDR image
    • H x W x 3, uint8 or uint16, in JPEG/PNG/TIFF.

3. High‑level pipeline

  1. DPR extraction layer

    • Extract base RGB image and depth/disparity map from Dual Pixel RAW.
  2. Depth processing layer

    • Normalize depth
    • Smooth depth
    • Determine focus plane
  3. Optical‑style transform layer (MF physics‑inspired)

    • Depth‑dependent blur (DOF geometry)
    • Depth‑aware micro‑contrast (focus pop)
    • Highlight rolloff (soft knee)
    • Depth‑aware color separation (subject vs background)
  4. Output & export layer

    • Gamma/tone check
    • Color space tagging (e.g., sRGB)
    • File export

4. Module design

All code snippets in this section assume import numpy as np and import cv2 (OpenCV) at module scope.

4.1 DPR extraction module

Responsibility:
Convert Canon Dual Pixel RAW into:

  • img: np.ndarray[H, W, 3], float32, [0,1]
  • depth: np.ndarray[H, W], float32

Implementation notes:

  • Use Canon SDK or third‑party DPR tools (outside this spec) to:
    • Decode RAW
    • Extract left/right sub‑images
    • Compute disparity map (e.g., block matching or provided by SDK)
  • Save depth as .npy for reuse.

Interface:

def extract_dpr(image_path: str) -> tuple[np.ndarray, np.ndarray]:
    """
    Returns:
        img   : H x W x 3 float32, [0,1]
        depth : H x W float32, arbitrary scale
    """

4.2 Depth processing module

Responsibility:
Turn raw disparity into a stable, normalized depth field and choose a focus plane.

4.2.1 Depth normalization

  • Robustly map depth to [0,1] using percentiles to ignore outliers.
def normalize_depth(depth: np.ndarray) -> np.ndarray:
    d_min, d_max = np.percentile(depth, [1, 99])
    depth_norm = np.clip((depth - d_min) / (d_max - d_min + 1e-6), 0.0, 1.0)
    return depth_norm

4.2.2 Depth smoothing

  • Apply edge‑preserving smoothing (bilateral or guided filter) to reduce noise while preserving edges.
def smooth_depth(depth_norm: np.ndarray) -> np.ndarray:
    depth_8u = (depth_norm * 255).astype(np.uint8)
    smoothed = cv2.bilateralFilter(depth_8u, d=9, sigmaColor=25, sigmaSpace=25)
    return smoothed.astype(np.float32) / 255.0

4.2.3 Focus plane selection

  • Default: median depth (works well for portraits).
  • Optional: use AF metadata or local contrast peak.
def choose_focus_plane(depth_norm: np.ndarray, mode: str = "median") -> float:
    if mode == "median":
        return float(np.median(depth_norm))
    # Hook for smarter strategies later (AF metadata, local contrast peak)
    raise ValueError(f"unknown focus mode: {mode}")

4.3 Optical‑style transform module

This is the core MF‑style logic. It operates on img, depth_norm, and focus_depth.

4.3.1 Depth‑dependent blur (MF DOF geometry)

Goal:
Simulate shallower, smoother DOF: blur increases with distance from focus plane.

Design:

  • Compute depth distance:
    d(x, y) = |D(x, y) - D_focus|
  • Map to blur strength [0,1] with a scale factor k.
  • Precompute a blurred version of the whole image.
  • Blend sharp/blurred based on strength.
def depth_dependent_blur(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    k: float = 10.0,
    sigma: float = 8.0,
) -> np.ndarray:
    dist = np.abs(depth_norm - focus_depth)
    strength = np.clip(dist * k, 0.0, 1.0)
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma, sigmaY=sigma)
    strength_3 = strength[..., None]
    out = img * (1.0 - strength_3) + blurred * strength_3
    return np.clip(out, 0.0, 1.0)

Future extension:
Replace Gaussian with bokeh kernels (disk, cat’s‑eye) and depth‑dependent kernel size.
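As a sketch of that extension (parameter choices are illustrative): a normalized disk kernel gives the hard‑edged bokeh discs a Gaussian cannot, and could be dropped in via cv2.filter2D in place of cv2.GaussianBlur.

```python
import numpy as np

def disk_kernel(radius: int) -> np.ndarray:
    """Normalized circular (disk) bokeh kernel with the given pixel radius."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (xx**2 + yy**2 <= radius**2).astype(np.float32)
    return kernel / kernel.sum()   # normalize so brightness is preserved

# Usage sketch inside depth_dependent_blur:
#   blurred = cv2.filter2D(img, -1, disk_kernel(8))
```

Depth‑dependent kernel size would mean blending several pre‑blurred layers (one per radius) by depth band, since per‑pixel variable kernels are expensive.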


4.3.2 Depth‑aware micro‑contrast (focus pop)

Goal:
Increase local contrast in the focus plane, mimicking MF “pop” without global oversharpening.

Design:

  • Unsharp mask to get detail layer.
  • Depth‑gated Gaussian around focus depth.
def depth_aware_micro_contrast(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    radius: float = 3.0,
    amount: float = 0.4,
    focus_width: float = 0.1,
) -> np.ndarray:
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=radius, sigmaY=radius)
    detail = img - blurred

    dist = np.abs(depth_norm - focus_depth)
    mask = np.exp(- (dist**2) / (2 * focus_width**2))
    mask_3 = mask[..., None]

    enhanced = img + amount * detail * mask_3
    return np.clip(enhanced, 0.0, 1.0)

4.3.3 Highlight rolloff (MF‑like soft knee)

Goal:
Simulate larger full‑well capacity: highlights compress gently instead of clipping.

Design:

  • Compute luminance.
  • Apply soft‑knee curve above a knee threshold.
  • Scale RGB by luminance ratio.
def highlight_rolloff(
    img: np.ndarray,
    knee: float = 0.7,
    strength: float = 0.6,
) -> np.ndarray:
    lum = 0.2126 * img[...,0] + 0.7152 * img[...,1] + 0.0722 * img[...,2]
    over = np.clip(lum - knee, 0.0, 1.0)
    compressed = lum - over * strength * (over / (over + 1e-3))
    ratio = np.where(lum > 1e-3, compressed / (lum + 1e-6), 1.0)
    ratio_3 = ratio[..., None]
    out = img * ratio_3
    return np.clip(out, 0.0, 1.0)

4.3.4 Depth‑aware color separation

Goal:
Slightly richer subject color, slightly softer background color.

Design:

  • Convert to HSV.
  • Depth‑weighted saturation adjustment.
def depth_aware_color(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    focus_sat: float = 1.05,
    bg_sat: float = 0.95,
    focus_width: float = 0.1,
) -> np.ndarray:
    # For float32 input, OpenCV returns H in [0,360) and S, V in [0,1]
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    dist = np.abs(depth_norm - focus_depth)
    focus_mask = np.exp(- (dist**2) / (2 * focus_width**2))
    bg_mask = 1.0 - focus_mask

    s = s * (focus_mask * focus_sat + bg_mask * bg_sat)
    s = np.clip(s, 0.0, 1.0)

    hsv = cv2.merge([h, s, v])
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return np.clip(out, 0.0, 1.0)

4.4 Orchestration module

Responsibility:
Wire all modules into a single call.

def mf_style_from_dp(
    image_path: str,
    output_path: str,
    depth_path: str | None = None,
) -> None:
    img, depth = extract_dpr(image_path)  # or load img + np.load(depth_path)
    depth_norm = normalize_depth(depth)
    depth_norm = smooth_depth(depth_norm)
    focus_depth = choose_focus_plane(depth_norm, mode="median")

    x = img
    x = depth_dependent_blur(x, depth_norm, focus_depth, k=10.0, sigma=8.0)
    x = depth_aware_micro_contrast(x, depth_norm, focus_depth)
    x = highlight_rolloff(x)
    x = depth_aware_color(x, depth_norm, focus_depth)

    out = np.clip(x * 255.0, 0, 255).astype(np.uint8)
    cv2.imwrite(output_path, out)

5. Mapping Canon DPP behaviors to this pipeline

  • Bokeh Shift (depth‑aware bokeh geometry) → depth_dependent_blur (future: bokeh kernels)
  • Micro Adjustment (focus plane & sharpness) → choose_focus_plane + depth_aware_micro_contrast
  • Ghosting Reduction (depth‑aware edge consistency) → depth smoothing + careful blur blending
  • Export 16‑bit TIFF (preserve tonal smoothness) → float32 pipeline, optional 16‑bit output

6. Testing strategy

6.1 Unit tests

  • Depth normalization:
    • Input with outliers → output in [0,1], monotonic.
  • Depth‑dependent blur:
    • Synthetic depth gradient → blur increases with depth distance.
  • Micro‑contrast:
    • Flat image → no change.
    • Edge at focus depth → increased local contrast.
  • Highlight rolloff:
    • Synthetic ramp → smooth compression above knee, no banding.
  • Color separation:
    • Subject at focus depth → saturation slightly increased; background slightly decreased.
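For example, the depth‑normalization test might look like this (pure numpy, framework‑agnostic assertions that pytest would collect as‑is; normalize_depth is copied from section 4.2.1 so the test is self‑contained):

```python
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    # Copy of the spec's function (section 4.2.1).
    d_min, d_max = np.percentile(depth, [1, 99])
    return np.clip((depth - d_min) / (d_max - d_min + 1e-6), 0.0, 1.0)

def test_normalize_depth_handles_outliers():
    rng = np.random.default_rng(0)
    depth = rng.normal(10.0, 2.0, size=(64, 64))
    depth[0, 0] = 1e6   # extreme outlier must be clipped, not dominate the scale
    out = normalize_depth(depth)
    assert out.min() >= 0.0 and out.max() <= 1.0
    assert np.isclose(out[0, 0], 1.0)   # outlier pinned to the top of the range
    # Monotonic: a deeper pixel never maps below a shallower one.
    order = np.argsort(depth.ravel())
    assert np.all(np.diff(out.ravel()[order]) >= -1e-7)

test_normalize_depth_handles_outliers()
```

The same pattern (synthetic input, property assertion) covers the blur, micro‑contrast, and rolloff tests listed above.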

6.2 Visual regression tests

  • Use a small set of DPR portraits and scenes.
  • For each:
    • Run pipeline with fixed parameters.
    • Save outputs and compare visually and via metrics (SSIM, local contrast histograms).

6.3 Parameter sweeps

  • Sweep k, sigma, amount, knee, and strength over reasonable ranges.
  • Log:
    • Edge sharpness in focus plane
    • Background blur level
    • Highlight clipping percentage

Use this to pick sane defaults.
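A minimal sweep harness for the rolloff knobs might look like this (highlight_rolloff is copied from section 4.3.3 so the sketch runs standalone; clipping_pct is a hypothetical metric name):

```python
import itertools
import numpy as np

def highlight_rolloff(img, knee=0.7, strength=0.6):
    # Copy of the spec's function (section 4.3.3).
    lum = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    over = np.clip(lum - knee, 0.0, 1.0)
    compressed = lum - over * strength * (over / (over + 1e-3))
    ratio = np.where(lum > 1e-3, compressed / (lum + 1e-6), 1.0)
    return np.clip(img * ratio[..., None], 0.0, 1.0)

def clipping_pct(img, thresh=0.99):
    # Fraction of pixels whose brightest channel sits at/above the threshold.
    return float((img.max(axis=-1) >= thresh).mean())

# Synthetic gray ramp, 8 x 256 x 3, exercising the full tonal range.
ramp = np.linspace(0.0, 1.0, 256, dtype=np.float32)
test_img = np.stack([ramp] * 3, axis=-1)[None].repeat(8, axis=0)

results = {}
for knee, strength in itertools.product([0.6, 0.7, 0.8], [0.3, 0.6, 0.9]):
    out = highlight_rolloff(test_img, knee=knee, strength=strength)
    results[(knee, strength)] = clipping_pct(out)
```

Repeating this with edge‑sharpness and blur‑level metrics on real DPR frames gives the data to pick defaults.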


7. Extensibility

7.1 Plug‑in bokeh kernels

  • Replace Gaussian blur with:
    • Disk kernels
    • Elliptical kernels
    • Depth‑dependent shape (cat’s‑eye near edges)

7.2 Lens “profiles”

  • Add a config layer:
@dataclass
class MFProfile:
    blur_k: float
    blur_sigma: float
    micro_amount: float
    knee: float
    rolloff_strength: float
    focus_sat: float
    bg_sat: float
  • Predefine profiles: “Subtle MF”, “Strong MF”, “Leica‑ish”, etc.
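A sketch of how profiles could feed the pipeline. The preset values are illustrative, not tuned, and apply_profile assumes the section 4.3 functions are in scope:

```python
from dataclasses import dataclass

@dataclass
class MFProfile:
    blur_k: float
    blur_sigma: float
    micro_amount: float
    knee: float
    rolloff_strength: float
    focus_sat: float
    bg_sat: float

# Illustrative presets; real values would come from the section 6.3 sweeps.
PROFILES = {
    "subtle_mf": MFProfile(6.0, 5.0, 0.25, 0.75, 0.4, 1.03, 0.97),
    "strong_mf": MFProfile(12.0, 10.0, 0.5, 0.65, 0.7, 1.08, 0.92),
}

def apply_profile(x, depth_norm, focus_depth, p: MFProfile):
    # Same call chain as mf_style_from_dp, parameterized by the profile.
    x = depth_dependent_blur(x, depth_norm, focus_depth, k=p.blur_k, sigma=p.blur_sigma)
    x = depth_aware_micro_contrast(x, depth_norm, focus_depth, amount=p.micro_amount)
    x = highlight_rolloff(x, knee=p.knee, strength=p.rolloff_strength)
    return depth_aware_color(x, depth_norm, focus_depth,
                             focus_sat=p.focus_sat, bg_sat=p.bg_sat)
```

This keeps every knob in one serializable object, so profiles can live in JSON/TOML configs next to the images they were tuned on.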

7.3 Neural refinement (optional later)

  • Train a small CNN to refine the deterministic output:
    • Input: original image, depth, MF‑style output
    • Target: hand‑tuned or MF reference images
  • Keep deterministic pipeline as backbone; NN only adds subtle corrections.

8. Implementation checklist

  1. DPR extraction

    • Integrate Canon SDK or existing DPR tool.
    • Verify depth map quality on test images.
  2. Core pipeline

    • Implement modules exactly as specified.
    • Run on a small DPR set.
  3. Parameter tuning

    • Adjust k, sigma, amount, knee, and strength for portraits vs landscapes.
  4. Batch runner

from pathlib import Path

def batch_process_dp(input_dir: str, output_dir: str):
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    for cr3 in Path(input_dir).glob("*.CR3"):
        out = Path(output_dir) / (cr3.stem + "_mf.jpg")
        mf_style_from_dp(str(cr3), depth_path=None, output_path=str(out))
  5. Document defaults and knobs
    • Provide a short README describing each parameter and its visual effect.