from my perspective, most ui and ux is horribly wrong.
like, why doesn't `npx tsc src/foo.ts` read the tsconfig.json file for the default settings? whatever!!
"That’s because Claude Code is designed to be secure by default, asking before every action that could change your system or code."
that is not actually security.
that's also not good ux.
man, i kinda hate software.
(Unity DOTS, Bevy, Frostbite internal systems)
| Engine / Game | Why it fits archetype ECS |
|---|---|
| Unity DOTS demos (Megacity, Entity Component Samples) | Tens of thousands of entities, heavy physics, chunk‑optimized memory layout |
| Bevy engine games (Rust gamedev scene) | Hot‑loop systems, predictable scheduling, parallelism |
| Frostbite engine subsystems (Battlefield series) | Massive crowds, destructible environments, physics‑dense scenes |
(Flecs, EnTT, Svelto.ECS, many custom engines)
| Engine / Game | Why it fits sparse‑set ECS |
|---|---|
| Flecs‑based games (indie & AA titles) | Flexible component sets, ergonomic queries |
| EnTT‑based engines (C++ gamedev) | Great for tools, editors, gameplay logic |
| Svelto.ECS (used in Unity‑based production games) | Hybrid ECS with strong separation of concerns |
Sparse‑set ECS is the “default good choice” for most games.
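The sparse-set layout these engines use can be sketched in a few lines. This is an illustrative toy (not Flecs or EnTT API): a packed dense array of components, a sparse entity→index map, and O(1) swap-remove so iteration stays cache-friendly.

```python
# Minimal sparse-set component store (illustrative, not a real ECS API):
# components live packed in `dense`, entity ids map to dense slots via `sparse`.
class SparseSet:
    def __init__(self):
        self.dense = []     # component data, tightly packed
        self.entities = []  # entity id owning each dense slot
        self.sparse = {}    # entity id -> index into dense

    def add(self, entity, component):
        self.sparse[entity] = len(self.dense)
        self.dense.append(component)
        self.entities.append(entity)

    def remove(self, entity):
        i = self.sparse.pop(entity)
        last_entity = self.entities[-1]
        # swap-remove: move the last element into the freed slot, then shrink
        self.dense[i] = self.dense[-1]
        self.entities[i] = last_entity
        self.dense.pop()
        self.entities.pop()
        if last_entity != entity:
            self.sparse[last_entity] = i

    def get(self, entity):
        return self.dense[self.sparse[entity]]
```

Systems iterate `dense` directly (contiguous memory), while `sparse` gives random access by entity — which is exactly the flexibility/ergonomics trade the table above describes.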
(Godot 4, Roblox, Unreal’s internal component model)
| Engine / Game | Why it fits hybrid ECS |
|---|---|
| Godot 4 | Nodes for creators, ECS under the hood for performance |
| Roblox | Hierarchical data model + component‑like behaviors |
| Unreal Engine (Actor + Component system) | OO façade, data‑oriented internals |
This is the “best of both worlds” dialect for engines that must be ergonomic.
(Roguelikes, small engines, hobby engines, ECS‑lite frameworks)
| Engine / Game | Why it fits tag‑heavy ECS |
|---|---|
| RogueBasin ECS tutorials | Simple tags for AI, FOV, inventory |
| Amethyst (early versions) | Tag‑driven systems |
| Many custom roguelike engines | Minimal data, lots of markers |
This is perfect for prototypes, small games, and teaching.
This is the one you’re designing:
OO‑looking code → ECS IR → SoA kernels, with an AI‑agent co‑designer.
| System | Why it’s similar |
|---|---|
| Unity DOTS with Roslyn analyzers | OO → ECS transformation |
| Svelto.ECS | Strong separation of logic and data |
| Frostbite’s internal tools | Codegen + data‑oriented pipelines |
| ISPC / Halide | High‑level code → optimized kernels |
This is the “future dialect” — and your agent‑native approach makes it viable.
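The OO → SoA payoff these systems chase can be shown in miniature. A toy numpy sketch (names hypothetical): the same particle update written against an array-of-structs (pointer-chasing per object) and against struct-of-arrays (one vectorizable kernel).

```python
import numpy as np

# AoS: a list of objects, updated one at a time.
class ParticleAoS:
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

def step_aos(particles, dt):
    for p in particles:      # per-object dispatch, scattered memory
        p.x += p.vx * dt

# SoA: parallel arrays per field; the "kernel" is one contiguous update.
def step_soa(x, vx, dt):
    x += vx * dt             # vectorized, cache- and SIMD-friendly

aos = [ParticleAoS(float(i), 1.0) for i in range(4)]
step_aos(aos, 0.5)

x = np.arange(4, dtype=np.float64)
vx = np.ones(4)
step_soa(x, vx, 0.5)
```

The transformation pipeline's job is to let authors write the first form and execute the second.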
"You’re asking exactly the right question at exactly the right moment."
for the 100th time.
https://news.ycombinator.com/item?id=47248868
human societies have lurched past the point of not going horribly, horribly wrong, and being pretty evil.
license: public domain CC0
Spec & design document (Python implementation)
Primary goal:
Given a Canon Dual Pixel RAW capture, automatically produce a medium‑format‑style SDR image that:
Non‑goals (for v1):
Input: Dual Pixel RAW (.CR3 from EOS R5 with DPR enabled)
Base image: H x W x 3, float32 in [0,1]
Depth/disparity map: H x W, float32
Output: H x W x 3, uint8 or uint16, in JPEG/PNG/TIFF

Layers:
DPR extraction layer
Depth processing layer
Optical‑style transform layer (MF physics‑inspired)
Output & export layer
Responsibility:
Convert Canon Dual Pixel RAW into:
img: np.ndarray[H, W, 3], float32, [0,1]
depth: np.ndarray[H, W], float32
Implementation notes:
Cache the extracted depth map as .npy for reuse.
Interface:
```python
import numpy as np

def extract_dpr(image_path: str) -> tuple[np.ndarray, np.ndarray]:
    """
    Returns:
        img   : H x W x 3 float32, [0,1]
        depth : H x W float32, arbitrary scale
    """
```
Responsibility:
Turn raw disparity into a stable, normalized depth field and choose a focus plane.
Normalize depth to [0,1] using percentiles to ignore outliers.

```python
def normalize_depth(depth: np.ndarray) -> np.ndarray:
    d_min, d_max = np.percentile(depth, [1, 99])
    depth_norm = np.clip((depth - d_min) / (d_max - d_min + 1e-6), 0.0, 1.0)
    return depth_norm
```
```python
import cv2

def smooth_depth(depth_norm: np.ndarray) -> np.ndarray:
    depth_8u = (depth_norm * 255).astype(np.uint8)
    smoothed = cv2.bilateralFilter(depth_8u, d=9, sigmaColor=25, sigmaSpace=25)
    return smoothed.astype(np.float32) / 255.0
```
```python
def choose_focus_plane(depth_norm: np.ndarray, mode: str = "median") -> float:
    if mode == "median":
        return float(np.median(depth_norm))
    # Hook for smarter strategies later
    raise ValueError(f"unknown focus mode: {mode}")
```
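A quick end-to-end check of this layer on synthetic data (`normalize_depth` is repeated so the snippet runs standalone; `smooth_depth` is skipped to keep it OpenCV-free):

```python
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    d_min, d_max = np.percentile(depth, [1, 99])
    return np.clip((depth - d_min) / (d_max - d_min + 1e-6), 0.0, 1.0)

# Synthetic disparity: a horizontal gradient plus one outlier pixel that
# plain min/max scaling would blow up on; the percentile version shrugs it off.
depth = np.tile(np.linspace(0, 100, 64), (64, 1))
depth[0, 0] = 1e6

depth_norm = normalize_depth(depth)
focus = float(np.median(depth_norm))  # "median" focus-plane strategy
```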
This is the core MF‑style logic. It operates on img, depth_norm, and focus_depth.
Goal:
Simulate shallower, smoother DOF: blur increases with distance from focus plane.
Design:
Map distance from the focus plane to a blur strength in [0,1] with a scale factor k.

```python
def depth_dependent_blur(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    k: float = 10.0,
    sigma: float = 8.0,
) -> np.ndarray:
    dist = np.abs(depth_norm - focus_depth)
    strength = np.clip(dist * k, 0.0, 1.0)
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma, sigmaY=sigma)
    strength_3 = strength[..., None]
    out = img * (1.0 - strength_3) + blurred * strength_3
    return np.clip(out, 0.0, 1.0)
```
Future extension:
Replace Gaussian with bokeh kernels (disk, cat’s‑eye) and depth‑dependent kernel size.
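One possible shape for that extension: build a normalized disk kernel and convolve with it (e.g. via `cv2.filter2D`) instead of `GaussianBlur`. This sketch only constructs the kernel; the radius and the swap-in point are illustrative.

```python
import numpy as np

def disk_kernel(radius: int) -> np.ndarray:
    """Flat circular kernel: crisp-edged bokeh discs instead of Gaussian mush."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (xx**2 + yy**2 <= radius**2).astype(np.float32)
    return kernel / kernel.sum()  # normalize so brightness is preserved

k = disk_kernel(4)
# Inside depth_dependent_blur this would replace the Gaussian, roughly:
#   blurred = cv2.filter2D(img, -1, disk_kernel(radius))
```

Depth-dependent kernel size would mean picking `radius` per pixel (or per depth band), which is where the cost starts to climb.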
Goal:
Increase local contrast in the focus plane, mimicking MF “pop” without global oversharpening.
Design:
```python
def depth_aware_micro_contrast(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    radius: float = 3.0,
    amount: float = 0.4,
    focus_width: float = 0.1,
) -> np.ndarray:
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=radius, sigmaY=radius)
    detail = img - blurred
    dist = np.abs(depth_norm - focus_depth)
    mask = np.exp(-(dist**2) / (2 * focus_width**2))
    mask_3 = mask[..., None]
    enhanced = img + amount * detail * mask_3
    return np.clip(enhanced, 0.0, 1.0)
```
Goal:
Simulate larger full‑well capacity: highlights compress gently instead of clipping.
Design:
```python
def highlight_rolloff(
    img: np.ndarray,
    knee: float = 0.7,
    strength: float = 0.6,
) -> np.ndarray:
    # OpenCV images are BGR, so the Rec. 709 luma weights go B, G, R
    lum = 0.0722 * img[..., 0] + 0.7152 * img[..., 1] + 0.2126 * img[..., 2]
    over = np.clip(lum - knee, 0.0, 1.0)
    compressed = lum - over * strength * (over / (over + 1e-3))
    ratio = np.where(lum > 1e-3, compressed / (lum + 1e-6), 1.0)
    ratio_3 = ratio[..., None]
    out = img * ratio_3
    return np.clip(out, 0.0, 1.0)
```
Goal:
Slightly richer subject color, slightly softer background color.
Design:
```python
def depth_aware_color(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    focus_sat: float = 1.05,
    bg_sat: float = 0.95,
    focus_width: float = 0.1,
) -> np.ndarray:
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    dist = np.abs(depth_norm - focus_depth)
    focus_mask = np.exp(-(dist**2) / (2 * focus_width**2))
    bg_mask = 1.0 - focus_mask
    s = s * (focus_mask * focus_sat + bg_mask * bg_sat)
    s = np.clip(s, 0.0, 1.0)
    hsv = cv2.merge([h, s, v])
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return np.clip(out, 0.0, 1.0)
```
Responsibility:
Wire all modules into a single call.
```python
def mf_style_from_dp(
    image_path: str,
    depth_path: str | None,
    output_path: str,
) -> None:
    img, depth = extract_dpr(image_path)  # or load img + np.load(depth_path)
    depth_norm = normalize_depth(depth)
    depth_norm = smooth_depth(depth_norm)
    focus_depth = choose_focus_plane(depth_norm, mode="median")

    x = img
    x = depth_dependent_blur(x, depth_norm, focus_depth, k=10.0, sigma=8.0)
    x = depth_aware_micro_contrast(x, depth_norm, focus_depth)
    x = highlight_rolloff(x)
    x = depth_aware_color(x, depth_norm, focus_depth)

    out = np.clip(x * 255.0, 0, 255).astype(np.uint8)
    cv2.imwrite(output_path, out)
```
| DPP feature | Meaning | Pipeline equivalent |
|---|---|---|
| Bokeh Shift | Depth‑aware bokeh geometry | depth_dependent_blur (future: bokeh kernel) |
| Micro Adjustment | Focus plane & sharpness | choose_focus_plane, depth_aware_micro_contrast |
| Ghosting Reduction | Depth‑aware edge consistency | Depth smoothing + careful blur blending |
| Export 16‑bit TIFF | Preserve tonal smoothness | Use float32 pipeline, optional 16‑bit output |
Check that outputs stay in [0,1] and that the depth→blur mapping is monotonic. Sweep k, sigma, amount, knee, strength over reasonable ranges. Use this to pick sane defaults.
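A minimal grid-sweep skeleton for picking those defaults (the grid values and the `score` objective here are placeholders, not tuned numbers):

```python
from itertools import product

# Hypothetical parameter grid; extend with amount/strength as needed.
grid = {
    "k": [5.0, 10.0, 20.0],
    "sigma": [4.0, 8.0],
    "knee": [0.6, 0.7, 0.8],
}

def score(params):
    # Placeholder objective: stands in for a real metric (e.g. perceptual
    # similarity to reference MF renders). Prefers mid knee and moderate k.
    return abs(params["knee"] - 0.7) + abs(params["k"] - 10.0) / 10.0

names = list(grid)
best = min(
    (dict(zip(names, combo)) for combo in product(*grid.values())),
    key=score,
)
```

In practice `score` would run `mf_style_from_dp` on a small reference set and compare against hand-graded targets.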
```python
from dataclasses import dataclass

@dataclass
class MFProfile:
    blur_k: float
    blur_sigma: float
    micro_amount: float
    knee: float
    rolloff_strength: float
    focus_sat: float
    bg_sat: float
```
DPR extraction
Core pipeline
Parameter tuning: k, sigma, amount, knee, strength for portraits vs landscapes.
Batch runner
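The portrait-vs-landscape split might start from presets like these (the `MFProfile` definition is repeated so the snippet runs standalone, and every number here is an illustrative guess, not a tuned value):

```python
from dataclasses import dataclass

@dataclass
class MFProfile:
    blur_k: float
    blur_sigma: float
    micro_amount: float
    knee: float
    rolloff_strength: float
    focus_sat: float
    bg_sat: float

# Guessed starting points: portraits get heavier blur and subject pop,
# landscapes keep the frame mostly sharp with gentler rolloff.
PORTRAIT = MFProfile(blur_k=12.0, blur_sigma=10.0, micro_amount=0.5,
                     knee=0.65, rolloff_strength=0.7,
                     focus_sat=1.08, bg_sat=0.92)
LANDSCAPE = MFProfile(blur_k=4.0, blur_sigma=4.0, micro_amount=0.3,
                      knee=0.75, rolloff_strength=0.5,
                      focus_sat=1.02, bg_sat=1.0)
```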
```python
from pathlib import Path

def batch_process_dp(input_dir: str, output_dir: str) -> None:
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    for cr3 in Path(input_dir).glob("*.CR3"):
        out = Path(output_dir) / (cr3.stem + "_mf.jpg")
        mf_style_from_dp(str(cr3), depth_path=None, output_path=str(out))
```