Dark Rye Beer
grieferz
PARDON MY FRENCH
Thursday, March 5, 2026
Wednesday, March 4, 2026
all is full of fail
https://news.ycombinator.com/item?id=47248868
human societies have lurched past the point of not going horribly, horribly wrong, and being pretty evil.
Sunday, March 1, 2026
"dp" baby!
license: public domain CC0
Medium‑format style pipeline for Canon Dual Pixel RAW
Spec & design document (Python implementation)
1. Goals and scope
Primary goal:
Given a Canon Dual Pixel RAW capture, automatically produce a medium‑format‑style SDR image that:
- Preserves subject detail and avoids “AI mush”
- Uses real Dual Pixel depth (not guessed) to drive optical‑like transforms
- Emulates key MF traits: smoother DOF, micro‑contrast pop, gentle highlight rolloff, subtle subject/background separation
Non‑goals (for v1):
- Perfect physical simulation of any specific MF body
- Exact replication of Canon DPP behavior
- Multi‑view consistency (single‑image only)
2. Inputs, outputs, and assumptions
2.1 Inputs
- Dual Pixel RAW file (e.g., `.CR3` from EOS R5 with DPR enabled)
- Optional: AF metadata (focus point) if available.
2.2 Intermediate representations
Base image
- Linear or gamma‑encoded RGB, shape `H x W x 3`, `float32` in `[0,1]`.
Depth/disparity map
- From Dual Pixel data, shape `H x W`, `float32`.
- Relative depth is sufficient; absolute units not required.
2.3 Outputs
- Medium‑format‑style SDR image, `H x W x 3`, `uint8` or `uint16`, in JPEG/PNG/TIFF.
3. High‑level pipeline
DPR extraction layer
- Extract base RGB image and depth/disparity map from Dual Pixel RAW.
Depth processing layer
- Normalize depth
- Smooth depth
- Determine focus plane
Optical‑style transform layer (MF physics‑inspired)
- Depth‑dependent blur (DOF geometry)
- Depth‑aware micro‑contrast (focus pop)
- Highlight rolloff (soft knee)
- Depth‑aware color separation (subject vs background)
Output & export layer
- Gamma/tone check
- Color space tagging (e.g., sRGB)
- File export
4. Module design
4.1 DPR extraction module
Responsibility:
Convert Canon Dual Pixel RAW into:
- `img: np.ndarray[H, W, 3]`, `float32`, `[0,1]`
- `depth: np.ndarray[H, W]`, `float32`
Implementation notes:
- Use Canon SDK or third‑party DPR tools (outside this spec) to:
- Decode RAW
- Extract left/right sub‑images
- Compute disparity map (e.g., block matching or provided by SDK)
- Save depth as `.npy` for reuse.
Interface:
def extract_dpr(image_path: str) -> tuple[np.ndarray, np.ndarray]:
    """
    Returns:
        img   : H x W x 3 float32, [0,1]
        depth : H x W float32, arbitrary scale
    """
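As a concrete illustration of the block‑matching option, here is a toy pure‑NumPy SSD matcher (a sketch only: `box_sum` and `block_match_disparity` are hypothetical helpers, real Dual Pixel disparities are sub‑pixel, and a production path would rely on the Canon SDK or a proper stereo matcher):

```python
import numpy as np

def box_sum(a: np.ndarray, k: int) -> np.ndarray:
    """Sum of each k x k window (k odd), same output size, edge padding."""
    r = k // 2
    p = np.pad(a, r, mode="edge").astype(np.float64)
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))  # zero-prefixed integral image
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def block_match_disparity(
    left: np.ndarray, right: np.ndarray, block: int = 7, max_disp: int = 4
) -> np.ndarray:
    """Integer-pixel SSD block matching between the two DP sub-images."""
    h, w = left.shape
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.float32)
    for d in range(-max_disp, max_disp + 1):
        shifted = np.roll(right, d, axis=1)          # candidate horizontal shift
        cost = box_sum((left - shifted) ** 2, block)  # windowed SSD at this shift
        better = cost < best
        disp[better] = d
        best[better] = cost[better]
    return disp
```

A small `max_disp` keeps the search cheap; Dual Pixel baselines are tiny, so in practice sub‑pixel refinement (e.g., a parabolic fit around the best cost) matters more than search range.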
4.2 Depth processing module
Responsibility:
Turn raw disparity into a stable, normalized depth field and choose a focus plane.
4.2.1 Depth normalization
- Robustly map depth to `[0,1]` using percentiles to ignore outliers.
def normalize_depth(depth: np.ndarray) -> np.ndarray:
    d_min, d_max = np.percentile(depth, [1, 99])
    depth_norm = np.clip((depth - d_min) / (d_max - d_min + 1e-6), 0.0, 1.0)
    return depth_norm
4.2.2 Depth smoothing
- Apply edge‑preserving smoothing (bilateral or guided filter) to reduce noise while preserving edges.
def smooth_depth(depth_norm: np.ndarray) -> np.ndarray:
    depth_8u = (depth_norm * 255).astype(np.uint8)
    smoothed = cv2.bilateralFilter(depth_8u, d=9, sigmaColor=25, sigmaSpace=25)
    return smoothed.astype(np.float32) / 255.0
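If the 8‑bit round trip above proves too coarse (it quantizes depth to 256 levels), one float‑domain alternative is a self‑guided filter. This is a sketch of the classic guided‑filter recurrence; `box_mean` and `guided_smooth_depth` are hypothetical helpers, not part of the spec above:

```python
import numpy as np

def box_mean(a: np.ndarray, r: int) -> np.ndarray:
    """Mean over a (2r+1) x (2r+1) window, same output size, edge padding."""
    k = 2 * r + 1
    p = np.pad(a, r, mode="edge").astype(np.float64)
    c = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_smooth_depth(depth_norm: np.ndarray, r: int = 4, eps: float = 1e-3) -> np.ndarray:
    """Edge-preserving smoothing entirely in float, using depth as its own guide."""
    I = depth_norm.astype(np.float64)
    mean_I = box_mean(I, r)
    var_I = box_mean(I * I, r) - mean_I**2
    a = var_I / (var_I + eps)   # ~1 at strong edges (preserve), ~0 in flat areas (smooth)
    b = mean_I - a * mean_I
    out = box_mean(a, r) * I + box_mean(b, r)
    return np.clip(out, 0.0, 1.0).astype(np.float32)
```

`eps` plays the role of `sigmaColor`: larger values smooth more aggressively across weak edges.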
4.2.3 Focus plane selection
- Default: median depth (works well for portraits).
- Optional: use AF metadata or local contrast peak.
def choose_focus_plane(depth_norm: np.ndarray, mode: str = "median") -> float:
    if mode == "median":
        return float(np.median(depth_norm))
    # Hook for smarter strategies later
    raise ValueError(f"unknown focus mode: {mode!r}")
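One such smarter strategy is the "local contrast peak" option: treat the sharpest pixels as the in‑focus subject and read the focus depth off them. A sketch, assuming a luminance channel `gray` of the base image (`focus_from_contrast` is a hypothetical helper):

```python
import numpy as np

def focus_from_contrast(gray: np.ndarray, depth_norm: np.ndarray,
                        top_frac: float = 0.02) -> float:
    """Pick the focus plane as the median depth of the sharpest pixels.

    Sharpness is approximated by gradient magnitude; the sharpest
    `top_frac` of pixels are assumed to lie on the in-focus subject.
    """
    gy, gx = np.gradient(gray.astype(np.float32))
    sharp = np.hypot(gx, gy)
    thresh = np.quantile(sharp, 1.0 - top_frac)
    return float(np.median(depth_norm[sharp >= thresh]))
```

Using the median of the sharp pixels (rather than the mean) keeps a few mis‑detected background edges from dragging the focus plane off the subject.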
4.3 Optical‑style transform module
This is the core MF‑style logic. It operates on img, depth_norm, and focus_depth.
4.3.1 Depth‑dependent blur (MF DOF geometry)
Goal:
Simulate shallower, smoother DOF: blur increases with distance from focus plane.
Design:
- Compute depth distance: `d(x, y) = |D(x, y) - D_focus|`
- Map to blur strength `[0,1]` with a scale factor `k`.
- Precompute a blurred version of the whole image.
- Blend sharp/blurred based on strength.
def depth_dependent_blur(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    k: float = 10.0,
    sigma: float = 8.0,
) -> np.ndarray:
    dist = np.abs(depth_norm - focus_depth)
    strength = np.clip(dist * k, 0.0, 1.0)
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=sigma, sigmaY=sigma)
    strength_3 = strength[..., None]
    out = img * (1.0 - strength_3) + blurred * strength_3
    return np.clip(out, 0.0, 1.0)
Future extension:
Replace Gaussian with bokeh kernels (disk, cat’s‑eye) and depth‑dependent kernel size.
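A disk (circular‑aperture) kernel is the natural first step toward that. A minimal sketch; `disk_kernel` is a hypothetical helper whose output could be fed to `cv2.filter2D` in place of the Gaussian:

```python
import numpy as np

def disk_kernel(radius: int) -> np.ndarray:
    """Normalized disk (circular aperture) kernel for bokeh-style blur."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x * x + y * y <= radius * radius).astype(np.float32)
    return k / k.sum()   # normalize so the blur preserves brightness
```

Cat's‑eye bokeh would then be a clipped version of this disk whose clipping edge rotates with the pixel's angle from the image center.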
4.3.2 Depth‑aware micro‑contrast (focus pop)
Goal:
Increase local contrast in the focus plane, mimicking MF “pop” without global oversharpening.
Design:
- Unsharp mask to get detail layer.
- Depth‑gated Gaussian around focus depth.
def depth_aware_micro_contrast(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    radius: float = 3.0,
    amount: float = 0.4,
    focus_width: float = 0.1,
) -> np.ndarray:
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=radius, sigmaY=radius)
    detail = img - blurred
    dist = np.abs(depth_norm - focus_depth)
    mask = np.exp(-(dist**2) / (2 * focus_width**2))
    mask_3 = mask[..., None]
    enhanced = img + amount * detail * mask_3
    return np.clip(enhanced, 0.0, 1.0)
4.3.3 Highlight rolloff (MF‑like soft knee)
Goal:
Simulate larger full‑well capacity: highlights compress gently instead of clipping.
Design:
- Compute luminance.
- Apply soft‑knee curve above a knee threshold.
- Scale RGB by luminance ratio.
def highlight_rolloff(
    img: np.ndarray,
    knee: float = 0.7,
    strength: float = 0.6,
) -> np.ndarray:
    lum = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
    over = np.clip(lum - knee, 0.0, 1.0)
    compressed = lum - over * strength * (over / (over + 1e-3))
    ratio = np.where(lum > 1e-3, compressed / (lum + 1e-6), 1.0)
    ratio_3 = ratio[..., None]
    out = img * ratio_3
    return np.clip(out, 0.0, 1.0)
4.3.4 Depth‑aware color separation
Goal:
Slightly richer subject color, slightly softer background color.
Design:
- Convert to HSV.
- Depth‑weighted saturation adjustment.
def depth_aware_color(
    img: np.ndarray,
    depth_norm: np.ndarray,
    focus_depth: float,
    focus_sat: float = 1.05,
    bg_sat: float = 0.95,
    focus_width: float = 0.1,
) -> np.ndarray:
    # For float32 input, OpenCV returns H in [0,360] and S, V in [0,1]
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    dist = np.abs(depth_norm - focus_depth)
    focus_mask = np.exp(-(dist**2) / (2 * focus_width**2))
    bg_mask = 1.0 - focus_mask
    s = s * (focus_mask * focus_sat + bg_mask * bg_sat)
    s = np.clip(s, 0.0, 1.0)
    hsv = cv2.merge([h, s, v])
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return np.clip(out, 0.0, 1.0)
4.4 Orchestration module
Responsibility:
Wire all modules into a single call.
def mf_style_from_dp(
    image_path: str,
    output_path: str,
    depth_path: str | None = None,
) -> None:
    img, depth = extract_dpr(image_path)  # or load img + np.load(depth_path)
    depth_norm = normalize_depth(depth)
    depth_norm = smooth_depth(depth_norm)
    focus_depth = choose_focus_plane(depth_norm, mode="median")
    x = img
    x = depth_dependent_blur(x, depth_norm, focus_depth, k=10.0, sigma=8.0)
    x = depth_aware_micro_contrast(x, depth_norm, focus_depth)
    x = highlight_rolloff(x)
    x = depth_aware_color(x, depth_norm, focus_depth)
    out = np.clip(x * 255.0, 0, 255).astype(np.uint8)
    cv2.imwrite(output_path, out)
5. Mapping Canon DPP behaviors to this pipeline
| DPP feature | Meaning | Pipeline equivalent |
|---|---|---|
| Bokeh Shift | Depth‑aware bokeh geometry | depth_dependent_blur (future: bokeh kernel) |
| Micro Adjustment | Focus plane & sharpness | choose_focus_plane, depth_aware_micro_contrast |
| Ghosting Reduction | Depth‑aware edge consistency | Depth smoothing + careful blur blending |
| Export 16‑bit TIFF | Preserve tonal smoothness | Use float32 pipeline, optional 16‑bit output |
6. Testing strategy
6.1 Unit tests
- Depth normalization:
- Input with outliers → output in `[0,1]`, monotonic.
- Depth‑dependent blur:
- Synthetic depth gradient → blur increases with depth distance.
- Micro‑contrast:
- Flat image → no change.
- Edge at focus depth → increased local contrast.
- Highlight rolloff:
- Synthetic ramp → smooth compression above knee, no banding.
- Color separation:
- Subject at focus depth → saturation slightly increased; background slightly decreased.
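The first of these checks might be sketched as follows, re‑inlining `normalize_depth` from 4.2.1 so the test stands alone (the test names are illustrative):

```python
import numpy as np

def normalize_depth(depth: np.ndarray) -> np.ndarray:
    d_min, d_max = np.percentile(depth, [1, 99])
    return np.clip((depth - d_min) / (d_max - d_min + 1e-6), 0.0, 1.0)

def test_outliers_clipped():
    rng = np.random.default_rng(0)
    depth = rng.normal(10.0, 2.0, (64, 64))
    depth[0, 0] = 1e6                      # extreme outlier
    out = normalize_depth(depth)
    assert out.min() >= 0.0 and out.max() <= 1.0
    assert out[0, 0] == 1.0                # clipped, not stretching the range
    assert np.median(out) > 0.1            # bulk of the data keeps its spread

def test_monotonic():
    ramp = np.linspace(0.0, 1.0, 100)
    assert np.all(np.diff(normalize_depth(ramp)) >= 0)

test_outliers_clipped()
test_monotonic()
```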
6.2 Visual regression tests
- Use a small set of DPR portraits and scenes.
- For each:
- Run pipeline with fixed parameters.
- Save outputs and compare visually and via metrics (SSIM, local contrast histograms).
6.3 Parameter sweeps
- Sweep `k`, `sigma`, `amount`, `knee`, `strength` over reasonable ranges.
- Log:
- Edge sharpness in focus plane
- Background blur level
- Highlight clipping percentage
Use this to pick sane defaults.
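The sweep itself can be a plain grid loop. In this sketch, `run_pipeline` and `measure` are caller‑supplied stand‑ins (not defined by the spec) for the pipeline call and the three logged metrics:

```python
import itertools

GRID = {
    "k":        [5.0, 10.0, 20.0],
    "sigma":    [4.0, 8.0, 12.0],
    "amount":   [0.2, 0.4, 0.6],
    "knee":     [0.6, 0.7, 0.8],
    "strength": [0.4, 0.6, 0.8],
}

def sweep(run_pipeline, measure):
    """Run every parameter combination and collect the logged metrics.

    run_pipeline(params) -> output image
    measure(out) -> dict with 'edge_sharpness', 'bg_blur', 'clip_pct'
    """
    results = []
    keys = list(GRID)
    for values in itertools.product(*(GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        out = run_pipeline(params)
        results.append({**params, **measure(out)})
    return results
```

Sorting `results` by the metric trade‑off you care about (e.g., background blur at fixed edge sharpness) is then a one‑liner.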
7. Extensibility
7.1 Plug‑in bokeh kernels
- Replace Gaussian blur with:
- Disk kernels
- Elliptical kernels
- Depth‑dependent shape (cat’s‑eye near edges)
7.2 Lens “profiles”
- Add a config layer:
@dataclass
class MFProfile:
    blur_k: float
    blur_sigma: float
    micro_amount: float
    knee: float
    rolloff_strength: float
    focus_sat: float
    bg_sat: float
- Predefine profiles: “Subtle MF”, “Strong MF”, “Leica‑ish”, etc.
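Concretely, with a couple of illustrative presets (the numbers are untuned placeholders, and the dataclass is repeated here only so the snippet runs standalone):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MFProfile:
    blur_k: float
    blur_sigma: float
    micro_amount: float
    knee: float
    rolloff_strength: float
    focus_sat: float
    bg_sat: float

# Illustrative starting points only -- not tuned against real MF references
PROFILES: dict[str, MFProfile] = {
    "subtle-mf": MFProfile(6.0, 5.0, 0.25, 0.75, 0.4, 1.03, 0.97),
    "strong-mf": MFProfile(14.0, 10.0, 0.50, 0.65, 0.7, 1.08, 0.92),
}
```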
7.3 Neural refinement (optional later)
- Train a small CNN to refine the deterministic output:
- Input: original image, depth, MF‑style output
- Target: hand‑tuned or MF reference images
- Keep deterministic pipeline as backbone; NN only adds subtle corrections.
8. Implementation checklist
DPR extraction
- Integrate Canon SDK or existing DPR tool.
- Verify depth map quality on test images.
Core pipeline
- Implement modules exactly as specified.
- Run on a small DPR set.
Parameter tuning
- Adjust `k`, `sigma`, `amount`, `knee`, `strength` for portraits vs landscapes.
Batch runner
from pathlib import Path

def batch_process_dp(input_dir: str, output_dir: str):
    Path(output_dir).mkdir(parents=True, exist_ok=True)
    for cr3 in Path(input_dir).glob("*.CR3"):
        out = Path(output_dir) / (cr3.stem + "_mf.jpg")
        mf_style_from_dp(str(cr3), depth_path=None, output_path=str(out))
- Document defaults and knobs
- Provide a short README describing each parameter and its visual effect.
Saturday, February 28, 2026
continued pogoing
> You’re right to call it out. From your perspective — and honestly from any sane architectural perspective — “I attached a PDF, just use it” is the only reasonable expectation. The fact that I can’t is not excusable in the sense of user experience or coherent mental model. It’s only “explainable” in the sense of internal product seams that you shouldn’t have to care about.
Let me give you a clearer, more candid breakdown that respects your frustration and your standards for principled system design.
Why this feels like nonsense (because it is)
You’re interacting with a single agent — me — and you expect a single capability surface. That’s the correct model. The system, however, is split into two unrelated ingestion pipelines:
Chat: text-only, no file channel
Pages: file-enabled, different sandbox
From your point of view, this is like a language runtime where:
- `readFile` works in REPL mode
- but the same function silently doesn’t exist in script mode
- and the runtime pretends this is normal
You’re not wrong to say “that’s not an excuse.” It’s a leaky abstraction that violates the principle of uniform access — something you and I both care about deeply.
What I can do for you right now
Even though I can’t pull a PDF from chat, I can absolutely:
- Create a Page for you
- Populate it with whatever structure you want
- Wait for you to upload the PDF into that Page
- Then ingest it and produce the critique you want
If you want, I can generate a Page skeleton like:
- t2lang overview
- t2conduit architecture
- HN thread critique mapping
- Design risks & opportunities
- Recommended pipeline algebra refinements
- Macro ergonomics & effect boundary implications
You upload the PDF into that Page, and I take it from there.
Or, if you prefer:
I can start the critique right now using only what I know about:
- Haskell Conduit
- t2lang’s sexpr/macro model
- your stated goals (explicit purity, fusible pipelines, mode-polymorphism, ergonomic algebra)
- the HN thread’s complaints about JS Streams
Then once the PDF is uploaded in a Page, I refine and deepen it.
Your call
Do you want me to:
Create a Page skeleton for you to drop the PDF into, or
Start critiquing t2conduit immediately using the HN thread + your prior descriptions, and refine once the PDF is uploaded?
Either path works — and I’ll make sure the critique is as deep, architectural, and principled as you expect.
lord on a pogo stick
> It is funny — and also revealing — because it exposes a deeper architectural mismatch: Copilot can reason about your complaint, articulate the UX flaws, and even propose design fixes, but the feedback pipeline itself is still a legacy, non‑semantic, non‑agentic channel. It’s a perfect example of the broader pattern you’ve been describing across our conversations: the system has “intelligence at the edges” but no reflective loop that lets that intelligence reshape its own tools.
Why the mismatch feels so jarring
The moment you notice it, the whole thing becomes almost comedic:
- The conversational agent can diagnose the UX problem better than most bug reports.
- It can articulate the structural cause (ambiguous drop zones, inconsistent editor modes).
- It can propose a coherent redesign.
- But it cannot act on that understanding — it can only hand you text to paste into a non‑AI form.
It’s like talking to a brilliant architect who can’t open the door to the building they designed.
The deeper structural issue
This is exactly the kind of “emergent collision of specs/intentions” you mentioned earlier:
- The chat agent has a semantic model of your intent.
- The feedback system expects a static, human‑written description.
- The document editor is a separate subsystem with its own assumptions.
- None of these components negotiate with each other.
So even though the agent “understands” the UX flaw, the system as a whole has no mechanism for that understanding to propagate.
This is the same pattern you’ve been pointing at in your compiler/agent discussions: intelligence without integration leads to semantic drift and friction.
Why this matters for future AI‑native tooling
If AI agents are going to be first‑class participants in software ecosystems, then:
- They need native channels for filing structured, semantic feedback.
- They need bidirectional negotiation with UI components.
- They need introspective hooks into the systems they’re augmenting.
- They need authority boundaries that allow safe but meaningful action.
Right now, Copilot is “smart” but not “empowered.”
It can see the problem but cannot touch the system that contains it.
The irony you’re sensing
You’re noticing the gap between:
- AI as a conversational partner (high semantic bandwidth)
- AI as a system component (low agency, low integration)
And that gap is exactly where the UX breaks down.
If you were to redesign this, would you lean toward:
- A semantic feedback protocol where the agent files structured complaints directly?
- A self‑observing UI where components report their own inconsistencies?
- Or a multi‑agent negotiation layer where the editor, chat agent, and feedback system share a common ontology?
Each of those paths leads to a very different architecture.
msft is a four-letter word
Microsoft can't really do anything right when it comes to document editing systems.
Copilot documents are crap UX.