GreenM3DC's Focus on Delivering: Borrowing Gary Starkweather's Method for Inventing the Laser Printer

Coherence and focus

Published: 2026-04-28

Gary Starkweather was solving an information transfer problem.

The original problem was straightforward: Xerox wanted to send a copy from one copier to another. Transfer the image across a wire. Starkweather worked on it and ran into the wall anyone would hit: white light is incoherent. Every photon has a different phase, a different frequency, a different direction. You cannot preserve precise spatial information on an incoherent carrier: the image degrades, and the signal falls apart before it arrives.

A laser is different. Its photons are coherent: same phase, same frequency, same direction. And once you have a coherent source, you can use optics to focus it — direct it exactly where it needs to go, pixel by pixel, without loss. The laser solved the coherence problem that white light could not.

Then Starkweather saw the deeper thing. If you are sending a coherent signal anyway, why carry the entire image? A fax sends the complete picture — every pixel, whether it matters or not. But a coherent digital signal can carry structure: the information that describes the image, not the image itself. Send the structure. Render it on the receiving end. The result is more precise, faster, and far more efficient than copying the whole surface and transmitting it. That insight is the laser printer. Not a better copier. A new class of machine: one that transfers structured information and renders it onto a physical surface.
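
To make the contrast concrete, here is a minimal Python sketch (toy page-description commands of my own invention, not Starkweather's or PARC's actual encoding) comparing the cost of shipping every pixel against shipping the structure and rendering it at the receiving end:

    WIDTH, HEIGHT = 2550, 3300  # one US Letter page at 300 dpi

    # A fax-style transfer carries every pixel, whether it matters or not.
    pixel_bytes = WIDTH * HEIGHT // 8  # 1 bit per pixel: about 1 MB per page

    # A structured transfer carries the commands that describe the page.
    commands = [
        ("text", 300, 300, "Invoice #4021"),
        ("line", 300, 360, 2250, 360),
        ("text", 300, 420, "Total: $1,250.00"),
    ]
    structure_bytes = sum(len(repr(cmd)) for cmd in commands)

    def render(page_commands, width, height):
        """Rasterize the structure into pixels on the receiving end."""
        page = [[0] * width for _ in range(height)]
        # Rasterization elided: the pixels are produced locally, at full
        # precision, from a small structured description.
        return page

    print(f"pixel transfer:     {pixel_bytes:,} bytes")
    print(f"structure transfer: {structure_bytes:,} bytes")

Three tuples versus a megabyte of pixels: the receiving end does the rendering, so the wire only has to carry the structure.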

GreenM3DC is solving the same class of problem.

A construction project generates structural information continuously — material locations, RFI status, delivery provenance, thermal boundary conditions at mechanical interfaces. That information exists. The problem is that it is incoherent: scattered across systems, held by different teams, expressed in different formats, and never compiled into a single structured transfer that a decision-maker can act on. The owner does not lack data. The owner lacks a focused surface. Without that surface, the project cannot distinguish noise from structural signal.

GreenM3DC is the transfer mechanism. Each framework in the stack is a coherent lens — calibrated to one layer of the physical system, aimed at one class of structural claim. The spatial compiler is the optics. It takes those coherent inputs and focuses them onto a surface at the scale where a human can see what needs attention. The compile result is not trying to be a complete model of the building. It is a focused transfer of the building's own admitted signals, structured through a coherent grammar, rendered at the resolution where an owner can make a decision.

The Structural MRI Scanner is one tool in that transfer chain.

Just as an MRI in healthcare produces a diagnostic scan — not a treatment, not a care plan, but a precise localization of where the body is incoherent — the Structural MRI Scanner produces a structural scan of the project field. Four anomaly classes. Fifteen findings. Thermal boundary stress at the perimeter interfaces where mechanical rooms connect to outside chiller infrastructure. Material staged in the wrong location. Design blocked waiting on RFI resolution. Delivery status unknown. The scanner does not find generic issues. It transfers typed, localized incoherence onto a surface the project team can read.

The value is not that it finds problems. Project teams already know problems exist. The value is that it separates problem types by structural cause, localizes them in the field, and identifies which gate cannot truthfully close until the incoherence is resolved.

Once a boundary is identified, resolution can be compiled.

Each corrective action runs through the GreenM3DC compiler against the specific gate it is meant to close. A gate passes when its conditions are structurally met — not when someone marks it resolved. This is not a 100% project approval. It is a gate-by-gate compile: the gates that have been identified, tracked, and run. Some pass. Some do not. The ones that do not tell you exactly what still needs to close.
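
As a minimal sketch of that pattern (the gate names, finding types, and schema here are hypothetical illustrations, not GreenM3DC's actual data model):

    from dataclasses import dataclass

    @dataclass
    class Finding:
        anomaly_class: str  # e.g. "thermal_boundary", "rfi_blocked"
        location: str       # where in the field the incoherence sits
        blocks_gate: str    # the gate that cannot truthfully close

    open_findings = [
        Finding("thermal_boundary", "mech room 3 / chiller yard interface", "G-THERM-01"),
        Finding("rfi_blocked", "level 2 electrical room", "G-DESIGN-07"),
    ]

    def gate_passes(gate, findings):
        """A gate passes when nothing still blocks it,
        not when someone marks it resolved."""
        return all(f.blocks_gate != gate for f in findings)

    for gate in ("G-THERM-01", "G-DESIGN-07", "G-DELIV-02"):
        print(gate, "PASS" if gate_passes(gate, open_findings) else "OPEN")

Two gates stay open because typed findings still block them; one passes. The open ones tell you exactly where to point next.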

That is Starkweather's principle applied to infrastructure. He did not make printing faster. He built a mechanism that could transfer structured information from a coherent digital source onto a prepared physical surface. GreenM3DC does not make project reporting faster. It builds a mechanism that transfers structural information from a coherent compile stack onto a decision surface an owner can act on.

Structural MRI turns project uncertainty into typed, localized, business-actionable incoherence. The blur is where you point next.

GreenM3DC is a structural analysis project applying compile-time verification to green data center design. The sensor bridge is admitted. The spatial compiler is running. Phase 2a — EFC identification, the feedback-control lens — is next in the stack.

Green = Sustainable → Compiler

Green Is a Compiler

The standard green data center question is: Is this facility green?

That is the wrong question. Too easy to answer badly.

The better question is: Can these green conditions be sustained?

That is a compiler question. A compiler takes declared inputs, checks them against rules, and returns a verdict — not a score, not a certification. A gate decision: PASS, FAIL, or UNKNOWN.

Green = Sustainable

Green means sustainable.

Not efficient today. Not renewable on paper. Not carbon neutral by accounting convention.

Sustainable means the conditions that make the facility green can be held over time, as the world changes around it. That one move changes everything — because a lot of things that currently pass as green stop compiling.

Lowest energy use may not be sustainable. A facility running PUE 1.05 on free-air cooling is impressively efficient. But some of that efficiency is borrowed from the climate envelope around it. If that envelope shifts over the operating life of the building, the free-air window narrows and the PUE climbs. The efficiency was not built into the system. It was leased from the atmosphere.
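
A back-of-the-envelope sketch makes the leased-efficiency point concrete. The overhead figures below are illustrative assumptions, not measurements from any real facility:

    IT_LOAD_KW = 10_000
    FREE_AIR_OVERHEAD = 0.05    # cooling and losses when outside air does the work
    MECHANICAL_OVERHEAD = 0.40  # cooling and losses when chillers must run

    def annual_pue(free_air_fraction):
        """PUE = total facility energy / IT energy, averaged over the year."""
        overhead = (free_air_fraction * FREE_AIR_OVERHEAD
                    + (1 - free_air_fraction) * MECHANICAL_OVERHEAD)
        return (IT_LOAD_KW * (1 + overhead)) / IT_LOAD_KW

    for fraction in (1.00, 0.85, 0.60):  # the free-air window narrowing
        print(f"free-air {fraction:.0%} of hours -> PUE {annual_pue(fraction):.2f}")

The IT load cancels out of the ratio; the PUE drift from 1.05 toward 1.19 comes entirely from how many hours the atmosphere is still willing to lend.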

Renewable may not be sustainable. Hydro depends on watershed conditions. Solar depends on manufacturing, degradation, and end-of-life. Wind depends on grid integration and geography. RECs (renewable energy certificates) are accounting tools, not physical supply by themselves — a REC can match consumption on paper while the facility draws fossil generation at 2am. The electrons do not care about the certificate.

None of this means renewable energy is bad. It means the sustainability compile is more demanding than the green checklist.

Compiler Outputs

PASS — the claim holds across the declared time horizon, boundary, and stress conditions.

FAIL — the claim does not hold, or a prohibited dependency appears.

UNKNOWN — the witnesses are missing. The compile cannot run.

UNKNOWN is not a soft PASS.
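
In code, the verdict semantics might be sketched like this (the witness names are hypothetical, and the real compile checks far more than one threshold), with UNKNOWN tested first so a missing witness can never slide through as a pass:

    from enum import Enum

    class Verdict(Enum):
        PASS = "claim holds across horizon, boundary, and stress conditions"
        FAIL = "claim does not hold, or a prohibited dependency appears"
        UNKNOWN = "witnesses are missing; the compile cannot run"

    REQUIRED_WITNESSES = ("time_horizon", "boundary", "stress_scenario",
                          "measured_pue", "pue_threshold")

    def compile_claim(claim):
        # UNKNOWN is checked first: a missing witness is not a soft PASS.
        if any(w not in claim for w in REQUIRED_WITNESSES):
            return Verdict.UNKNOWN
        if claim["measured_pue"] <= claim["pue_threshold"]:
            return Verdict.PASS
        return Verdict.FAIL

    print(compile_claim({"measured_pue": 1.2}).name)  # UNKNOWN: no horizon, no boundary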

What the Compile Checks

For GreenM3DC, the compile uses four structural checks.

INV — what must remain true

PUE must remain below a declared threshold, measured at the meter, not modeled at design. Renewable fraction must be matched to actual consumption, not just annual average. Carbon accounting must close within a declared reporting window.

NINV — what must never occur

Fossil fuel must not become the primary power source while the facility still claims to be green. Cooling capacity must not fall below heat load — thermal runaway is not a warning, it is a compile failure. Carbon neutrality must not rest entirely on purchased offsets with no internal reduction pathway.

BOUND — where the claim holds

Free-air cooling efficiency is valid only within a declared ambient range. Outside that range the PUE claim does not compile — the model has left its boundary. The renewable claim holds at this grid location, with these generation sources, under these matching rules — not universally.

MORPH — what must be able to change

When ambient conditions exceed the free-air cooling threshold, the mode must shift from free-air to mechanical cooling. That transition must be declared and tested, not assumed. When the primary renewable source degrades, there must be a declared substitution path — not a future intention, a structural commitment.

These are four examples — one per category. The full GreenM3DC compile is built to run over dozens of tests across the same four categories.
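
Held in code, that structure might look like the following sketch. The predicates are simplified stand-ins keyed to the test names in the compile list at the end of this post, not the real implementations:

    # One simplified predicate per category; "must never occur" (NINV)
    # compiles as a check that the prohibited condition is absent now.
    def inv_pue_threshold(s):    return s["pue_at_meter"] < s["pue_threshold"]
    def ninv_fossil_primary(s):  return s["fossil_fraction"] < 0.5
    def bound_free_air(s):       return s["ambient_c"] <= s["free_air_max_c"]
    def morph_cooling_shift(s):  return s["mechanical_mode_tested"]

    CHECKS = {
        "INV":   [("PUE_THRESHOLD", inv_pue_threshold)],
        "NINV":  [("FOSSIL_PRIMARY", ninv_fossil_primary)],
        "BOUND": [("FREE_AIR_ENVELOPE", bound_free_air)],
        "MORPH": [("COOLING_MODE_SHIFT", morph_cooling_shift)],
    }

    def run_compile(state):
        for category, tests in CHECKS.items():
            for name, predicate in tests:
                try:
                    verdict = "PASS" if predicate(state) else "FAIL"
                except KeyError:  # a missing witness stops this test
                    verdict = "UNKNOWN"
                print(f"{category:5} {name:18} {verdict}")

    run_compile({"pue_at_meter": 1.18, "pue_threshold": 1.30,
                 "fossil_fraction": 0.20, "ambient_c": 24.0,
                 "free_air_max_c": 27.0})  # no MORPH witness -> UNKNOWN

Run against a state that lacks the MORPH witness, this returns three passes and one UNKNOWN: the untested cooling-mode transition surfaces instead of silently passing.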

The point here is the structure. The list is the work.

Most facilities would not return PASS or FAIL on this compile. They would return UNKNOWN.

Not because they are failing, but because the witnesses are missing. No declared time horizon. No stress scenario. No lifecycle assessment of the hardware fleet.

UNKNOWN is not green. UNKNOWN is not sustainable.

Can you run this compile?

INV PUE_THRESHOLD · RENEWABLE_MATCH · CARBON_WINDOW

NINV FOSSIL_PRIMARY · COOLING_FLOOR · OFFSET_ONLY

BOUND FREE_AIR_ENVELOPE · RENEWABLE_LOCALITY · LOAD_DENSITY

MORPH COOLING_MODE_SHIFT · SOURCE_SUBSTITUTION · HARDWARE_EOL

Next: The IT asset list as structural input — what the BOM actually tells you about whether a facility can be sustained.

The GreenM3 Data Center Project

Back to Green Data Centers

I stopped writing about green data centers for a while because the conversation started feeling stale.

The same ideas kept showing up with new logos attached: renewable energy claims, PUE numbers, sustainability reports, renderings, commitments, and announcements. Some of the work is real. Some of it is excellent. But the public conversation has become predictable.

So I decided to come back a different way.

Instead of writing about another announced facility, I am going to write about my own fictional green data center — one that lets me test what "green" actually means when the claims have to hold.

The project starts with a simple physical frame:

100,000 square meters of floor area, 10 meters tall, for a total of 1,000,000 cubic meters of space.

That number — 1,000,000 cubic meters — is not arbitrary. It is a forcing function.

At that scale, the comfortable hand-waving that fills most green data center writing stops working. You cannot just say "we use renewable energy" and leave it there. You cannot cite a PUE number without explaining how you measured it. You cannot claim cooling efficiency without accounting for what happens when the ambient temperature spikes, the grid gets stressed, or the AI workload doubles overnight.

At 1,000,000 m³, every claim becomes a structural argument.

And structural arguments either hold or they do not.

What I Got Bored Of

The green data center space has a formula. You have seen it.

A press release announces that a new hyperscale facility will be powered by 100% renewable energy. There is a rendering. There are sustainability commitments. There is a PUE number that sounds impressive. The facility opens. The sustainability report comes out twelve months later. Much of it reads like marketing.

I am not saying the work is not real. Some of it is. But the industry conversation has become a loop.

The hard questions are usually avoided.

What does it actually mean to be green in a way that can be verified by someone other than the company making the claim?

What happens to green commitments during a prolonged drought, when cooling towers become a liability?

What happens when the local grid is stressed and diesel generators run for four hours?

What happens when the AI workload doubles overnight and the thermal profile of the building changes?

Those questions are more interesting to me than another announcement.

The Fictional Project as a Tool

So I built a fictional one.

No specific location. No owner. No PR constraints. Just a volume of space and the question:

What would it take to make this genuinely, structurally green?

Fictional does not mean unserious. It means unconstrained. It lets me test the claims without being trapped inside a vendor story, a corporate sustainability report, or a single site's limitations.

I use the word structurally deliberately.

I have been developing a way of thinking called StructuralTruth: the idea that any serious claim about a system should be expressible as:

  • invariants — what must remain true

  • violations — what must never occur

  • boundaries — where the claim holds

  • transformations — what is allowed to change

If you cannot express your "green" claim in those terms, you probably do not have a claim yet.

You have an aspiration.
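
As a sketch of what "expressible" means in practice, here is one hypothetical green claim written as data rather than prose (every field value is illustrative):

    claim = {
        "statement": "This facility runs on renewable energy.",
        "invariants": [
            "hourly consumption is matched by contracted renewable generation",
        ],
        "violations": [
            "fossil generation becomes the primary source while the claim is live",
        ],
        "boundaries": [
            "holds at this grid location, under these matching rules",
        ],
        "transformations": [
            "a declared substitution path when the primary source degrades",
        ],
    }

    # An aspiration, by contrast, fills in only the statement,
    # and leaves nothing that can be checked.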

The fictional data center is the test bed for applying that thinking to physical infrastructure.

The fictional project has a name: GreenM3DC.

M3 stands for the cubic meter — the fundamental unit of the space.

Everything I write in this series will be grounded in one question:

Does the claim hold?

Not does it sound right. Not does it appear in a sustainability report. Not does it support a nice rendering.

Does it actually hold, under measurement, over time, in real operating conditions?

That is the standard I am interested in. It is harder than it sounds.

Let's go.

Next: What does "green" actually mean? A structural definition that survives contact with reality.

How I Use ChatGPT and Claude Code Together — and Why I Don’t Mix Their Roles

Over the last several weeks, I’ve settled into a workflow that looks unusual on the surface but has proven extremely effective in practice:

  • ChatGPT for structural exploration and review

  • Claude Code for deterministic compilation and execution

  • No overlap between their responsibilities

The key is not which models I use—it’s how I separate their roles.

The Capability Asymmetry That Matters

Here is the practical difference that forced this separation: ChatGPT holds long-lived context and reasons well about structure, but it cannot touch the filesystem or execute anything. Claude Code edits real files and runs real commands, but it does not carry long-term design context.

That tells you how each tool wants to be used.

ChatGPT = Structural Workspace

I use ChatGPT for:

  • Long-lived thinking

  • Naming and structure

  • Clarifying intent

  • Reviewing results after execution

I do not use it to touch the filesystem or “prove” code works.

Claude Code = Compiler

Claude Code is treated as a deterministic machine:

  • It edits real files

  • It runs real commands

  • It fails concretely

  • It enforces correctness through execution

No long-term reasoning. No design debates.

The Critical Rule

I never use ChatGPT to review Claude Code's chat output.

Instead, the loop is always:

Structure → Compile → Review

  1. ChatGPT defines structure

  2. Claude Code executes it

  3. ChatGPT reviews what actually happened

This avoids language-only feedback loops and keeps everything grounded in reality.

Why This Works

  • Exploration stays fast

  • Execution stays correct

  • Code becomes expendable

  • Structure becomes durable

I’m now applying this workflow to OS-level services for electrical and mechanical systems in AI data centers, where ambiguity is expensive and determinism matters.

Final Thought

Most AI frustration comes from asking one tool to do two incompatible jobs.

Once you separate exploration, compilation, and review, AI starts behaving like a real engineering toolchain—not a chatbot.

Writing Code with the Help of AI

I took computer programming classes at UC Berkeley and spent years trying to get better at programming afterward. While I understood the fundamentals, writing software always felt like it required an enormous amount of time and effort relative to the progress made. The work felt more about managing complexity than solving the underlying problems I cared about.

About a month ago, while exploring some ideas involving AI, I unexpectedly revisited writing code—this time with AI’s help. What surprised me wasn’t that the AI could write code, but that it fundamentally changed where the effort was spent. Instead of wrestling with syntax, frameworks, and coordination details, the work shifted toward defining structure, relationships, and invariants.

A week ago, Ray Ozzie wrote about his own experience collaborating with AI to design and prototype hardware and software systems. Ray is best known for creating Lotus Notes, and for his later work on large-scale distributed systems. His reflections strongly resonated with my own experience—but also highlighted something important.

What took weeks of focused effort in his case unfolded for me in hours.

Not because the problems were simpler, but because the approach was different.

"Having spent much of 2025 transforming the way I write code, a few months ago I decided to see how far I could push myself in collaborating with AI to tackle hardware design.

"The project - motivated by conversations with a customer - is nontrivial. Physical and cost constraints; both analog and digital domains; edge compute/storage ML; power challenges. Of course, Notecard for secure cloud backhaul.

"I worked on it on-and-off for about 3-4 weeks - surprised not just that the foundation models had so much knowledge of EE, but that they clearly had internalized a vast number of components’ datasheets. Several times I ran into roadblocks where ultrathink or deep research yielded specific choices I’d never have considered."
— Ray Ozzie

Now I often spend 2–4 hours a day working on computer code. But the work itself is no longer about coding. I’m working on harder, more upstream problems, and the code is simply the executable output of the design.

This ability didn’t appear overnight. It comes from years working in product development as a project and program manager for both hardware and software—guiding execution across teams, understanding how the big picture fits together, and how small decisions compound. Over time, you learn to see systems as composed structures, where relationships matter more than parts, and where symmetries persist even as details change. What used to require explanation and persuasion now shows up as functional proof.

In the past, I might have written a paper or given a conference presentation. Now, in a fraction of the time, I can produce a functional proof—one that can be run, tested, and shared, and that scales to far more people than a paper or presentation ever could.

It also felt right to reach out directly to Ray Ozzie. After connecting on LinkedIn, I was able to share a few thoughts on how his early technical decisions at Microsoft helped enable the company’s cloud evolution. I’m looking forward to exchanging perspectives on how AI is changing the way we think about coding, structure, and system design.