matt.log()

tag: AI

I’m a Senior Software Engineer, open-source contributor, and product builder with 15 years of experience. I share my experiences and observations in software engineering, web technologies, and the reality of shipping great software.


The Dark Software Factory is a lie


BCG Platinion’s essay on the “Dark Software Factory” borrows an image from manufacturing. Picture a plant that runs with the lights off because nobody has to be on the floor. In their version, AI agents build, test, and ship around the clock. People supply intent and check the results. Whoever sets up the factory well and says clearly what they want is supposed to win. The article reads well, it cites sources, and it sounds like this shift is already normal.

It is also a sales pitch dressed up as a forecast. You are usually not buying the software that comes out the other end. You are buying the transformation package, the harness, the governance story, the training plan, and the hope that your company will be the next case study. I am not saying every number in there is wrong. I am saying we should split what the tools really do from what the deck needs you to feel.

None of this means I think models are a waste. We have all seen code show up faster than we could type it, and plenty of us have shipped real work that way. The fight is over what problem we think we solved. After a while you notice that the expensive part of software was almost never typing. It was figuring out what “right” even means, which tradeoffs matter, which edge cases count, and what failure you can accept. That work takes rounds. It does not vanish because the model got faster.

In practice the loop looks the same for me. I ask for code. The first cut is believable and wrong in the usual first-cut ways. I tighten the ask, narrow the design, call out invariants, and go again. The gap between what I asked for and what I would ship gets wider as the task gets bigger. I do not treat that as a temporary rough patch until the next release. I treat it as the job, the same back and forth you have always had with a junior engineer, just sped up.

The pushback you hear is that the spec was the problem. If only intent had been clearer, if the org had invested in “intent thinking,” the factory would have nailed it on the first try. That is a neat way to blame the customer. A spec that is complete enough to deliver exactly what I want, with the tradeoffs and edge cases I would have caught in review, is not lighter work than building the thing. It is the build, just written in English. And English is a bad tool for that job. It is great for getting aligned. It is bad at being exact. We built types, contracts, tests, and invariants for a reason. Prose does not turn precise because we are in a hurry.

So the real question is not whether agents can write code. They can. The question is whether we admit what still limits us. For me the limit never moved from my hands to my prompts. It is still the messy part where you figure out what you are building, weigh bad options, and pick what to live with. Models help me read APIs I do not have memorized, try approaches I would have coded more slowly, and push through the middle of a feature once I know what done looks like. That is real help with learning and execution. It is not a way to skip the part where you decide what is worth building.

When someone sells a “dark” factory where the lights are off and people mostly think in intent, ask what you are actually trading for. You will get process, urgency, and a story that more output equals better judgment. I use these tools a lot, and I still want the lights on. The work that keeps software from falling apart is still the slow stuff, the review, and the plain work of knowing what you are making. An empty factory floor looks great in a photo. It is not how I would describe a team that ships good work.

Let me know what you think on Bluesky.