Developers Don't Write Code Anymore — In the AI Era, What Remains Is Responsibility

I hear it a lot lately — that AI is going to make developers obsolete. Engineers who use AI every day seem to feel it the most. When you watch AI write your code day after day, you start wondering how long this will last. The story used to be that only juniors would get replaced. Now senior engineers are getting lumped into the same sentence. A friend told me recently he's thinking about switching careers. He's been coding for over thirty years, and these days he watches AI crank out in a few hours what he'd budget a week for. "Will the skills I'm building right now still matter in three years?" he asked. Another friend said the opposite — he looks at colleagues who barely use AI and wonders what they're even doing. Same moment in history, two completely different anxieties. I get both. But I think the premise underneath this anxiety is wrong. Half right, half wron...

Developers Don't Write Code Anymore — In the AI Era, What Remains Is Responsibility

I hear it a lot lately — that AI is going to make developers obsolete. Engineers who use AI every day seem to feel it the most. When you watch AI write your code day after day, you start wondering how long this will last. The story used to be that only juniors would get replaced. Now senior engineers are getting lumped into the same sentence. A friend told me recently he's thinking about switching careers. He's been coding for over thirty years, and these days he watches AI crank out in a few hours what he'd budget a week for. "Will the skills I'm building right now still matter in three years?" he asked. Another friend said the opposite — he looks at colleagues who barely use AI and wonders what they're even doing. Same moment in history, two completely different anxieties. I get both. But I think the premise underneath this anxiety is wrong. More precisely, it's half right and half wrong. Developers aren't disappearing; what the word "developer" points to is changing. But because that change looks confusing, it spreads as the easier claim that "developers are going away." Last year, while preparing an internal talk, I worked through this topic for myself. The subject was my experience refactoring a complex cross-platform desktop app, but the story I really wanted to tell sat one step removed from the project itself. Not "how to use AI well," but where the center of gravity of my work has been shifting as I use AI. That was the story I wanted to tell. And a year later, the conclusion I wrote down then has only become clearer. From implementer "back to designer" — one sentence from that talk still reads as accurate today. Rather than a trend in which developers disappear,...

Question Your Defaults — How Model-Harness Overfitting Is Slowing Down Your Agent

In Part 3 of this series, I mentioned a fascinating fact. On Terminal Bench 2.0, Claude Opus 4.6 ranked 33rd inside Claude Code — the very harness it was trained in — but jumped to the top 5 when used with a different harness. I didn't fully unpack what that number means. While covering Anthropic's architecture in Part 4 and the hands-on guide in Part 5, I glossed over the most counterintuitive and practically important insight of the entire series. Using the default harness as-is may not be optimal. This post is where I address that. How Overfitting Happens Frontier coding models are post-trained inside their own harnesses. Claude is optimized through thousands of hours of coding tasks in the Claude Code environment; Codex models go through the same process in the Codex environment. During this process, the model adapts to the patterns of its specific harness: How Claude Code invokes to...
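The mechanics of this overfitting are easier to see in a toy sketch. Every name below is invented for illustration (the tool names, the schemas); this is not any real harness's API. The point is only that the "same" capability can be exposed under different harness-specific conventions, and a model trained on one harness's conventions can emit calls that another harness rejects:

```python
# Hypothetical: two harnesses expose file reading under different tool names
# and required arguments. A model post-trained on harness A's conventions
# will tend to produce calls shaped for A, which fail B's schema validation.

HARNESS_A_TOOLS = {
    "read_file": {"required": ["file_path"]},
}
HARNESS_B_TOOLS = {
    "fs.read": {"required": ["path", "encoding"]},
}

def validate_call(harness_tools, name, args):
    """Return True if a proposed tool call matches this harness's schema."""
    schema = harness_tools.get(name)
    if schema is None:
        return False  # unknown tool name for this harness
    return all(key in args for key in schema["required"])

# A call shaped by harness A's conventions...
call_name, call_args = "read_file", {"file_path": "main.py"}

# ...passes in harness A but is rejected by harness B.
ok_a = validate_call(HARNESS_A_TOOLS, call_name, call_args)
ok_b = validate_call(HARNESS_B_TOOLS, call_name, call_args)
```

Schema mismatch is only one of the adaptation channels, but it illustrates the shape of the problem: the model's habits and the harness's expectations are coupled.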

Harness Engineering in Practice — How to Apply It to Your Project Right Now

You understand the concept (Part 3). You've seen how Anthropic implements it (Part 4). That leaves one question. How do you apply it to your own project? This post covers concrete methods for putting harness engineering to work in production, and the shifts in the developer's role that this paradigm will bring. Principle 1: Start from Failure This is Mitchell Hashimoto's principle — and one the HumanLayer team arrived at independently. Don't try to design the ideal harness upfront. Every time the agent fails, add a structural safeguard that prevents that failure from recurring. In HumanLayer's words: "Have a shipping bias. Only touch the harness when the agent actually fails." The mindset resembles TDD (Test-Driven Development). Just as you write a failing test first and then write the code to make it pass — you observe the agent's failure patterns and add harness eleme...
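The "start from failure" loop can be sketched in a few lines. This is a minimal illustration with invented names, not HumanLayer's or Hashimoto's actual tooling: no safeguard exists upfront, and each observed failure adds exactly one guard for that failure class, much like writing the failing test first in TDD:

```python
# Failure-driven harness iteration, as a toy sketch (all names invented).
# A "safeguard" is a named predicate over a proposed agent action; the
# harness starts empty and only grows when a real failure is observed.

safeguards = []  # list of (name, predicate) pairs

def is_blocked(action):
    """Would the current harness block this proposed action?"""
    return any(predicate(action) for _, predicate in safeguards)

def add_safeguard(name, predicate):
    """Called only after an observed failure, never speculatively."""
    safeguards.append((name, predicate))

# First run: the agent wipes a directory it should not have touched.
bad_action = {"tool": "bash", "cmd": "rm -rf build/"}
assert not is_blocked(bad_action)  # no harness rule exists yet; it ships

# After observing that failure, add a guard for that failure class only.
add_safeguard(
    "no-recursive-delete",
    lambda a: a["tool"] == "bash" and "rm -rf" in a["cmd"],
)
```

After the guard is added, the same class of action is blocked on the next run, while unrelated actions still pass. The harness accumulates structure one observed failure at a time.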

Harness Engineering in Practice — How Anthropic Designs AI Agents

The previous post covered the concept and components of harness engineering. This time, it's the real thing. Drawing on the concrete architecture patterns Anthropic published in their official engineering blog — along with experimental results from the OpenAI Codex team — let's look at how harnesses are actually applied in practice. The Basic Structure of an Agent Loop: The Inner Loop At the heart of every AI agent sits an agent loop. In Claude Code, it's called queryLoop. At its core, it's a while (true) loop: 1. prepare context (plan-mode attachments, task reminders); 2. call the model (streaming API call); 3. execute tools (detect tool call → validate schema → check permissions → execute); 4. decide whether to continue (does the model have more to do?). Each iteration is one "think, act, observe" cycle. The model thinks, invokes a tool, observes the resul...
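The four-step loop above can be sketched as runnable code. This is a minimal illustration with stubbed model and tools; all names are invented, and Claude Code's real queryLoop is of course far richer (streaming, permission checks, schema validation):

```python
# Toy inner loop: prepare context, call the model, execute any tool call,
# then decide whether to continue. The model here is any callable that maps
# the conversation so far to a reply dict.

def agent_loop(model, tools, task, max_turns=10):
    context = [{"role": "user", "content": task}]   # 1. prepare context
    for _ in range(max_turns):                      # bounded while (true)
        reply = model(context)                      # 2. call the model
        context.append({"role": "assistant", "content": reply})
        if reply.get("tool") is None:               # 4. nothing left to do
            return reply["text"]
        result = tools[reply["tool"]](**reply["args"])  # 3. execute tool
        context.append({"role": "tool", "content": result})  # observe
    return None  # gave up after max_turns

# Stub model: one tool call, then a final answer.
replies = iter([
    {"tool": "read_file", "args": {"path": "README.md"}, "text": None},
    {"tool": None, "args": None, "text": "done"},
])
model = lambda context: next(replies)
tools = {"read_file": lambda path: f"contents of {path}"}

result = agent_loop(model, tools, "summarize the README")
```

Each pass through the loop is one "think, act, observe" cycle; the loop exits when the model's reply contains no further tool call.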

What Is Harness Engineering — Designing the Reins for AI Agents

In Part 1 of this series, I talked about the decline of prompt engineering. With CLI-based tools on the scene, the value of manually crafting elaborate prompts was fading. But as 2026 unfolded, I realized that what replaced prompt engineering wasn't simply "better tools." Prompt engineering gave way to context engineering, and now context engineering is giving way to an entirely new paradigm: harness engineering. In this post, I'll break down what harness engineering is, why it matters right now, and what its key components look like. A Harness for a Horse, a Harness for an Agent A harness originally refers to the tack fitted onto a horse. Bridle, saddle, stirrups — equipment designed not to suppress the horse's power, but to channel it in the right direction. In AI, the term means exactly the same thing. A harness is the entire external system that controls and directs an AI agent's power...