YouTube's AI Slop Monetization Crackdown — The Tool Isn't the Problem, the Hand Holding It Is

The Day YouTube Pulled Monetization From Mass-Produced AI Channels

July 15, 2025. YouTube changed exactly one word in the YouTube Partner Program guidelines. "Repetitious content" became "inauthentic content." Nothing seemed to happen at first. The company itself called it a "minor update," and the policy note was a single line. That single line was the warning shot. The actual crackdown landed in January 2026. The first to vanish were no-name faceless channels. By late January, larger channels started disappearing in twos and threes every week. Tubefilter ran a piece on January 29, 2026 unpacking the wave of bans. By February and March, channels with millions of subscribers were losing monetization or being terminated outright. By April, the running total looked like this: sixteen major AI-driven channels gone, a combined 4.7 billion lifetime views, 35 million subscribers, and roughly $10 million in annual revenue wiped out in a single quarter. Korea ...

YouTube's AI Slop Monetization Shutdown — The Problem Isn't the Tool, It's the Person Holding It

The Day YouTube Blocked Monetization of Mass-Produced AI Content

July 15, 2025. YouTube changed one word in the YouTube Partner Program (YPP) terms. "Repetitious content" became "inauthentic content." At the time it looked like nothing: a single line in a policy note, which YouTube itself called a "minor update." That was the signal. The real crackdown arrived in January 2026. First a few no-name faceless channels disappeared. Then, from late January, one or two large channels fell every week. Tubefilter covered the enforcement wave in a January 29, 2026 piece, and through February and March AI channels with millions of subscribers lost monetization outright or were shut down entirely. As of April the tally stood at sixteen major AI channels removed: 4.7 billion cumulative views, 35 million subscribers, and an estimated $10 million in annual revenue, evaporated in a single quarter.

Korea is the interesting part. In December 2025 the Guardian surveyed 15,000 of the world's most popular YouTube channels and found 278 that were AI-only, with 63 billion cumulative views and an estimated 170 billion won in annual revenue. The country that watched this slop content the most was Korea, at 8.45 billion views, well ahead of second-place Pakistan (5.34 billion), the third-place US (3.39 billion), and fourth-place Spain (2.52 billion). A single channel popular in Korea, '3분 지혜' ("3-Minute Wisdom"), accounted for a quarter of all AI slop views in the country.

So, as someone who writes code every day and works with AI every day, I wanted to take my time unpacking how to look at this. Getting angry at the policy is easy, and so is cheering it on. The hard part is holding both views at once.

The Policy's Actual Wording: From Repetitious to Inauthentic

Start with the word. What YouTube changed is exactly one word: repetitious ...

Building an LLM Robot with My Son — EP 9. 4-Month Retrospective, and What Comes Next

When I wrote EP 0, there was a half-assembled acrylic chassis on the desk with one wheel turning in reverse. Now that robot walks to the kitchen on its own and finds a water glass. Can't pick it up — but it finds it. Four months.

What Worked

The agent harness approach held up. Injecting domain knowledge through CLAUDE.md worked better than expected. The repeated context fatigue disappeared — AI wrote code that respected the project's rules without having to be reminded every session. The file started at ten lines and grew to 120. That growth is a record of the project itself.

My son completed a behavior through eight prompts. The scene from EP 5 is the one that stays with me. He sat down alone, iterated eight prompts, and built a working obstacle avoidance behavior. Never touched a line of code. "I made this" was not wrong.

Apple Silicon local LLM actually works. 112 tok/s on ...

Building an LLM Robot with My Son — EP 8. My Son Gave the AI Robot Its First Real Command

EP 6 connected the LLM server. EP 7 migrated to Pi. This episode: camera joins. Qwen2.5-VL-7B is now on the LLM server — the multimodal variant that accepts image input alongside text. Camera frames from the robot get sent with each request, and the model decides what to do based on what it sees. Camera + sensors + LLM + robot, all connected at once for the first time.

Switching to Qwen2.5-VL

From text-only Qwen2.5-7B to Qwen2.5-VL-7B. Same family — harness barely changed. Three things were different. A new section was added to CLAUDE.md:

```
## Vision Input
- Camera resolution: 640×480
- Transmission format: JPEG (quality 70)
- Frame timing: sent only at command request time (not continuous streaming)
- Image + sensor data sent together

## LLM input format (vision mode)
{
  "image": "<base64 encoded JPEG>",
  "sensor": "dist:45",
  "instruct...
```
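To make the vision-mode request concrete, here is a minimal Python sketch of how the robot side might assemble that payload: grab one frame, encode it as a quality-70 JPEG, base64 it, and attach the latest sonar reading. Only the `image` and `sensor` fields are confirmed by the excerpt above; OpenCV for capture and the `instruction` field name (the excerpt truncates there) are illustrative assumptions.

```python
# Minimal sketch of assembling one vision-mode payload.
# Assumptions: OpenCV capture and the "instruction" field name; only the
# "image" and "sensor" fields are confirmed by the post excerpt above.
import base64
import json
import cv2

def capture_jpeg_b64(quality: int = 70) -> str:
    """Grab one 640x480 frame and return it as a base64-encoded JPEG string."""
    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        raise RuntimeError("JPEG encode failed")
    return base64.b64encode(jpeg.tobytes()).decode("ascii")

def build_vision_request(distance_cm: int, instruction: str) -> str:
    """Assemble the JSON body shown above: one frame plus the latest sonar reading."""
    return json.dumps({
        "image": capture_jpeg_b64(),
        "sensor": f"dist:{distance_cm}",
        "instruction": instruction,  # hypothetical field name; truncated in the excerpt
    })
```

Sending the result over whichever transport EP 6 settled on is then a single call on top of this.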

Building an LLM Robot with My Son — EP 7. Upgrading the Robot Brain from Arduino to Raspberry Pi

Arduino had reached its limit. The setup from EP 6 — Arduino plus a Python bridge laptop — worked, but it meant the robot was physically tethered to a laptop by a USB cable. An "autonomous" robot on a leash felt wrong. Migrating to Pi would let the bridge and ROS2 nodes run inside the robot itself. True independence. Also, my son had been asking about this for weeks. "When does the real computer go in?" had been the recurring question. The time had come.

Pi 4 vs Pi 5 vs Banana Pi

Three options, one criterion: the LLM doesn't run on the Pi. The Pi handles ROS2 nodes, the camera pipeline, and the bridge role only. Heavy inference stays on the Mac LLM server.

| Device | Approx. price | RAM | USB 3.0 | Thermals | Notes |
| --- | --- | --- | --- | --- | --- |
| Raspberry Pi 4B 4GB | $55 | 4GB | 2 ports | Hot | Widely available |
| Raspberry Pi 5 4GB | $60 | 4GB | 2 ports | Better | Stock inconsistent |
| Banana Pi M5 | $45 | 4GB | ... | | |

Building an LLM Robot with My Son — EP 6. Connecting the Robot to the LLM Server over LAN

The robot needed to talk to the LLM server. Until now the robot ran standalone — HC-SR04 measuring distance, motors responding to code. That works for basic behavior. But the whole point of this project is an LLM that makes decisions. The robot sends camera frames and sensor data to the LLM server, the LLM decides what to do, and the command comes back. That communication layer had to be built. This episode is about how robot (edge) ↔ LLM server (Mac) gets connected.

Three Options

- WebSocket: bidirectional real-time communication. Simple to implement, and HTTP-based, so firewall issues are minimal. Works well for a setup where the robot streams data and the server streams commands back (a minimal sketch follows below).
- gRPC: Google's RPC framework. Protocol Buffers serialization means smaller payloads than WebSocket. Type safety and streaming support are both there. But setup is heavier — Protobuf schemas need to be maintained...
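To give a sense of how little code the WebSocket option needs, here is a minimal robot-side sketch using the Python `websockets` library: send one JSON message with the latest sonar reading and wait for the server's command. The server address and message shape are illustrative assumptions, since the excerpt does not say which option the project ultimately chose.

```python
# Minimal sketch of the WebSocket option (robot side); illustrative only.
# Assumptions: the Mac LLM server's LAN address and the JSON message shape.
import asyncio
import json
import websockets

LLM_SERVER_URI = "ws://192.168.0.10:8765"  # hypothetical address of the Mac LLM server

async def ask_for_command(distance_cm: int) -> dict:
    """Send one sensor snapshot and wait for the LLM server's command reply."""
    async with websockets.connect(LLM_SERVER_URI) as ws:
        await ws.send(json.dumps({"sensor": f"dist:{distance_cm}"}))
        reply = await ws.recv()
        return json.loads(reply)

if __name__ == "__main__":
    print("LLM command:", asyncio.run(ask_for_command(45)))
```

A gRPC version would swap the JSON for a Protobuf message and a generated client stub, which is exactly the extra schema maintenance the excerpt flags.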

Building an LLM Robot with My Son — EP 5. My Son's First Day Coding a Robot with AI

The day came when my son wanted to try alone. Up until now I'd always been close — sitting next to him, typing alongside him, stepping in when errors appeared. But this afternoon: "I'll do it myself." A weekend afternoon. I went to another room. About thirty minutes later he came out. "Dad, the robot keeps turning left."

His First Prompts

Later we looked at everything he'd typed that day. The first prompt: "make the robot avoid obstacles by going left". Claude Code produced code. He uploaded it. The robot moved forward, detected an obstacle, stopped, turned left. So far correct. After turning, it didn't go forward again. Just kept turning left. Second prompt: "make it go forward again after avoiding the obstacle". Code was revised. Uploaded. This time: detect obstacle, stop, turn left, go forward again. But every turn was the sa...
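For readers who want to picture what those two prompts converged on, here is a minimal sketch of the avoid-then-resume loop in Python. It is illustrative only: the project's code at this stage was Arduino firmware generated by Claude Code, the hardware functions below are placeholders, and the threshold and turn duration are assumed values.

```python
# Minimal sketch of the behavior after the second prompt; illustrative only.
# The real code at this stage was Arduino firmware; hardware calls are stubbed
# and the numbers are assumptions, not values from the project.
import time

OBSTACLE_CM = 20   # assumed stop-and-turn distance
TURN_TIME_S = 0.5  # fixed turn duration, so every turn comes out the same size

def read_distance_cm() -> float:
    """Placeholder for the HC-SR04 reading."""
    return 100.0

def drive_forward() -> None:
    print("forward")    # placeholder motor command

def turn_left() -> None:
    print("turn left")  # placeholder motor command

def stop() -> None:
    print("stop")       # placeholder motor command

def avoid_loop() -> None:
    """Forward until an obstacle is close, then stop, turn left, and resume."""
    while True:
        if read_distance_cm() < OBSTACLE_CM:
            stop()           # prompt 1: stop and turn left at an obstacle
            turn_left()
            time.sleep(TURN_TIME_S)
        drive_forward()      # prompt 2: go forward again after avoiding
        time.sleep(0.1)
```

The fixed TURN_TIME_S is the seam the excerpt appears to end on: every turn comes out the same size no matter where the obstacle actually sits.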