Let me start with a number.
This afternoon, working alone, I did the following: renamed my product (from strategy to domain registration), built the company website from scratch (DNS, design, global deployment), set up a fully automated daily briefing pipeline (search → generate → deploy), ran a debug session on the product itself, watched an in-depth interview about OpenClaw, and wrote three articles that are now published on my blog and WeChat.
Cost: $92.50. No one else involved.
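For concreteness, the shape of that daily briefing pipeline can be sketched in a few lines. This is a minimal, hypothetical sketch, not the author's actual code: each stand-in function marks where a real search API, an LLM call, and a deploy step (for example, a static-site push) would go.

```python
# Hypothetical "search -> generate -> deploy" daily-briefing pipeline.
# All three stages are stand-ins; a real version would call a search API,
# an LLM API, and a deployment command in their place.

from datetime import date

def search(topics):
    """Stand-in for a search/news API: return one raw item per topic."""
    return [{"topic": t, "snippet": f"latest on {t}"} for t in topics]

def generate(items):
    """Stand-in for an LLM call that turns raw items into a briefing."""
    lines = [f"# Daily Briefing: {date.today().isoformat()}"]
    lines += [f"- **{i['topic']}**: {i['snippet']}" for i in items]
    return "\n".join(lines)

def deploy(markdown, path="briefing.md"):
    """Stand-in for deployment: here, just write the file locally."""
    with open(path, "w") as f:
        f.write(markdown)
    return path

def run_pipeline(topics):
    """Chain the three stages; a scheduler (cron/launchd) would call this daily."""
    return deploy(generate(search(topics)))
```

Wired to real APIs and a scheduler, this is the whole "fully automated" part; the judgment lives in choosing the topics and calibrating the template, not in the plumbing.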
Break that down by traditional roles: brand strategist, domain broker, market researcher, DevOps engineer, frontend designer, frontend developer, content writer. That's half a dozen or more specialists, thousands to tens of thousands of dollars, and weeks of back-and-forth coordination.
I did it in one afternoon.
This isn't a flex. It's a fact that's worth examining honestly.
The Battle Royale
I have a fairly pessimistic take on what's happening, but I think it's an honest one:
AI is great news for some people and terrible news for others. The gap between the two groups is going to widen fast.
Block (Square's parent company) recently laid off over 4,000 people — nearly 40% of its workforce. The critical detail isn't the number. It's the reasoning: this wasn't a struggling company cutting costs. It was a healthy company transitioning to what Jack Dorsey called an "intelligence-native" organization — smaller, flatter, faster teams powered by AI.
In plain English: a company doing well financially no longer automatically means your job is safe. It might just mean the company can now afford to optimize you out.
I use the phrase "battle royale" to describe this. A 10-person department ends up with 1 person standing. The other 9 don't get their contracts renewed. This isn't "being overwhelmed." This is losing your income.
So what does the last person standing have that the other 9 don't?
Every serious thinker I've encountered lands on the same answer: judgment and accountability.
Why Judgment and Accountability
AI can write code, run analyses, produce reports, create designs, translate, search — these are all execution-layer capabilities. And AI is getting better at execution every month. It doesn't get tired, doesn't take sick days, doesn't need to be managed.
But two things remain stubbornly human:
Judgment — when the information is incomplete and the signals are contradictory, someone has to make the call: "we go this way." This requires deep contextual understanding, intuitive risk sensing, and the ability to weigh competing stakeholder interests. These are built from embodied experience, not training data.
Accountability — after the call is made, someone's name is on it. When things go wrong, someone takes the hit. AI doesn't sign off. AI doesn't bear consequences.
So here's my thesis:
The valuable person in the future isn't "someone who can use AI" — because everyone will be able to use AI. The valuable person is the one who can delegate the entire execution layer to AI and focus exclusively on judgment and accountability.
This is what I mean by One Person Company (OPC) and One Person Department (OPD). Not that companies or departments literally have one person — but that only one person is needed for judgment. Everything else is AI-executed.
Four Levels: Most People Are Stuck at Level 1
In a diagram I've shared before, I mapped AI collaboration maturity into four levels.
The gap isn't in tools. Claude, ChatGPT, Gemini — everyone has access to the same models. The gap is which level you've pushed your AI collaboration to.
Most people are stuck at Level 1 not because the tools can't do more, but because they're still using a "search engine" mental model to understand AI.
My Own Practice
I'm a business consultant who can't write a single line of code. I left Huawei in 2024 after spending my entire career there — joined straight out of college, retired over a decade later.
But right now, I'm building a complete macOS application by myself.
How? I use Claude Code to drive all development. I write product specification documents — defining what every feature should look like, how it should behave, what "done" means. Claude Code writes the code, debugs, tests. I don't touch a line of code, but every product judgment is mine.
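The article doesn't reproduce one of these spec documents, but the shape is easy to illustrate. A hypothetical excerpt, with the feature and its criteria invented for illustration:

```markdown
## Feature: Quick-capture hotkey

**What it is.** A global hotkey opens a small capture window from anywhere in macOS.

**Behavior.** Pressing the hotkey shows the window in under 200 ms; Esc dismisses it
without saving; submitted text is saved and indexed for search.

**Done means.** All three behaviors pass manual testing on a clean machine, and no
existing system shortcut is clobbered.
```

The point of the format is that "done" is defined before any code exists, so acceptance is a judgment call the human has already made, not a negotiation after the fact.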
Anthropic published a white paper in March called "How Anthropic Teams Use Claude Code," covering ten internal departments. Every story pointed in the same direction: boundaries are dissolving.
An infrastructure engineer debugged a Kubernetes cluster issue that normally required a networking specialist — solved it himself in minutes. A team member with no ML background figured out model functions and parameter configurations in fifteen minutes instead of an hour of reading docs. A product designer started making frontend code changes so significant that engineers were shocked — "that kind of large-scale state management refactor is not something designers normally touch." A lawyer in the legal team — zero programming experience — built a working predictive text app in one hour for a family member with a speech disability.
This isn't a story about AI replacing people. It's a story about one person's coverage expanding dramatically.
I'm a living example. I can't code, but I built a product. I don't know DNS, but I set up a website. I'm not DevOps, but my daily briefing auto-publishes every morning.
The common thread: I handle judgment (what should this thing be?), AI handles execution (how do we build it?).
How complex is this product? Claude itself assessed it for me during a debugging session: at least 15 independent technical subsystems working together in one application. Tauri desktop framework, Rust backend, React frontend, SQLite, Tantivy full-text search, LanceDB vector indexing, ONNX inference runtime, LLM API chains, MCP gateway, TOML plugin system, multi-stage pipeline, async job system, ASR integration, macOS native integration.
Its analogy: you're not building a bicycle. You're building a hybrid car — engine, battery, transmission, brakes, onboard computer, navigation, air conditioning, all built from scratch, all running simultaneously.
One person who can't write code, working with AI, took this from concept to testable prototype in two weeks. A senior full-stack team of 3-5 engineers would need 2-3 months for the same scope.
That's not my claim. That's Claude's assessment while helping me debug. It has no reason to flatter me.
The Sediment Layer: Where the Real Moat Is
At this point someone will ask: if anyone can use AI to do all this, where's the moat?
The moat is in the sediment layer.
At the bottom of my four-level framework there's a line: every collaboration → sediment → compound interest. Prompt templates, workflow SOPs, agent skills, knowledge bases, automated processes — each one gets a little thicker with every use.
In the battle royale, the last person standing doesn't have better tools. They have a thicker sediment layer.
Here's what mine looks like:
- My product spec has evolved from v1 to v2.3 — each version is a crystallization of the previous version's judgment calls
- My CLAUDE.md (collaboration rules for AI) encodes over a dozen verified principles
- My daily briefing template went through a week of calibration until AI understood my taste
- Every plugin I've built (recruiting assistant, investment assistant) codifies my consulting methodology into a reusable framework
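The article doesn't show the CLAUDE.md file itself, so the following is a hypothetical excerpt of what a rule file of this kind might contain; every rule below is invented for illustration.

```markdown
# CLAUDE.md (hypothetical excerpt)

- Never commit directly to main; open a branch per spec section.
- Before writing code, restate the acceptance criteria and wait for confirmation.
- Prefer editing existing modules over creating new files.
- After every change, run the test suite and report failures verbatim.
```

Each line is a lesson learned once and then locked in, which is what makes the file compound rather than just accumulate.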
None of this can be copied. Not because it's encrypted, but because it was built one judgment call at a time — like patina on copper, it can only form through real time and real use.
This is exactly what my product Ventira is designed to help users build. Your AI conversations, documents, and work records are scattered everywhere. Ventira cleans, indexes, and distills them so your judgment becomes an asset that a system can understand, invoke, and compound over time.
The longer you use it, the more irreplaceable it becomes. What's irreplaceable isn't the data — you can export that anytime. What's irreplaceable is the judgment imprint accumulated in the system.
For Those Still on the Fence
I know a lot of people reading this will feel anxious. But anxiety doesn't produce action. A path does.
Here's the path I've laid out for myself. Take what's useful:
First, figure out where your judgment lives. What do you find easy that others find painful? Where does an hour of your work produce more than an hour of most people's? That's your gift. It's also your last line of defense against being optimized out.
Second, push your AI collaboration past Level 1. Stop using AI as a search engine. Try giving it a complete task with clear acceptance criteria and let it deliver. You'll find it handles most execution-layer work better than you expected.
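A "complete task with acceptance criteria" differs from a search-engine query mostly in structure. A hypothetical before-and-after, with the task details invented for illustration:

```markdown
Level 1 (search-engine mode):
> "How do I make a comparison table?"

Level 2+ (delegated task):
> Task: Build a comparison table of our three pricing tiers.
> Inputs: the attached pricing sheet.
> Acceptance criteria: one row per feature, one column per tier,
> renders correctly with the website's existing CSS, no invented numbers.
> Deliverable: a single HTML snippet I can paste in.
```

The second version hands over execution entirely while keeping the judgment (what counts as correct) on your side of the table.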
Third, start building your sediment layer. After every AI collaboration, ask yourself: can this method be turned into a template? Can it be reused next time? Can AI automatically follow this standard going forward? Every template you lock in is another layer under your feet.
And the most important one: stop doing feasibility studies at the starting line.
Everything I did this afternoon was technically "outside my professional domain." But I jumped in and did it anyway. The friction is smaller than you think. The cost of reverting is lower than you think.
The battle royale has already started. Getting to your vantage point isn't about waiting until you're ready. It's about becoming ready in the process of doing.