diff --git a/README-ja.md b/README-ja.md
index 8717201b8..980ded977 100644
--- a/README-ja.md
+++ b/README-ja.md
@@ -1,234 +1,178 @@
-# Learn Claude Code -- 真の Agent のための Harness Engineering
-
 [English](./README.md) | [中文](./README-zh.md) | [日本語](./README-ja.md)
-
-## モデルこそが Agent である
-
-コードの話をする前に、一つだけ明確にしておく。
-
-**Agent とはモデルのことだ。フレームワークではない。プロンプトチェーンではない。ドラッグ&ドロップのワークフローではない。**
-
-### Agent とは何か
-
-Agent とはニューラルネットワークである -- Transformer、RNN、学習された関数 -- 数十億回の勾配更新を経て、行動系列データの上で環境を知覚し、目標を推論し、行動を起こすことを学んだもの。AI における "Agent" という言葉は、始まりからずっとこの意味だった。常に。
-
-人間も Agent だ。数百万年の進化的訓練によって形作られた生物的ニューラルネットワーク。感覚で世界を知覚し、脳で推論し、身体で行動する。DeepMind、OpenAI、Anthropic が "Agent" と言うとき、それはこの分野が誕生以来ずっと意味してきたものと同じだ:**行動することを学んだモデル。**
-
-歴史がその証拠を刻んでいる:
-
-- **2013 -- DeepMind DQN が Atari をプレイ。** 単一のニューラルネットワークが、生のピクセルとスコアだけを受け取り、7 つの Atari 2600 ゲームを学習 -- すべての先行アルゴリズムを超え、3 つで人間の専門家を打ち負かした。2015 年には同じアーキテクチャが [49 ゲームに拡張され、プロのテスターに匹敵](https://www.nature.com/articles/nature14236)、*Nature* に掲載。ゲーム固有のルールなし。決定木なし。一つのモデルが経験から学んだ。そのモデルが Agent だった。
-
-- **2019 -- OpenAI Five が Dota 2 を制覇。** 5 つのニューラルネットワークが 10 ヶ月間で [45,000 年分の Dota 2](https://openai.com/index/openai-five-defeats-dota-2-world-champions/) を自己対戦し、サンフランシスコのライブストリームで **OG** -- TI8 世界王者 -- を 2-0 で撃破。その後の公開アリーナでは 42,729 試合で勝率 99.4%。スクリプト化された戦略なし。メタプログラムされたチーム連携なし。モデルが完全に自己対戦を通じてチームワーク、戦術、リアルタイム適応を学んだ。
-
-- **2019 -- DeepMind AlphaStar が StarCraft II をマスター。** AlphaStar は非公開戦で[プロ選手を 10-1 で撃破](https://deepmind.google/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii/)、その後ヨーロッパサーバーで[グランドマスター到達](https://www.nature.com/articles/d41586-019-03298-6) -- 90,000 人中の上位 0.15%。不完全情報、リアルタイム判断、チェスや囲碁を遥かに凌駕する組合せ的行動空間を持つゲーム。Agent とは? モデルだ。訓練されたもの。スクリプトではない。
-
-- **2019 -- Tencent 絶悟が王者栄耀を支配。** Tencent AI Lab の「絶悟」は 2019 年 8 月 2 日、世界チャンピオンカップで [KPL プロ選手を 5v5 で撃破](https://www.jiemian.com/article/3371171.html)。1v1 モードではプロが [15 戦中 1 勝のみ、8 分以上生存不可](https://developer.aliyun.com/article/851058)。訓練強度:1 日 = 人間の 440 年。2021 年までに全ヒーロープールで KPL プロを全面的に上回った。手書きのヒーロー相性表なし。スクリプト化されたチーム編成なし。自己対戦でゲーム全体をゼロから学んだモデル。
-
-- **2024-2025 -- LLM Agent がソフトウェアエンジニアリングを再構築。** Claude、GPT、Gemini -- 人類のコードと推論の全幅で訓練された大規模言語モデル -- がコーディング Agent として展開される。コードベースを読み、実装を書き、障害をデバッグし、チームで協調する。アーキテクチャは先行するすべての Agent と同一:訓練されたモデルが環境に配置され、知覚と行動のツールを与えられる。唯一の違いは、学んだものの規模と解くタスクの汎用性。
-
-すべてのマイルストーンが同じ真理を共有している:**"Agent" は決して周囲のコードではない。Agent は常にモデルそのものだ。**
-
-### Agent ではないもの
-
-"Agent" という言葉は、プロンプト配管工の産業全体に乗っ取られてしまった。
+# Learn Claude Code
-
-ドラッグ&ドロップのワークフロービルダー。ノーコード "AI Agent" プラットフォーム。プロンプトチェーン・オーケストレーションライブラリ。すべて同じ幻想を共有している:LLM API 呼び出しを if-else 分岐、ノードグラフ、ハードコードされたルーティングロジックで繋ぎ合わせることが "Agent の構築" だと。
+高完成度の coding-agent harness を、0 から自分で実装できるようになるための教材リポジトリです。
-
-違う。彼らが作ったものはルーブ・ゴールドバーグ・マシンだ -- 過剰に設計された脆い手続き的ルールのパイプライン。LLM は美化されたテキスト補完ノードとして押し込まれているだけ。それは Agent ではない。壮大な妄想を持つシェルスクリプトだ。
+このリポジトリの目的は、実運用コードの細部を逐一なぞることではありません。
+本当に重要な設計主線を、学びやすい順序で理解し、あとで自分の手で作り直せるようになることです。
-
-**プロンプト配管工式 "Agent" は、モデルを訓練しないプログラマーの妄想だ。** 手続き的ロジックを積み重ねて知能を力技で再現しようとする -- 巨大なルールツリー、ノードグラフ、チェーン・プロンプトの滝 -- そして十分なグルーコードがいつか自律的振る舞いを創発すると祈る。しない。工学的手段で Agency をコーディングすることはできない。Agency は学習されるものであって、プログラムされるものではない。
+## このリポジトリが本当に教えるもの
-
-あのシステムたちは生まれた瞬間から死んでいる:脆弱で、スケールせず、汎化が根本的に不可能。GOFAI(Good Old-Fashioned AI、古典的記号 AI)の現代版だ -- 何十年も前に学術界が放棄した記号ルールシステムが、LLM のペンキを塗り直して再登場した。パッケージが違うだけで、同じ袋小路。
+まず一文で言うと:
-
-### マインドシフト:「Agent を開発する」から Harness を開発する へ
+**モデルが考え、harness がモデルに作業環境を与える。**
-
-「Agent を開発しています」と言うとき、意味できるのは二つだけだ:
+その作業環境を作る主な部品は次の通りです。
-
-**1. モデルを訓練する。** 強化学習、ファインチューニング、RLHF、その他の勾配ベースの手法で重みを調整する。タスクプロセスデータ -- 実ドメインにおける知覚・推論・行動の実際の系列 -- を収集し、モデルの振る舞いを形成する。DeepMind、OpenAI、Tencent AI Lab、Anthropic が行っていること。これが最も本来的な Agent 開発。
+- `Agent Loop`: モデルに聞く -> ツールを実行する -> 結果を返す
+- `Tools`: エージェントの手足
+- `Planning`: 大きな作業を途中で迷わせないための小さな構造
+- `Context Management`: アクティブな文脈を小さく保つ
+- `Permissions`: モデルの意図をそのまま危険な実行にしない
+- `Hooks`: ループを書き換えずに周辺機能を足す
+- `Memory`: セッションをまたいで残すべき事実だけを保持する
+- `Prompt Construction`: 安定ルールと実行時状態から入力を組み立てる
+- `Tasks / Teams / Worktree / MCP`: 単体 agent をより大きな作業基盤へ育てる
-
-**2. Harness を構築する。** モデルに動作環境を提供するコードを書く。私たちの大半が行っていることであり、このリポジトリの核心。
+この教材が目指すのは:
-
-Harness とは、Agent が特定のドメインで機能するために必要なすべて:
+- 主線を順序よく理解できること
+- 初学者が概念で迷子にならないこと
+- 核心メカニズムと重要データ構造を自力で再実装できること
-
-```
-Harness = Tools + Knowledge + Observation + Action Interfaces + Permissions
-
-  Tools:        ファイル I/O、シェル、ネットワーク、データベース、ブラウザ
-  Knowledge:    製品ドキュメント、ドメイン資料、API 仕様、スタイルガイド
-  Observation:  git diff、エラーログ、ブラウザ状態、センサーデータ
-  Action:       CLI コマンド、API 呼び出し、UI インタラクション
-  Permissions:  サンドボックス、承認ワークフロー、信頼境界
-```
-
-モデルが決断する。Harness が実行する。モデルが推論する。Harness がコンテキストを提供する。モデルはドライバー。Harness は車両。
+## あえて主線から外しているもの
-
-**コーディング Agent の Harness は IDE、ターミナル、ファイルシステム。** 農業 Agent の Harness はセンサーアレイ、灌漑制御、気象データフィード。ホテル Agent の Harness は予約システム、ゲストコミュニケーションチャネル、施設管理 API。Agent -- 知性、意思決定者 -- は常にモデル。Harness はドメインごとに変わる。Agent はドメインを超えて汎化する。
+実際の製品コードには、agent の本質とは直接関係しない細部も多くあります。
-
-このリポジトリは車両の作り方を教える。コーディング用の車両だ。だが設計パターンはあらゆるドメインに汎化する:農場管理、ホテル運営、工場製造、物流、医療、教育、科学研究。タスクが知覚され、推論され、実行される必要がある場所ならどこでも -- Agent には Harness が要る。
+たとえば:
-
-### Harness エンジニアの仕事
+- パッケージングや配布の流れ
+- クロスプラットフォーム互換層
+- 企業ポリシーやテレメトリ配線
+- 歴史互換のための分岐
+- 製品統合のための細かな glue code
-
-このリポジトリを読んでいるなら、あなたはおそらく Harness エンジニアだ -- それは強力なアイデンティティ。以下があなたの本当の仕事:
+こうした要素は本番では重要でも、0 から 1 を教える主線には置きません。
+教材リポジトリの中心は、あくまで「agent がどう動くか」です。
-
-- **ツールの実装。** Agent に手を与える。ファイル読み書き、シェル実行、API 呼び出し、ブラウザ制御、データベースクエリ。各ツールは Agent が環境内で取れる行動。原子的で、組み合わせ可能で、記述が明確であるように設計する。
+## 想定読者
-
-- **知識のキュレーション。** Agent にドメイン専門性を与える。製品ドキュメント、アーキテクチャ決定記録、スタイルガイド、規制要件。オンデマンドで読み込み(s05)、前もって詰め込まない。Agent は何が利用可能か知った上で、必要なものを自ら取得すべき。
+このリポジトリは次の読者を想定しています。
-
-- **コンテキストの管理。** Agent にクリーンな記憶を与える。サブ Agent 隔離(s04)がノイズの漏洩を防ぐ。コンテキスト圧縮(s06)が履歴の氾濫を防ぐ。タスクシステム(s07)が目標を単一の会話を超えて永続化する。
+- 基本的な Python が読める
+- 関数、クラス、リスト、辞書は分かる
+- でも agent システムは初学者でもよい
-
-- **権限の制御。** Agent に境界を与える。ファイルアクセスのサンドボックス化。破壊的操作への承認要求。Agent と外部システム間の信頼境界の実施。安全工学と Harness 工学の交差点。
+そのため、書き方の原則をはっきり決めています。
-
-- **タスクプロセスデータの収集。** Agent があなたの Harness 内で実行するすべての行動系列は訓練シグナル。実デプロイメントの知覚-推論-行動トレースは、次世代 Agent モデルをファインチューニングする原材料。あなたの Harness は Agent に仕えるだけでなく -- Agent を進化させる助けにもなる。
+- 新しい概念は、使う前に説明する
+- 1つの概念は、できるだけ1か所でまとまって理解できるようにする
+- まず「何か」、次に「なぜ必要か」、最後に「どう実装するか」を話す
+- 初学者に断片文書を拾わせて自力でつなげさせない
-
-あなたは知性を書いているのではない。知性が住まう世界を構築している。その世界の品質 -- Agent がどれだけ明瞭に知覚でき、どれだけ正確に行動でき、利用可能な知識がどれだけ豊かか -- が、知性がどれだけ効果的に自らを表現できるかを直接決定する。
+## 学習の約束
-
-**優れた Harness を作れ。Agent が残りをやる。**
+この教材を一通り終えたとき、目標は次の 2 つです。
-
-### なぜ Claude Code か -- Harness Engineering の大師範
+1. 0 から自分で、構造が明快で反復改善できる coding-agent harness を組み立てられること
+2. より複雑な実装を読むときに、何が設計主線で何が製品周辺の detail なのかを見分けられること
-
-なぜこのリポジトリは特に Claude Code を解剖するのか?
+このリポジトリが重視するのは:
-
-Claude Code は私たちが見てきた中で最もエレガントで完成度の高い Agent Harness だからだ。単一の巧妙なトリックのためではなく、それが *しないこと* のために:Agent そのものになろうとしない。硬直的なワークフローを押し付けない。精緻な決定木でモデルを二度推しない。ツール、知識、コンテキスト管理、権限境界をモデルに提供し -- そして道を譲る。
+- 重要メカニズムと主要データ構造の高い再現度
+- 自分の手で作り直せる実装可能性
+- 途中で理解がねじれにくい読み順と説明密度
-
-Claude Code の本質を剥き出しにすると:
+## 推奨される読み順
-
-```
-Claude Code = 一つの agent loop
-            + ツール (bash, read, write, edit, glob, grep, browser...)
-            + オンデマンド skill ロード
-            + コンテキスト圧縮
-            + サブ Agent スポーン
-            + 依存グラフ付きタスクシステム
-            + 非同期メールボックスによるチーム協調
-            + worktree 分離による並列実行
-            + 権限ガバナンス
-```
+日本語版でも主線・bridge doc・web の主要導線は揃えています。
+章順と補助資料は、日本語でもそのまま追えるように保っています。
-
-これがすべてだ。これが全アーキテクチャ。すべてのコンポーネントは Harness メカニズム -- Agent が住む世界の一部。Agent そのものは? Claude だ。モデル。Anthropic が人類の推論とコードの全幅で訓練した。Harness が Claude を賢くしたのではない。Claude は元々賢い。Harness が Claude に手と目とワークスペースを与えた。
+- 全体マップ: [`docs/ja/s00-architecture-overview.md`](./docs/ja/s00-architecture-overview.md)
+- コード読解順: [`docs/ja/s00f-code-reading-order.md`](./docs/ja/s00f-code-reading-order.md)
+- 用語集: [`docs/ja/glossary.md`](./docs/ja/glossary.md)
+- 教材範囲: [`docs/ja/teaching-scope.md`](./docs/ja/teaching-scope.md)
+- データ構造表: [`docs/ja/data-structures.md`](./docs/ja/data-structures.md)
-
-これが Claude Code が理想的な教材である理由だ:**モデルを信頼し、工学的努力を Harness に集中させるとどうなるかを示している。** このリポジトリの各セッション(s01-s12)は Claude Code アーキテクチャから一つの Harness メカニズムをリバースエンジニアリングする。終了時には、Claude Code の仕組みだけでなく、あらゆるドメインのあらゆる Agent に適用される Harness 工学の普遍的原則を理解している。
+## 初めてこのリポジトリを開くなら
-
-教訓は「Claude Code をコピーせよ」ではない。教訓は:**最高の Agent プロダクトは、自分の仕事が Harness であって Intelligence ではないと理解しているエンジニアが作る。**
+最初から章をばらばらに開かない方が安定します。
-
----
+最も安全な入口は次の順序です。
-
-## ビジョン:宇宙を本物の Agent で満たす
+1. [`docs/ja/s00-architecture-overview.md`](./docs/ja/s00-architecture-overview.md) で全体図をつかむ
+2. [`docs/ja/s00d-chapter-order-rationale.md`](./docs/ja/s00d-chapter-order-rationale.md) で、なぜこの順序で学ぶのかを確認する
+3. [`docs/ja/s00f-code-reading-order.md`](./docs/ja/s00f-code-reading-order.md) で、ローカルの `agents/*.py` をどの順で開くか確認する
+4. `s01-s06 -> s07-s11 -> s12-s14 -> s15-s19` の 4 段階で主線を順に進める
+5. 各段階の終わりで一度止まり、最小版を自分で書き直してから次へ進む
-
-これはコーディング Agent だけの話ではない。
+中盤以降で境界が混ざり始めたら、次の順で立て直すのが安定です。
-
-人間が複雑で多段階の判断集約的な仕事をしているすべてのドメインは、Agent が稼働できるドメインだ -- 正しい Harness さえあれば。このリポジトリのパターンは普遍的だ:
+1. [`docs/ja/data-structures.md`](./docs/ja/data-structures.md)
+2. [`docs/ja/entity-map.md`](./docs/ja/entity-map.md)
+3. いま詰まっている章に近い bridge doc
+4. その後で章本文へ戻る
-
-```
-不動産管理 Agent = モデル + 物件センサー + メンテナンスツール + テナント通信
-農業 Agent       = モデル + 土壌/気象データ + 灌漑制御 + 作物知識
-ホテル運営 Agent = モデル + 予約システム + ゲストチャネル + 施設 API
-医学研究 Agent   = モデル + 文献検索 + 実験機器 + プロトコル文書
-製造 Agent       = モデル + 生産ラインセンサー + 品質管理 + 物流
-教育 Agent       = モデル + カリキュラム知識 + 学生進捗 + 評価ツール
-```
+## Web 学習入口
-
-ループは常に同じ。ツールが変わる。知識が変わる。権限が変わる。Agent -- モデル -- がすべてを汎化する。
-
-このリポジトリを読むすべての Harness エンジニアは、ソフトウェアエンジニアリングを遥かに超えたパターンを学んでいる。知的で自動化された未来のためのインフラストラクチャを構築することを学んでいる。実ドメインにデプロイされた優れた Harness の一つ一つが、Agent が知覚し、推論し、行動できる新たな拠点。
-
-まずワークショップを満たす。次に農場、病院、工場。次に都市。次に惑星。
-
-**Bash is all you need. Real agents are all the universe needs.**
-
----
-
-```
-                 THE AGENT PATTERN
-                 =================
-
-  User --> messages[] --> LLM --> response
-                                     |
-                       stop_reason == "tool_use"?
-                             /            \
-                           yes             no
-                            |               |
-                     execute tools     return text
-                     append results
-                     loop back -----------------> messages[]
-
-
-  最小ループ。すべての AI Agent にこのループが必要だ。
-  モデルがツール呼び出しと停止を決める。
-  コードはモデルの要求を実行するだけ。
-  このリポジトリはこのループを囲むすべて --
-  Agent を特定ドメインで効果的にする Harness -- の作り方を教える。
-```
+章順、段階境界、章どうしの差分を可視化から入りたい場合は、組み込みの web 教材画面を使えます。
-
-**12 の段階的セッション、シンプルなループから分離された自律実行まで。**
-**各セッションは 1 つの Harness メカニズムを追加する。各メカニズムには 1 つのモットーがある。**
-
-> **s01**   *"One loop & Bash is all you need"* — 1つのツール + 1つのループ = エージェント
->
-> **s02**   *"ツールを足すなら、ハンドラーを1つ足すだけ"* — ループは変わらない。新ツールは dispatch map に登録するだけ
->
-> **s03**   *"計画のないエージェントは行き当たりばったり"* — まずステップを書き出し、それから実行
->
-> **s04**   *"大きなタスクを分割し、各サブタスクにクリーンなコンテキストを"* — サブエージェントは独立した messages[] を使い、メイン会話を汚さない
->
-> **s05**   *"必要な知識を、必要な時に読み込む"* — system prompt ではなく tool_result で注入
->
-> **s06**   *"コンテキストはいつか溢れる、空ける手段が要る"* — 3層圧縮で無限セッションを実現
->
-> **s07**   *"大きな目標を小タスクに分解し、順序付けし、ディスクに記録する"* — ファイルベースのタスクグラフ、マルチエージェント協調の基盤
->
-> **s08**   *"遅い操作はバックグラウンドへ、エージェントは次を考え続ける"* — デーモンスレッドがコマンド実行、完了後に通知を注入
->
-> **s09**   *"一人で終わらないなら、チームメイトに任せる"* — 永続チームメイト + 非同期メールボックス
->
-> **s10**   *"チームメイト間には統一の通信ルールが必要"* — 1つの request-response パターンが全交渉を駆動
->
-> **s11**   *"チームメイトが自らボードを見て、仕事を取る"* — リーダーが逐一割り振る必要はない
->
-> **s12**   *"各自のディレクトリで作業し、互いに干渉しない"* — タスクは目標を管理、worktree はディレクトリを管理、IDで紐付け
-
----
-
-## コアパターン
-
-```python
-def agent_loop(messages):
-    while True:
-        response = client.messages.create(
-            model=MODEL, system=SYSTEM,
-            messages=messages, tools=TOOLS,
-        )
-        messages.append({"role": "assistant",
-                         "content": response.content})
-
-        if response.stop_reason != "tool_use":
-            return
-
-        results = []
-        for block in response.content:
-            if block.type == "tool_use":
-                output = TOOL_HANDLERS[block.name](**block.input)
-                results.append({
-                    "type": "tool_result",
-                    "tool_use_id": block.id,
-                    "content": output,
-                })
-        messages.append({"role": "user", "content": results})
-```
+```sh
+cd web
+npm install
+npm run dev
+```
-
-各セッションはこのループの上に 1 つの Harness メカニズムを重ねる -- ループ自体は変わらない。ループは Agent のもの。メカニズムは Harness のもの。
-
-## スコープ (重要)
-
-このリポジトリは Harness 工学の 0->1 学習プロジェクト -- Agent モデルを囲む環境の構築を学ぶ。
-学習を優先するため、以下の本番メカニズムは意図的に簡略化または省略している:
-
-- 完全なイベント / Hook バス (例: PreToolUse, SessionStart/End, ConfigChange)。
-  s12 では教材用に最小の追記型ライフサイクルイベントのみ実装。
-- ルールベースの権限ガバナンスと信頼フロー
-- セッションライフサイクル制御 (resume/fork) と高度な worktree ライフサイクル制御
-- MCP ランタイムの詳細 (transport/OAuth/リソース購読/ポーリング)
-
-このリポジトリの JSONL メールボックス方式は教材用の実装であり、特定の本番内部実装を主張するものではない。
+開いたあと、まず見ると良いルートは次です。
+
+- `/ja`: 日本語の学習入口。最初にどの読み方を選ぶか決める
+- `/ja/timeline`: 主線を順にたどる最も安定した入口
+- `/ja/layers`: 4 段階の境界を先に理解する入口
+- `/ja/compare`: 2 章の差やジャンプ診断を見る入口
+
+初回読みに最も向くのは `timeline` です。
+途中で境界が混ざったら、先に `layers` と `compare` を見てから本文へ戻る方が安定します。
+
+### 橋渡しドキュメント
+
+これは新しい主線章ではなく、中盤以降の理解をつなぐための補助文書です。
+
+- なぜこの章順なのか: [`docs/ja/s00d-chapter-order-rationale.md`](./docs/ja/s00d-chapter-order-rationale.md)
+- このリポジトリのコード読解順: [`docs/ja/s00f-code-reading-order.md`](./docs/ja/s00f-code-reading-order.md)
+- 参照リポジトリのモジュール対応: [`docs/ja/s00e-reference-module-map.md`](./docs/ja/s00e-reference-module-map.md)
+- クエリ制御プレーン: [`docs/ja/s00a-query-control-plane.md`](./docs/ja/s00a-query-control-plane.md)
+- 1リクエストの全ライフサイクル: [`docs/ja/s00b-one-request-lifecycle.md`](./docs/ja/s00b-one-request-lifecycle.md)
+- クエリ遷移モデル: [`docs/ja/s00c-query-transition-model.md`](./docs/ja/s00c-query-transition-model.md)
+- ツール制御プレーン: [`docs/ja/s02a-tool-control-plane.md`](./docs/ja/s02a-tool-control-plane.md)
+- ツール実行ランタイム: [`docs/ja/s02b-tool-execution-runtime.md`](./docs/ja/s02b-tool-execution-runtime.md)
+- Message / Prompt パイプライン: [`docs/ja/s10a-message-prompt-pipeline.md`](./docs/ja/s10a-message-prompt-pipeline.md)
+- ランタイムタスクモデル: [`docs/ja/s13a-runtime-task-model.md`](./docs/ja/s13a-runtime-task-model.md)
+- MCP 能力レイヤー: [`docs/ja/s19a-mcp-capability-layers.md`](./docs/ja/s19a-mcp-capability-layers.md)
+- Teammate・Task・Lane モデル: [`docs/ja/team-task-lane-model.md`](./docs/ja/team-task-lane-model.md)
+- エンティティ地図: [`docs/ja/entity-map.md`](./docs/ja/entity-map.md)
+
+### 4 段階の主線
+
+1. `s01-s06`: まず単体 agent のコアを作る
+2. `s07-s11`: 安全性、拡張性、記憶、prompt、recovery を足す
+3. `s12-s14`: 一時的な計画を持続的なランタイム作業へ育てる
+4. `s15-s19`: チーム、プロトコル、自律動作、分離実行、外部 capability routing へ進む
+
+### 主線の章
+
+| 章 | テーマ | 得られるもの |
+|---|---|---|
+| `s00` | Architecture Overview | 全体マップ、用語、学習順 |
+| `s01` | Agent Loop | 最小の動く agent ループ |
+| `s02` | Tool Use | 安定したツール分配 |
+| `s03` | Todo / Planning | 可視化されたセッション計画 |
+| `s04` | Subagent | 委譲時の新鮮な文脈 |
+| `s05` | Skills | 必要な知識だけを後から読む仕組み |
+| `s06` | Context Compact | アクティブ文脈を小さく保つ |
+| `s07` | Permission System | 実行前の安全ゲート |
+| `s08` | Hook System | ループ周辺の拡張点 |
+| `s09` | Memory System | セッションをまたぐ長期情報 |
+| `s10` | System Prompt | セクション分割された prompt 組み立て |
+| `s11` | Error Recovery | 続行・再試行・停止の分岐 |
+| `s12` | Task System | 永続タスクグラフ |
+| `s13` | Background Tasks | 非ブロッキング実行 |
+| `s14` | Cron Scheduler | 時間起点のトリガー |
+| `s15` | Agent Teams | 永続チームメイト |
+| `s16` | Team Protocols | 共有された協調ルール |
+| `s17` | Autonomous Agents | 自律的な認識・再開 |
+| `s18` | Worktree Isolation | 分離実行レーン |
+| `s19` | MCP & Plugin | 外部 capability routing |
 
 ## クイックスタート
 
@@ -236,137 +180,78 @@
 ```sh
 git clone https://github.com/shareAI-lab/learn-claude-code
 cd learn-claude-code
 pip install -r requirements.txt
-cp .env.example .env  # .env を編集して ANTHROPIC_API_KEY を入力
-
-python agents/s01_agent_loop.py              # ここから開始
-python agents/s12_worktree_task_isolation.py # 全セッションの到達点
-python agents/s_full.py                      # 総括: 全メカニズム統合
+cp .env.example .env
 ```
-
-### Web プラットフォーム
-
-インタラクティブな可視化、ステップスルーアニメーション、ソースビューア、各セッションのドキュメント。
+その後、`.env` に `ANTHROPIC_API_KEY` または互換エンドポイントを設定してから:
 
 ```sh
-cd web && npm install && npm run dev  # http://localhost:3000
+python agents/s01_agent_loop.py
+python agents/s18_worktree_task_isolation.py
+python agents/s19_mcp_plugin.py
+python agents/s_full.py
 ```
-
-## 学習パス
-
-```
-フェーズ1: ループ                      フェーズ2: 計画と知識
-==================                    ==============================
-s01 エージェントループ [1]             s03 TodoWrite [5]
-    while + stop_reason                   TodoManager + nag リマインダー
-    |                                     |
-    +-> s02 Tool Use [4]                  s04 サブエージェント [5]
-        dispatch map: name->handler           子ごとに新しい messages[]
-                                          |
-                                          s05 Skills [5]
-                                              SKILL.md を tool_result で注入
-                                          |
-                                          s06 Context Compact [5]
-                                              3層コンテキスト圧縮
-
-フェーズ3: 永続化                      フェーズ4: チーム
-==================                    =====================
-s07 タスクシステム [8]                 s09 エージェントチーム [9]
-    ファイルベース CRUD + 依存グラフ       チームメイト + JSONL メールボックス
-    |                                     |
-s08 バックグラウンドタスク [6]         s10 チームプロトコル [12]
-    デーモンスレッド + 通知キュー          シャットダウン + プラン承認 FSM
-                                          |
-                                          s11 自律エージェント [14]
-                                              アイドルサイクル + 自動クレーム
-                                          |
-                                          s12 Worktree 分離 [16]
-                                              タスク調整 + 必要時の分離実行レーン
-
-    [N] = ツール数
-```
-
-## プロジェクト構成
-
-```
-learn-claude-code/
-|
-|-- agents/              # Python リファレンス実装 (s01-s12 + s_full 総括)
-|-- docs/{en,zh,ja}/     # メンタルモデル優先のドキュメント (3言語)
-|-- web/                 # インタラクティブ学習プラットフォーム (Next.js)
-|-- skills/              # s05 の Skill ファイル
-+-- .github/workflows/ci.yml  # CI: 型チェック + ビルド
-```
+おすすめの進め方:
-
-## ドキュメント
+1. まず `s01` を動かし、最小ループが本当に動くことを確認する
+2. `s00` を読みながら `s01 -> s11` を順に進める
+3. 単体 agent 本体と control plane が安定して理解できてから `s12 -> s19` に入る
+4. 最後に `s_full.py` を見て、全部の機構を一枚の全体像に戻す
-
-メンタルモデル優先: 問題、解決策、ASCII図、最小限のコード。
-[English](./docs/en/) | [中文](./docs/zh/) | [日本語](./docs/ja/)
+## 各章の読み方
-
-| セッション | トピック | モットー |
-|-----------|---------|---------|
-| [s01](./docs/ja/s01-the-agent-loop.md) | エージェントループ | *One loop & Bash is all you need* |
-| [s02](./docs/ja/s02-tool-use.md) | Tool Use | *ツールを足すなら、ハンドラーを1つ足すだけ* |
-| [s03](./docs/ja/s03-todo-write.md) | TodoWrite | *計画のないエージェントは行き当たりばったり* |
-| [s04](./docs/ja/s04-subagent.md) | サブエージェント | *大きなタスクを分割し、各サブタスクにクリーンなコンテキストを* |
-| [s05](./docs/ja/s05-skill-loading.md) | Skills | *必要な知識を、必要な時に読み込む* |
-| [s06](./docs/ja/s06-context-compact.md) | Context Compact | *コンテキストはいつか溢れる、空ける手段が要る* |
-| [s07](./docs/ja/s07-task-system.md) | タスクシステム | *大きな目標を小タスクに分解し、順序付けし、ディスクに記録する* |
-| [s08](./docs/ja/s08-background-tasks.md) | バックグラウンドタスク | *遅い操作はバックグラウンドへ、エージェントは次を考え続ける* |
-| [s09](./docs/ja/s09-agent-teams.md) | エージェントチーム | *一人で終わらないなら、チームメイトに任せる* |
-| [s10](./docs/ja/s10-team-protocols.md) | チームプロトコル | *チームメイト間には統一の通信ルールが必要* |
-| [s11](./docs/ja/s11-autonomous-agents.md) | 自律エージェント | *チームメイトが自らボードを見て、仕事を取る* |
-| [s12](./docs/ja/s12-worktree-task-isolation.md) | Worktree + タスク分離 | *各自のディレクトリで作業し、互いに干渉しない* |
+各章は、次の順序で読むと理解しやすいです。
-
-## 次のステップ -- 理解から出荷へ
+1. この機構がないと何が困るか
+2. 新しい概念は何か
+3. 最小で正しい実装は何か
+4. 状態はどこに置かれるのか
+5. それがループにどう接続されるのか
+6. この章ではどこで一度止まり、何を後回しにしてよいのか
-
-12 セッションを終えれば、Harness 工学の内部構造を完全に理解している。その知識を活かす 2 つの方法:
+もし読んでいて:
-
-### Kode Agent CLI -- オープンソース Coding Agent CLI
+- 「これは主線なのか、補足なのか」
+- 「この状態は結局どこにあるのか」
-
-> `npm i -g @shareai-lab/kode`
+と迷ったら、次を見直してください。
-
-Skill & LSP 対応、Windows 対応、GLM / MiniMax / DeepSeek 等のオープンモデルに接続可能。インストールしてすぐ使える。
+- [`docs/ja/teaching-scope.md`](./docs/ja/teaching-scope.md)
+- [`docs/ja/data-structures.md`](./docs/ja/data-structures.md)
+- [`docs/ja/entity-map.md`](./docs/ja/entity-map.md)
-
-GitHub: **[shareAI-lab/Kode-cli](https://github.com/shareAI-lab/Kode-cli)**
+## 構成
-
-### Kode Agent SDK -- アプリにエージェント機能を埋め込む
-
-公式 Claude Code Agent SDK は内部で完全な CLI プロセスと通信する -- 同時ユーザーごとに独立のターミナルプロセスが必要。Kode SDK は独立ライブラリでユーザーごとのプロセスオーバーヘッドがなく、バックエンド、ブラウザ拡張、組み込みデバイス等に埋め込み可能。
+```text
+learn-claude-code/
+├── agents/            # 章ごとの実行可能な Python 参考実装
+├── docs/zh/           # 中国語の主線文書
+├── docs/en/           # 英語文書
+├── docs/ja/           # 日本語文書
+├── skills/            # s05 で使う skill ファイル
+├── web/               # Web 教材プラットフォーム
+└── requirements.txt
+```
-
-GitHub: **[shareAI-lab/Kode-agent-sdk](https://github.com/shareAI-lab/Kode-agent-sdk)**
-
----
-
-## 姉妹教材: *オンデマンドセッション*から*常時稼働アシスタント*へ
+## 言語の状態
-
-本リポジトリが教える Harness は **使い捨て型** -- ターミナルを開き、Agent にタスクを与え、終わったら閉じる。次のセッションは白紙から始まる。Claude Code のモデル。
+中国語が正本であり、更新も最も速いです。
-
-[OpenClaw](https://github.com/openclaw/openclaw) は別の可能性を証明した: 同じ agent core の上に 2 つの Harness メカニズムを追加するだけで、Agent は「突かないと動かない」から「30 秒ごとに自分で起きて仕事を探す」に変わる:
+- `zh`: 最も完全で、最もレビューされている
+- `en`: 主線章と主要な橋渡し文書が利用できる
+- `ja`: 主線章と主要な橋渡し文書が利用できる
-
-- **ハートビート** -- 30 秒ごとに Harness が Agent にメッセージを送り、やることがあるか確認させる。なければスリープ続行、あれば即座に行動。
-- **Cron** -- Agent が自ら未来のタスクをスケジュールし、時間が来たら自動実行。
+最も深く、最も更新の速い説明を追うなら、まず中国語版を優先してください。
-
-さらにマルチチャネル IM ルーティング (WhatsApp / Telegram / Slack / Discord 等 13+ プラットフォーム)、永続コンテキストメモリ、Soul パーソナリティシステムを加えると、Agent は使い捨てツールから常時稼働のパーソナル AI アシスタントへ変貌する。
+## 最終目標
-
-**[claw0](https://github.com/shareAI-lab/claw0)** はこれらの Harness メカニズムをゼロから分解する姉妹教材リポジトリ:
+読み終わるころには、次の問いに自分の言葉で答えられるようになるはずです。
-
-```
-claw agent = agent core + heartbeat + cron + IM chat + memory + soul
-```
+- coding agent の最小状態は何か
+- `tool_result` がなぜループの中心なのか
+- どういう時に subagent を使うべきか
+- permissions、hooks、memory、prompt、task がそれぞれ何を解決するのか
+- いつ単体 agent を tasks、teams、worktrees、MCP へ成長させるべきか
-
-```
-learn-claude-code                claw0
-(agent harness コア:            (能動的な常時稼働 harness:
- ループ、ツール、計画、           ハートビート、cron、IM チャネル、
- チーム、worktree 分離)          メモリ、Soul パーソナリティ)
-```
-
-## ライセンス
-
-MIT
-
----
-
-**モデルが Agent だ。コードは Harness だ。優れた Harness を作れ。Agent が残りをやる。**
-
-**Bash is all you need. Real agents are all the universe needs.**
+それを説明できて、自分で似たシステムを作れるなら、このリポジトリの目的は達成です。
diff --git a/README-zh.md b/README-zh.md
index 843cce1f3..85b2840b1 100644
--- a/README-zh.md
+++ b/README-zh.md
@@ -1,234 +1,253 @@
-# Learn Claude Code -- 真正的 Agent Harness 工程
+# Learn Claude Code
 
 [English](./README.md) | [中文](./README-zh.md) | [日本語](./README-ja.md)
-
-## 模型就是 Agent
+一个面向实现者的教学仓库：从零开始，手搓一个高完成度的 coding agent harness。
-
-在讨论代码之前，先把一件事彻底说清楚。
+这里教的不是“如何逐行模仿某个官方仓库”，而是“如何抓住真正决定 agent 能力的核心机制”，用清晰、渐进、可自己实现的方式，把一个类似 Claude Code 的系统从 0 做到能用、好用、可扩展。
-
-**Agent 是模型。不是框架。不是提示词链。不是拖拽式工作流。**
+## 这个仓库到底在教什么
-
-### Agent 到底是什么
+先把一句话说清楚：
-
-Agent 是一个神经网络 -- Transformer、RNN、一个被训练出来的函数 -- 经过数十亿次梯度更新，在行动序列数据上学会了感知环境、推理目标、采取行动。"Agent" 这个词在 AI 领域从诞生之日起就是这个意思。从来都是。
+**模型负责思考。代码负责给模型提供工作环境。**
-
-人类就是 agent。一个由数百万年进化训练出来的生物神经网络，通过感官感知世界，通过大脑推理，通过身体行动。当 DeepMind、OpenAI 或 Anthropic 说 "agent" 时，他们说的和这个领域自诞生以来就一直在说的完全一样：**一个学会了行动的模型。**
+这个“工作环境”就是 `harness`。
+对 coding agent 来说，harness 主要由这些部分组成：
-
-历史已经写好了铁证：
+- `Agent Loop`：不停地“向模型提问 -> 执行工具 -> 把结果喂回去”。
+- `Tools`：读文件、写文件、改文件、跑命令、搜索内容。
+- `Planning`：把大目标拆成小步骤，不让 agent 乱撞。
+- `Context Management`：避免上下文越跑越脏、越跑越长。
+- `Permissions`：危险操作先过安全关。
+- `Hooks`：不改核心循环，也能扩展行为。
+- `Memory`：把跨会话仍然有价值的信息保存下来。
+- `Prompt Construction`：把系统说明、工具信息、约束和上下文组装好。
+- `Tasks / Teams / Worktree / MCP`：让系统从单 agent 升级成更完整的工作平台。
-
-- **2013 -- DeepMind DQN 玩 Atari。** 一个神经网络，只接收原始像素和游戏分数，学会了 7 款 Atari 2600 游戏 -- 超越所有先前算法，在其中 3 款上击败人类专家。到 2015 年，同一架构扩展到 [49 款游戏，达到职业人类测试员水平](https://www.nature.com/articles/nature14236)，论文发表在 *Nature*。没有游戏专属规则。没有决策树。一个模型，从经验中学习。那个模型就是 agent。
+本仓库的目标，是让你真正理解这些机制为什么存在、最小版本怎么实现、什么时候该升级到更完整的版本。
-
-- **2019 -- OpenAI Five 征服 Dota 2。** 五个神经网络，在 10 个月内与自己对战了 [45,000 年的 Dota 2](https://openai.com/index/openai-five-defeats-dota-2-world-champions/)，在旧金山直播赛上 2-0 击败了 **OG** -- TI8 世界冠军。随后的公开竞技场中，AI 在 42,729 场比赛中胜率 99.4%。没有脚本化的策略。没有元编程的团队协调逻辑。模型完全通过自我对弈学会了团队协作、战术和实时适应。
+## 这个仓库不教什么
-
-- **2019 -- DeepMind AlphaStar 制霸星际争霸 II。** AlphaStar 在闭门赛中 [10-1 击败职业选手](https://deepmind.google/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii/)，随后在欧洲服务器上达到[宗师段位](https://www.nature.com/articles/d41586-019-03298-6) -- 90,000 名玩家中的前 0.15%。一个信息不完全、实时决策、组合动作空间远超国际象棋和围棋的游戏。Agent 是什么？是模型。训练出来的。不是编出来的。
+本仓库**不追求**把某个真实生产仓库的所有实现细节逐条抄下来。
-
-- **2019 -- 腾讯绝悟统治王者荣耀。** 腾讯 AI Lab 的 "绝悟" 于 2019 年 8 月 2 日世冠杯半决赛上[以 5v5 击败 KPL 职业选手](https://www.jiemian.com/article/3371171.html)。在 1v1 模式下，职业选手 [15 场只赢 1 场，最多坚持不到 8 分钟](https://developer.aliyun.com/article/851058)。训练强度：一天等于人类 440 年。到 2021 年，绝悟在全英雄池 BO5 上全面超越 KPL 职业选手水准。没有手工编写的英雄克制表。没有脚本化的阵容编排。一个从零开始通过自我对弈学习整个游戏的模型。
+下面这些内容，如果和 agent 的核心运行机制关系不大，就不会占据主线篇幅：
-
-- **2024-2025 -- LLM Agent 重塑软件工程。** Claude、GPT、Gemini -- 在人类全部代码和推理上训练的大语言模型 -- 被部署为编程 agent。它们阅读代码库，编写实现，调试故障，团队协作。架构与之前每一个 agent 完全相同：一个训练好的模型，放入一个环境，给予感知和行动的工具。唯一的不同是它们学到的东西的规模和解决任务的通用性。
+- 打包、编译、发布流程
+- 跨平台兼容层的全部细节
+- 企业策略、遥测、远程控制、账号体系的完整接线
+- 为了历史兼容或产品集成而出现的大量边角判断
+- 只对某个特定内部运行环境有意义的命名或胶水代码
-
-每一个里程碑都共享同一个真理：**"Agent" 从来都不是外面那层代码。Agent 永远是模型本身。**
+这不是偷懒，而是教学取舍。
-
-### Agent 不是什么
+一个好的教学仓库，应该优先保证三件事：
-
-"Agent" 这个词已经被一整个提示词水管工产业劫持了。
+1. 读者能从 0 到 1 自己做出来。
+2. 读者不会被大量无关细节打断心智。
+3. 真正关键的机制、数据结构和模块协作关系讲得完整、准确、没有幻觉。
-
-拖拽式工作流构建器。无代码 "AI Agent" 平台。提示词链编排库。它们共享同一个幻觉：把 LLM API 调用用 if-else 分支、节点图、硬编码路由逻辑串在一起就算是 "构建 Agent" 了。
+## 面向的读者
-
-不是的。它们做出来的东西是鲁布·戈德堡机械 -- 一个过度工程化的、脆弱的过程式规则流水线，LLM 被楔在里面当一个美化了的文本补全节点。那不是 Agent。那是一个有着宏大妄想的 shell 脚本。
+这个仓库默认读者是：
-
-**提示词水管工式 "Agent" 是不做模型的程序员的意淫。** 他们试图通过堆叠过程式逻辑来暴力模拟智能 -- 庞大的规则树、节点图、链式提示词瀑布流 -- 然后祈祷足够多的胶水代码能涌现出自主行为。不会的。你不可能通过工程手段编码出 agency。Agency 是学出来的，不是编出来的。
+- 会一点 Python
+- 知道函数、类、字典、列表这些基础概念
+- 但不一定系统做过 agent、编译器、分布式系统或复杂工程架构
-
-那些系统从诞生之日起就已经死了：脆弱、不可扩展、根本不具备泛化能力。它们是 GOFAI(Good Old-Fashioned AI，经典符号 AI)的现代还魂 -- 几十年前就被学界抛弃的符号规则系统，现在喷了一层 LLM 的漆又登场了。换了个包装，同一条死路。
+所以这里会坚持几个写法原则：
-
-### 心智转换：从 "开发 Agent" 到开发 Harness
+- 新概念先解释再使用。
+- 同一个概念尽量只在一个地方完整讲清。
+- 先讲“它是什么”，再讲“为什么需要”，最后讲“如何实现”。
+- 不把初学者扔进一堆互相引用的碎片文档里自己拼图。
-
-当一个人说 "我在开发 Agent" 时，他只可能是两个意思之一：
+## 学习承诺
-
-**1. 训练模型。** 通过强化学习、微调、RLHF 或其他基于梯度的方法调整权重。收集任务过程数据 -- 真实领域中感知、推理、行动的实际序列 -- 用它们来塑造模型的行为。这是 DeepMind、OpenAI、腾讯 AI Lab、Anthropic 在做的事。这是最本义的 Agent 开发。
+学完这套内容，你应该能做到两件事：
-
-**2. 构建 Harness。** 编写代码，为模型提供一个可操作的环境。这是我们大多数人在做的事，也是本仓库的核心。
+1. 自己从零写出一个结构清楚、可运行、可迭代的 coding agent harness。
+2. 看懂更复杂系统时，知道哪些是主干机制，哪些只是产品化外围细节。
-
-Harness 是 agent 在特定领域工作所需要的一切：
+我们追求的是：
-
-```
-Harness = Tools + Knowledge + Observation + Action Interfaces + Permissions
-
-  Tools:        文件读写、Shell、网络、数据库、浏览器
-  Knowledge:    产品文档、领域资料、API 规范、风格指南
-  Observation:  git diff、错误日志、浏览器状态、传感器数据
-  Action:       CLI 命令、API 调用、UI 交互
-  Permissions:  沙箱隔离、审批流程、信任边界
-```
-
-模型做决策。Harness 执行。模型做推理。Harness 提供上下文。模型是驾驶者。Harness 是载具。
-
-**编程 agent 的 harness 是它的 IDE、终端和文件系统。** 农业 agent 的 harness 是传感器阵列、灌溉控制和气象数据。酒店 agent 的 harness 是预订系统、客户沟通渠道和设施管理 API。Agent -- 那个智能、那个决策者 -- 永远是模型。Harness 因领域而变。Agent 跨领域泛化。
-
-这个仓库教你造载具。编程用的载具。但设计模式可以泛化到任何领域：庄园管理、农田运营、酒店运作、工厂制造、物流调度、医疗保健、教育培训、科学研究。只要有一个任务需要被感知、推理和执行 -- agent 就需要一个 harness。
-
-### Harness 工程师到底在做什么
-
-如果你在读这个仓库，你很可能是一名 harness 工程师 -- 这是一个强大的身份。以下是你真正的工作：
-
-- **实现工具。** 给 agent 一双手。文件读写、Shell 执行、API 调用、浏览器控制、数据库查询。每个工具都是 agent 在环境中可以采取的一个行动。设计它们时要原子化、可组合、描述清晰。
-
-- **策划知识。** 给 agent 领域专长。产品文档、架构决策记录、风格指南、合规要求。按需加载(s05)，不要前置塞入。Agent 应该知道有什么可用，然后自己拉取所需。
-
-- **管理上下文。** 给 agent 干净的记忆。子 agent 隔离(s04)防止噪声泄露。上下文压缩(s06)防止历史淹没。任务系统(s07)让目标持久化到单次对话之外。
-
-- **控制权限。** 给 agent 边界。沙箱化文件访问。对破坏性操作要求审批。在 agent 和外部系统之间实施信任边界。这是安全工程与 harness 工程的交汇点。
-
-- **收集任务过程数据。** Agent 在你的 harness 中执行的每一条行动序列都是训练信号。真实部署中的感知-推理-行动轨迹是微调下一代 agent 模型的原材料。你的 harness 不仅服务于 agent -- 它还可以帮助进化 agent。
-
-你不是在编写智能。你是在构建智能栖居的世界。这个世界的质量 -- agent 能看得多清楚、行动得多精准、可用知识有多丰富 -- 直接决定了智能能多有效地表达自己。
+- 对关键机制和关键数据结构的高保真理解
+- 对实现路径的高可操作性
+- 对教学路径的高可读性
-
-**造好 Harness。Agent 会完成剩下的。**
+而不是把“原始源码里存在过的所有复杂细节”一股脑堆给你。
-
-### 为什么是 Claude Code -- Harness 工程的大师课
+## 建议阅读顺序
-
-为什么这个仓库专门拆解 Claude Code?
+先读总览，再按顺序向后读。
-
-因为 Claude Code 是我们所见过的最优雅、最完整的 agent harness 实现。不是因为某个巧妙的技巧，而是因为它 *没做* 的事：它没有试图成为 agent 本身。它没有强加僵化的工作流。它没有用精心设计的决策树去替模型做判断。它给模型提供了工具、知识、上下文管理和权限边界 -- 然后让开了。
+- 总览：[`docs/zh/s00-architecture-overview.md`](./docs/zh/s00-architecture-overview.md)
+- 代码阅读顺序：[`docs/zh/s00f-code-reading-order.md`](./docs/zh/s00f-code-reading-order.md)
+- 术语表：[`docs/zh/glossary.md`](./docs/zh/glossary.md)
+- 教学范围：[`docs/zh/teaching-scope.md`](./docs/zh/teaching-scope.md)
+- 数据结构总表：[`docs/zh/data-structures.md`](./docs/zh/data-structures.md)
-
-把 Claude Code 剥到本质来看：
+## 第一次打开仓库，最推荐这样走
-
-```
-Claude Code = 一个 agent loop
-            + 工具 (bash, read, write, edit, glob, grep, browser...)
-            + 按需 skill 加载
-            + 上下文压缩
-            + 子 agent 派生
-            + 带依赖图的任务系统
-            + 异步邮箱的团队协调
-            + worktree 隔离的并行执行
-            + 权限治理
-```
-
-就这些。这就是全部架构。每一个组件都是 harness 机制 -- 为 agent 构建的栖居世界的一部分。Agent 本身呢？是 Claude。一个模型。由 Anthropic 在人类推理和代码的全部广度上训练而成。Harness 没有让 Claude 变聪明。Claude 本来就聪明。Harness 给了 Claude 双手、双眼和一个工作空间。
+如果你是第一次进这个仓库，不要随机点章节。
-
-这就是 Claude Code 作为教学标本的意义：**它展示了当你信任模型、把工程精力集中在 harness 上时会发生什么。** 本仓库的每一个课程(s01-s12)都在逆向工程 Claude Code 架构中的一个 harness 机制。学完之后，你理解的不只是 Claude Code 怎么工作，而是适用于任何领域、任何 agent 的 harness 工程通用原则。
+最稳的入口顺序是：
-
-启示不是 "复制 Claude Code"。启示是：**最好的 agent 产品，出自那些明白自己的工作是 harness 而非 intelligence 的工程师之手。**
+1. 先看 [`docs/zh/s00-architecture-overview.md`](./docs/zh/s00-architecture-overview.md)，确认系统全景。
+2. 再看 [`docs/zh/s00d-chapter-order-rationale.md`](./docs/zh/s00d-chapter-order-rationale.md)，确认为什么主线必须按这个顺序长出来。
+3. 再看 [`docs/zh/s00f-code-reading-order.md`](./docs/zh/s00f-code-reading-order.md)，确认本地 `agents/*.py` 该按什么顺序打开。
+4. 然后按四阶段读主线：`s01-s06 -> s07-s11 -> s12-s14 -> s15-s19`。
+5. 每学完一个阶段，停下来自己手写一个最小版本，不要等全部看完再回头补实现。
-
----
+如果你读到一半开始打结，最稳的重启顺序是：
-
-## 愿景：用真正的 Agent 铺满宇宙
+1. [`docs/zh/data-structures.md`](./docs/zh/data-structures.md)
+2. [`docs/zh/entity-map.md`](./docs/zh/entity-map.md)
+3. 当前卡住章节对应的桥接文档
+4. 再回当前章节正文
-
-这不只关乎编程 agent。
+## Web 学习入口
-
-每一个人类从事复杂、多步骤、需要判断力的工作的领域，都是 agent 可以运作的领域 -- 只要有对的 harness。本仓库中的模式是通用的：
+如果你更喜欢先看可视化的主线、阶段和章节差异，可以直接跑本仓库自带的 web 教学界面：
-
-```
-庄园管理 agent = 模型 + 物业传感器 + 维护工具 + 租户通信
-农业 agent     = 模型 + 土壤/气象数据 + 灌溉控制 + 作物知识
-酒店运营 agent = 模型 + 预订系统 + 客户渠道 + 设施 API
-医学研究 agent = 模型 + 文献检索 + 实验仪器 + 协议文档
-制造业 agent   = 模型 + 产线传感器 + 质量控制 + 物流系统
-教育 agent     = 模型 + 课程知识 + 学生进度 + 评估工具
-```
-
-循环永远不变。工具在变。知识在变。权限在变。Agent -- 那个模型 -- 泛化一切。
-
-每一个读这个仓库的 harness 工程师都在学习远超软件工程的模式。你在学习为一个智能的、自动化的未来构建基础设施。每一个部署在真实领域的好 harness，都是 agent 能够感知、推理、行动的又一个阵地。
+```sh
+cd web
+npm install
+npm run dev
+```
-
-先铺满工作室。然后是农田、医院、工厂。然后是城市。然后是星球。
+然后按这个顺序打开：
-
-**Bash is all you need. Real agents are all the universe needs.**
+- `/zh`：总入口，适合第一次进入仓库时选学习路线
+- `/zh/timeline`：看整条主线如何按顺序展开
+- `/zh/layers`：看四阶段边界，适合先理解为什么这样分层
+- `/zh/compare`：当你开始分不清两章差异时，用来做相邻对比或阶段跳跃诊断
-
----
+如果你是第一次学，推荐先走 `timeline`。
+如果你已经读到中后段开始混，优先看 `layers` 和 `compare`，不要先硬钻源码。
-
-```
-                 THE AGENT PATTERN
-                 =================
-
-  User --> messages[] --> LLM --> response
-                                     |
-                       stop_reason == "tool_use"?
-                             /            \
-                           yes             no
-                            |               |
-                     execute tools     return text
-                     append results
-                     loop back -----------------> messages[]
-
-
-  这是最小循环。每个 AI Agent 都需要这个循环。
-  模型决定何时调用工具、何时停止。
-  代码只是执行模型的要求。
-  本仓库教你构建围绕这个循环的一切 --
-  让 agent 在特定领域高效工作的 harness。
-```
+### 桥接阅读
-
-**12 个递进式课程, 从简单循环到隔离化的自治执行。**
-**每个课程添加一个 harness 机制。每个机制有一句格言。**
+下面这些文档不是新的主线章节，而是帮助你把中后半程真正讲透的“桥接层”：
-
-> **s01**   *"One loop & Bash is all you need"* — 一个工具 + 一个循环 = 一个 Agent
->
-> **s02**   *"加一个工具, 只加一个 handler"* — 循环不用动, 新工具注册进 dispatch map 就行
->
-> **s03**   *"没有计划的 agent 走哪算哪"* — 先列步骤再动手, 完成率翻倍
->
-> **s04**   *"大任务拆小, 每个小任务干净的上下文"* — Subagent 用独立 messages[], 不污染主对话
->
-> **s05**   *"用到什么知识, 临时加载什么知识"* — 通过 tool_result 注入, 不塞 system prompt
->
-> **s06**   *"上下文总会满, 要有办法腾地方"* — 三层压缩策略, 换来无限会话
->
-> **s07**   *"大目标要拆成小任务, 排好序, 记在磁盘上"* — 文件持久化的任务图, 为多 agent 协作打基础
->
-> **s08**   *"慢操作丢后台, agent 继续想下一步"* — 后台线程跑命令, 完成后注入通知
->
-> **s09**   *"任务太大一个人干不完, 要能分给队友"* — 持久化队友 + 异步邮箱
->
-> **s10**   *"队友之间要有统一的沟通规矩"* — 一个 request-response 模式驱动所有协商
->
-> **s11**   *"队友自己看看板, 有活就认领"* — 不需要领导逐个分配, 自组织
->
-> **s12**   *"各干各的目录, 互不干扰"* — 任务管目标, worktree 管目录, 按 ID 绑定
-
----
-
-## 核心模式
-
-```python
-def agent_loop(messages):
-    while True:
-        response = client.messages.create(
-            model=MODEL, system=SYSTEM,
-            messages=messages, tools=TOOLS,
-        )
-        messages.append({"role": "assistant",
-                         "content": response.content})
-
-        if response.stop_reason != "tool_use":
-            return
-
-        results = []
-        for block in response.content:
-            if block.type == "tool_use":
-                output = TOOL_HANDLERS[block.name](**block.input)
-                results.append({
-                    "type": "tool_result",
-                    "tool_use_id": block.id,
-                    "content": output,
-                })
-        messages.append({"role": "user", "content": results})
-```
-
-每个课程在这个循环之上叠加一个 harness 机制 -- 循环本身始终不变。循环属于 agent。机制属于 harness。
-
-## 范围说明 (重要)
-
-本仓库是一个 0->1 的 harness 工程学习项目 -- 构建围绕 agent 模型的工作环境。
-为保证学习路径清晰，仓库有意简化或省略了部分生产机制：
-
-- 完整事件 / Hook 总线 (例如 PreToolUse、SessionStart/End、ConfigChange)。
-  s12 仅提供教学用途的最小 append-only 生命周期事件流。
-- 基于规则的权限治理与信任流程
-- 会话生命周期控制 (resume/fork) 与更完整的 worktree 生命周期控制
-- 完整 MCP 运行时细节 (transport/OAuth/资源订阅/轮询)
-
-仓库中的团队 JSONL 邮箱协议是教学实现，不是对任何特定生产内部实现的声明。
+- 为什么是这个章节顺序：[`docs/zh/s00d-chapter-order-rationale.md`](./docs/zh/s00d-chapter-order-rationale.md)
+- 本仓库代码阅读顺序：[`docs/zh/s00f-code-reading-order.md`](./docs/zh/s00f-code-reading-order.md)
+- 参考仓库模块映射图：[`docs/zh/s00e-reference-module-map.md`](./docs/zh/s00e-reference-module-map.md)
+- 查询控制平面：[`docs/zh/s00a-query-control-plane.md`](./docs/zh/s00a-query-control-plane.md)
+- 一次请求的完整生命周期：[`docs/zh/s00b-one-request-lifecycle.md`](./docs/zh/s00b-one-request-lifecycle.md)
+- Query 转移模型：[`docs/zh/s00c-query-transition-model.md`](./docs/zh/s00c-query-transition-model.md)
+- 工具控制平面：[`docs/zh/s02a-tool-control-plane.md`](./docs/zh/s02a-tool-control-plane.md)
+- 工具执行运行时：[`docs/zh/s02b-tool-execution-runtime.md`](./docs/zh/s02b-tool-execution-runtime.md)
+- 消息与提示词管道：[`docs/zh/s10a-message-prompt-pipeline.md`](./docs/zh/s10a-message-prompt-pipeline.md)
+- 运行时任务模型：[`docs/zh/s13a-runtime-task-model.md`](./docs/zh/s13a-runtime-task-model.md)
+- 队友-任务-车道模型：[`docs/zh/team-task-lane-model.md`](./docs/zh/team-task-lane-model.md)
+- MCP 能力层地图：[`docs/zh/s19a-mcp-capability-layers.md`](./docs/zh/s19a-mcp-capability-layers.md)
+- 系统实体边界图：[`docs/zh/entity-map.md`](./docs/zh/entity-map.md)
+
+### 四阶段主线
+
+| 阶段 | 目标 | 章节 |
+|---|---|---|
+| 阶段 1 | 先做出一个能工作的单 agent | `s01-s06` |
+| 阶段 2 | 再补安全、扩展、记忆、提示词、恢复 | `s07-s11` |
+| 阶段 3 | 把临时清单升级成真正的任务系统 | `s12-s14` |
+| 阶段 4 | 从单 agent 升级成多 agent 与外部工具平台 | `s15-s19` |
+
+### 全部章节
+
+| 章节 | 主题 | 你会得到什么 |
+|---|---|---|
+| [s00](./docs/zh/s00-architecture-overview.md) | 架构总览 | 全局地图、名词、学习顺序 |
+| [s01](./docs/zh/s01-the-agent-loop.md) | Agent Loop | 最小可运行循环 |
+| [s02](./docs/zh/s02-tool-use.md) | Tool Use | 工具注册、分发和 tool_result |
+| [s03](./docs/zh/s03-todo-write.md) | Todo / Planning | 最小计划系统 |
+| [s04](./docs/zh/s04-subagent.md) | Subagent | 上下文隔离与任务委派 |
+| [s05](./docs/zh/s05-skill-loading.md) | Skills | 按需加载知识 |
+| [s06](./docs/zh/s06-context-compact.md) | Context Compact | 上下文预算与压缩 |
+| [s07](./docs/zh/s07-permission-system.md) | Permission System | 危险操作前的权限管道 |
+| [s08](./docs/zh/s08-hook-system.md) | Hook System | 不改循环也能扩展行为 |
+| [s09](./docs/zh/s09-memory-system.md) | Memory System | 跨会话持久信息 |
+| [s10](./docs/zh/s10-system-prompt.md) | System Prompt | 提示词组装流水线 |
+| [s11](./docs/zh/s11-error-recovery.md) | Error Recovery | 错误恢复与续行 |
+| [s12](./docs/zh/s12-task-system.md) | Task System | 持久化任务图 |
+| [s13](./docs/zh/s13-background-tasks.md) | Background Tasks | 后台执行与通知 |
+| [s14](./docs/zh/s14-cron-scheduler.md) | Cron Scheduler | 定时触发 |
+| [s15](./docs/zh/s15-agent-teams.md) | Agent Teams | 多 agent 协作基础 |
+| [s16](./docs/zh/s16-team-protocols.md) | Team Protocols | 团队通信协议 |
+| [s17](./docs/zh/s17-autonomous-agents.md) | Autonomous Agents | 自治认领与调度 |
+| [s18](./docs/zh/s18-worktree-task-isolation.md) | Worktree Isolation | 并行隔离工作目录 |
+| [s19](./docs/zh/s19-mcp-plugin.md) | MCP & Plugin | 外部工具接入 |
+
+## 章节总索引：每章最该盯住什么
+
+如果你是第一次系统学这套内容，不要把注意力平均分给所有细节。
+每章都先盯住 3 件事：
+
+1. 这一章新增了什么能力。
+2. 这一章的关键状态放在哪里。
+3. 学完以后，你自己能不能把这个最小机制手写出来。
+
+下面这张表，就是整套仓库最实用的“主线索引”。
+
+| 章节 | 最该盯住的数据结构 / 实体 | 这一章结束后你手里应该多出什么 |
+|---|---|---|
+| `s01` | `messages` / `LoopState` | 一个最小可运行的 agent loop |
+| `s02` | `ToolSpec` / `ToolDispatchMap` / `tool_result` | 一个能真正读写文件、执行动作的工具系统 |
+| `s03` | `TodoItem` / `PlanState` | 一个能把大目标拆成步骤的最小计划层 |
+| `s04` | `SubagentContext` / 子 `messages` | 一个能隔离上下文、做一次性委派的子 agent 机制 |
+| `s05` | `SkillMeta` / `SkillContent` / `SkillRegistry` | 一个按需加载知识、不把所有知识塞进 prompt 的技能层 |
+| `s06` | `CompactSummary` / `PersistedOutputMarker` | 一个能控制上下文膨胀的压缩层 |
+| `s07` | `PermissionRule` / `PermissionDecision` | 一条明确的“危险操作先过闸”的权限管道 |
+| `s08` | `HookEvent` / `HookResult` | 一套不改主循环也能扩展行为的插口系统 |
+| `s09` | `MemoryEntry` / `MemoryStore` | 一套区分“临时上下文”和“跨会话记忆”的持久层 |
+| `s10` | `PromptParts` / `SystemPromptBlock` | 一条可管理、可组装的输入管道 |
+| `s11` | `RecoveryState` / `TransitionReason` | 一套出错后还能继续往前走的恢复分支 |
+| `s12` | `TaskRecord` / `TaskStatus` | 一张持久化的工作图，而不只是会话内清单 |
+| `s13` | `RuntimeTaskState` / `Notification` | 一套慢任务后台执行、结果延后回来的运行时层 |
+| `s14` | `ScheduleRecord` / `CronTrigger` | 一套“时间到了就能自动开工”的定时触发层 |
+| `s15` | `TeamMember` / `MessageEnvelope` | 一个长期存在、能反复接活的 agent 团队雏形 |
+| `s16` | `ProtocolEnvelope` / `RequestRecord` | 一套团队之间可追踪、可批准、可拒绝的协议层 |
+| `s17` | `ClaimPolicy` / `AutonomyState` | 一套队友能自己找活、自己恢复工作的自治层 |
+| `s18` | `WorktreeRecord` / `TaskBinding` | 一套任务与隔离工作目录绑定的并行执行车道 |
+| `s19` | `MCPServerConfig` / `CapabilityRoute` | 一套把外部工具与外部能力接入主系统的总线 |
+
+## 如果你是初学者，最推荐这样读
+
+### 读法 1：最稳主线
+
+适合第一次系统接触 agent 的读者。
+
+按这个顺序读：
+
+`s00 -> s01 -> s02 -> s03 -> s04 -> s05 -> s06 -> s07 -> s08 -> s09 -> s10 -> s11 -> s12 -> s13 -> s14 -> s15 -> s16 -> s17 -> s18 -> s19`
+
+### 读法 2：先做出能跑的，再补完整
+
+适合“想先把系统搭出来，再慢慢补完”的读者。
+
+按这个顺序读：
+
+1. `s01-s06`
+2. `s07-s11`
+3. `s12-s14`
+4. `s15-s19`
+
+### 读法 3：卡住时这样回看
+
+如果你在中后半程开始打结，先不要硬往下冲。
+
+回看顺序建议是：
+
+1. [`docs/zh/s00-architecture-overview.md`](./docs/zh/s00-architecture-overview.md)
+2. [`docs/zh/data-structures.md`](./docs/zh/data-structures.md)
+3. [`docs/zh/entity-map.md`](./docs/zh/entity-map.md)
+4. 当前卡住的那一章
+
+因为读者真正卡住时，往往不是“代码没看懂”，而是：
+
+- 这个机制到底接在系统哪一层
+- 这个状态到底存在哪个结构里
+- 这个名词和另一个看起来很像的名词到底差在哪
 
 ## 快速开始
 
@@ -236,137 +255,102 @@
 ```sh
 git clone https://github.com/shareAI-lab/learn-claude-code
 cd learn-claude-code
 pip install -r requirements.txt
-cp .env.example .env  # 编辑 .env 填入你的 ANTHROPIC_API_KEY
-
-python agents/s01_agent_loop.py              # 从这里开始
-python agents/s12_worktree_task_isolation.py # 完整递进终点
-python agents/s_full.py                      # 总纲: 全部机制合一
+cp .env.example .env
 ```
-
-### Web 平台
-
-交互式可视化、分步动画、源码查看器, 以及每个课程的文档。
+把 `.env` 里的 `ANTHROPIC_API_KEY` 或兼容接口配置好以后：
 
 ```sh
-cd web && npm install && npm run dev  # http://localhost:3000
+python agents/s01_agent_loop.py
+python agents/s18_worktree_task_isolation.py
+python agents/s19_mcp_plugin.py
+python agents/s_full.py
 ```
-
-## 学习路径
+建议顺序：
-
-```
-第一阶段: 循环                        第二阶段: 规划与知识
-==================                   ==============================
-s01 Agent Loop [1]                   s03 TodoWrite [5]
-    while + stop_reason                  TodoManager + nag 提醒
-    |                                    |
-    +-> s02 Tool Use [4]                 s04 Subagent [5]
-        dispatch map: name->handler          每个 Subagent 独立 messages[]
-                                         |
-                                         s05 Skills [5]
-                                             SKILL.md 通过 tool_result 注入
-                                         |
-                                         s06 Context Compact [5]
-                                             三层 Context Compact
-
-第三阶段: 持久化                      第四阶段: 团队
-==================                   =====================
-s07 Task System [8]                  s09 Agent Teams [9]
-    文件持久化 CRUD + 依赖图             队友 + JSONL 邮箱
-    |                                    |
-s08 Background Tasks [6]             s10 Team Protocols [12]
-    守护线程 + 通知队列                  关机 + 计划审批 FSM
-                                         |
-                                         s11 Autonomous Agents [14]
-                                             空闲轮询 + 自动认领
-                                         |
-                                         s12 Worktree Isolation [16]
-                                             Task 协调 + 按需隔离执行通道
-
-    [N] = 工具数量
-```
+
+1. 先跑 `s01`，确认最小循环真的能工作。
+2. 
一边读 `s00`,一边按顺序跑 `s01 -> s11`。
+3. 等前 11 章吃透后,再进入 `s12 -> s19`。
+4. 最后再看 `s_full.py`,把所有机制放回同一张图里。

-## 项目结构

+## 如何读这套教程

-```
-learn-claude-code/
-|
-|-- agents/ # Python 参考实现 (s01-s12 + s_full 总纲)
-|-- docs/{en,zh,ja}/ # 心智模型优先的文档 (3 种语言)
-|-- web/ # 交互式学习平台 (Next.js)
-|-- skills/ # s05 的 Skill 文件
-+-- .github/workflows/ci.yml # CI: 类型检查 + 构建
-```
+每章都建议按这个顺序看:

-## 文档

+1. `问题`:没有这个机制会出现什么痛点。
+2. `概念定义`:先把新名词讲清楚。
+3. `最小实现`:先做最小但正确的版本。
+4. `核心数据结构`:搞清楚状态到底存在哪里。
+5. `主循环如何接入`:它如何与 agent loop 协作。
+6. `这一章先停在哪里`:先守住什么边界,哪些扩展可以后放。

-心智模型优先: 问题、方案、ASCII 图、最小化代码。
-[English](./docs/en/) | [中文](./docs/zh/) | [日本語](./docs/ja/)

+如果你是初学者,不要着急追求“一次看懂所有复杂机制”。
+先把每章的最小实现真的写出来,再理解升级版边界,会轻松很多。

-| 课程 | 主题 | 格言 |
-|------|------|------|
-| [s01](./docs/zh/s01-the-agent-loop.md) | Agent Loop | *One loop & Bash is all you need* |
-| [s02](./docs/zh/s02-tool-use.md) | Tool Use | *加一个工具, 只加一个 handler* |
-| [s03](./docs/zh/s03-todo-write.md) | TodoWrite | *没有计划的 agent 走哪算哪* |
-| [s04](./docs/zh/s04-subagent.md) | Subagent | *大任务拆小, 每个小任务干净的上下文* |
-| [s05](./docs/zh/s05-skill-loading.md) | Skills | *用到什么知识, 临时加载什么知识* |
-| [s06](./docs/zh/s06-context-compact.md) | Context Compact | *上下文总会满, 要有办法腾地方* |
-| [s07](./docs/zh/s07-task-system.md) | Task System | *大目标要拆成小任务, 排好序, 记在磁盘上* |
-| [s08](./docs/zh/s08-background-tasks.md) | Background Tasks | *慢操作丢后台, agent 继续想下一步* |
-| [s09](./docs/zh/s09-agent-teams.md) | Agent Teams | *任务太大一个人干不完, 要能分给队友* |
-| [s10](./docs/zh/s10-team-protocols.md) | Team Protocols | *队友之间要有统一的沟通规矩* |
-| [s11](./docs/zh/s11-autonomous-agents.md) | Autonomous Agents | *队友自己看看板, 有活就认领* |
-| [s12](./docs/zh/s12-worktree-task-isolation.md) | Worktree + Task Isolation | *各干各的目录, 互不干扰* |
+如果你在阅读中经常冒出这两类问题:

-## 学完之后 -- 从理解到落地
+- “这一段到底算主线,还是维护者补充?”
+- “这个状态到底存在哪个结构里?”

-12 个课程走完, 你已经从内到外理解了 harness 工程的运作原理。两种方式把知识变成产品:
+建议随时回看:

-### Kode Agent CLI -- 开源 Coding Agent CLI
+- [`docs/zh/teaching-scope.md`](./docs/zh/teaching-scope.md)
+- 
[`docs/zh/data-structures.md`](./docs/zh/data-structures.md) +- [`docs/zh/entity-map.md`](./docs/zh/entity-map.md) -> `npm i -g @shareai-lab/kode` +## 本仓库的教学取舍 -支持 Skill & LSP, 适配 Windows, 可接 GLM / MiniMax / DeepSeek 等开放模型。装完即用。 +为了保证“从 0 到 1 可实现”,本仓库会刻意做这些取舍: -GitHub: **[shareAI-lab/Kode-cli](https://github.com/shareAI-lab/Kode-cli)** +- 先教最小正确版本,再讲扩展边界。 +- 如果一个真实机制很复杂,但主干思想并不复杂,就先讲主干思想。 +- 如果一个高级名词出现了,就解释它是什么,不假设读者天然知道。 +- 如果一个真实系统里某些边角分支对教学价值不高,就直接删掉。 -### Kode Agent SDK -- 把 Agent 能力嵌入你的应用 +这意味着本仓库追求的是: -官方 Claude Code Agent SDK 底层与完整 CLI 进程通信 -- 每个并发用户 = 一个终端进程。Kode SDK 是独立库, 无 per-user 进程开销, 可嵌入后端、浏览器插件、嵌入式设备等任意运行时。 +**核心机制高保真,外围细节有取舍。** -GitHub: **[shareAI-lab/Kode-agent-sdk](https://github.com/shareAI-lab/Kode-agent-sdk)** +这也是教学仓库最合理的做法。 ---- +## 项目结构 -## 姊妹教程: 从*被动临时会话*到*主动常驻助手* +```text +learn-claude-code/ +├── agents/ # 每一章对应一个可运行的 Python 参考实现 +├── docs/zh/ # 中文主线文档 +├── docs/en/ # 英文文档,当前为部分同步 +├── docs/ja/ # 日文文档,当前为部分同步 +├── skills/ # s05 使用的技能文件 +├── web/ # Web 教学平台 +└── requirements.txt +``` -本仓库教的 harness 属于 **用完即走** 型 -- 开终端、给 agent 任务、做完关掉, 下次重开是全新会话。Claude Code 就是这种模式。 +## 语言说明 -但 [OpenClaw](https://github.com/openclaw/openclaw) 证明了另一种可能: 在同样的 agent core 之上, 加两个 harness 机制就能让 agent 从 "踹一下动一下" 变成 "自己隔 30 秒醒一次找活干": +当前仓库以中文文档为主线,最完整、更新也最快。 -- **心跳 (Heartbeat)** -- 每 30 秒 harness 给 agent 发一条消息, 让它检查有没有事可做。没事就继续睡, 有事立刻行动。 -- **定时任务 (Cron)** -- agent 可以给自己安排未来要做的事, 到点自动执行。 +- `zh`:主线版本 +- `en`:部分同步 +- `ja`:部分同步 -再加上 IM 多通道路由 (WhatsApp/Telegram/Slack/Discord 等 13+ 平台)、不清空的上下文记忆、Soul 人格系统, agent 就从一个临时工具变成了始终在线的个人 AI 助手。 +如果你要系统学习,请优先看中文。 -**[claw0](https://github.com/shareAI-lab/claw0)** 是我们的姊妹教学仓库, 从零拆解这些 harness 机制: +## 最后的目标 -``` -claw agent = agent core + heartbeat + cron + IM chat + memory + soul -``` +读完这套内容,你不应该只是“知道 Claude Code 很厉害”。 -``` -learn-claude-code claw0 -(agent harness 内核: (主动式常驻 harness: - 循环、工具、规划、 心跳、定时任务、IM 通道、 - 团队、worktree 隔离) 记忆、Soul 人格) -``` +你应该能自己回答这些问题: -## 许可证 +- 一个 coding agent 最小要有哪些状态? +- 工具调用和 `tool_result` 为什么是核心接口? 
+- 为什么要做子 agent,而不是把所有内容都塞在一个对话里?
+- 权限、hook、memory、prompt、task 这些机制分别解决什么问题?
+- 一个系统什么时候该从单 agent 升级成任务图、团队、worktree 和 MCP?

-MIT
+如果这些问题你都能清楚回答,而且能自己写出一个相似系统,那这套仓库就达到了它的目的。

---

-**模型就是 Agent。代码是 Harness。造好 Harness,Agent 会完成剩下的。**
-
-**Bash is all you need. Real agents are all the universe needs.**
+**这不是“照着源码抄”。这是“抓住真正关键的设计,然后自己做出来”。**
diff --git a/README.md b/README.md
index 02561fef1..2d2672b01 100644
--- a/README.md
+++ b/README.md
@@ -1,233 +1,182 @@
 [English](./README.md) | [中文](./README-zh.md) | [日本語](./README-ja.md)

-# Learn Claude Code -- Harness Engineering for Real Agents
-## The Model IS the Agent
+# Learn Claude Code

-Before we talk about code, let's get one thing absolutely straight.
+A teaching repository for implementers who want to build a feature-complete coding-agent harness from scratch.

-**An agent is a model. Not a framework. Not a prompt chain. Not a drag-and-drop workflow.**
+This repo does not try to mirror every product detail from a production codebase. It focuses on the mechanisms that actually decide whether an agent can work well:

-### What an Agent IS

+- the loop
+- tools
+- planning
+- delegation
+- context control
+- permissions
+- hooks
+- memory
+- prompt assembly
+- tasks
+- teams
+- isolated execution lanes
+- external capability routing

-An agent is a neural network -- a Transformer, an RNN, a learned function -- that has been trained, through billions of gradient updates on action-sequence data, to perceive an environment, reason about goals, and take actions to achieve them. The word "agent" in AI has always meant this. Always.
+The goal is simple:

-A human is an agent. A biological neural network, shaped by millions of years of evolutionary training, perceiving the world through senses, reasoning through a brain, acting through a body. 
When DeepMind, OpenAI, or Anthropic say "agent," they mean the same thing the field has meant since its inception: **a model that has learned to act.** +**understand the real design backbone well enough that you can rebuild it yourself.** -The proof is written in history: +## What This Repo Is Really Teaching -- **2013 -- DeepMind DQN plays Atari.** A single neural network, receiving only raw pixels and game scores, learned to play 7 Atari 2600 games -- surpassing all prior algorithms and beating human experts on 3 of them. By 2015, the same architecture scaled to [49 games and matched professional human testers](https://www.nature.com/articles/nature14236), published in *Nature*. No game-specific rules. No decision trees. One model, learning from experience. That model was the agent. +One sentence first: -- **2019 -- OpenAI Five conquers Dota 2.** Five neural networks, having played [45,000 years of Dota 2](https://openai.com/index/openai-five-defeats-dota-2-world-champions/) against themselves in 10 months, defeated **OG** -- the reigning TI8 world champions -- 2-0 on a San Francisco livestream. In a subsequent public arena, the AI won 99.4% of 42,729 games against all comers. No scripted strategies. No meta-programmed team coordination. The models learned teamwork, tactics, and real-time adaptation entirely through self-play. +**The model does the reasoning. The harness gives the model a working environment.** -- **2019 -- DeepMind AlphaStar masters StarCraft II.** AlphaStar [beat professional players 10-1](https://deepmind.google/blog/alphastar-mastering-the-real-time-strategy-game-starcraft-ii/) in a closed-door match, and later achieved [Grandmaster status](https://www.nature.com/articles/d41586-019-03298-6) on European servers -- top 0.15% of 90,000 players. A game with imperfect information, real-time decisions, and a combinatorial action space that dwarfs chess and Go. The agent? A model. Trained. Not scripted. 
+That working environment is made of a few cooperating parts: -- **2019 -- Tencent Jueyu dominates Honor of Kings.** Tencent AI Lab's "Jueyu" [defeated KPL professional players](https://www.jiemian.com/article/3371171.html) in a full 5v5 match at the World Champion Cup. In 1v1 mode, pros won only [1 out of 15 games and never survived past 8 minutes](https://developer.aliyun.com/article/851058). Training intensity: one day equaled 440 human years. By 2021, Jueyu surpassed KPL pros across the full hero pool. No handcrafted matchup tables. No scripted compositions. A model that learned the entire game from scratch through self-play. +- `Agent Loop`: ask the model, run tools, append results, continue +- `Tools`: the agent's hands +- `Planning`: a small structure that keeps multi-step work from drifting +- `Context Management`: keep the active context small and coherent +- `Permissions`: do not let model intent turn into unsafe execution directly +- `Hooks`: extend behavior around the loop without rewriting the loop +- `Memory`: keep only durable facts that should survive sessions +- `Prompt Construction`: assemble the model input from stable rules and runtime state +- `Tasks / Teams / Worktree / MCP`: grow the single-agent core into a larger working platform -- **2024-2025 -- LLM agents reshape software engineering.** Claude, GPT, Gemini -- large language models trained on the entirety of human code and reasoning -- are deployed as coding agents. They read codebases, write implementations, debug failures, coordinate in teams. The architecture is identical to every agent before them: a trained model, placed in an environment, given tools to perceive and act. The only difference is the scale of what they've learned and the generality of the tasks they solve. +This is the teaching promise of the repo: -Every one of these milestones shares the same truth: **the "agent" is never the surrounding code. 
The agent is always the model.** +- teach the mainline in a clean order +- explain unfamiliar concepts before relying on them +- stay close to real system structure +- avoid drowning the learner in irrelevant product details -### What an Agent Is NOT +## What This Repo Deliberately Does Not Teach -The word "agent" has been hijacked by an entire cottage industry of prompt plumbing. +This repo is not trying to preserve every detail that may exist in a real production system. -Drag-and-drop workflow builders. No-code "AI agent" platforms. Prompt-chain orchestration libraries. They all share the same delusion: that wiring together LLM API calls with if-else branches, node graphs, and hardcoded routing logic constitutes "building an agent." +If a detail is not central to the agent's core operating model, it should not dominate the teaching line. That includes things like: -It doesn't. What they build is a Rube Goldberg machine -- an over-engineered, brittle pipeline of procedural rules, with an LLM wedged in as a glorified text-completion node. That is not an agent. That is a shell script with delusions of grandeur. +- packaging and release mechanics +- cross-platform compatibility layers +- enterprise policy glue +- telemetry and account wiring +- historical compatibility branches +- product-specific naming accidents -**Prompt plumbing "agents" are the fantasy of programmers who don't train models.** They attempt to brute-force intelligence by stacking procedural logic -- massive rule trees, node graphs, chain-of-prompt waterfalls -- and praying that enough glue code will somehow emergently produce autonomous behavior. It won't. You cannot engineer your way to agency. Agency is learned, not programmed. +Those details may matter in production. They do not belong at the center of a 0-to-1 teaching path. -Those systems are dead on arrival: fragile, unscalable, fundamentally incapable of generalization. 
They are the modern resurrection of GOFAI (Good Old-Fashioned AI) -- the symbolic rule systems the field abandoned decades ago, now spray-painted with an LLM veneer. Different packaging, same dead end. +## Who This Is For -### The Mind Shift: From "Developing Agents" to Developing Harness +The assumed reader: -When someone says "I'm developing an agent," they can only mean one of two things: +- knows basic Python +- understands functions, classes, lists, and dictionaries +- may be completely new to agent systems -**1. Training the model.** Adjusting weights through reinforcement learning, fine-tuning, RLHF, or other gradient-based methods. Collecting task-process data -- the actual sequences of perception, reasoning, and action in real domains -- and using it to shape the model's behavior. This is what DeepMind, OpenAI, Tencent AI Lab, and Anthropic do. This is agent development in the truest sense. +So the repo tries to keep a few strong teaching rules: -**2. Building the harness.** Writing the code that gives the model an environment to operate in. This is what most of us do, and it is the focus of this repository. +- explain a concept before using it +- keep one concept fully explained in one main place +- start from "what it is", then "why it exists", then "how to implement it" +- avoid forcing beginners to assemble the system from scattered fragments -A harness is everything the agent needs to function in a specific domain: +## Recommended Reading Order -``` -Harness = Tools + Knowledge + Observation + Action Interfaces + Permissions - - Tools: file I/O, shell, network, database, browser - Knowledge: product docs, domain references, API specs, style guides - Observation: git diff, error logs, browser state, sensor data - Action: CLI commands, API calls, UI interactions - Permissions: sandboxing, approval workflows, trust boundaries -``` - -The model decides. The harness executes. The model reasons. The harness provides context. The model is the driver. 
The harness is the vehicle. - -**A coding agent's harness is its IDE, terminal, and filesystem access.** A farm agent's harness is its sensor array, irrigation controls, and weather data feeds. A hotel agent's harness is its booking system, guest communication channels, and facility management APIs. The agent -- the intelligence, the decision-maker -- is always the model. The harness changes per domain. The agent generalizes across them. - -This repo teaches you to build vehicles. Vehicles for coding. But the design patterns generalize to any domain: farm management, hotel operations, manufacturing, logistics, healthcare, education, scientific research. Anywhere a task needs to be perceived, reasoned about, and acted upon -- an agent needs a harness. - -### What Harness Engineers Actually Do - -If you are reading this repository, you are likely a harness engineer -- and that is a powerful thing to be. Here is your real job: - -- **Implement tools.** Give the agent hands. File read/write, shell execution, API calls, browser control, database queries. Each tool is an action the agent can take in its environment. Design them to be atomic, composable, and well-described. - -- **Curate knowledge.** Give the agent domain expertise. Product documentation, architectural decision records, style guides, regulatory requirements. Load them on-demand (s05), not upfront. The agent should know what's available and pull what it needs. - -- **Manage context.** Give the agent clean memory. Subagent isolation (s04) prevents noise from leaking. Context compression (s06) prevents history from overwhelming. Task systems (s07) persist goals beyond any single conversation. - -- **Control permissions.** Give the agent boundaries. Sandbox file access. Require approval for destructive operations. Enforce trust boundaries between the agent and external systems. This is where safety engineering meets harness engineering. 
- -- **Collect task-process data.** Every action sequence the agent executes in your harness is training signal. The perception-reasoning-action traces from real deployments are the raw material for fine-tuning the next generation of agent models. Your harness doesn't just serve the agent -- it can help improve the agent. - -You are not writing the intelligence. You are building the world the intelligence inhabits. The quality of that world -- how clearly the agent can perceive, how precisely it can act, how rich its available knowledge is -- directly determines how effectively the intelligence can express itself. - -**Build great harnesses. The agent will do the rest.** +The English docs are intended to stand on their own. The chapter order, bridge docs, and mechanism map are aligned across locales, so you can stay inside one language while following the main learning path. -### Why Claude Code -- A Masterclass in Harness Engineering +- Overview: [`docs/en/s00-architecture-overview.md`](./docs/en/s00-architecture-overview.md) +- Code Reading Order: [`docs/en/s00f-code-reading-order.md`](./docs/en/s00f-code-reading-order.md) +- Glossary: [`docs/en/glossary.md`](./docs/en/glossary.md) +- Teaching Scope: [`docs/en/teaching-scope.md`](./docs/en/teaching-scope.md) +- Data Structures: [`docs/en/data-structures.md`](./docs/en/data-structures.md) -Why does this repository dissect Claude Code specifically? +## If This Is Your First Visit, Start Here -Because Claude Code is the most elegant and fully-realized agent harness we have seen. Not because of any single clever trick, but because of what it *doesn't* do: it doesn't try to be the agent. It doesn't impose rigid workflows. It doesn't second-guess the model with elaborate decision trees. It provides the model with tools, knowledge, context management, and permission boundaries -- then gets out of the way. +Do not open random chapters first. 
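Before you open anything, it helps to hold the smallest shape of the system in your head: the loop that s01 builds and every later chapter attaches to. A minimal sketch follows -- the message shapes are simplified and `fake_model` is an illustrative stub standing in for a real LLM call, not the repo's actual code:

```python
# The smallest agent-loop shape: ask the model, run any requested tools,
# append the results, and stop once the model answers without a tool call.

TOOL_HANDLERS = {
    # Illustrative tool: a real harness would actually read the file.
    "read_file": lambda path: f"<contents of {path}>",
}

def fake_model(messages):
    # Stand-in for a real messages API call: request one tool call on the
    # first turn, then finish with plain text.
    if not any(m["role"] == "tool" for m in messages):
        return {"stop_reason": "tool_use",
                "tool_calls": [{"name": "read_file",
                                "input": {"path": "README.md"}}]}
    return {"stop_reason": "end_turn", "text": "done"}

def agent_loop(messages):
    while True:
        response = fake_model(messages)
        messages.append({"role": "assistant", "content": response})
        if response["stop_reason"] != "tool_use":
            return response["text"]
        for call in response["tool_calls"]:
            output = TOOL_HANDLERS[call["name"]](**call["input"])
            messages.append({"role": "tool", "content": output})

print(agent_loop([{"role": "user", "content": "summarize the README"}]))  # done
```

Permissions, hooks, compaction, and the rest of the chapters wrap around this loop; the loop itself stays the same.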
-Look at what Claude Code actually is, stripped to its essence: +The safest path is: -``` -Claude Code = one agent loop - + tools (bash, read, write, edit, glob, grep, browser...) - + on-demand skill loading - + context compression - + subagent spawning - + task system with dependency graph - + team coordination with async mailboxes - + worktree isolation for parallel execution - + permission governance -``` - -That's it. That's the entire architecture. Every component is a harness mechanism -- a piece of the world built for the agent to inhabit. The agent itself? It's Claude. A model. Trained by Anthropic on the full breadth of human reasoning and code. The harness doesn't make Claude smart. Claude is already smart. The harness gives Claude hands, eyes, and a workspace. - -This is why Claude Code is the ideal teaching subject: **it demonstrates what happens when you trust the model and focus your engineering on the harness.** Every session in this repository (s01-s12) reverse-engineers one harness mechanism from Claude Code's architecture. By the end, you understand not just how Claude Code works, but the universal principles of harness engineering that apply to any agent in any domain. +1. Read [`docs/en/s00-architecture-overview.md`](./docs/en/s00-architecture-overview.md) for the full system map. +2. Read [`docs/en/s00d-chapter-order-rationale.md`](./docs/en/s00d-chapter-order-rationale.md) so the chapter order makes sense before you dive into mechanism detail. +3. Read [`docs/en/s00f-code-reading-order.md`](./docs/en/s00f-code-reading-order.md) so you know which local files to open first. +4. Follow the four stages in order: `s01-s06 -> s07-s11 -> s12-s14 -> s15-s19`. +5. After each stage, stop and rebuild the smallest version yourself before continuing. -The lesson is not "copy Claude Code." 
The lesson is: **the best agent products are built by engineers who understand that their job is harness, not intelligence.** +If the middle and late chapters start to blur together, reset in this order: ---- +1. [`docs/en/data-structures.md`](./docs/en/data-structures.md) +2. [`docs/en/entity-map.md`](./docs/en/entity-map.md) +3. the bridge docs closest to the chapter you are stuck on +4. then return to the chapter body -## The Vision: Fill the Universe with Real Agents +## Web Learning Interface -This is not just about coding agents. - -Every domain where humans perform complex, multi-step, judgment-intensive work is a domain where agents can operate -- given the right harness. The patterns in this repository are universal: - -``` -Estate management agent = model + property sensors + maintenance tools + tenant comms -Agricultural agent = model + soil/weather data + irrigation controls + crop knowledge -Hotel operations agent = model + booking system + guest channels + facility APIs -Medical research agent = model + literature search + lab instruments + protocol docs -Manufacturing agent = model + production line sensors + quality controls + logistics -Education agent = model + curriculum knowledge + student progress + assessment tools -``` - -The loop is always the same. The tools change. The knowledge changes. The permissions change. The agent -- the model -- generalizes. - -Every harness engineer reading this repository is learning patterns that apply far beyond software engineering. You are learning to build the infrastructure for an intelligent, automated future. Every well-designed harness deployed in a real domain is one more place where an agent can perceive, reason, and act. - -First we fill the workshops. Then the farms, the hospitals, the factories. Then the cities. Then the planet. - -**Bash is all you need. 
Real agents are all the universe needs.** - ---- - -``` - THE AGENT PATTERN - ================= - - User --> messages[] --> LLM --> response - | - stop_reason == "tool_use"? - / \ - yes no - | | - execute tools return text - append results - loop back -----------------> messages[] - - - That's the minimal loop. Every AI agent needs this loop. - The MODEL decides when to call tools and when to stop. - The CODE just executes what the model asks for. - This repo teaches you to build what surrounds this loop -- - the harness that makes the agent effective in a specific domain. -``` +If you want a more visual way to understand the chapter order, stage boundaries, and chapter-to-chapter upgrades, run the built-in teaching site: -**12 progressive sessions, from a simple loop to isolated autonomous execution.** -**Each session adds one harness mechanism. Each mechanism has one motto.** - -> **s01**   *"One loop & Bash is all you need"* — one tool + one loop = an agent -> -> **s02**   *"Adding a tool means adding one handler"* — the loop stays the same; new tools register into the dispatch map -> -> **s03**   *"An agent without a plan drifts"* — list the steps first, then execute; completion doubles -> -> **s04**   *"Break big tasks down; each subtask gets a clean context"* — subagents use independent messages[], keeping the main conversation clean -> -> **s05**   *"Load knowledge when you need it, not upfront"* — inject via tool_result, not the system prompt -> -> **s06**   *"Context will fill up; you need a way to make room"* — three-layer compression strategy for infinite sessions -> -> **s07**   *"Break big goals into small tasks, order them, persist to disk"* — a file-based task graph with dependencies, laying the foundation for multi-agent collaboration -> -> **s08**   *"Run slow operations in the background; the agent keeps thinking"* — daemon threads run commands, inject notifications on completion -> -> **s09**   *"When the task is too big for one, delegate to 
teammates"* — persistent teammates + async mailboxes -> -> **s10**   *"Teammates need shared communication rules"* — one request-response pattern drives all negotiation -> -> **s11**   *"Teammates scan the board and claim tasks themselves"* — no need for the lead to assign each one -> -> **s12**   *"Each works in its own directory, no interference"* — tasks manage goals, worktrees manage directories, bound by ID - ---- - -## The Core Pattern - -```python -def agent_loop(messages): - while True: - response = client.messages.create( - model=MODEL, system=SYSTEM, - messages=messages, tools=TOOLS, - ) - messages.append({"role": "assistant", - "content": response.content}) - - if response.stop_reason != "tool_use": - return - - results = [] - for block in response.content: - if block.type == "tool_use": - output = TOOL_HANDLERS[block.name](**block.input) - results.append({ - "type": "tool_result", - "tool_use_id": block.id, - "content": output, - }) - messages.append({"role": "user", "content": results}) +```sh +cd web +npm install +npm run dev ``` -Every session layers one harness mechanism on top of this loop -- without changing the loop itself. The loop belongs to the agent. The mechanisms belong to the harness. - -## Scope (Important) - -This repository is a 0->1 learning project for harness engineering -- building the environment that surrounds an agent model. -It intentionally simplifies or omits several production mechanisms: - -- Full event/hook buses (for example PreToolUse, SessionStart/End, ConfigChange). - s12 includes only a minimal append-only lifecycle event stream for teaching. -- Rule-based permission governance and trust workflows -- Session lifecycle controls (resume/fork) and advanced worktree lifecycle controls -- Full MCP runtime details (transport/OAuth/resource subscribe/polling) - -Treat the team JSONL mailbox protocol in this repo as a teaching implementation, not a claim about any specific production internals. 
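The s02 motto above -- "adding a tool means adding one handler" -- is concrete enough to sketch in a few lines. The registration decorator and tool names here are illustrative, not the repo's exact API:

```python
# The dispatch-map pattern behind s02: each tool is one entry mapping a
# name to a handler, so the loop itself never changes when tools are added.
TOOL_HANDLERS = {}

def tool(name):
    """Register a handler under a tool name (illustrative decorator)."""
    def register(fn):
        TOOL_HANDLERS[name] = fn
        return fn
    return register

@tool("read_file")
def read_file(path):
    # Illustrative stub; a real handler would touch the filesystem.
    return f"<contents of {path}>"

@tool("bash")
def bash(command):
    # Illustrative stub; a real handler would execute the command.
    return f"<output of `{command}`>"

def dispatch(name, **kwargs):
    # The loop routes every model tool call through this single entry point.
    if name not in TOOL_HANDLERS:
        return f"error: unknown tool {name}"
    return TOOL_HANDLERS[name](**kwargs)

print(dispatch("bash", command="ls"))  # <output of `ls`>
```

Because the loop only ever consults the dispatch map, a new capability never touches the loop itself.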
+Then use these routes: + +- `/en`: the English entry page for choosing a reading path +- `/en/timeline`: the cleanest view of the full mainline +- `/en/layers`: the four-stage boundary map +- `/en/compare`: adjacent-step comparison and jump diagnosis + +For a first pass, start with `timeline`. +If you are already in the middle and chapter boundaries are getting fuzzy, use `layers` and `compare` before you go deeper into source code. + +### Bridge Docs + +These are not extra main chapters. They are bridge documents that make the middle and late system easier to understand: + +- Chapter order rationale: [`docs/en/s00d-chapter-order-rationale.md`](./docs/en/s00d-chapter-order-rationale.md) +- Code reading order: [`docs/en/s00f-code-reading-order.md`](./docs/en/s00f-code-reading-order.md) +- Reference module map: [`docs/en/s00e-reference-module-map.md`](./docs/en/s00e-reference-module-map.md) +- Query control plane: [`docs/en/s00a-query-control-plane.md`](./docs/en/s00a-query-control-plane.md) +- One request lifecycle: [`docs/en/s00b-one-request-lifecycle.md`](./docs/en/s00b-one-request-lifecycle.md) +- Query transition model: [`docs/en/s00c-query-transition-model.md`](./docs/en/s00c-query-transition-model.md) +- Tool control plane: [`docs/en/s02a-tool-control-plane.md`](./docs/en/s02a-tool-control-plane.md) +- Tool execution runtime: [`docs/en/s02b-tool-execution-runtime.md`](./docs/en/s02b-tool-execution-runtime.md) +- Message and prompt pipeline: [`docs/en/s10a-message-prompt-pipeline.md`](./docs/en/s10a-message-prompt-pipeline.md) +- Runtime task model: [`docs/en/s13a-runtime-task-model.md`](./docs/en/s13a-runtime-task-model.md) +- MCP capability layers: [`docs/en/s19a-mcp-capability-layers.md`](./docs/en/s19a-mcp-capability-layers.md) +- Team-task-lane model: [`docs/en/team-task-lane-model.md`](./docs/en/team-task-lane-model.md) +- Entity map: [`docs/en/entity-map.md`](./docs/en/entity-map.md) + +### Four Stages + +1. 
`s01-s06`: build a useful single-agent core +2. `s07-s11`: add safety, extension points, memory, prompt assembly, and recovery +3. `s12-s14`: turn temporary session planning into durable runtime work +4. `s15-s19`: move into teams, protocols, autonomy, isolated execution, and external capability routing + +### Main Chapters + +| Chapter | Topic | What you get | +|---|---|---| +| `s00` | Architecture Overview | the global map, key terms, and learning order | +| `s01` | Agent Loop | the smallest working agent loop | +| `s02` | Tool Use | a stable tool dispatch layer | +| `s03` | Todo / Planning | a visible session plan | +| `s04` | Subagent | fresh context per delegated subtask | +| `s05` | Skills | load specialized knowledge only when needed | +| `s06` | Context Compact | keep the active window small | +| `s07` | Permission System | a safety gate before execution | +| `s08` | Hook System | extension points around the loop | +| `s09` | Memory System | durable cross-session knowledge | +| `s10` | System Prompt | section-based prompt assembly | +| `s11` | Error Recovery | continuation and retry branches | +| `s12` | Task System | persistent task graph | +| `s13` | Background Tasks | non-blocking execution | +| `s14` | Cron Scheduler | time-based triggers | +| `s15` | Agent Teams | persistent teammates | +| `s16` | Team Protocols | shared coordination rules | +| `s17` | Autonomous Agents | self-claiming and self-resume | +| `s18` | Worktree Isolation | isolated execution lanes | +| `s19` | MCP & Plugin | external capability routing | ## Quick Start @@ -235,143 +184,78 @@ Treat the team JSONL mailbox protocol in this repo as a teaching implementation, git clone https://github.com/shareAI-lab/learn-claude-code cd learn-claude-code pip install -r requirements.txt -cp .env.example .env # Edit .env with your ANTHROPIC_API_KEY - -python agents/s01_agent_loop.py # Start here -python agents/s12_worktree_task_isolation.py # Full progression endpoint -python agents/s_full.py # 
Capstone: all mechanisms combined +cp .env.example .env ``` -### Web Platform - -Interactive visualizations, step-through diagrams, source viewer, and documentation. +Then configure `ANTHROPIC_API_KEY` or a compatible endpoint in `.env`, and run: ```sh -cd web && npm install && npm run dev # http://localhost:3000 -``` - -## Learning Path - -``` -Phase 1: THE LOOP Phase 2: PLANNING & KNOWLEDGE -================== ============================== -s01 The Agent Loop [1] s03 TodoWrite [5] - while + stop_reason TodoManager + nag reminder - | | - +-> s02 Tool Use [4] s04 Subagents [5] - dispatch map: name->handler fresh messages[] per child - | - s05 Skills [5] - SKILL.md via tool_result - | - s06 Context Compact [5] - 3-layer compression - -Phase 3: PERSISTENCE Phase 4: TEAMS -================== ===================== -s07 Tasks [8] s09 Agent Teams [9] - file-based CRUD + deps graph teammates + JSONL mailboxes - | | -s08 Background Tasks [6] s10 Team Protocols [12] - daemon threads + notify queue shutdown + plan approval FSM - | - s11 Autonomous Agents [14] - idle cycle + auto-claim - | - s12 Worktree Isolation [16] - task coordination + optional isolated execution lanes - - [N] = number of tools -``` - -## Architecture - +python agents/s01_agent_loop.py +python agents/s18_worktree_task_isolation.py +python agents/s19_mcp_plugin.py +python agents/s_full.py ``` -learn-claude-code/ -| -|-- agents/ # Python reference implementations (s01-s12 + s_full capstone) -|-- docs/{en,zh,ja}/ # Mental-model-first documentation (3 languages) -|-- web/ # Interactive learning platform (Next.js) -|-- skills/ # Skill files for s05 -+-- .github/workflows/ci.yml # CI: typecheck + build -``` - -## Documentation - -Mental-model-first: problem, solution, ASCII diagram, minimal code. -Available in [English](./docs/en/) | [中文](./docs/zh/) | [日本語](./docs/ja/). 
- -| Session | Topic | Motto | -|---------|-------|-------| -| [s01](./docs/en/s01-the-agent-loop.md) | The Agent Loop | *One loop & Bash is all you need* | -| [s02](./docs/en/s02-tool-use.md) | Tool Use | *Adding a tool means adding one handler* | -| [s03](./docs/en/s03-todo-write.md) | TodoWrite | *An agent without a plan drifts* | -| [s04](./docs/en/s04-subagent.md) | Subagents | *Break big tasks down; each subtask gets a clean context* | -| [s05](./docs/en/s05-skill-loading.md) | Skills | *Load knowledge when you need it, not upfront* | -| [s06](./docs/en/s06-context-compact.md) | Context Compact | *Context will fill up; you need a way to make room* | -| [s07](./docs/en/s07-task-system.md) | Tasks | *Break big goals into small tasks, order them, persist to disk* | -| [s08](./docs/en/s08-background-tasks.md) | Background Tasks | *Run slow operations in the background; the agent keeps thinking* | -| [s09](./docs/en/s09-agent-teams.md) | Agent Teams | *When the task is too big for one, delegate to teammates* | -| [s10](./docs/en/s10-team-protocols.md) | Team Protocols | *Teammates need shared communication rules* | -| [s11](./docs/en/s11-autonomous-agents.md) | Autonomous Agents | *Teammates scan the board and claim tasks themselves* | -| [s12](./docs/en/s12-worktree-task-isolation.md) | Worktree + Task Isolation | *Each works in its own directory, no interference* | - -## What's Next -- from understanding to shipping - -After the 12 sessions you understand how harness engineering works inside out. Two ways to put that knowledge to work: - -### Kode Agent CLI -- Open-Source Coding Agent CLI -> `npm i -g @shareai-lab/kode` +Suggested order: -Skill & LSP support, Windows-ready, pluggable with GLM / MiniMax / DeepSeek and other open models. Install and go. +1. Run `s01` and make sure the minimal loop really works. +2. Read `s00`, then move through `s01 -> s11` in order. +3. 
Only after the single-agent core plus its control plane feel stable, continue into `s12 -> s19`. +4. Read `s_full.py` last, after the mechanisms already make sense separately. -GitHub: **[shareAI-lab/Kode-cli](https://github.com/shareAI-lab/Kode-cli)** +## How To Read Each Chapter -### Kode Agent SDK -- Embed Agent Capabilities in Your App +Each chapter is easier to absorb if you keep the same reading rhythm: -The official Claude Code Agent SDK communicates with a full CLI process under the hood -- each concurrent user means a separate terminal process. Kode SDK is a standalone library with no per-user process overhead, embeddable in backends, browser extensions, embedded devices, or any runtime. +1. what problem appears without this mechanism +2. what the new concept means +3. what the smallest correct implementation looks like +4. where the state actually lives +5. how it plugs back into the loop +6. where to stop first, and what can wait until later -GitHub: **[shareAI-lab/Kode-agent-sdk](https://github.com/shareAI-lab/Kode-agent-sdk)** +If you keep asking: ---- +- "Is this core mainline or just a side detail?" +- "Where does this state actually live?" -## Sister Repo: from *on-demand sessions* to *always-on assistant* +go back to: -The harness this repo teaches is **use-and-discard** -- open a terminal, give the agent a task, close when done, next session starts blank. That is the Claude Code model. +- [`docs/en/teaching-scope.md`](./docs/en/teaching-scope.md) +- [`docs/en/data-structures.md`](./docs/en/data-structures.md) +- [`docs/en/entity-map.md`](./docs/en/entity-map.md) -[OpenClaw](https://github.com/openclaw/openclaw) proved another possibility: on top of the same agent core, two harness mechanisms turn the agent from "poke it to make it move" into "it wakes up every 30 seconds to look for work": +## Repository Structure -- **Heartbeat** -- every 30s the harness sends the agent a message to check if there is anything to do. Nothing? Go back to sleep. 
Something? Act immediately. -- **Cron** -- the agent can schedule its own future tasks, executed automatically when the time comes. - -Add multi-channel IM routing (WhatsApp / Telegram / Slack / Discord, 13+ platforms), persistent context memory, and a Soul personality system, and the agent goes from a disposable tool to an always-on personal AI assistant. - -**[claw0](https://github.com/shareAI-lab/claw0)** is our companion teaching repo that deconstructs these harness mechanisms from scratch: - -``` -claw agent = agent core + heartbeat + cron + IM chat + memory + soul +```text +learn-claude-code/ +├── agents/ # runnable Python reference implementations per chapter +├── docs/zh/ # Chinese mainline docs +├── docs/en/ # English docs +├── docs/ja/ # Japanese docs +├── skills/ # skill files used in s05 +├── web/ # web teaching platform +└── requirements.txt ``` -``` -learn-claude-code claw0 -(agent harness core: (proactive always-on harness: - loop, tools, planning, heartbeat, cron, IM channels, - teams, worktree isolation) memory, soul personality) -``` +## Language Status -## About -
+Chinese is still the canonical teaching line and the fastest-moving version. -Scan with WeChat to follow us, -or follow on X: [shareAI-Lab](https://x.com/baicai003) +- `zh`: most reviewed and most complete +- `en`: main chapters plus the major bridge docs are available +- `ja`: main chapters plus the major bridge docs are available -## License +If you want the fullest and most frequently refined explanation path, use the Chinese docs first. -MIT +## End Goal ---- +By the end of the repo, you should be able to answer these questions clearly: -**The model is the agent. The code is the harness. Build great harnesses. The agent will do the rest.** +- what is the minimum state a coding agent needs? +- why is `tool_result` the center of the loop? +- when should you use a subagent instead of stuffing more into one context? +- what problem do permissions, hooks, memory, prompt assembly, and tasks each solve? +- when should a single-agent system grow into tasks, teams, worktrees, and MCP? -**Bash is all you need. Real agents are all the universe needs.** +If you can answer those questions clearly and build a similar system yourself, this repo has done its job. diff --git a/agents/__init__.py b/agents/__init__.py index fc7a46075..3efd78a10 100644 --- a/agents/__init__.py +++ b/agents/__init__.py @@ -1,3 +1,3 @@ -# agents/ - Harness implementations (s01-s12) + full reference (s_full) +# agents/ - Harness implementations (s01-s19) + capstone reference (s_full) # Each file is self-contained and runnable: python agents/s01_agent_loop.py # The model is the agent. These files are the harness. diff --git a/agents/s01_agent_loop.py b/agents/s01_agent_loop.py index 8455ebff4..81db3aa3c 100644 --- a/agents/s01_agent_loop.py +++ b/agents/s01_agent_loop.py @@ -1,31 +1,23 @@ #!/usr/bin/env python3 -# Harness: the loop -- the model's first connection to the real world. +# Harness: the loop -- keep feeding real tool results back into the model. 
""" s01_agent_loop.py - The Agent Loop -The entire secret of an AI coding agent in one pattern: - - while stop_reason == "tool_use": - response = LLM(messages, tools) - execute tools - append results - - +----------+ +-------+ +---------+ - | User | ---> | LLM | ---> | Tool | - | prompt | | | | execute | - +----------+ +---+---+ +----+----+ - ^ | - | tool_result | - +---------------+ - (loop continues) - -This is the core loop: feed tool results back to the model -until the model decides to stop. Production agents layer -policy, hooks, and lifecycle controls on top. +This file teaches the smallest useful coding-agent pattern: + + user message + -> model reply + -> if tool_use: execute tools + -> write tool_result back to messages + -> continue + +It intentionally keeps the loop small, but still makes the loop state explicit +so later chapters can grow from the same structure. """ import os import subprocess +from dataclasses import dataclass try: import readline @@ -49,11 +41,14 @@ client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) MODEL = os.environ["MODEL_ID"] -SYSTEM = f"You are a coding agent at {os.getcwd()}. Use bash to solve tasks. Act, don't explain." +SYSTEM = ( + f"You are a coding agent at {os.getcwd()}. " + "Use bash to inspect and change the workspace. Act first, then report clearly." +) TOOLS = [{ "name": "bash", - "description": "Run a shell command.", + "description": "Run a shell command in the current workspace.", "input_schema": { "type": "object", "properties": {"command": {"type": "string"}}, @@ -62,43 +57,92 @@ }] +@dataclass +class LoopState: + # The minimal loop state: history, loop count, and why we continue. 
+ messages: list + turn_count: int = 1 + transition_reason: str | None = None + + def run_bash(command: str) -> str: dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] - if any(d in command for d in dangerous): + if any(item in command for item in dangerous): return "Error: Dangerous command blocked" try: - r = subprocess.run(command, shell=True, cwd=os.getcwd(), - capture_output=True, text=True, timeout=120) - out = (r.stdout + r.stderr).strip() - return out[:50000] if out else "(no output)" + result = subprocess.run( + command, + shell=True, + cwd=os.getcwd(), + capture_output=True, + text=True, + timeout=120, + ) except subprocess.TimeoutExpired: return "Error: Timeout (120s)" except (FileNotFoundError, OSError) as e: return f"Error: {e}" - -# -- The core pattern: a while loop that calls tools until the model stops -- -def agent_loop(messages: list): - while True: - response = client.messages.create( - model=MODEL, system=SYSTEM, messages=messages, - tools=TOOLS, max_tokens=8000, - ) - # Append assistant turn - messages.append({"role": "assistant", "content": response.content}) - # If the model didn't call a tool, we're done - if response.stop_reason != "tool_use": - return - # Execute each tool call, collect results - results = [] - for block in response.content: - if block.type == "tool_use": - print(f"\033[33m$ {block.input['command']}\033[0m") - output = run_bash(block.input["command"]) - print(output[:200]) - results.append({"type": "tool_result", "tool_use_id": block.id, - "content": output}) - messages.append({"role": "user", "content": results}) + output = (result.stdout + result.stderr).strip() + return output[:50000] if output else "(no output)" + + +def extract_text(content) -> str: + if not isinstance(content, list): + return "" + texts = [] + for block in content: + text = getattr(block, "text", None) + if text: + texts.append(text) + return "\n".join(texts).strip() + + +def execute_tool_calls(response_content) -> list[dict]: + 
results = [] + for block in response_content: + if block.type != "tool_use": + continue + command = block.input["command"] + print(f"\033[33m$ {command}\033[0m") + output = run_bash(command) + print(output[:200]) + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": output, + }) + return results + + +def run_one_turn(state: LoopState) -> bool: + response = client.messages.create( + model=MODEL, + system=SYSTEM, + messages=state.messages, + tools=TOOLS, + max_tokens=8000, + ) + state.messages.append({"role": "assistant", "content": response.content}) + + if response.stop_reason != "tool_use": + state.transition_reason = None + return False + + results = execute_tool_calls(response.content) + if not results: + state.transition_reason = None + return False + + state.messages.append({"role": "user", "content": results}) + state.turn_count += 1 + state.transition_reason = "tool_result" + return True + + +def agent_loop(state: LoopState) -> None: + while run_one_turn(state): + pass if __name__ == "__main__": @@ -110,11 +154,12 @@ def agent_loop(messages: list): break if query.strip().lower() in ("q", "exit", ""): break + history.append({"role": "user", "content": query}) - agent_loop(history) - response_content = history[-1]["content"] - if isinstance(response_content, list): - for block in response_content: - if hasattr(block, "text"): - print(block.text) + state = LoopState(messages=history) + agent_loop(state) + + final_text = extract_text(history[-1]["content"]) + if final_text: + print(final_text) print() diff --git a/agents/s02_tool_use.py b/agents/s02_tool_use.py index 8e434c04a..793ef3a07 100644 --- a/agents/s02_tool_use.py +++ b/agents/s02_tool_use.py @@ -1,20 +1,11 @@ #!/usr/bin/env python3 # Harness: tool dispatch -- expanding what the model can reach. """ -s02_tool_use.py - Tools +s02_tool_use.py - Tool dispatch + message normalization -The agent loop from s01 didn't change. 
We just added tools to the array -and a dispatch map to route calls. - - +----------+ +-------+ +------------------+ - | User | ---> | LLM | ---> | Tool Dispatch | - | prompt | | | | { | - +----------+ +---+---+ | bash: run_bash | - ^ | read: run_read | - | | write: run_wr | - +----------+ edit: run_edit | - tool_result| } | - +------------------+ +The agent loop from s01 didn't change. We added tools to the dispatch map, +and a normalize_messages() function that cleans up the message list before +each API call. Key insight: "The loop didn't change at all. I just added tools." """ @@ -91,6 +82,11 @@ def run_edit(path: str, old_text: str, new_text: str) -> str: return f"Error: {e}" +# -- Concurrency safety classification -- +# Read-only tools can safely run in parallel; mutating tools must be serialized. +CONCURRENCY_SAFE = {"read_file"} +CONCURRENCY_UNSAFE = {"write_file", "edit_file"} + # -- The dispatch map: {tool_name: handler} -- TOOL_HANDLERS = { "bash": lambda **kw: run_bash(kw["command"]), @@ -111,10 +107,73 @@ def run_edit(path: str, old_text: str, new_text: str) -> str: ] +def normalize_messages(messages: list) -> list: + """Clean up messages before sending to the API. + + Three jobs: + 1. Strip internal metadata fields the API doesn't understand + 2. Ensure every tool_use has a matching tool_result (insert placeholder if missing) + 3. 
Merge consecutive same-role messages (API requires strict alternation) + """ + cleaned = [] + for msg in messages: + clean = {"role": msg["role"]} + if isinstance(msg.get("content"), str): + clean["content"] = msg["content"] + elif isinstance(msg.get("content"), list): + clean["content"] = [ + {k: v for k, v in block.items() + if not k.startswith("_")} + for block in msg["content"] + if isinstance(block, dict) + ] + else: + clean["content"] = msg.get("content", "") + cleaned.append(clean) + + # Collect existing tool_result IDs + existing_results = set() + for msg in cleaned: + if isinstance(msg.get("content"), list): + for block in msg["content"]: + if isinstance(block, dict) and block.get("type") == "tool_result": + existing_results.add(block.get("tool_use_id")) + + # Find orphaned tool_use blocks and insert placeholder results + for msg in cleaned: + if msg["role"] != "assistant" or not isinstance(msg.get("content"), list): + continue + for block in msg["content"]: + if not isinstance(block, dict): + continue + if block.get("type") == "tool_use" and block.get("id") not in existing_results: + cleaned.append({"role": "user", "content": [ + {"type": "tool_result", "tool_use_id": block["id"], + "content": "(cancelled)"} + ]}) + + # Merge consecutive same-role messages + if not cleaned: + return cleaned + merged = [cleaned[0]] + for msg in cleaned[1:]: + if msg["role"] == merged[-1]["role"]: + prev = merged[-1] + prev_c = prev["content"] if isinstance(prev["content"], list) \ + else [{"type": "text", "text": str(prev["content"])}] + curr_c = msg["content"] if isinstance(msg["content"], list) \ + else [{"type": "text", "text": str(msg["content"])}] + prev["content"] = prev_c + curr_c + else: + merged.append(msg) + return merged + + def agent_loop(messages: list): while True: response = client.messages.create( - model=MODEL, system=SYSTEM, messages=messages, + model=MODEL, system=SYSTEM, + messages=normalize_messages(messages), tools=TOOLS, max_tokens=8000, ) 
messages.append({"role": "assistant", "content": response.content}) diff --git a/agents/s03_todo_write.py b/agents/s03_todo_write.py index 4c7076c55..e2c95f77b 100644 --- a/agents/s03_todo_write.py +++ b/agents/s03_todo_write.py @@ -1,34 +1,16 @@ #!/usr/bin/env python3 -# Harness: planning -- keeping the model on course without scripting the route. +# Harness: planning -- keep the current session plan outside the model's head. """ -s03_todo_write.py - TodoWrite - -The model tracks its own progress via a TodoManager. A nag reminder -forces it to keep updating when it forgets. - - +----------+ +-------+ +---------+ - | User | ---> | LLM | ---> | Tools | - | prompt | | | | + todo | - +----------+ +---+---+ +----+----+ - ^ | - | tool_result | - +---------------+ - | - +-----------+-----------+ - | TodoManager state | - | [ ] task A | - | [>] task B <- doing | - | [x] task C | - +-----------------------+ - | - if rounds_since_todo >= 3: - inject - -Key insight: "The agent can track its own progress -- and I can see it." +s03_todo_write.py - Session Planning with TodoWrite + +This chapter is about a lightweight session plan, not a durable task graph. +The model can rewrite its current plan, keep one active step in focus, and get +nudged if it stops refreshing the plan for too many rounds. """ import os import subprocess +from dataclasses import dataclass, field from pathlib import Path from anthropic import Anthropic @@ -42,153 +24,295 @@ WORKDIR = Path.cwd() client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) MODEL = os.environ["MODEL_ID"] +PLAN_REMINDER_INTERVAL = 3 SYSTEM = f"""You are a coding agent at {WORKDIR}. -Use the todo tool to plan multi-step tasks. Mark in_progress before starting, completed when done. -Prefer tools over prose.""" +Use the todo tool for multi-step work. +Keep exactly one step in_progress when a task has multiple steps. +Refresh the plan as work advances. 
Prefer tools over prose.""" + + +@dataclass +class PlanItem: + content: str + status: str = "pending" + active_form: str = "" + + +@dataclass +class PlanningState: + items: list[PlanItem] = field(default_factory=list) + rounds_since_update: int = 0 -# -- TodoManager: structured state the LLM writes to -- class TodoManager: def __init__(self): - self.items = [] + self.state = PlanningState() def update(self, items: list) -> str: - if len(items) > 20: - raise ValueError("Max 20 todos allowed") - validated = [] + if len(items) > 12: + raise ValueError("Keep the session plan short (max 12 items)") + + normalized = [] in_progress_count = 0 - for i, item in enumerate(items): - text = str(item.get("text", "")).strip() - status = str(item.get("status", "pending")).lower() - item_id = str(item.get("id", str(i + 1))) - if not text: - raise ValueError(f"Item {item_id}: text required") - if status not in ("pending", "in_progress", "completed"): - raise ValueError(f"Item {item_id}: invalid status '{status}'") + for index, raw_item in enumerate(items): + content = str(raw_item.get("content", "")).strip() + status = str(raw_item.get("status", "pending")).lower() + active_form = str(raw_item.get("activeForm", "")).strip() + + if not content: + raise ValueError(f"Item {index}: content required") + if status not in {"pending", "in_progress", "completed"}: + raise ValueError(f"Item {index}: invalid status '{status}'") if status == "in_progress": in_progress_count += 1 - validated.append({"id": item_id, "text": text, "status": status}) + + normalized.append(PlanItem( + content=content, + status=status, + active_form=active_form, + )) + if in_progress_count > 1: - raise ValueError("Only one task can be in_progress at a time") - self.items = validated + raise ValueError("Only one plan item can be in_progress") + + self.state.items = normalized + self.state.rounds_since_update = 0 return self.render() + def note_round_without_update(self) -> None: + self.state.rounds_since_update += 1 + 
+ def reminder(self) -> str | None: + if not self.state.items: + return None + if self.state.rounds_since_update < PLAN_REMINDER_INTERVAL: + return None + return "Refresh your current plan before continuing." + def render(self) -> str: - if not self.items: - return "No todos." + if not self.state.items: + return "No session plan yet." + lines = [] - for item in self.items: - marker = {"pending": "[ ]", "in_progress": "[>]", "completed": "[x]"}[item["status"]] - lines.append(f"{marker} #{item['id']}: {item['text']}") - done = sum(1 for t in self.items if t["status"] == "completed") - lines.append(f"\n({done}/{len(self.items)} completed)") + for item in self.state.items: + marker = { + "pending": "[ ]", + "in_progress": "[>]", + "completed": "[x]", + }[item.status] + line = f"{marker} {item.content}" + if item.status == "in_progress" and item.active_form: + line += f" ({item.active_form})" + lines.append(line) + + completed = sum(1 for item in self.state.items if item.status == "completed") + lines.append(f"\n({completed}/{len(self.state.items)} completed)") return "\n".join(lines) TODO = TodoManager() -# -- Tool implementations -- -def safe_path(p: str) -> Path: - path = (WORKDIR / p).resolve() +def safe_path(path_str: str) -> Path: + path = (WORKDIR / path_str).resolve() if not path.is_relative_to(WORKDIR): - raise ValueError(f"Path escapes workspace: {p}") + raise ValueError(f"Path escapes workspace: {path_str}") return path + def run_bash(command: str) -> str: dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] - if any(d in command for d in dangerous): + if any(item in command for item in dangerous): return "Error: Dangerous command blocked" try: - r = subprocess.run(command, shell=True, cwd=WORKDIR, - capture_output=True, text=True, timeout=120) - out = (r.stdout + r.stderr).strip() - return out[:50000] if out else "(no output)" + result = subprocess.run( + command, + shell=True, + cwd=WORKDIR, + capture_output=True, + text=True, + timeout=120, + 
) except subprocess.TimeoutExpired: return "Error: Timeout (120s)" -def run_read(path: str, limit: int = None) -> str: + output = (result.stdout + result.stderr).strip() + return output[:50000] if output else "(no output)" + + +def run_read(path: str, limit: int | None = None) -> str: try: lines = safe_path(path).read_text().splitlines() if limit and limit < len(lines): - lines = lines[:limit] + [f"... ({len(lines) - limit} more)"] + lines = lines[:limit] + [f"... ({len(lines) - limit} more lines)"] return "\n".join(lines)[:50000] - except Exception as e: - return f"Error: {e}" + except Exception as exc: + return f"Error: {exc}" + def run_write(path: str, content: str) -> str: try: - fp = safe_path(path) - fp.parent.mkdir(parents=True, exist_ok=True) - fp.write_text(content) - return f"Wrote {len(content)} bytes" - except Exception as e: - return f"Error: {e}" + file_path = safe_path(path) + file_path.parent.mkdir(parents=True, exist_ok=True) + file_path.write_text(content) + return f"Wrote {len(content)} bytes to {path}" + except Exception as exc: + return f"Error: {exc}" + def run_edit(path: str, old_text: str, new_text: str) -> str: try: - fp = safe_path(path) - content = fp.read_text() + file_path = safe_path(path) + content = file_path.read_text() if old_text not in content: return f"Error: Text not found in {path}" - fp.write_text(content.replace(old_text, new_text, 1)) + file_path.write_text(content.replace(old_text, new_text, 1)) return f"Edited {path}" - except Exception as e: - return f"Error: {e}" + except Exception as exc: + return f"Error: {exc}" TOOL_HANDLERS = { - "bash": lambda **kw: run_bash(kw["command"]), - "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), "write_file": lambda **kw: run_write(kw["path"], kw["content"]), - "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), - "todo": lambda 
**kw: TODO.update(kw["items"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), + "todo": lambda **kw: TODO.update(kw["items"]), } TOOLS = [ - {"name": "bash", "description": "Run a shell command.", - "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, - {"name": "read_file", "description": "Read file contents.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, - {"name": "write_file", "description": "Write content to file.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, - {"name": "edit_file", "description": "Replace exact text in file.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, - {"name": "todo", "description": "Update task list. 
Track progress on multi-step tasks.", - "input_schema": {"type": "object", "properties": {"items": {"type": "array", "items": {"type": "object", "properties": {"id": {"type": "string"}, "text": {"type": "string"}, "status": {"type": "string", "enum": ["pending", "in_progress", "completed"]}}, "required": ["id", "text", "status"]}}}, "required": ["items"]}}, + { + "name": "bash", + "description": "Run a shell command.", + "input_schema": { + "type": "object", + "properties": {"command": {"type": "string"}}, + "required": ["command"], + }, + }, + { + "name": "read_file", + "description": "Read file contents.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "limit": {"type": "integer"}, + }, + "required": ["path"], + }, + }, + { + "name": "write_file", + "description": "Write content to a file.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "content": {"type": "string"}, + }, + "required": ["path", "content"], + }, + }, + { + "name": "edit_file", + "description": "Replace exact text in a file once.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "old_text": {"type": "string"}, + "new_text": {"type": "string"}, + }, + "required": ["path", "old_text", "new_text"], + }, + }, + { + "name": "todo", + "description": "Rewrite the current session plan for multi-step work.", + "input_schema": { + "type": "object", + "properties": { + "items": { + "type": "array", + "items": { + "type": "object", + "properties": { + "content": {"type": "string"}, + "status": { + "type": "string", + "enum": ["pending", "in_progress", "completed"], + }, + "activeForm": { + "type": "string", + "description": "Optional present-continuous label.", + }, + }, + "required": ["content", "status"], + }, + }, + }, + "required": ["items"], + }, + }, ] -# -- Agent loop with nag reminder injection -- -def agent_loop(messages: list): - rounds_since_todo = 0 +def 
extract_text(content) -> str:
+    if not isinstance(content, list):
+        return ""
+    texts = []
+    for block in content:
+        text = getattr(block, "text", None)
+        if text:
+            texts.append(text)
+    return "\n".join(texts).strip()
+
+
+def agent_loop(messages: list) -> None:
     while True:
-        # Nag reminder is injected below, alongside tool results
         response = client.messages.create(
-            model=MODEL, system=SYSTEM, messages=messages,
-            tools=TOOLS, max_tokens=8000,
+            model=MODEL,
+            system=SYSTEM,
+            messages=messages,
+            tools=TOOLS,
+            max_tokens=8000,
         )
         messages.append({"role": "assistant", "content": response.content})
+
         if response.stop_reason != "tool_use":
             return
+
         results = []
         used_todo = False
         for block in response.content:
-            if block.type == "tool_use":
-                handler = TOOL_HANDLERS.get(block.name)
-                try:
-                    output = handler(**block.input) if handler else f"Unknown tool: {block.name}"
-                except Exception as e:
-                    output = f"Error: {e}"
-                print(f"> {block.name}:")
-                print(str(output)[:200])
-                results.append({"type": "tool_result", "tool_use_id": block.id, "content": str(output)})
-                if block.name == "todo":
-                    used_todo = True
-        rounds_since_todo = 0 if used_todo else rounds_since_todo + 1
-        if rounds_since_todo >= 3:
-            results.append({"type": "text", "text": "Update your todos."})
+            if block.type != "tool_use":
+                continue
+
+            handler = TOOL_HANDLERS.get(block.name)
+            try:
+                output = handler(**block.input) if handler else f"Unknown tool: {block.name}"
+            except Exception as exc:
+                output = f"Error: {exc}"
+
+            print(f"> {block.name}: {str(output)[:200]}")
+            results.append({
+                "type": "tool_result",
+                "tool_use_id": block.id,
+                "content": str(output),
+            })
+            if block.name == "todo":
+                used_todo = True
+
+        if used_todo:
+            TODO.state.rounds_since_update = 0
+        else:
+            TODO.note_round_without_update()
+            reminder = TODO.reminder()
+            if reminder:
+                results.append({"type": "text", "text": reminder})  # keep tool_result blocks first in the content array
+
         messages.append({"role": "user", "content": results})
 
@@ -201,11 +325,11 @@ def 
agent_loop(messages: list): break if query.strip().lower() in ("q", "exit", ""): break + history.append({"role": "user", "content": query}) agent_loop(history) - response_content = history[-1]["content"] - if isinstance(response_content, list): - for block in response_content: - if hasattr(block, "text"): - print(block.text) + + final_text = extract_text(history[-1]["content"]) + if final_text: + print(final_text) print() diff --git a/agents/s04_subagent.py b/agents/s04_subagent.py index dda2737f6..965a36a32 100644 --- a/agents/s04_subagent.py +++ b/agents/s04_subagent.py @@ -20,10 +20,31 @@ Parent context stays clean. Subagent context is discarded. -Key insight: "Process isolation gives context isolation for free." +Key insight: "Fresh messages=[] gives context isolation. The parent stays clean." + +Note: Real Claude Code also uses in-process isolation (not OS-level process +forking). The child runs in the same process with a fresh message array and +isolated tool context -- same pattern as this teaching implementation. + + Comparison with real Claude Code: + +-------------------+------------------+----------------------------------+ + | Aspect | This demo | Real Claude Code | + +-------------------+------------------+----------------------------------+ + | Backend | in-process only | 5 backends: in-process, tmux, | + | | | iTerm2, fork, remote | + | Context isolation | fresh messages=[]| createSubagentContext() isolates | + | | | ~20 fields (tools, permissions, | + | | | cwd, env, hooks, etc.) 
| + | Tool filtering | manually curated | resolveAgentTools() filters from | + | | | parent pool; allowedTools | + | | | replaces all allow rules | + | Agent definition | hardcoded system | .claude/agents/*.md with YAML | + | | prompt | frontmatter (AgentTemplate) | + +-------------------+------------------+----------------------------------+ """ import os +import re import subprocess from pathlib import Path @@ -43,6 +64,37 @@ SUBAGENT_SYSTEM = f"You are a coding subagent at {WORKDIR}. Complete the given task, then summarize your findings." +class AgentTemplate: + """ + Parse agent definition from markdown frontmatter. + + Real Claude Code loads agent definitions from .claude/agents/*.md. + Frontmatter fields: name, tools, disallowedTools, skills, hooks, + model, effort, permissionMode, maxTurns, memory, isolation, color, + background, initialPrompt, mcpServers. + 3 sources: built-in, custom (.claude/agents/), plugin-provided. + """ + def __init__(self, path): + self.path = Path(path) + self.name = self.path.stem + self.config = {} + self.system_prompt = "" + self._parse() + + def _parse(self): + text = self.path.read_text() + match = re.match(r"^---\s*\n(.*?)\n---\s*\n(.*)", text, re.DOTALL) + if not match: + self.system_prompt = text + return + for line in match.group(1).splitlines(): + if ":" in line: + k, _, v = line.partition(":") + self.config[k.strip()] = v.strip() + self.system_prompt = match.group(2).strip() + self.name = self.config.get("name", self.name) + + # -- Tool implementations shared by parent and child -- def safe_path(p: str) -> Path: path = (WORKDIR / p).resolve() diff --git a/agents/s05_skill_loading.py b/agents/s05_skill_loading.py index e14167a6c..6f9696f10 100644 --- a/agents/s05_skill_loading.py +++ b/agents/s05_skill_loading.py @@ -1,44 +1,21 @@ #!/usr/bin/env python3 -# Harness: on-demand knowledge -- domain expertise, loaded when the model asks. +# Harness: on-demand knowledge -- discover skills cheaply, load them only when needed. 
""" s05_skill_loading.py - Skills -Two-layer skill injection that avoids bloating the system prompt: - - Layer 1 (cheap): skill names in system prompt (~100 tokens/skill) - Layer 2 (on demand): full skill body in tool_result - - skills/ - pdf/ - SKILL.md <-- frontmatter (name, description) + body - code-review/ - SKILL.md - - System prompt: - +--------------------------------------+ - | You are a coding agent. | - | Skills available: | - | - pdf: Process PDF files... | <-- Layer 1: metadata only - | - code-review: Review code... | - +--------------------------------------+ - - When model calls load_skill("pdf"): - +--------------------------------------+ - | tool_result: | - | | - | Full PDF processing instructions | <-- Layer 2: full body - | Step 1: ... | - | Step 2: ... | - | | - +--------------------------------------+ - -Key insight: "Don't put everything in the system prompt. Load on demand." +This chapter teaches a two-layer skill model: + +1. Put a cheap skill catalog in the system prompt. +2. Load the full skill body only when the model asks for it. + +That keeps the prompt small while still giving the model access to reusable, +task-specific guidance. 
""" import os import re import subprocess -import yaml +from dataclasses import dataclass from pathlib import Path from anthropic import Anthropic @@ -55,156 +32,250 @@ SKILLS_DIR = WORKDIR / "skills" -# -- SkillLoader: scan skills//SKILL.md with YAML frontmatter -- -class SkillLoader: +@dataclass +class SkillManifest: + name: str + description: str + path: Path + + +@dataclass +class SkillDocument: + manifest: SkillManifest + body: str + + +class SkillRegistry: def __init__(self, skills_dir: Path): self.skills_dir = skills_dir - self.skills = {} + self.documents: dict[str, SkillDocument] = {} self._load_all() - def _load_all(self): + def _load_all(self) -> None: if not self.skills_dir.exists(): return - for f in sorted(self.skills_dir.rglob("SKILL.md")): - text = f.read_text() - meta, body = self._parse_frontmatter(text) - name = meta.get("name", f.parent.name) - self.skills[name] = {"meta": meta, "body": body, "path": str(f)} - - def _parse_frontmatter(self, text: str) -> tuple: - """Parse YAML frontmatter between --- delimiters.""" + + for path in sorted(self.skills_dir.rglob("SKILL.md")): + meta, body = self._parse_frontmatter(path.read_text()) + name = meta.get("name", path.parent.name) + description = meta.get("description", "No description") + manifest = SkillManifest(name=name, description=description, path=path) + self.documents[name] = SkillDocument(manifest=manifest, body=body.strip()) + + def _parse_frontmatter(self, text: str) -> tuple[dict, str]: match = re.match(r"^---\n(.*?)\n---\n(.*)", text, re.DOTALL) if not match: return {}, text - try: - meta = yaml.safe_load(match.group(1)) or {} - except yaml.YAMLError: - meta = {} - return meta, match.group(2).strip() - - def get_descriptions(self) -> str: - """Layer 1: short descriptions for the system prompt.""" - if not self.skills: + + meta = {} + for line in match.group(1).strip().splitlines(): + if ":" not in line: + continue + key, value = line.split(":", 1) + meta[key.strip()] = value.strip() + 
return meta, match.group(2) + + def describe_available(self) -> str: + if not self.documents: return "(no skills available)" lines = [] - for name, skill in self.skills.items(): - desc = skill["meta"].get("description", "No description") - tags = skill["meta"].get("tags", "") - line = f" - {name}: {desc}" - if tags: - line += f" [{tags}]" - lines.append(line) + for name in sorted(self.documents): + manifest = self.documents[name].manifest + lines.append(f"- {manifest.name}: {manifest.description}") return "\n".join(lines) - def get_content(self, name: str) -> str: - """Layer 2: full skill body returned in tool_result.""" - skill = self.skills.get(name) - if not skill: - return f"Error: Unknown skill '{name}'. Available: {', '.join(self.skills.keys())}" - return f"\n{skill['body']}\n" + def load_full_text(self, name: str) -> str: + document = self.documents.get(name) + if not document: + known = ", ".join(sorted(self.documents)) or "(none)" + return f"Error: Unknown skill '{name}'. Available skills: {known}" + + return ( + f"\n" + f"{document.body}\n" + "" + ) -SKILL_LOADER = SkillLoader(SKILLS_DIR) +SKILL_REGISTRY = SkillRegistry(SKILLS_DIR) -# Layer 1: skill metadata injected into system prompt SYSTEM = f"""You are a coding agent at {WORKDIR}. -Use load_skill to access specialized knowledge before tackling unfamiliar topics. +Use load_skill when a task needs specialized instructions before you act. 
Skills available: -{SKILL_LOADER.get_descriptions()}""" +{SKILL_REGISTRY.describe_available()} +""" -# -- Tool implementations -- -def safe_path(p: str) -> Path: - path = (WORKDIR / p).resolve() +def safe_path(path_str: str) -> Path: + path = (WORKDIR / path_str).resolve() if not path.is_relative_to(WORKDIR): - raise ValueError(f"Path escapes workspace: {p}") + raise ValueError(f"Path escapes workspace: {path_str}") return path + def run_bash(command: str) -> str: dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] - if any(d in command for d in dangerous): + if any(item in command for item in dangerous): return "Error: Dangerous command blocked" try: - r = subprocess.run(command, shell=True, cwd=WORKDIR, - capture_output=True, text=True, timeout=120) - out = (r.stdout + r.stderr).strip() - return out[:50000] if out else "(no output)" + result = subprocess.run( + command, + shell=True, + cwd=WORKDIR, + capture_output=True, + text=True, + timeout=120, + ) except subprocess.TimeoutExpired: return "Error: Timeout (120s)" -def run_read(path: str, limit: int = None) -> str: + output = (result.stdout + result.stderr).strip() + return output[:50000] if output else "(no output)" + + +def run_read(path: str, limit: int | None = None) -> str: try: lines = safe_path(path).read_text().splitlines() if limit and limit < len(lines): - lines = lines[:limit] + [f"... ({len(lines) - limit} more)"] + lines = lines[:limit] + [f"... 
({len(lines) - limit} more lines)"] return "\n".join(lines)[:50000] - except Exception as e: - return f"Error: {e}" + except Exception as exc: + return f"Error: {exc}" + def run_write(path: str, content: str) -> str: try: - fp = safe_path(path) - fp.parent.mkdir(parents=True, exist_ok=True) - fp.write_text(content) - return f"Wrote {len(content)} bytes" - except Exception as e: - return f"Error: {e}" + file_path = safe_path(path) + file_path.parent.mkdir(parents=True, exist_ok=True) + file_path.write_text(content) + return f"Wrote {len(content)} bytes to {path}" + except Exception as exc: + return f"Error: {exc}" + def run_edit(path: str, old_text: str, new_text: str) -> str: try: - fp = safe_path(path) - content = fp.read_text() + file_path = safe_path(path) + content = file_path.read_text() if old_text not in content: return f"Error: Text not found in {path}" - fp.write_text(content.replace(old_text, new_text, 1)) + file_path.write_text(content.replace(old_text, new_text, 1)) return f"Edited {path}" - except Exception as e: - return f"Error: {e}" + except Exception as exc: + return f"Error: {exc}" TOOL_HANDLERS = { - "bash": lambda **kw: run_bash(kw["command"]), - "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), "write_file": lambda **kw: run_write(kw["path"], kw["content"]), - "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), - "load_skill": lambda **kw: SKILL_LOADER.get_content(kw["name"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), + "load_skill": lambda **kw: SKILL_REGISTRY.load_full_text(kw["name"]), } TOOLS = [ - {"name": "bash", "description": "Run a shell command.", - "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, - {"name": "read_file", "description": "Read file contents.", - "input_schema": 
{"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, - {"name": "write_file", "description": "Write content to file.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, - {"name": "edit_file", "description": "Replace exact text in file.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, - {"name": "load_skill", "description": "Load specialized knowledge by name.", - "input_schema": {"type": "object", "properties": {"name": {"type": "string", "description": "Skill name to load"}}, "required": ["name"]}}, + { + "name": "bash", + "description": "Run a shell command.", + "input_schema": { + "type": "object", + "properties": {"command": {"type": "string"}}, + "required": ["command"], + }, + }, + { + "name": "read_file", + "description": "Read file contents.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "limit": {"type": "integer"}, + }, + "required": ["path"], + }, + }, + { + "name": "write_file", + "description": "Write content to a file.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "content": {"type": "string"}, + }, + "required": ["path", "content"], + }, + }, + { + "name": "edit_file", + "description": "Replace exact text in a file once.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "old_text": {"type": "string"}, + "new_text": {"type": "string"}, + }, + "required": ["path", "old_text", "new_text"], + }, + }, + { + "name": "load_skill", + "description": "Load the full body of a named skill into the current context.", + "input_schema": { + "type": "object", + "properties": {"name": {"type": "string"}}, + "required": ["name"], + }, 
+ }, ] -def agent_loop(messages: list): +def extract_text(content) -> str: + if not isinstance(content, list): + return "" + texts = [] + for block in content: + text = getattr(block, "text", None) + if text: + texts.append(text) + return "\n".join(texts).strip() + + +def agent_loop(messages: list) -> None: while True: response = client.messages.create( - model=MODEL, system=SYSTEM, messages=messages, - tools=TOOLS, max_tokens=8000, + model=MODEL, + system=SYSTEM, + messages=messages, + tools=TOOLS, + max_tokens=8000, ) messages.append({"role": "assistant", "content": response.content}) + if response.stop_reason != "tool_use": return + results = [] for block in response.content: - if block.type == "tool_use": - handler = TOOL_HANDLERS.get(block.name) - try: - output = handler(**block.input) if handler else f"Unknown tool: {block.name}" - except Exception as e: - output = f"Error: {e}" - print(f"> {block.name}:") - print(str(output)[:200]) - results.append({"type": "tool_result", "tool_use_id": block.id, "content": str(output)}) + if block.type != "tool_use": + continue + + handler = TOOL_HANDLERS.get(block.name) + try: + output = handler(**block.input) if handler else f"Unknown tool: {block.name}" + except Exception as exc: + output = f"Error: {exc}" + + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + messages.append({"role": "user", "content": results}) @@ -217,11 +288,11 @@ def agent_loop(messages: list): break if query.strip().lower() in ("q", "exit", ""): break + history.append({"role": "user", "content": query}) agent_loop(history) - response_content = history[-1]["content"] - if isinstance(response_content, list): - for block in response_content: - if hasattr(block, "text"): - print(block.text) + + final_text = extract_text(history[-1]["content"]) + if final_text: + print(final_text) print() diff --git a/agents/s06_context_compact.py 
b/agents/s06_context_compact.py index 0fde70efd..e75f13ccd 100644 --- a/agents/s06_context_compact.py +++ b/agents/s06_context_compact.py @@ -1,43 +1,24 @@ #!/usr/bin/env python3 -# Harness: compression -- clean memory for infinite sessions. +# Harness: compression -- keep the active context small enough to keep working. """ -s06_context_compact.py - Compact - -Three-layer compression pipeline so the agent can work forever: - - Every turn: - +------------------+ - | Tool call result | - +------------------+ - | - v - [Layer 1: micro_compact] (silent, every turn) - Replace non-read_file tool_result content older than last 3 - with "[Previous: used {tool_name}]" - | - v - [Check: tokens > 50000?] - | | - no yes - | | - v v - continue [Layer 2: auto_compact] - Save full transcript to .transcripts/ - Ask LLM to summarize conversation. - Replace all messages with [summary]. - | - v - [Layer 3: compact tool] - Model calls compact -> immediate summarization. - Same as auto, triggered manually. - -Key insight: "The agent can forget strategically and keep working forever." +s06_context_compact.py - Context Compact + +This teaching version keeps the compact model intentionally small: + +1. Large tool output is persisted to disk and replaced with a preview marker. +2. Older tool results are micro-compacted into short placeholders. +3. When the whole conversation gets too large, the agent summarizes it and + continues from that summary. + +The goal is not to model every production branch. The goal is to make the +active-context idea explicit and teachable. """ import json import os import subprocess import time +from dataclasses import dataclass, field from pathlib import Path from anthropic import Anthropic @@ -52,193 +33,332 @@ client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) MODEL = os.environ["MODEL_ID"] -SYSTEM = f"You are a coding agent at {WORKDIR}. Use tools to solve tasks." +SYSTEM = ( + f"You are a coding agent at {WORKDIR}. 
" + "Keep working step by step, and use compact if the conversation gets too long." +) -THRESHOLD = 50000 +CONTEXT_LIMIT = 50000 +KEEP_RECENT_TOOL_RESULTS = 3 +PERSIST_THRESHOLD = 30000 +PREVIEW_CHARS = 2000 TRANSCRIPT_DIR = WORKDIR / ".transcripts" -KEEP_RECENT = 3 -PRESERVE_RESULT_TOOLS = {"read_file"} +TOOL_RESULTS_DIR = WORKDIR / ".task_outputs" / "tool-results" + + +@dataclass +class CompactState: + has_compacted: bool = False + last_summary: str = "" + recent_files: list[str] = field(default_factory=list) + + +def estimate_context_size(messages: list) -> int: + return len(str(messages)) + +def track_recent_file(state: CompactState, path: str) -> None: + if path in state.recent_files: + state.recent_files.remove(path) + state.recent_files.append(path) + if len(state.recent_files) > 5: + state.recent_files[:] = state.recent_files[-5:] -def estimate_tokens(messages: list) -> int: - """Rough token count: ~4 chars per token.""" - return len(str(messages)) // 4 + +def safe_path(path_str: str) -> Path: + path = (WORKDIR / path_str).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes workspace: {path_str}") + return path + + +def persist_large_output(tool_use_id: str, output: str) -> str: + if len(output) <= PERSIST_THRESHOLD: + return output + + TOOL_RESULTS_DIR.mkdir(parents=True, exist_ok=True) + stored_path = TOOL_RESULTS_DIR / f"{tool_use_id}.txt" + if not stored_path.exists(): + stored_path.write_text(output) + + preview = output[:PREVIEW_CHARS] + rel_path = stored_path.relative_to(WORKDIR) + return ( + "\n" + f"Full output saved to: {rel_path}\n" + "Preview:\n" + f"{preview}\n" + "" + ) + + +def collect_tool_result_blocks(messages: list) -> list[tuple[int, int, dict]]: + blocks = [] + for message_index, message in enumerate(messages): + content = message.get("content") + if message.get("role") != "user" or not isinstance(content, list): + continue + for block_index, block in enumerate(content): + if isinstance(block, dict) and 
block.get("type") == "tool_result": + blocks.append((message_index, block_index, block)) + return blocks -# -- Layer 1: micro_compact - replace old tool results with placeholders -- def micro_compact(messages: list) -> list: - # Collect (msg_index, part_index, tool_result_dict) for all tool_result entries - tool_results = [] - for msg_idx, msg in enumerate(messages): - if msg["role"] == "user" and isinstance(msg.get("content"), list): - for part_idx, part in enumerate(msg["content"]): - if isinstance(part, dict) and part.get("type") == "tool_result": - tool_results.append((msg_idx, part_idx, part)) - if len(tool_results) <= KEEP_RECENT: + tool_results = collect_tool_result_blocks(messages) + if len(tool_results) <= KEEP_RECENT_TOOL_RESULTS: return messages - # Find tool_name for each result by matching tool_use_id in prior assistant messages - tool_name_map = {} - for msg in messages: - if msg["role"] == "assistant": - content = msg.get("content", []) - if isinstance(content, list): - for block in content: - if hasattr(block, "type") and block.type == "tool_use": - tool_name_map[block.id] = block.name - # Clear old results (keep last KEEP_RECENT). Preserve read_file outputs because - # they are reference material; compacting them forces the agent to re-read files. - to_clear = tool_results[:-KEEP_RECENT] - for _, _, result in to_clear: - if not isinstance(result.get("content"), str) or len(result["content"]) <= 100: - continue - tool_id = result.get("tool_use_id", "") - tool_name = tool_name_map.get(tool_id, "unknown") - if tool_name in PRESERVE_RESULT_TOOLS: + + for _, _, block in tool_results[:-KEEP_RECENT_TOOL_RESULTS]: + content = block.get("content", "") + if not isinstance(content, str) or len(content) <= 120: continue - result["content"] = f"[Previous: used {tool_name}]" + block["content"] = "[Earlier tool result compacted. 
Re-run the tool if you need full detail.]"
     return messages


-# -- Layer 2: auto_compact - save transcript, summarize, replace messages --
-def auto_compact(messages: list) -> list:
-    # Save full transcript to disk
-    TRANSCRIPT_DIR.mkdir(exist_ok=True)
-    transcript_path = TRANSCRIPT_DIR / f"transcript_{int(time.time())}.jsonl"
-    with open(transcript_path, "w") as f:
-        for msg in messages:
-            f.write(json.dumps(msg, default=str) + "\n")
-    print(f"[transcript saved: {transcript_path}]")
-    # Ask LLM to summarize
-    conversation_text = json.dumps(messages, default=str)[-80000:]
+def write_transcript(messages: list) -> Path:
+    TRANSCRIPT_DIR.mkdir(parents=True, exist_ok=True)
+    path = TRANSCRIPT_DIR / f"transcript_{int(time.time())}.jsonl"
+    with path.open("w") as handle:
+        for message in messages:
+            handle.write(json.dumps(message, default=str) + "\n")
+    return path
+
+
+def summarize_history(messages: list) -> str:
+    # Keep the most recent 80k chars, as before: the tail holds current state.
+    conversation = json.dumps(messages, default=str)[-80000:]
+    prompt = (
+        "Summarize this coding-agent conversation so work can continue.\n"
+        "Preserve:\n"
+        "1. The current goal\n"
+        "2. Important findings and decisions\n"
+        "3. Files read or changed\n"
+        "4. Remaining work\n"
+        "5. User constraints and preferences\n"
+        "Be compact but concrete.\n\n"
+        f"{conversation}"
+    )
     response = client.messages.create(
         model=MODEL,
-        messages=[{"role": "user", "content":
-            "Summarize this conversation for continuity. Include: "
-            "1) What was accomplished, 2) Current state, 3) Key decisions made. "
-            "Be concise but preserve critical details.\n\n" + conversation_text}],
+        messages=[{"role": "user", "content": prompt}],
         max_tokens=2000,
     )
-    summary = next((block.text for block in response.content if hasattr(block, "text")), "")
-    if not summary:
-        summary = "No summary generated."
-    # Replace all messages with compressed summary
-    return [
-        {"role": "user", "content": f"[Conversation compressed. 
Transcript: {transcript_path}]\n\n{summary}"}, - ] - - -# -- Tool implementations -- -def safe_path(p: str) -> Path: - path = (WORKDIR / p).resolve() - if not path.is_relative_to(WORKDIR): - raise ValueError(f"Path escapes workspace: {p}") - return path + return response.content[0].text.strip() + -def run_bash(command: str) -> str: +def compact_history(messages: list, state: CompactState, focus: str | None = None) -> list: + transcript_path = write_transcript(messages) + print(f"[transcript saved: {transcript_path}]") + + summary = summarize_history(messages) + if focus: + summary += f"\n\nFocus to preserve next: {focus}" + if state.recent_files: + recent_lines = "\n".join(f"- {path}" for path in state.recent_files) + summary += f"\n\nRecent files to reopen if needed:\n{recent_lines}" + + state.has_compacted = True + state.last_summary = summary + + return [{ + "role": "user", + "content": ( + "This conversation was compacted so the agent can continue working.\n\n" + f"{summary}" + ), + }] + + +def run_bash(command: str, tool_use_id: str) -> str: dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] - if any(d in command for d in dangerous): + if any(item in command for item in dangerous): return "Error: Dangerous command blocked" try: - r = subprocess.run(command, shell=True, cwd=WORKDIR, - capture_output=True, text=True, timeout=120) - out = (r.stdout + r.stderr).strip() - return out[:50000] if out else "(no output)" + result = subprocess.run( + command, + shell=True, + cwd=WORKDIR, + capture_output=True, + text=True, + timeout=120, + ) except subprocess.TimeoutExpired: return "Error: Timeout (120s)" -def run_read(path: str, limit: int = None) -> str: + output = (result.stdout + result.stderr).strip() or "(no output)" + return persist_large_output(tool_use_id, output) + + +def run_read(path: str, tool_use_id: str, state: CompactState, limit: int | None = None) -> str: try: + track_recent_file(state, path) lines = 
safe_path(path).read_text().splitlines() if limit and limit < len(lines): - lines = lines[:limit] + [f"... ({len(lines) - limit} more)"] - return "\n".join(lines)[:50000] - except Exception as e: - return f"Error: {e}" + lines = lines[:limit] + [f"... ({len(lines) - limit} more lines)"] + output = "\n".join(lines) + return persist_large_output(tool_use_id, output) + except Exception as exc: + return f"Error: {exc}" + def run_write(path: str, content: str) -> str: try: - fp = safe_path(path) - fp.parent.mkdir(parents=True, exist_ok=True) - fp.write_text(content) - return f"Wrote {len(content)} bytes" - except Exception as e: - return f"Error: {e}" + file_path = safe_path(path) + file_path.parent.mkdir(parents=True, exist_ok=True) + file_path.write_text(content) + return f"Wrote {len(content)} bytes to {path}" + except Exception as exc: + return f"Error: {exc}" + def run_edit(path: str, old_text: str, new_text: str) -> str: try: - fp = safe_path(path) - content = fp.read_text() + file_path = safe_path(path) + content = file_path.read_text() if old_text not in content: return f"Error: Text not found in {path}" - fp.write_text(content.replace(old_text, new_text, 1)) + file_path.write_text(content.replace(old_text, new_text, 1)) return f"Edited {path}" - except Exception as e: - return f"Error: {e}" + except Exception as exc: + return f"Error: {exc}" -TOOL_HANDLERS = { - "bash": lambda **kw: run_bash(kw["command"]), - "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), - "write_file": lambda **kw: run_write(kw["path"], kw["content"]), - "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), - "compact": lambda **kw: "Manual compression requested.", -} - TOOLS = [ - {"name": "bash", "description": "Run a shell command.", - "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, - {"name": "read_file", "description": "Read file contents.", - "input_schema": {"type": "object", 
"properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, - {"name": "write_file", "description": "Write content to file.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, - {"name": "edit_file", "description": "Replace exact text in file.", - "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, - {"name": "compact", "description": "Trigger manual conversation compression.", - "input_schema": {"type": "object", "properties": {"focus": {"type": "string", "description": "What to preserve in the summary"}}}}, + { + "name": "bash", + "description": "Run a shell command.", + "input_schema": { + "type": "object", + "properties": {"command": {"type": "string"}}, + "required": ["command"], + }, + }, + { + "name": "read_file", + "description": "Read file contents.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "limit": {"type": "integer"}, + }, + "required": ["path"], + }, + }, + { + "name": "write_file", + "description": "Write content to a file.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "content": {"type": "string"}, + }, + "required": ["path", "content"], + }, + }, + { + "name": "edit_file", + "description": "Replace exact text in a file once.", + "input_schema": { + "type": "object", + "properties": { + "path": {"type": "string"}, + "old_text": {"type": "string"}, + "new_text": {"type": "string"}, + }, + "required": ["path", "old_text", "new_text"], + }, + }, + { + "name": "compact", + "description": "Summarize earlier conversation so work can continue in a smaller context.", + "input_schema": { + "type": "object", + "properties": { + "focus": {"type": "string"}, + }, + }, + }, ] -def agent_loop(messages: 
list): +def extract_text(content) -> str: + if not isinstance(content, list): + return "" + texts = [] + for block in content: + text = getattr(block, "text", None) + if text: + texts.append(text) + return "\n".join(texts).strip() + + +def execute_tool(block, state: CompactState) -> str: + if block.name == "bash": + return run_bash(block.input["command"], block.id) + if block.name == "read_file": + return run_read(block.input["path"], block.id, state, block.input.get("limit")) + if block.name == "write_file": + return run_write(block.input["path"], block.input["content"]) + if block.name == "edit_file": + return run_edit(block.input["path"], block.input["old_text"], block.input["new_text"]) + if block.name == "compact": + return "Compacting conversation..." + return f"Unknown tool: {block.name}" + + +def agent_loop(messages: list, state: CompactState) -> None: while True: - # Layer 1: micro_compact before each LLM call - micro_compact(messages) - # Layer 2: auto_compact if token estimate exceeds threshold - if estimate_tokens(messages) > THRESHOLD: - print("[auto_compact triggered]") - messages[:] = auto_compact(messages) + messages[:] = micro_compact(messages) + + if estimate_context_size(messages) > CONTEXT_LIMIT: + print("[auto compact]") + messages[:] = compact_history(messages, state) + response = client.messages.create( - model=MODEL, system=SYSTEM, messages=messages, - tools=TOOLS, max_tokens=8000, + model=MODEL, + system=SYSTEM, + messages=messages, + tools=TOOLS, + max_tokens=8000, ) messages.append({"role": "assistant", "content": response.content}) + if response.stop_reason != "tool_use": return + results = [] manual_compact = False + compact_focus = None for block in response.content: - if block.type == "tool_use": - if block.name == "compact": - manual_compact = True - output = "Compressing..." 
- else: - handler = TOOL_HANDLERS.get(block.name) - try: - output = handler(**block.input) if handler else f"Unknown tool: {block.name}" - except Exception as e: - output = f"Error: {e}" - print(f"> {block.name}:") - print(str(output)[:200]) - results.append({"type": "tool_result", "tool_use_id": block.id, "content": str(output)}) + if block.type != "tool_use": + continue + + output = execute_tool(block, state) + if block.name == "compact": + manual_compact = True + compact_focus = (block.input or {}).get("focus") + + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + messages.append({"role": "user", "content": results}) - # Layer 3: manual compact triggered by the compact tool + if manual_compact: print("[manual compact]") - messages[:] = auto_compact(messages) - return + messages[:] = compact_history(messages, state, focus=compact_focus) if __name__ == "__main__": history = [] + compact_state = CompactState() + while True: try: query = input("\033[36ms06 >> \033[0m") @@ -246,11 +366,11 @@ def agent_loop(messages: list): break if query.strip().lower() in ("q", "exit", ""): break + history.append({"role": "user", "content": query}) - agent_loop(history) - response_content = history[-1]["content"] - if isinstance(response_content, list): - for block in response_content: - if hasattr(block, "text"): - print(block.text) + agent_loop(history, compact_state) + + final_text = extract_text(history[-1]["content"]) + if final_text: + print(final_text) print() diff --git a/agents/s07_permission_system.py b/agents/s07_permission_system.py new file mode 100644 index 000000000..747b904ee --- /dev/null +++ b/agents/s07_permission_system.py @@ -0,0 +1,419 @@ +#!/usr/bin/env python3 +# Harness: safety -- the pipeline between intent and execution. +""" +s07_permission_system.py - Permission System + +Every tool call passes through a permission pipeline before execution. 
+ +Teaching pipeline: + 1. deny rules + 2. mode check + 3. allow rules + 4. ask user + +This version intentionally teaches three modes first: + - default + - plan + - auto + +That is enough to build a real, understandable permission system without +burying readers under every advanced policy branch on day one. + +Key insight: "Safety is a pipeline, not a boolean." +""" + +import json +import os +import re +import subprocess +from fnmatch import fnmatch +from pathlib import Path + +from anthropic import Anthropic +from dotenv import load_dotenv + +load_dotenv(override=True) + +if os.getenv("ANTHROPIC_BASE_URL"): + os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) + +WORKDIR = Path.cwd() +client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) +MODEL = os.environ["MODEL_ID"] + +# -- Permission modes -- +# Teaching version starts with three clear modes first. +MODES = ("default", "plan", "auto") + +READ_ONLY_TOOLS = {"read_file", "bash_readonly"} + +# Tools that modify state +WRITE_TOOLS = {"write_file", "edit_file", "bash"} + + +# -- Bash security validation -- +class BashSecurityValidator: + """ + Validate bash commands for obviously dangerous patterns. + + The teaching version deliberately keeps this small and easy to read. + First catch a few high-risk patterns, then let the permission pipeline + decide whether to deny or ask the user. + """ + + VALIDATORS = [ + ("shell_metachar", r"[;&|`$]"), # shell metacharacters + ("sudo", r"\bsudo\b"), # privilege escalation + ("rm_rf", r"\brm\s+(-[a-zA-Z]*)?r"), # recursive delete + ("cmd_substitution", r"\$\("), # command substitution + ("ifs_injection", r"\bIFS\s*="), # IFS manipulation + ] + + def validate(self, command: str) -> list: + """ + Check a bash command against all validators. + + Returns list of (validator_name, matched_pattern) tuples for failures. + An empty list means the command passed all validators. 
+        """
+        failures = []
+        for name, pattern in self.VALIDATORS:
+            if re.search(pattern, command):
+                failures.append((name, pattern))
+        return failures
+
+    def is_safe(self, command: str) -> bool:
+        """Convenience: returns True only if no validators triggered."""
+        return len(self.validate(command)) == 0
+
+    def describe_failures(self, command: str) -> str:
+        """Human-readable summary of validation failures."""
+        failures = self.validate(command)
+        if not failures:
+            return "No issues detected"
+        parts = [f"{name} (pattern: {pattern})" for name, pattern in failures]
+        return "Security flags: " + ", ".join(parts)
+
+
+# -- Workspace trust --
+def is_workspace_trusted(workspace: Path = None) -> bool:
+    """
+    Check if a workspace has been explicitly marked as trusted.
+
+    The teaching version uses a simple marker file. A more complete system
+    can layer richer trust flows on top of the same idea.
+    """
+    ws = workspace or WORKDIR
+    trust_marker = ws / ".claude" / ".claude_trusted"
+    return trust_marker.exists()
+
+
+# Singleton validator instance used by the permission pipeline
+bash_validator = BashSecurityValidator()
+
+
+# -- Permission rules --
+# Rules are checked in order: first match wins.
+# Each rule matches on a tool name ("*" matches any), and optionally a
+# path glob and/or a command glob ("content", used for bash commands):
+# {"tool": "<name|*>", "path": "<glob>", "content": "<glob>", "behavior": "allow|deny|ask"}
+DEFAULT_RULES = [
+    # Always deny dangerous patterns
+    {"tool": "bash", "content": "rm -rf /", "behavior": "deny"},
+    {"tool": "bash", "content": "sudo *", "behavior": "deny"},
+    # Allow reading anything
+    {"tool": "read_file", "path": "*", "behavior": "allow"},
+]
+
+
+class PermissionManager:
+    """
+    Manages permission decisions for tool calls.
+
+    Pipeline: deny_rules -> mode_check -> allow_rules -> ask_user
+
+    The teaching version keeps the decision path short on purpose so readers
+    can implement it themselves before adding more advanced policy layers.
+    """
+
+    def __init__(self, mode: str = "default", rules: list = None):
+        if mode not in MODES:
+            raise ValueError(f"Unknown mode: {mode}. 
Choose from {MODES}") + self.mode = mode + self.rules = rules or list(DEFAULT_RULES) + # Simple denial tracking helps surface when the agent is repeatedly + # asking for actions the system will not allow. + self.consecutive_denials = 0 + self.max_consecutive_denials = 3 + + def check(self, tool_name: str, tool_input: dict) -> dict: + """ + Returns: {"behavior": "allow"|"deny"|"ask", "reason": str} + """ + # Step 0: Bash security validation (before deny rules) + # Teaching version checks early for clarity. + if tool_name == "bash": + command = tool_input.get("command", "") + failures = bash_validator.validate(command) + if failures: + # Severe patterns (sudo, rm_rf) get immediate deny + severe = {"sudo", "rm_rf"} + severe_hits = [f for f in failures if f[0] in severe] + if severe_hits: + desc = bash_validator.describe_failures(command) + return {"behavior": "deny", + "reason": f"Bash validator: {desc}"} + # Other patterns escalate to ask (user can still approve) + desc = bash_validator.describe_failures(command) + return {"behavior": "ask", + "reason": f"Bash validator flagged: {desc}"} + + # Step 1: Deny rules (bypass-immune, checked first always) + for rule in self.rules: + if rule["behavior"] != "deny": + continue + if self._matches(rule, tool_name, tool_input): + return {"behavior": "deny", + "reason": f"Blocked by deny rule: {rule}"} + + # Step 2: Mode-based decisions + if self.mode == "plan": + # Plan mode: deny all write operations, allow reads + if tool_name in WRITE_TOOLS: + return {"behavior": "deny", + "reason": "Plan mode: write operations are blocked"} + return {"behavior": "allow", "reason": "Plan mode: read-only allowed"} + + if self.mode == "auto": + # Auto mode: auto-allow read-only tools, ask for writes + if tool_name in READ_ONLY_TOOLS or tool_name == "read_file": + return {"behavior": "allow", + "reason": "Auto mode: read-only tool auto-approved"} + # Teaching: fall through to allow rules, then ask + pass + + # Step 3: Allow rules + for rule in 
self.rules: + if rule["behavior"] != "allow": + continue + if self._matches(rule, tool_name, tool_input): + self.consecutive_denials = 0 + return {"behavior": "allow", + "reason": f"Matched allow rule: {rule}"} + + # Step 4: Ask user (default behavior for unmatched tools) + return {"behavior": "ask", + "reason": f"No rule matched for {tool_name}, asking user"} + + def ask_user(self, tool_name: str, tool_input: dict) -> bool: + """Interactive approval prompt. Returns True if approved.""" + preview = json.dumps(tool_input, ensure_ascii=False)[:200] + print(f"\n [Permission] {tool_name}: {preview}") + try: + answer = input(" Allow? (y/n/always): ").strip().lower() + except (EOFError, KeyboardInterrupt): + return False + + if answer == "always": + # Add permanent allow rule for this tool + self.rules.append({"tool": tool_name, "path": "*", "behavior": "allow"}) + self.consecutive_denials = 0 + return True + if answer in ("y", "yes"): + self.consecutive_denials = 0 + return True + + # Track denials for circuit breaker + self.consecutive_denials += 1 + if self.consecutive_denials >= self.max_consecutive_denials: + print(f" [{self.consecutive_denials} consecutive denials -- " + "consider switching to plan mode]") + return False + + def _matches(self, rule: dict, tool_name: str, tool_input: dict) -> bool: + """Check if a rule matches the tool call.""" + # Tool name match + if rule.get("tool") and rule["tool"] != "*": + if rule["tool"] != tool_name: + return False + # Path pattern match + if "path" in rule and rule["path"] != "*": + path = tool_input.get("path", "") + if not fnmatch(path, rule["path"]): + return False + # Content pattern match (for bash commands) + if "content" in rule: + command = tool_input.get("command", "") + if not fnmatch(command, rule["content"]): + return False + return True + + +# -- Tool implementations -- +def safe_path(p: str) -> Path: + path = (WORKDIR / p).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes 
workspace: {p}") + return path + + +def run_bash(command: str) -> str: + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) + out = (r.stdout + r.stderr).strip() + return out[:50000] if out else "(no output)" + except subprocess.TimeoutExpired: + return "Error: Timeout (120s)" + + +def run_read(path: str, limit: int = None) -> str: + try: + lines = safe_path(path).read_text().splitlines() + if limit and limit < len(lines): + lines = lines[:limit] + [f"... ({len(lines) - limit} more)"] + return "\n".join(lines)[:50000] + except Exception as e: + return f"Error: {e}" + + +def run_write(path: str, content: str) -> str: + try: + fp = safe_path(path) + fp.parent.mkdir(parents=True, exist_ok=True) + fp.write_text(content) + return f"Wrote {len(content)} bytes" + except Exception as e: + return f"Error: {e}" + + +def run_edit(path: str, old_text: str, new_text: str) -> str: + try: + fp = safe_path(path) + content = fp.read_text() + if old_text not in content: + return f"Error: Text not found in {path}" + fp.write_text(content.replace(old_text, new_text, 1)) + return f"Edited {path}" + except Exception as e: + return f"Error: {e}" + + +TOOL_HANDLERS = { + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "write_file": lambda **kw: run_write(kw["path"], kw["content"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), +} + +TOOLS = [ + {"name": "bash", "description": "Run a shell command.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": 
{"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, +] + +SYSTEM = f"""You are a coding agent at {WORKDIR}. Use tools to solve tasks. +The user controls permissions. Some tool calls may be denied.""" + + +def agent_loop(messages: list, perms: PermissionManager): + """ + The permission-aware agent loop. + + For each tool call: + 1. LLM requests tool use + 2. Permission pipeline checks: deny_rules -> mode -> allow_rules -> ask + 3. If allowed: execute tool, return result + 4. If denied: return rejection message to LLM + """ + while True: + response = client.messages.create( + model=MODEL, system=SYSTEM, messages=messages, + tools=TOOLS, max_tokens=8000, + ) + messages.append({"role": "assistant", "content": response.content}) + + if response.stop_reason != "tool_use": + return + + results = [] + for block in response.content: + if block.type != "tool_use": + continue + + # -- Permission check -- + decision = perms.check(block.name, block.input or {}) + + if decision["behavior"] == "deny": + output = f"Permission denied: {decision['reason']}" + print(f" [DENIED] {block.name}: {decision['reason']}") + + elif decision["behavior"] == "ask": + if perms.ask_user(block.name, block.input or {}): + handler = TOOL_HANDLERS.get(block.name) + output = handler(**(block.input or {})) if handler else f"Unknown: {block.name}" + print(f"> {block.name}: {str(output)[:200]}") + else: + output = f"Permission denied by user for {block.name}" + print(f" [USER DENIED] {block.name}") + + else: # allow + handler = TOOL_HANDLERS.get(block.name) + output = handler(**(block.input or {})) if handler else f"Unknown: {block.name}" + print(f"> {block.name}: {str(output)[:200]}") + + results.append({ + 
"type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + + messages.append({"role": "user", "content": results}) + + +if __name__ == "__main__": + # Choose permission mode at startup + print("Permission modes: default, plan, auto") + mode_input = input("Mode (default): ").strip().lower() or "default" + if mode_input not in MODES: + mode_input = "default" + + perms = PermissionManager(mode=mode_input) + print(f"[Permission mode: {mode_input}]") + + history = [] + while True: + try: + query = input("\033[36ms07 >> \033[0m") + except (EOFError, KeyboardInterrupt): + break + if query.strip().lower() in ("q", "exit", ""): + break + + # /mode command to switch modes at runtime + if query.startswith("/mode"): + parts = query.split() + if len(parts) == 2 and parts[1] in MODES: + perms.mode = parts[1] + print(f"[Switched to {parts[1]} mode]") + else: + print(f"Usage: /mode <{'|'.join(MODES)}>") + continue + + # /rules command to show current rules + if query.strip() == "/rules": + for i, rule in enumerate(perms.rules): + print(f" {i}: {rule}") + continue + + history.append({"role": "user", "content": query}) + agent_loop(history, perms) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + print(block.text) + print() diff --git a/agents/s08_hook_system.py b/agents/s08_hook_system.py new file mode 100644 index 000000000..f689989bf --- /dev/null +++ b/agents/s08_hook_system.py @@ -0,0 +1,340 @@ +#!/usr/bin/env python3 +# Harness: extensibility -- injecting behavior without touching the loop. +""" +s08_hook_system.py - Hook System + +Hooks are extension points around the main loop. +They let readers add behavior without rewriting the loop itself. 
+ +Teaching version: + - SessionStart + - PreToolUse + - PostToolUse + +Teaching exit-code contract: + - 0 -> continue + - 1 -> block + - 2 -> inject a message + +This is intentionally simpler than a production system. The goal here is to +teach the extension pattern clearly before introducing event-specific edge +cases. + +Key insight: "Extend the agent without touching the loop." +""" + +import json +import os +import subprocess +from pathlib import Path + +from anthropic import Anthropic +from dotenv import load_dotenv + +load_dotenv(override=True) + +if os.getenv("ANTHROPIC_BASE_URL"): + os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) + +WORKDIR = Path.cwd() +client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) +MODEL = os.environ["MODEL_ID"] + +# The teaching version keeps only the three clearest events. More complete +# systems can grow the event surface later. + +HOOK_EVENTS = ("PreToolUse", "PostToolUse", "SessionStart") +HOOK_TIMEOUT = 30 # seconds +# Real CC timeouts: +# TOOL_HOOK_EXECUTION_TIMEOUT_MS = 600000 (10 minutes for tool hooks) +# SESSION_END_HOOK_TIMEOUT_MS = 1500 (1.5 seconds for SessionEnd hooks) + +# Workspace trust marker. Hooks only run if this file exists (or SDK mode). +TRUST_MARKER = WORKDIR / ".claude" / ".claude_trusted" + + +class HookManager: + """ + Load and execute hooks from .hooks.json configuration. 
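As a hedged illustration, here is what a `.hooks.json` could look like for this loader -- the `{"hooks": {event: [{matcher, command}]}}` shape is inferred from this teaching file, not from any external spec:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical example config: a top-level "hooks" map with one list of
# {matcher, command} entries per event name.
config = {
    "hooks": {
        "PreToolUse": [
            # exit code 1 -> block the tool call; stderr becomes the reason
            {"matcher": "bash", "command": "echo blocked >&2; exit 1"},
        ],
        "SessionStart": [
            {"matcher": "*", "command": "echo session started"},
        ],
    }
}

with tempfile.TemporaryDirectory() as tmp:
    cfg = Path(tmp) / ".hooks.json"
    cfg.write_text(json.dumps(config, indent=2))
    loaded = json.loads(cfg.read_text())

pre = loaded.get("hooks", {}).get("PreToolUse", [])
print(len(pre), pre[0]["matcher"])  # -> 1 bash
```

Writing the config through `json.dumps` and reading it back mirrors how the manager consumes it: hooks are plain data, discovered at startup, never imported as code.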
+ + The hook manager does three simple jobs: + - load hook definitions + - run matching commands for an event + - aggregate block / message results for the caller + """ + + def __init__(self, config_path: Path = None, sdk_mode: bool = False): + self.hooks = {"PreToolUse": [], "PostToolUse": [], "SessionStart": []} + self._sdk_mode = sdk_mode + config_path = config_path or (WORKDIR / ".hooks.json") + if config_path.exists(): + try: + config = json.loads(config_path.read_text()) + for event in HOOK_EVENTS: + self.hooks[event] = config.get("hooks", {}).get(event, []) + print(f"[Hooks loaded from {config_path}]") + except Exception as e: + print(f"[Hook config error: {e}]") + + def _check_workspace_trust(self) -> bool: + """ + Check whether the current workspace is trusted. + + The teaching version uses a simple trust marker file. + In SDK mode, trust is treated as implicit. + """ + if self._sdk_mode: + return True + return TRUST_MARKER.exists() + + def run_hooks(self, event: str, context: dict = None) -> dict: + """ + Execute all hooks for an event. 
+
+        Returns: {"blocked": bool, "messages": list[str]}
+            - blocked: True if any hook returned exit code 1
+            - block_reason: stderr of the blocking hook (set when blocked)
+            - messages: stderr content from exit-code-2 hooks (to inject)
+        """
+        result = {"blocked": False, "messages": []}
+
+        # Trust gate: refuse to run hooks in untrusted workspaces
+        if not self._check_workspace_trust():
+            return result
+
+        hooks = self.hooks.get(event, [])
+
+        for hook_def in hooks:
+            # Check matcher (tool name filter for PreToolUse/PostToolUse)
+            matcher = hook_def.get("matcher")
+            if matcher and context:
+                tool_name = context.get("tool_name", "")
+                if matcher != "*" and matcher != tool_name:
+                    continue
+
+            command = hook_def.get("command", "")
+            if not command:
+                continue
+
+            # Build environment with hook context
+            env = dict(os.environ)
+            if context:
+                env["HOOK_EVENT"] = event
+                env["HOOK_TOOL_NAME"] = context.get("tool_name", "")
+                env["HOOK_TOOL_INPUT"] = json.dumps(
+                    context.get("tool_input", {}), ensure_ascii=False)[:10000]
+                if "tool_output" in context:
+                    env["HOOK_TOOL_OUTPUT"] = str(
+                        context["tool_output"])[:10000]
+
+            try:
+                r = subprocess.run(
+                    command, shell=True, cwd=WORKDIR, env=env,
+                    capture_output=True, text=True, timeout=HOOK_TIMEOUT,
+                )
+
+                if r.returncode == 0:
+                    # Exit 0: continue; surface any stdout for visibility
+                    if r.stdout.strip():
+                        print(f" [hook:{event}] {r.stdout.strip()[:100]}")
+
+                    # Optional structured stdout: small extension point that
+                    # keeps the teaching contract simple. 
+ try: + hook_output = json.loads(r.stdout) + if "updatedInput" in hook_output and context: + context["tool_input"] = hook_output["updatedInput"] + if "additionalContext" in hook_output: + result["messages"].append( + hook_output["additionalContext"]) + if "permissionDecision" in hook_output: + result["permission_override"] = ( + hook_output["permissionDecision"]) + except (json.JSONDecodeError, TypeError): + pass # stdout was not JSON -- normal for simple hooks + + elif r.returncode == 1: + # Block execution + result["blocked"] = True + reason = r.stderr.strip() or "Blocked by hook" + result["block_reason"] = reason + print(f" [hook:{event}] BLOCKED: {reason[:200]}") + + elif r.returncode == 2: + # Inject message + msg = r.stderr.strip() + if msg: + result["messages"].append(msg) + print(f" [hook:{event}] INJECT: {msg[:200]}") + + except subprocess.TimeoutExpired: + print(f" [hook:{event}] Timeout ({HOOK_TIMEOUT}s)") + except Exception as e: + print(f" [hook:{event}] Error: {e}") + + return result + + +# -- Tool implementations (same as s02) -- +def safe_path(p: str) -> Path: + path = (WORKDIR / p).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes workspace: {p}") + return path + + +def run_bash(command: str) -> str: + dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] + if any(d in command for d in dangerous): + return "Error: Dangerous command blocked" + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) + out = (r.stdout + r.stderr).strip() + return out[:50000] if out else "(no output)" + except subprocess.TimeoutExpired: + return "Error: Timeout (120s)" + + +def run_read(path: str, limit: int = None) -> str: + try: + lines = safe_path(path).read_text().splitlines() + if limit and limit < len(lines): + lines = lines[:limit] + [f"... 
({len(lines) - limit} more)"] + return "\n".join(lines)[:50000] + except Exception as e: + return f"Error: {e}" + + +def run_write(path: str, content: str) -> str: + try: + fp = safe_path(path) + fp.parent.mkdir(parents=True, exist_ok=True) + fp.write_text(content) + return f"Wrote {len(content)} bytes" + except Exception as e: + return f"Error: {e}" + + +def run_edit(path: str, old_text: str, new_text: str) -> str: + try: + fp = safe_path(path) + content = fp.read_text() + if old_text not in content: + return f"Error: Text not found in {path}" + fp.write_text(content.replace(old_text, new_text, 1)) + return f"Edited {path}" + except Exception as e: + return f"Error: {e}" + + +TOOL_HANDLERS = { + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "write_file": lambda **kw: run_write(kw["path"], kw["content"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), +} + +TOOLS = [ + {"name": "bash", "description": "Run a shell command.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, +] + +SYSTEM = f"You are a coding agent at {WORKDIR}. Use tools to solve tasks." + + +def agent_loop(messages: list, hooks: HookManager): + """ + The hook-aware agent loop. 
+
+    The teaching version keeps only the clearest integration points:
+    SessionStart, PreToolUse, execute tool, PostToolUse.
+    """
+    while True:
+        response = client.messages.create(
+            model=MODEL, system=SYSTEM, messages=messages,
+            tools=TOOLS, max_tokens=8000,
+        )
+        messages.append({"role": "assistant", "content": response.content})
+
+        if response.stop_reason != "tool_use":
+            return
+
+        results = []
+        for block in response.content:
+            if block.type != "tool_use":
+                continue
+
+            tool_input = dict(block.input or {})
+            ctx = {"tool_name": block.name, "tool_input": tool_input}
+
+            # -- PreToolUse hooks --
+            pre_result = hooks.run_hooks("PreToolUse", ctx)
+
+            # Inject hook messages into results
+            for msg in pre_result.get("messages", []):
+                results.append({
+                    "type": "tool_result", "tool_use_id": block.id,
+                    "content": f"[Hook message]: {msg}",
+                })
+
+            if pre_result.get("blocked"):
+                reason = pre_result.get("block_reason", "Blocked by hook")
+                output = f"Tool blocked by PreToolUse hook: {reason}"
+                results.append({
+                    "type": "tool_result", "tool_use_id": block.id,
+                    "content": output,
+                })
+                continue
+
+            # -- Execute tool --
+            # Read the input back from ctx: a PreToolUse hook may have
+            # replaced it via structured "updatedInput" output.
+            handler = TOOL_HANDLERS.get(block.name)
+            try:
+                output = handler(**ctx["tool_input"]) if handler else f"Unknown: {block.name}"
+            except Exception as e:
+                output = f"Error: {e}"
+            print(f"> {block.name}: {str(output)[:200]}")
+
+            # -- PostToolUse hooks --
+            ctx["tool_output"] = output
+            post_result = hooks.run_hooks("PostToolUse", ctx)
+
+            # Inject post-hook messages
+            for msg in post_result.get("messages", []):
+                output += f"\n[Hook note]: {msg}"
+
+            results.append({
+                "type": "tool_result", "tool_use_id": block.id,
+                "content": str(output),
+            })
+
+        messages.append({"role": "user", "content": results})
+
+
+if __name__ == "__main__":
+    hooks = HookManager()
+
+    # Fire SessionStart hooks
+    hooks.run_hooks("SessionStart", {"tool_name": "", "tool_input": {}})
+
+    history = []
+    while True:
+        try:
+            query = input("\033[36ms08 >> \033[0m")
+        except 
(EOFError, KeyboardInterrupt): + break + if query.strip().lower() in ("q", "exit", ""): + break + history.append({"role": "user", "content": query}) + agent_loop(history, hooks) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + print(block.text) + print() diff --git a/agents/s09_memory_system.py b/agents/s09_memory_system.py new file mode 100644 index 000000000..32dd0b7b5 --- /dev/null +++ b/agents/s09_memory_system.py @@ -0,0 +1,534 @@ +#!/usr/bin/env python3 +# Harness: persistence -- remembering across the session boundary. +""" +s09_memory_system.py - Memory System + +This teaching version focuses on one core idea: +some information should survive the current conversation, but not everything +belongs in memory. + +Use memory for: + - user preferences + - repeated user feedback + - project facts that are NOT obvious from the current code + - pointers to external resources + +Do NOT use memory for: + - code structure that can be re-read from the repo + - temporary task state + - secrets + +Storage layout: + .memory/ + MEMORY.md + prefer_tabs.md + review_style.md + incident_board.md + +Each memory is a small Markdown file with frontmatter. +The agent can save a memory through save_memory(), and the memory index +is rebuilt after each write. + +An optional "Dream" pass can later consolidate, deduplicate, and prune +stored memories. It is useful, but it is not the first thing readers need +to understand. + +Key insight: "Memory only stores cross-session information that is still +worth recalling later and is not easy to re-derive from the current repo." 
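A hedged sketch of the storage format described above: one Markdown file per memory, with `---` frontmatter carrying name/description/type. The field names and the example memory are illustrative, chosen to match this teaching file:

```python
import re

# Hypothetical memory file content (e.g. .memory/prefer_tabs.md).
memory = (
    "---\n"
    "name: prefer_tabs\n"
    "description: User prefers tabs over spaces\n"
    "type: user\n"
    "---\n"
    "Always use tabs for indentation in this project.\n"
)

# Split into the frontmatter header and the Markdown body.
match = re.match(r"^---\s*\n(.*?)\n---\s*\n(.*)", memory, re.DOTALL)
header, body = match.group(1), match.group(2)

# Parse "key: value" lines from the header.
fields = {}
for line in header.splitlines():
    key, _, value = line.partition(":")
    fields[key.strip()] = value.strip()

print(fields["name"], fields["type"])  # -> prefer_tabs user
```

Keeping each memory as plain Markdown means the store stays inspectable and editable by hand, which matters when the agent starts writing to it autonomously.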
+""" + +import json +import os +import re +import subprocess +from pathlib import Path + +from anthropic import Anthropic +from dotenv import load_dotenv + +load_dotenv(override=True) + +if os.getenv("ANTHROPIC_BASE_URL"): + os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) + +WORKDIR = Path.cwd() +client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) +MODEL = os.environ["MODEL_ID"] + +MEMORY_DIR = WORKDIR / ".memory" +MEMORY_INDEX = MEMORY_DIR / "MEMORY.md" +MEMORY_TYPES = ("user", "feedback", "project", "reference") +MAX_INDEX_LINES = 200 + + +class MemoryManager: + """ + Load, build, and save persistent memories across sessions. + + The teaching version keeps memory explicit: + one Markdown file per memory, plus one compact index file. + """ + + def __init__(self, memory_dir: Path = None): + self.memory_dir = memory_dir or MEMORY_DIR + self.memories = {} # name -> {description, type, content} + + def load_all(self): + """Load MEMORY.md index and all individual memory files.""" + self.memories = {} + if not self.memory_dir.exists(): + return + + # Scan all .md files except MEMORY.md + for md_file in sorted(self.memory_dir.glob("*.md")): + if md_file.name == "MEMORY.md": + continue + parsed = self._parse_frontmatter(md_file.read_text()) + if parsed: + name = parsed.get("name", md_file.stem) + self.memories[name] = { + "description": parsed.get("description", ""), + "type": parsed.get("type", "project"), + "content": parsed.get("content", ""), + "file": md_file.name, + } + + count = len(self.memories) + if count > 0: + print(f"[Memory loaded: {count} memories from {self.memory_dir}]") + + def load_memory_prompt(self) -> str: + """Build a memory section for injection into the system prompt.""" + if not self.memories: + return "" + + sections = [] + sections.append("# Memories (persistent across sessions)") + sections.append("") + + # Group by type for readability + for mem_type in MEMORY_TYPES: + typed = {k: v for k, v in self.memories.items() if v["type"] == 
mem_type} + if not typed: + continue + sections.append(f"## [{mem_type}]") + for name, mem in typed.items(): + sections.append(f"### {name}: {mem['description']}") + if mem["content"].strip(): + sections.append(mem["content"].strip()) + sections.append("") + + return "\n".join(sections) + + def save_memory(self, name: str, description: str, mem_type: str, content: str) -> str: + """ + Save a memory to disk and update the index. + + Returns a status message. + """ + if mem_type not in MEMORY_TYPES: + return f"Error: type must be one of {MEMORY_TYPES}" + + # Sanitize name for filename + safe_name = re.sub(r"[^a-zA-Z0-9_-]", "_", name.lower()) + if not safe_name: + return "Error: invalid memory name" + + self.memory_dir.mkdir(parents=True, exist_ok=True) + + # Write individual memory file with frontmatter + frontmatter = ( + f"---\n" + f"name: {name}\n" + f"description: {description}\n" + f"type: {mem_type}\n" + f"---\n" + f"{content}\n" + ) + file_name = f"{safe_name}.md" + file_path = self.memory_dir / file_name + file_path.write_text(frontmatter) + + # Update in-memory store + self.memories[name] = { + "description": description, + "type": mem_type, + "content": content, + "file": file_name, + } + + # Rebuild MEMORY.md index + self._rebuild_index() + + return f"Saved memory '{name}' [{mem_type}] to {file_path.relative_to(WORKDIR)}" + + def _rebuild_index(self): + """Rebuild MEMORY.md from current in-memory state, capped at 200 lines.""" + lines = ["# Memory Index", ""] + for name, mem in self.memories.items(): + lines.append(f"- {name}: {mem['description']} [{mem['type']}]") + if len(lines) >= MAX_INDEX_LINES: + lines.append(f"... 
(truncated at {MAX_INDEX_LINES} lines)") + break + self.memory_dir.mkdir(parents=True, exist_ok=True) + MEMORY_INDEX.write_text("\n".join(lines) + "\n") + + def _parse_frontmatter(self, text: str) -> dict | None: + """Parse --- delimited frontmatter + body content.""" + match = re.match(r"^---\s*\n(.*?)\n---\s*\n(.*)", text, re.DOTALL) + if not match: + return None + header, body = match.group(1), match.group(2) + result = {"content": body.strip()} + for line in header.splitlines(): + if ":" in line: + key, _, value = line.partition(":") + result[key.strip()] = value.strip() + return result + + +class DreamConsolidator: + """ + Auto-consolidation of memories between sessions ("Dream"). + + This is an optional later-stage feature. Its job is to prevent the memory + store from growing into a noisy pile by merging, deduplicating, and + pruning entries over time. + """ + + COOLDOWN_SECONDS = 86400 # 24 hours between consolidations + SCAN_THROTTLE_SECONDS = 600 # 10 minutes between scan attempts + MIN_SESSION_COUNT = 5 # need enough data to consolidate + LOCK_STALE_SECONDS = 3600 # PID lock considered stale after 1 hour + + PHASES = [ + "Orient: scan MEMORY.md index for structure and categories", + "Gather: read individual memory files for full content", + "Consolidate: merge related memories, remove stale entries", + "Prune: enforce 200-line limit on MEMORY.md index", + ] + + def __init__(self, memory_dir: Path = None): + self.memory_dir = memory_dir or MEMORY_DIR + self.lock_file = self.memory_dir / ".dream_lock" + self.enabled = True + self.mode = "default" + self.last_consolidation_time = 0.0 + self.last_scan_time = 0.0 + self.session_count = 0 + + def should_consolidate(self) -> tuple[bool, str]: + """ + Check 7 gates in sequence. All must pass. + Returns (can_run, reason) where reason explains the first failed gate. 
+ """ + import time + + now = time.time() + + # Gate 1: enabled flag + if not self.enabled: + return False, "Gate 1: consolidation is disabled" + + # Gate 2: memory directory exists and has memory files + if not self.memory_dir.exists(): + return False, "Gate 2: memory directory does not exist" + memory_files = list(self.memory_dir.glob("*.md")) + # Exclude MEMORY.md itself from the count + memory_files = [f for f in memory_files if f.name != "MEMORY.md"] + if not memory_files: + return False, "Gate 2: no memory files found" + + # Gate 3: not in plan mode (only consolidate in active modes) + if self.mode == "plan": + return False, "Gate 3: plan mode does not allow consolidation" + + # Gate 4: 24-hour cooldown since last consolidation + time_since_last = now - self.last_consolidation_time + if time_since_last < self.COOLDOWN_SECONDS: + remaining = int(self.COOLDOWN_SECONDS - time_since_last) + return False, f"Gate 4: cooldown active, {remaining}s remaining" + + # Gate 5: 10-minute throttle since last scan attempt + time_since_scan = now - self.last_scan_time + if time_since_scan < self.SCAN_THROTTLE_SECONDS: + remaining = int(self.SCAN_THROTTLE_SECONDS - time_since_scan) + return False, f"Gate 5: scan throttle active, {remaining}s remaining" + + # Gate 6: need at least 5 sessions worth of data + if self.session_count < self.MIN_SESSION_COUNT: + return False, f"Gate 6: only {self.session_count} sessions, need {self.MIN_SESSION_COUNT}" + + # Gate 7: no active lock file (check PID staleness) + if not self._acquire_lock(): + return False, "Gate 7: lock held by another process" + + return True, "All 7 gates passed" + + def consolidate(self) -> list[str]: + """ + Run the 4-phase consolidation process. + + The teaching version returns phase descriptions to make the flow + visible without requiring an extra LLM pass here. 
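The seven-gate check above is an instance of a generic pattern: an ordered list of named predicates where the first failure short-circuits with its reason. A minimal illustrative sketch (not this file's exact code):

```python
# Run named gates in order; the first failing gate wins.
def run_gates(gates):
    for name, ok in gates:
        if not ok():
            return False, f"blocked by gate: {name}"
    return True, "all gates passed"

state = {"enabled": True, "sessions": 2, "min_sessions": 5}
gates = [
    ("enabled", lambda: state["enabled"]),
    ("enough sessions", lambda: state["sessions"] >= state["min_sessions"]),
]
print(run_gates(gates))  # -> (False, 'blocked by gate: enough sessions')
```

Ordering the cheap gates first (flags, directory existence) before the expensive ones (lock acquisition) keeps the no-op path fast, which matters for a check that runs on every session.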
+ """ + import time + + can_run, reason = self.should_consolidate() + if not can_run: + print(f"[Dream] Cannot consolidate: {reason}") + return [] + + print("[Dream] Starting consolidation...") + self.last_scan_time = time.time() + + completed_phases = [] + for i, phase in enumerate(self.PHASES, 1): + print(f"[Dream] Phase {i}/4: {phase}") + completed_phases.append(phase) + + self.last_consolidation_time = time.time() + self._release_lock() + print(f"[Dream] Consolidation complete: {len(completed_phases)} phases executed") + return completed_phases + + def _acquire_lock(self) -> bool: + """ + Acquire a PID-based lock file. Returns False if locked by another + live process. Stale locks (older than LOCK_STALE_SECONDS) are removed. + """ + import time + + if self.lock_file.exists(): + try: + lock_data = self.lock_file.read_text().strip() + pid_str, timestamp_str = lock_data.split(":", 1) + pid = int(pid_str) + lock_time = float(timestamp_str) + + # Check if lock is stale + if (time.time() - lock_time) > self.LOCK_STALE_SECONDS: + print(f"[Dream] Removing stale lock from PID {pid}") + self.lock_file.unlink() + else: + # Check if owning process is still alive + try: + os.kill(pid, 0) + return False # process alive, lock is valid + except OSError: + print(f"[Dream] Removing lock from dead PID {pid}") + self.lock_file.unlink() + except (ValueError, OSError): + # Corrupted lock file, remove it + self.lock_file.unlink(missing_ok=True) + + # Write new lock + try: + self.memory_dir.mkdir(parents=True, exist_ok=True) + self.lock_file.write_text(f"{os.getpid()}:{time.time()}") + return True + except OSError: + return False + + def _release_lock(self): + """Release the lock file if we own it.""" + try: + if self.lock_file.exists(): + lock_data = self.lock_file.read_text().strip() + pid_str = lock_data.split(":")[0] + if int(pid_str) == os.getpid(): + self.lock_file.unlink() + except (ValueError, OSError): + pass + + +# -- Tool implementations -- +def safe_path(p: str) -> Path: + 
path = (WORKDIR / p).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes workspace: {p}") + return path + + +def run_bash(command: str) -> str: + dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] + if any(d in command for d in dangerous): + return "Error: Dangerous command blocked" + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) + out = (r.stdout + r.stderr).strip() + return out[:50000] if out else "(no output)" + except subprocess.TimeoutExpired: + return "Error: Timeout (120s)" + + +def run_read(path: str, limit: int = None) -> str: + try: + lines = safe_path(path).read_text().splitlines() + if limit and limit < len(lines): + lines = lines[:limit] + [f"... ({len(lines) - limit} more)"] + return "\n".join(lines)[:50000] + except Exception as e: + return f"Error: {e}" + + +def run_write(path: str, content: str) -> str: + try: + fp = safe_path(path) + fp.parent.mkdir(parents=True, exist_ok=True) + fp.write_text(content) + return f"Wrote {len(content)} bytes" + except Exception as e: + return f"Error: {e}" + + +def run_edit(path: str, old_text: str, new_text: str) -> str: + try: + fp = safe_path(path) + content = fp.read_text() + if old_text not in content: + return f"Error: Text not found in {path}" + fp.write_text(content.replace(old_text, new_text, 1)) + return f"Edited {path}" + except Exception as e: + return f"Error: {e}" + + +# Global memory manager +memory_mgr = MemoryManager() + + +def run_save_memory(name: str, description: str, mem_type: str, content: str) -> str: + return memory_mgr.save_memory(name, description, mem_type, content) + + +TOOL_HANDLERS = { + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "write_file": lambda **kw: run_write(kw["path"], kw["content"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), + "save_memory": lambda **kw: 
run_save_memory(kw["name"], kw["description"], kw["type"], kw["content"]), +} + +TOOLS = [ + {"name": "bash", "description": "Run a shell command.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, + {"name": "save_memory", "description": "Save a persistent memory that survives across sessions.", + "input_schema": {"type": "object", "properties": { + "name": {"type": "string", "description": "Short identifier (e.g. 
prefer_tabs, db_schema)"}, + "description": {"type": "string", "description": "One-line summary of what this memory captures"}, + "type": {"type": "string", "enum": ["user", "feedback", "project", "reference"], + "description": "user=preferences, feedback=corrections, project=non-obvious project conventions or decision reasons, reference=external resource pointers"}, + "content": {"type": "string", "description": "Full memory content (multi-line OK)"}, + }, "required": ["name", "description", "type", "content"]}}, +] + +MEMORY_GUIDANCE = """ +When to save memories: +- User states a preference ("I like tabs", "always use pytest") -> type: user +- User corrects you ("don't do X", "that was wrong because...") -> type: feedback +- You learn a project fact that is not easy to infer from current code alone + (for example: a rule exists because of compliance, or a legacy module must + stay untouched for business reasons) -> type: project +- You learn where an external resource lives (ticket board, dashboard, docs URL) + -> type: reference + +When NOT to save: +- Anything easily derivable from code (function signatures, file structure, directory layout) +- Temporary task state (current branch, open PR numbers, current TODOs) +- Secrets or credentials (API keys, passwords) +""" + + +def build_system_prompt() -> str: + """Assemble system prompt with memory content included.""" + parts = [f"You are a coding agent at {WORKDIR}. Use tools to solve tasks."] + + # Inject memory content if available + memory_section = memory_mgr.load_memory_prompt() + if memory_section: + parts.append(memory_section) + + parts.append(MEMORY_GUIDANCE) + return "\n\n".join(parts) + + +def agent_loop(messages: list): + """ + Agent loop with memory-aware system prompt. + + The system prompt is rebuilt each call so newly saved memories + are visible in the next LLM turn within the same session. 
+ """ + while True: + system = build_system_prompt() + response = client.messages.create( + model=MODEL, system=system, messages=messages, + tools=TOOLS, max_tokens=8000, + ) + messages.append({"role": "assistant", "content": response.content}) + + if response.stop_reason != "tool_use": + return + + results = [] + for block in response.content: + if block.type != "tool_use": + continue + handler = TOOL_HANDLERS.get(block.name) + try: + output = handler(**(block.input or {})) if handler else f"Unknown: {block.name}" + except Exception as e: + output = f"Error: {e}" + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + + messages.append({"role": "user", "content": results}) + + +if __name__ == "__main__": + # Load existing memories at session start + memory_mgr.load_all() + mem_count = len(memory_mgr.memories) + if mem_count: + print(f"[{mem_count} memories loaded into context]") + else: + print("[No existing memories. 
The agent can create them with save_memory.]") + + history = [] + while True: + try: + query = input("\033[36ms09 >> \033[0m") + except (EOFError, KeyboardInterrupt): + break + if query.strip().lower() in ("q", "exit", ""): + break + + # /memories command to list current memories + if query.strip() == "/memories": + if memory_mgr.memories: + for name, mem in memory_mgr.memories.items(): + print(f" [{mem['type']}] {name}: {mem['description']}") + else: + print(" (no memories)") + continue + + history.append({"role": "user", "content": query}) + agent_loop(history) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + print(block.text) + print() diff --git a/agents/s10_system_prompt.py b/agents/s10_system_prompt.py new file mode 100644 index 000000000..617fd4439 --- /dev/null +++ b/agents/s10_system_prompt.py @@ -0,0 +1,389 @@ +#!/usr/bin/env python3 +# Harness: assembly -- the system prompt is a pipeline, not a string. +""" +s10_system_prompt.py - System Prompt Construction + +This chapter teaches one core idea: +the system prompt should be assembled from clear sections, not written as one +giant hardcoded blob. + +Teaching pipeline: + 1. core instructions + 2. tool listing + 3. skill metadata + 4. memory section + 5. CLAUDE.md chain + 6. dynamic context + +The builder keeps stable information separate from information that changes +often. A simple DYNAMIC_BOUNDARY marker makes that split visible. + +Per-turn reminders are even more dynamic. They are better injected as a +separate user-role system reminder than mixed blindly into the stable prompt. + +Key insight: "Prompt construction is a pipeline with boundaries, not one +big string." 
+""" + +import datetime +import json +import os +import re +import subprocess +from pathlib import Path + +from anthropic import Anthropic +from dotenv import load_dotenv + +load_dotenv(override=True) + +if os.getenv("ANTHROPIC_BASE_URL"): + os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) + +WORKDIR = Path.cwd() +client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) +MODEL = os.environ["MODEL_ID"] + +DYNAMIC_BOUNDARY = "=== DYNAMIC_BOUNDARY ===" + + +class SystemPromptBuilder: + """ + Assemble the system prompt from independent sections. + + The teaching goal here is clarity: + each section has one source and one responsibility. + + That makes the prompt easier to reason about, easier to test, and easier + to evolve as the agent grows new capabilities. + """ + + def __init__(self, workdir: Path = None, tools: list = None): + self.workdir = workdir or WORKDIR + self.tools = tools or [] + self.skills_dir = self.workdir / "skills" + self.memory_dir = self.workdir / ".memory" + + # -- Section 1: Core instructions -- + def _build_core(self) -> str: + return ( + f"You are a coding agent operating in {self.workdir}.\n" + "Use the provided tools to explore, read, write, and edit files.\n" + "Always verify before assuming. Prefer reading files over guessing." 
+        )
+
+    # -- Section 2: Tool listings --
+    def _build_tool_listing(self) -> str:
+        if not self.tools:
+            return ""
+        lines = ["# Available tools"]
+        for tool in self.tools:
+            props = tool.get("input_schema", {}).get("properties", {})
+            params = ", ".join(props.keys())
+            lines.append(f"- {tool['name']}({params}): {tool['description']}")
+        return "\n".join(lines)
+
+    # -- Section 3: Skill metadata (layer 1 from s05 concept) --
+    def _build_skill_listing(self) -> str:
+        if not self.skills_dir.exists():
+            return ""
+        skills = []
+        for skill_dir in sorted(self.skills_dir.iterdir()):
+            skill_md = skill_dir / "SKILL.md"
+            if not skill_md.exists():
+                continue
+            text = skill_md.read_text()
+            # Parse frontmatter for name + description
+            match = re.match(r"^---\s*\n(.*?)\n---", text, re.DOTALL)
+            if not match:
+                continue
+            meta = {}
+            for line in match.group(1).splitlines():
+                if ":" in line:
+                    k, _, v = line.partition(":")
+                    meta[k.strip()] = v.strip()
+            name = meta.get("name", skill_dir.name)
+            desc = meta.get("description", "")
+            skills.append(f"- {name}: {desc}")
+        if not skills:
+            return ""
+        return "# Available skills\n" + "\n".join(skills)
+
+    # -- Section 4: Memory content --
+    def _build_memory_section(self) -> str:
+        if not self.memory_dir.exists():
+            return ""
+        memories = []
+        for md_file in sorted(self.memory_dir.glob("*.md")):
+            if md_file.name == "MEMORY.md":
+                continue
+            text = md_file.read_text()
+            match = re.match(r"^---\s*\n(.*?)\n---\s*\n(.*)", text, re.DOTALL)
+            if not match:
+                continue
+            header, body = match.group(1), match.group(2).strip()
+            meta = {}
+            for line in header.splitlines():
+                if ":" in line:
+                    k, _, v = line.partition(":")
+                    meta[k.strip()] = v.strip()
+            name = meta.get("name", md_file.stem)
+            mem_type = meta.get("type", "project")
+            desc = meta.get("description", "")
+            memories.append(f"[{mem_type}] {name}: {desc}\n{body}")
+        if not memories:
+            return ""
+        return "# Memories (persistent)\n\n" + "\n\n".join(memories)
+
+    # -- Section 5: CLAUDE.md chain --
+    def _build_claude_md(self) -> str:
+        """
+        Load CLAUDE.md files in priority order (all are included):
+        1. ~/.claude/CLAUDE.md (user-global instructions)
+        2. <workdir>/CLAUDE.md (project instructions)
+        3. <cwd>/CLAUDE.md (directory-specific instructions)
+        """
+        sources = []
+
+        # User-global
+        user_claude = Path.home() / ".claude" / "CLAUDE.md"
+        if user_claude.exists():
+            sources.append(("user global (~/.claude/CLAUDE.md)", user_claude.read_text()))
+
+        # Project root
+        project_claude = self.workdir / "CLAUDE.md"
+        if project_claude.exists():
+            sources.append(("project root (CLAUDE.md)", project_claude.read_text()))
+
+        # Subdirectory -- in real CC, this walks from cwd up to project root
+        # Teaching: check cwd if different from workdir
+        cwd = Path.cwd()
+        if cwd != self.workdir:
+            subdir_claude = cwd / "CLAUDE.md"
+            if subdir_claude.exists():
+                sources.append((f"subdir ({cwd.name}/CLAUDE.md)", subdir_claude.read_text()))
+
+        if not sources:
+            return ""
+        parts = ["# CLAUDE.md instructions"]
+        for label, content in sources:
+            parts.append(f"## From {label}")
+            parts.append(content.strip())
+        return "\n\n".join(parts)
+
+    # -- Section 6: Dynamic context --
+    def _build_dynamic_context(self) -> str:
+        lines = [
+            f"Current date: {datetime.date.today().isoformat()}",
+            f"Working directory: {self.workdir}",
+            f"Model: {MODEL}",
+            f"Platform: {os.uname().sysname}",
+        ]
+        return "# Dynamic context\n" + "\n".join(lines)
+
+    # -- Assemble all sections --
+    def build(self) -> str:
+        """
+        Assemble the full system prompt from all sections.
+
+        Static sections (1-5) are separated from dynamic (6) by
+        the DYNAMIC_BOUNDARY marker. In real CC, the static prefix
+        is cached across turns to save prompt tokens.
+        """
+        sections = []
+
+        core = self._build_core()
+        if core:
+            sections.append(core)
+
+        tools = self._build_tool_listing()
+        if tools:
+            sections.append(tools)
+
+        skills = self._build_skill_listing()
+        if skills:
+            sections.append(skills)
+
+        memory = self._build_memory_section()
+        if memory:
+            sections.append(memory)
+
+        claude_md = self._build_claude_md()
+        if claude_md:
+            sections.append(claude_md)
+
+        # Static/dynamic boundary
+        sections.append(DYNAMIC_BOUNDARY)
+
+        dynamic = self._build_dynamic_context()
+        if dynamic:
+            sections.append(dynamic)
+
+        return "\n\n".join(sections)
+
+
+def build_system_reminder(extra: str = None) -> dict:
+    """
+    Build a system-reminder user message for per-turn dynamic content.
+
+    The teaching version keeps reminders outside the stable system prompt so
+    short-lived context does not get mixed into the long-lived instructions.
+    """
+    parts = []
+    if extra:
+        parts.append(extra)
+    if not parts:
+        return None
+    # Wrap in <system-reminder> tags so the model can tell transient
+    # injected context apart from the user's own words.
+    content = "<system-reminder>\n" + "\n".join(parts) + "\n</system-reminder>"
+    return {"role": "user", "content": content}
+
+
+# -- Tool implementations --
+def safe_path(p: str) -> Path:
+    path = (WORKDIR / p).resolve()
+    if not path.is_relative_to(WORKDIR):
+        raise ValueError(f"Path escapes workspace: {p}")
+    return path
+
+
+def run_bash(command: str) -> str:
+    dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"]
+    if any(d in command for d in dangerous):
+        return "Error: Dangerous command blocked"
+    try:
+        r = subprocess.run(command, shell=True, cwd=WORKDIR,
+                           capture_output=True, text=True, timeout=120)
+        out = (r.stdout + r.stderr).strip()
+        return out[:50000] if out else "(no output)"
+    except subprocess.TimeoutExpired:
+        return "Error: Timeout (120s)"
+
+
+def run_read(path: str, limit: int = None) -> str:
+    try:
+        lines = safe_path(path).read_text().splitlines()
+        if limit and limit < len(lines):
+            lines = lines[:limit] + [f"... ({len(lines) - limit} more)"]
+        return "\n".join(lines)[:50000]
+    except Exception as e:
+        return f"Error: {e}"
+
+
+def run_write(path: str, content: str) -> str:
+    try:
+        fp = safe_path(path)
+        fp.parent.mkdir(parents=True, exist_ok=True)
+        fp.write_text(content)
+        return f"Wrote {len(content)} bytes"
+    except Exception as e:
+        return f"Error: {e}"
+
+
+def run_edit(path: str, old_text: str, new_text: str) -> str:
+    try:
+        fp = safe_path(path)
+        content = fp.read_text()
+        if old_text not in content:
+            return f"Error: Text not found in {path}"
+        fp.write_text(content.replace(old_text, new_text, 1))
+        return f"Edited {path}"
+    except Exception as e:
+        return f"Error: {e}"
+
+
+TOOL_HANDLERS = {
+    "bash": lambda **kw: run_bash(kw["command"]),
+    "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")),
+    "write_file": lambda **kw: run_write(kw["path"], kw["content"]),
+    "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]),
+}
+
+TOOLS = [
+    {"name": "bash", "description": "Run a shell command.",
+     "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}},
+    {"name": "read_file", "description": "Read file contents.",
+     "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}},
+    {"name": "write_file", "description": "Write content to file.",
+     "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}},
+    {"name": "edit_file", "description": "Replace exact text in file.",
+     "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}},
+]
+
+# Global prompt builder
+prompt_builder = SystemPromptBuilder(workdir=WORKDIR, tools=TOOLS)
+
+
+def agent_loop(messages: list):
+    """
+    Agent loop with assembled
system prompt. + + The system prompt is rebuilt each iteration. In real CC, the static + prefix is cached and only the dynamic suffix changes per turn. + """ + while True: + system = prompt_builder.build() + response = client.messages.create( + model=MODEL, system=system, messages=messages, + tools=TOOLS, max_tokens=8000, + ) + messages.append({"role": "assistant", "content": response.content}) + + if response.stop_reason != "tool_use": + return + + results = [] + for block in response.content: + if block.type != "tool_use": + continue + handler = TOOL_HANDLERS.get(block.name) + try: + output = handler(**(block.input or {})) if handler else f"Unknown: {block.name}" + except Exception as e: + output = f"Error: {e}" + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + + messages.append({"role": "user", "content": results}) + + +if __name__ == "__main__": + # Show the assembled prompt at startup for educational purposes + full_prompt = prompt_builder.build() + section_count = full_prompt.count("\n# ") + print(f"[System prompt assembled: {len(full_prompt)} chars, ~{section_count} sections]") + + # /prompt command shows the full assembled prompt + history = [] + while True: + try: + query = input("\033[36ms10 >> \033[0m") + except (EOFError, KeyboardInterrupt): + break + if query.strip().lower() in ("q", "exit", ""): + break + + if query.strip() == "/prompt": + print("--- System Prompt ---") + print(prompt_builder.build()) + print("--- End ---") + continue + + if query.strip() == "/sections": + prompt = prompt_builder.build() + for line in prompt.splitlines(): + if line.startswith("# ") or line == DYNAMIC_BOUNDARY: + print(f" {line}") + continue + + history.append({"role": "user", "content": query}) + agent_loop(history) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + 
print(block.text) + print() diff --git a/agents/s11_error_recovery.py b/agents/s11_error_recovery.py new file mode 100644 index 000000000..652954052 --- /dev/null +++ b/agents/s11_error_recovery.py @@ -0,0 +1,315 @@ +#!/usr/bin/env python3 +# Harness: resilience -- a robust agent recovers instead of crashing. +""" +s11_error_recovery.py - Error Recovery + +Teaching demo of three recovery paths: + +- continue when output is truncated +- compact when context grows too large +- back off when transport errors are temporary + + LLM response + | + v + [Check stop_reason] + | + +-- "max_tokens" ----> [Strategy 1: max_output_tokens recovery] + | Inject continuation message: + | "Output limit hit. Continue directly." + | Retry up to MAX_RECOVERY_ATTEMPTS (3). + | Counter: max_output_recovery_count + | + +-- API error -------> [Check error type] + | | + | +-- prompt_too_long --> [Strategy 2: compact + retry] + | | Trigger auto_compact (LLM summary). + | | Replace history with summary. + | | Retry the turn. + | | + | +-- connection/rate --> [Strategy 3: backoff retry] + | Exponential backoff: base * 2^attempt + jitter + | Up to 3 retries. + | + +-- "end_turn" -----> [Normal exit] + + Recovery priority (first match wins): + 1. max_tokens -> inject continuation, retry + 2. prompt_too_long -> compact, retry + 3. connection error -> backoff, retry + 4. 
all retries exhausted -> fail gracefully +""" + +import json +import os +import random +import subprocess +import time +from pathlib import Path + +from anthropic import Anthropic, APIError +from dotenv import load_dotenv + +load_dotenv(override=True) + +if os.getenv("ANTHROPIC_BASE_URL"): + os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) + +WORKDIR = Path.cwd() +client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) +MODEL = os.environ["MODEL_ID"] + +# Recovery constants +MAX_RECOVERY_ATTEMPTS = 3 +BACKOFF_BASE_DELAY = 1.0 # seconds +BACKOFF_MAX_DELAY = 30.0 # seconds +TOKEN_THRESHOLD = 50000 # chars / 4 ~ tokens for compact trigger + +CONTINUATION_MESSAGE = ( + "Output limit hit. Continue directly from where you stopped -- " + "no recap, no repetition. Pick up mid-sentence if needed." +) + + +def estimate_tokens(messages: list) -> int: + """Rough token estimate: ~4 chars per token.""" + return len(json.dumps(messages, default=str)) // 4 + + +def auto_compact(messages: list) -> list: + """ + Compress conversation history into a short continuation summary. + """ + conversation_text = json.dumps(messages, default=str)[:80000] + prompt = ( + "Summarize this conversation for continuity. Include:\n" + "1) Task overview and success criteria\n" + "2) Current state: completed work, files touched\n" + "3) Key decisions and failed approaches\n" + "4) Remaining next steps\n" + "Be concise but preserve critical details.\n\n" + + conversation_text + ) + try: + response = client.messages.create( + model=MODEL, + messages=[{"role": "user", "content": prompt}], + max_tokens=4000, + ) + summary = response.content[0].text + except Exception as e: + summary = f"(compact failed: {e}). Previous context lost." + + continuation = ( + "This session continues from a previous conversation that was compacted. " + f"Summary of prior context:\n\n{summary}\n\n" + "Continue from where we left off without re-asking the user." 
+ ) + return [{"role": "user", "content": continuation}] + + +def backoff_delay(attempt: int) -> float: + """Exponential backoff with jitter: base * 2^attempt + random(0, 1).""" + delay = min(BACKOFF_BASE_DELAY * (2 ** attempt), BACKOFF_MAX_DELAY) + jitter = random.uniform(0, 1) + return delay + jitter + + +# -- Tool implementations -- +def safe_path(p: str) -> Path: + path = (WORKDIR / p).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes workspace: {p}") + return path + + +def run_bash(command: str) -> str: + dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] + if any(d in command for d in dangerous): + return "Error: Dangerous command blocked" + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) + out = (r.stdout + r.stderr).strip() + return out[:50000] if out else "(no output)" + except subprocess.TimeoutExpired: + return "Error: Timeout (120s)" + + +def run_read(path: str, limit: int = None) -> str: + try: + lines = safe_path(path).read_text().splitlines() + if limit and limit < len(lines): + lines = lines[:limit] + [f"... 
({len(lines) - limit} more)"] + return "\n".join(lines)[:50000] + except Exception as e: + return f"Error: {e}" + + +def run_write(path: str, content: str) -> str: + try: + fp = safe_path(path) + fp.parent.mkdir(parents=True, exist_ok=True) + fp.write_text(content) + return f"Wrote {len(content)} bytes" + except Exception as e: + return f"Error: {e}" + + +def run_edit(path: str, old_text: str, new_text: str) -> str: + try: + fp = safe_path(path) + content = fp.read_text() + if old_text not in content: + return f"Error: Text not found in {path}" + fp.write_text(content.replace(old_text, new_text, 1)) + return f"Edited {path}" + except Exception as e: + return f"Error: {e}" + + +TOOL_HANDLERS = { + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "write_file": lambda **kw: run_write(kw["path"], kw["content"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), +} + +TOOLS = [ + {"name": "bash", "description": "Run a shell command.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, +] + +SYSTEM = f"You are a coding agent at {WORKDIR}. Use tools to solve tasks." 
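Strategy 3 is easier to see in isolation than inside the full loop. The sketch below mirrors the backoff constants defined above; `call_with_backoff` and `flaky` are illustrative names introduced here, not part of this file:

```python
import random
import time

# Mirror of the constants above (assumed values copied from this file).
BACKOFF_BASE_DELAY = 1.0   # seconds; first retry waits about 1s
BACKOFF_MAX_DELAY = 30.0   # cap so late retries never sleep unbounded
MAX_RECOVERY_ATTEMPTS = 3


def backoff_delay(attempt: int) -> float:
    # The deterministic part doubles each attempt and is capped at the max;
    # the jitter staggers clients that all failed at the same moment.
    delay = min(BACKOFF_BASE_DELAY * (2 ** attempt), BACKOFF_MAX_DELAY)
    return delay + random.uniform(0, 1)


def call_with_backoff(fn):
    """Retry fn() on transient errors, sleeping backoff_delay between tries."""
    for attempt in range(MAX_RECOVERY_ATTEMPTS + 1):
        try:
            return fn()
        except (ConnectionError, TimeoutError):
            if attempt == MAX_RECOVERY_ATTEMPTS:
                raise  # retries exhausted: surface the error to the caller
            time.sleep(backoff_delay(attempt))


# Deterministic part of the schedule: doubles until it hits the cap.
schedule = [min(BACKOFF_BASE_DELAY * (2 ** a), BACKOFF_MAX_DELAY) for a in range(6)]
print(schedule)  # -> [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Without jitter, many clients hitting the same rate limit would retry in lockstep and collide again; the `random.uniform(0, 1)` term spreads them out.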
+ + +def agent_loop(messages: list): + """ + Error-recovering agent loop with three paths: + + 1. continue after max_tokens + 2. compact after prompt-too-long + 3. back off after transient transport failure + """ + max_output_recovery_count = 0 + + while True: + # -- Attempt the API call with connection retry -- + response = None + for attempt in range(MAX_RECOVERY_ATTEMPTS + 1): + try: + response = client.messages.create( + model=MODEL, system=SYSTEM, messages=messages, + tools=TOOLS, max_tokens=8000, + ) + break # success + + except APIError as e: + error_body = str(e).lower() + + # Strategy 2: prompt_too_long -> compact and retry + if "overlong_prompt" in error_body or ("prompt" in error_body and "long" in error_body): + print(f"[Recovery] Prompt too long. Compacting... (attempt {attempt + 1})") + messages[:] = auto_compact(messages) + continue + + # Strategy 3: connection/rate errors -> backoff + if attempt < MAX_RECOVERY_ATTEMPTS: + delay = backoff_delay(attempt) + print(f"[Recovery] API error: {e}. " + f"Retrying in {delay:.1f}s (attempt {attempt + 1}/{MAX_RECOVERY_ATTEMPTS})") + time.sleep(delay) + continue + + # All retries exhausted + print(f"[Error] API call failed after {MAX_RECOVERY_ATTEMPTS} retries: {e}") + return + + except (ConnectionError, TimeoutError, OSError) as e: + # Strategy 3: network-level errors -> backoff + if attempt < MAX_RECOVERY_ATTEMPTS: + delay = backoff_delay(attempt) + print(f"[Recovery] Connection error: {e}. 
" + f"Retrying in {delay:.1f}s (attempt {attempt + 1}/{MAX_RECOVERY_ATTEMPTS})") + time.sleep(delay) + continue + + print(f"[Error] Connection failed after {MAX_RECOVERY_ATTEMPTS} retries: {e}") + return + + if response is None: + print("[Error] No response received.") + return + + messages.append({"role": "assistant", "content": response.content}) + + # -- Strategy 1: max_tokens recovery -- + if response.stop_reason == "max_tokens": + max_output_recovery_count += 1 + if max_output_recovery_count <= MAX_RECOVERY_ATTEMPTS: + print(f"[Recovery] max_tokens hit " + f"({max_output_recovery_count}/{MAX_RECOVERY_ATTEMPTS}). " + "Injecting continuation...") + messages.append({"role": "user", "content": CONTINUATION_MESSAGE}) + continue # retry the loop + else: + print(f"[Error] max_tokens recovery exhausted " + f"({MAX_RECOVERY_ATTEMPTS} attempts). Stopping.") + return + + # Reset max_tokens counter on successful non-max_tokens response + max_output_recovery_count = 0 + + # -- Normal end_turn: no tool use requested -- + if response.stop_reason != "tool_use": + return + + # -- Process tool calls -- + results = [] + for block in response.content: + if block.type != "tool_use": + continue + handler = TOOL_HANDLERS.get(block.name) + try: + output = handler(**(block.input or {})) if handler else f"Unknown: {block.name}" + except Exception as e: + output = f"Error: {e}" + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + + messages.append({"role": "user", "content": results}) + + # Check if we should auto-compact (proactive, not just reactive) + if estimate_tokens(messages) > TOKEN_THRESHOLD: + print("[Recovery] Token estimate exceeds threshold. 
Auto-compacting...") + messages[:] = auto_compact(messages) + + +if __name__ == "__main__": + print("[Error recovery enabled: max_tokens / prompt_too_long / connection backoff]") + history = [] + while True: + try: + query = input("\033[36ms11 >> \033[0m") + except (EOFError, KeyboardInterrupt): + break + if query.strip().lower() in ("q", "exit", ""): + break + history.append({"role": "user", "content": query}) + agent_loop(history) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + print(block.text) + print() diff --git a/agents/s07_task_system.py b/agents/s12_task_system.py similarity index 74% rename from agents/s07_task_system.py rename to agents/s12_task_system.py index cf72783e4..f4e79f805 100644 --- a/agents/s07_task_system.py +++ b/agents/s12_task_system.py @@ -1,15 +1,18 @@ #!/usr/bin/env python3 # Harness: persistent tasks -- goals that outlive any single conversation. """ -s07_task_system.py - Tasks +s12_task_system.py - Tasks Tasks persist as JSON files in .tasks/ so they survive context compression. -Each task has a dependency graph (blockedBy). +Each task carries a small dependency graph: + +- blockedBy: what must finish first +- blocks: what this task unlocks later .tasks/ task_1.json {"id":1, "subject":"...", "status":"completed", ...} task_2.json {"id":2, "blockedBy":[1], "status":"pending", ...} - task_3.json {"id":3, "blockedBy":[2], ...} + task_3.json {"id":3, "blockedBy":[2], "blocks":[], ...} Dependency resolution: +----------+ +----------+ +----------+ @@ -19,7 +22,22 @@ | ^ +--- completing task 1 removes it from task 2's blockedBy -Key insight: "State that survives compression -- because it's outside the conversation." +Key idea: task state survives compression because it lives on disk, not only +inside the conversation. +These are durable work-graph tasks, not transient runtime execution slots. + +Read this file in this order: +1. 
TaskManager: what a TaskRecord looks like on disk. +2. TOOL_HANDLERS / TOOLS: how task operations enter the same loop as normal tools. +3. agent_loop: how persistent work state is exposed back to the model. + +Most common confusion: +- a task record is a durable work item +- it is not a thread, background slot, or worker process + +Teaching boundary: +this chapter teaches the durable work graph first. +Runtime execution slots and schedulers arrive later. """ import json @@ -43,8 +61,13 @@ SYSTEM = f"You are a coding agent at {WORKDIR}. Use task tools to plan and track work." -# -- TaskManager: CRUD with dependency graph, persisted as JSON files -- +# -- TaskManager: CRUD for a persistent task graph -- class TaskManager: + """Persistent TaskRecord store. + + Think "work graph on disk", not "currently running worker". + """ + def __init__(self, tasks_dir: Path): self.dir = tasks_dir self.dir.mkdir(exist_ok=True) @@ -62,35 +85,47 @@ def _load(self, task_id: int) -> dict: def _save(self, task: dict): path = self.dir / f"task_{task['id']}.json" - path.write_text(json.dumps(task, indent=2, ensure_ascii=False)) + path.write_text(json.dumps(task, indent=2)) def create(self, subject: str, description: str = "") -> str: task = { "id": self._next_id, "subject": subject, "description": description, - "status": "pending", "blockedBy": [], "owner": "", + "status": "pending", "blockedBy": [], "blocks": [], "owner": "", } self._save(task) self._next_id += 1 - return json.dumps(task, indent=2, ensure_ascii=False) + return json.dumps(task, indent=2) def get(self, task_id: int) -> str: - return json.dumps(self._load(task_id), indent=2, ensure_ascii=False) + return json.dumps(self._load(task_id), indent=2) - def update(self, task_id: int, status: str = None, - add_blocked_by: list = None, remove_blocked_by: list = None) -> str: + def update(self, task_id: int, status: str = None, owner: str = None, + add_blocked_by: list = None, add_blocks: list = None) -> str: task = 
self._load(task_id) + if owner is not None: + task["owner"] = owner if status: - if status not in ("pending", "in_progress", "completed"): + if status not in ("pending", "in_progress", "completed", "deleted"): raise ValueError(f"Invalid status: {status}") task["status"] = status + # When a task is completed, remove it from all other tasks' blockedBy if status == "completed": self._clear_dependency(task_id) if add_blocked_by: task["blockedBy"] = list(set(task["blockedBy"] + add_blocked_by)) - if remove_blocked_by: - task["blockedBy"] = [x for x in task["blockedBy"] if x not in remove_blocked_by] + if add_blocks: + task["blocks"] = list(set(task["blocks"] + add_blocks)) + # Bidirectional: also update the blocked tasks' blockedBy lists + for blocked_id in add_blocks: + try: + blocked = self._load(blocked_id) + if task_id not in blocked["blockedBy"]: + blocked["blockedBy"].append(task_id) + self._save(blocked) + except ValueError: + pass self._save(task) - return json.dumps(task, indent=2, ensure_ascii=False) + return json.dumps(task, indent=2) def _clear_dependency(self, completed_id: int): """Remove completed_id from all other tasks' blockedBy lists.""" @@ -102,19 +137,16 @@ def _clear_dependency(self, completed_id: int): def list_all(self) -> str: tasks = [] - files = sorted( - self.dir.glob("task_*.json"), - key=lambda f: int(f.stem.split("_")[1]) - ) - for f in files: + # Keep the numeric sort key: a plain glob sort is lexicographic, + # so task_10.json would order before task_2.json. + for f in sorted(self.dir.glob("task_*.json"), key=lambda f: int(f.stem.split("_")[1])): tasks.append(json.loads(f.read_text())) if not tasks: return "No tasks."
lines = [] for t in tasks: - marker = {"pending": "[ ]", "in_progress": "[>]", "completed": "[x]"}.get(t["status"], "[?]") + marker = {"pending": "[ ]", "in_progress": "[>]", "completed": "[x]", "deleted": "[-]"}.get(t["status"], "[?]") blocked = f" (blocked by: {t['blockedBy']})" if t.get("blockedBy") else "" - lines.append(f"{marker} #{t['id']}: {t['subject']}{blocked}") + owner = f" owner={t['owner']}" if t.get("owner") else "" + lines.append(f"{marker} #{t['id']}: {t['subject']}{owner}{blocked}") return "\n".join(lines) @@ -176,7 +208,7 @@ def run_edit(path: str, old_text: str, new_text: str) -> str: "write_file": lambda **kw: run_write(kw["path"], kw["content"]), "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), "task_create": lambda **kw: TASKS.create(kw["subject"], kw.get("description", "")), - "task_update": lambda **kw: TASKS.update(kw["task_id"], kw.get("status"), kw.get("addBlockedBy"), kw.get("removeBlockedBy")), + "task_update": lambda **kw: TASKS.update(kw["task_id"], kw.get("status"), kw.get("owner"), kw.get("addBlockedBy"), kw.get("addBlocks")), "task_list": lambda **kw: TASKS.list_all(), "task_get": lambda **kw: TASKS.get(kw["task_id"]), } @@ -192,8 +224,8 @@ def run_edit(path: str, old_text: str, new_text: str) -> str: "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, {"name": "task_create", "description": "Create a new task.", "input_schema": {"type": "object", "properties": {"subject": {"type": "string"}, "description": {"type": "string"}}, "required": ["subject"]}}, - {"name": "task_update", "description": "Update a task's status or dependencies.", - "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}, "status": {"type": "string", "enum": ["pending", "in_progress", "completed"]}, "addBlockedBy": {"type": "array", "items": {"type": "integer"}}, 
"removeBlockedBy": {"type": "array", "items": {"type": "integer"}}}, "required": ["task_id"]}}, + {"name": "task_update", "description": "Update a task's status, owner, or dependencies.", + "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}, "status": {"type": "string", "enum": ["pending", "in_progress", "completed", "deleted"]}, "owner": {"type": "string", "description": "Set when a teammate claims the task"}, "addBlockedBy": {"type": "array", "items": {"type": "integer"}}, "addBlocks": {"type": "array", "items": {"type": "integer"}}}, "required": ["task_id"]}}, {"name": "task_list", "description": "List all tasks with status summary.", "input_schema": {"type": "object", "properties": {}}}, {"name": "task_get", "description": "Get full details of a task by ID.", @@ -218,8 +250,7 @@ def agent_loop(messages: list): output = handler(**block.input) if handler else f"Unknown tool: {block.name}" except Exception as e: output = f"Error: {e}" - print(f"> {block.name}:") - print(str(output)[:200]) + print(f"> {block.name}: {str(output)[:200]}") results.append({"type": "tool_result", "tool_use_id": block.id, "content": str(output)}) messages.append({"role": "user", "content": results}) @@ -228,7 +259,7 @@ def agent_loop(messages: list): history = [] while True: try: - query = input("\033[36ms07 >> \033[0m") + query = input("\033[36ms12 >> \033[0m") except (EOFError, KeyboardInterrupt): break if query.strip().lower() in ("q", "exit", ""): diff --git a/agents/s08_background_tasks.py b/agents/s13_background_tasks.py similarity index 57% rename from agents/s08_background_tasks.py rename to agents/s13_background_tasks.py index 390a77780..4fc0483d9 100644 --- a/agents/s08_background_tasks.py +++ b/agents/s13_background_tasks.py @@ -1,10 +1,10 @@ #!/usr/bin/env python3 # Harness: background execution -- the model thinks while the harness waits. 
""" -s08_background_tasks.py - Background Tasks +s13_background_tasks.py - Background Tasks -Run commands in background threads. A notification queue is drained -before each LLM call to deliver results. +Run slow commands in background threads. Before each LLM call, the loop +drains a notification queue and hands finished results back to the model. Main thread Background thread +-----------------+ +-----------------+ @@ -18,16 +18,19 @@ Agent ----[spawn A]----[spawn B]----[other work]---- | | v v - [A runs] [B runs] (parallel) + [A runs] [B runs] | | +-- notification queue --> [results injected] -Key insight: "Fire and forget -- the agent doesn't block while the command runs." +Background tasks here are runtime execution slots, not the durable task-board +records introduced in s12. """ import os +import json import subprocess import threading +import time import uuid from pathlib import Path @@ -40,28 +43,95 @@ os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) WORKDIR = Path.cwd() +RUNTIME_DIR = WORKDIR / ".runtime-tasks" +RUNTIME_DIR.mkdir(exist_ok=True) client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) MODEL = os.environ["MODEL_ID"] SYSTEM = f"You are a coding agent at {WORKDIR}. Use background_run for long-running commands." +STALL_THRESHOLD_S = 45 # seconds before a task is considered stalled + + +class NotificationQueue: + """ + Priority-based notification queue with same-key folding. + + Folding means a newer message can replace an older message with the + same key, so the context is not flooded with stale updates. 
+ """ + + PRIORITIES = {"immediate": 0, "high": 1, "medium": 2, "low": 3} + + def __init__(self): + self._queue = [] # list of (priority, key, message) + self._lock = threading.Lock() + + def push(self, message: str, priority: str = "medium", key: str = None): + """Add a message to the queue, folding if key matches an existing entry.""" + with self._lock: + if key: + # Fold: replace existing message with same key + self._queue = [(p, k, m) for p, k, m in self._queue if k != key] + self._queue.append((self.PRIORITIES.get(priority, 2), key, message)) + self._queue.sort(key=lambda x: x[0]) + + def drain(self) -> list[str]: + """Return all pending messages in priority order and clear the queue.""" + with self._lock: + messages = [m for _, _, m in self._queue] + self._queue.clear() + return messages + # -- BackgroundManager: threaded execution + notification queue -- class BackgroundManager: def __init__(self): - self.tasks = {} # task_id -> {status, result, command} + self.dir = RUNTIME_DIR + self.tasks = {} # task_id -> {status, result, command, started_at} self._notification_queue = [] # completed task results self._lock = threading.Lock() + self._condition = threading.Condition(self._lock) + + def _record_path(self, task_id: str) -> Path: + return self.dir / f"{task_id}.json" + + def _output_path(self, task_id: str) -> Path: + return self.dir / f"{task_id}.log" + + def _persist_task(self, task_id: str): + record = dict(self.tasks[task_id]) + self._record_path(task_id).write_text( + json.dumps(record, indent=2, ensure_ascii=False) + ) + + def _preview(self, output: str, limit: int = 500) -> str: + compact = " ".join((output or "(no output)").split()) + return compact[:limit] def run(self, command: str) -> str: """Start a background thread, return task_id immediately.""" task_id = str(uuid.uuid4())[:8] - self.tasks[task_id] = {"status": "running", "result": None, "command": command} + output_file = self._output_path(task_id) + self.tasks[task_id] = { + "id": task_id, 
+ "status": "running", + "result": None, + "command": command, + "started_at": time.time(), + "finished_at": None, + "result_preview": "", + "output_file": str(output_file.relative_to(WORKDIR)), + } + self._persist_task(task_id) thread = threading.Thread( target=self._execute, args=(task_id, command), daemon=True ) thread.start() - return f"Background task {task_id} started: {command[:80]}" + return ( + f"Background task {task_id} started: {command[:80]} " + f"(output_file={output_file.relative_to(WORKDIR)})" + ) def _execute(self, task_id: str, command: str): """Thread target: run subprocess, capture output, push to queue.""" @@ -78,15 +148,24 @@ def _execute(self, task_id: str, command: str): except Exception as e: output = f"Error: {e}" status = "error" + final_output = output or "(no output)" + preview = self._preview(final_output) + output_path = self._output_path(task_id) + output_path.write_text(final_output) self.tasks[task_id]["status"] = status - self.tasks[task_id]["result"] = output or "(no output)" - with self._lock: + self.tasks[task_id]["result"] = final_output + self.tasks[task_id]["finished_at"] = time.time() + self.tasks[task_id]["result_preview"] = preview + self._persist_task(task_id) + with self._condition: self._notification_queue.append({ "task_id": task_id, "status": status, "command": command[:80], - "result": (output or "(no output)")[:500], + "preview": preview, + "output_file": str(output_path.relative_to(WORKDIR)), }) + self._condition.notify_all() def check(self, task_id: str = None) -> str: """Check status of one task or list all.""" @@ -94,19 +173,58 @@ def check(self, task_id: str = None) -> str: t = self.tasks.get(task_id) if not t: return f"Error: Unknown task {task_id}" - return f"[{t['status']}] {t['command'][:60]}\n{t.get('result') or '(running)'}" + visible = { + "id": t["id"], + "status": t["status"], + "command": t["command"], + "result_preview": t.get("result_preview", ""), + "output_file": t.get("output_file", ""), + } + 
return json.dumps(visible, indent=2, ensure_ascii=False) lines = [] for tid, t in self.tasks.items(): - lines.append(f"{tid}: [{t['status']}] {t['command'][:60]}") + lines.append( + f"{tid}: [{t['status']}] {t['command'][:60]} " + f"-> {t.get('result_preview') or '(running)'}" + ) return "\n".join(lines) if lines else "No background tasks." def drain_notifications(self) -> list: """Return and clear all pending completion notifications.""" - with self._lock: + with self._condition: notifs = list(self._notification_queue) self._notification_queue.clear() return notifs + def _has_running_tasks_locked(self) -> bool: + return any(task["status"] == "running" for task in self.tasks.values()) + + def has_running_tasks(self) -> bool: + with self._condition: + return self._has_running_tasks_locked() + + def wait_for_notifications(self) -> list: + with self._condition: + while not self._notification_queue and self._has_running_tasks_locked(): + self._condition.wait() + notifs = list(self._notification_queue) + self._notification_queue.clear() + return notifs + + def detect_stalled(self) -> list[str]: + """ + Return task IDs that have been running longer than STALL_THRESHOLD_S. 
+ """ + now = time.time() + stalled = [] + for task_id, info in self.tasks.items(): + if info["status"] != "running": + continue + elapsed = now - info.get("started_at", now) + if elapsed > STALL_THRESHOLD_S: + stalled.append(task_id) + return stalled + BG = BackgroundManager() @@ -185,21 +303,41 @@ def run_edit(path: str, old_text: str, new_text: str) -> str: ] +def inject_background_results(messages: list, notifs: list) -> bool: + if notifs and messages: + lines = [] + for notif in notifs: + suffix = "" + if notif.get("output_file"): + suffix = f" (output_file={notif['output_file']})" + lines.append( + f"[bg:{notif['task_id']}] {notif['status']}: " + f"{notif.get('preview') or '(no output)'}{suffix}" + ) + notif_text = "\n".join(lines) + messages.append( + { + "role": "user", + "content": f"\n{notif_text}\n", + } + ) + return True + return False + + def agent_loop(messages: list): while True: - # Drain background notifications and inject as system message before LLM call - notifs = BG.drain_notifications() - if notifs and messages: - notif_text = "\n".join( - f"[bg:{n['task_id']}] {n['status']}: {n['result']}" for n in notifs - ) - messages.append({"role": "user", "content": f"\n{notif_text}\n"}) + inject_background_results(messages, BG.drain_notifications()) response = client.messages.create( model=MODEL, system=SYSTEM, messages=messages, tools=TOOLS, max_tokens=8000, ) messages.append({"role": "assistant", "content": response.content}) if response.stop_reason != "tool_use": + if BG.has_running_tasks() and inject_background_results( + messages, BG.wait_for_notifications() + ): + continue return results = [] for block in response.content: @@ -219,7 +357,7 @@ def agent_loop(messages: list): history = [] while True: try: - query = input("\033[36ms08 >> \033[0m") + query = input("\033[36ms13 >> \033[0m") except (EOFError, KeyboardInterrupt): break if query.strip().lower() in ("q", "exit", ""): diff --git a/agents/s14_cron_scheduler.py b/agents/s14_cron_scheduler.py 
new file mode 100644 index 000000000..57910cc12 --- /dev/null +++ b/agents/s14_cron_scheduler.py @@ -0,0 +1,564 @@ +#!/usr/bin/env python3 +# Harness: time -- the agent schedules its own future work. +""" +s14_cron_scheduler.py - Cron / Scheduled Tasks + +The agent can schedule prompts for future execution using standard cron +expressions. When a schedule matches the current time, it pushes a +notification back into the main conversation loop. + + Cron expression: 5 fields + +-------+-------+-------+-------+-------+ + | min | hour | dom | month | dow | + | 0-59 | 0-23 | 1-31 | 1-12 | 0-6 | + +-------+-------+-------+-------+-------+ + Examples: + "*/5 * * * *" -> every 5 minutes + "0 9 * * 1" -> Monday 9:00 AM + "30 14 * * *" -> daily 2:30 PM + + Two persistence modes: + +--------------------+-------------------------------+ + | session-only | In-memory list, lost on exit | + | durable | .claude/scheduled_tasks.json | + +--------------------+-------------------------------+ + + Two trigger modes: + +--------------------+-------------------------------+ + | recurring | Repeats until deleted or | + | | 7-day auto-expiry | + | one-shot | Fires once, then auto-deleted | + +--------------------+-------------------------------+ + + Jitter: recurring tasks can avoid exact minute boundaries. + + Architecture: + +-------------------------------+ + | Background thread | + | (checks every 1 second) | + | | + | for each task: | + | if cron_matches(now): | + | enqueue notification | + +-------------------------------+ + | + v + [notification_queue] + | + (drained at top of agent_loop) + | + v + [injected as user messages before LLM call] + +Key idea: scheduling remembers future work, then hands it back to the +same main loop when the time arrives. 
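The worked examples in the cron table above can be checked with a throwaway matcher. This is a deliberately simplified sketch supporting only `*`, `*/N`, and exact values, not the file's full implementation:

```python
from datetime import datetime

def matches_simple(expr: str, dt: datetime) -> bool:
    """Tiny matcher for the docstring's examples: '*', '*/N', and exact values only."""
    # Field order: minute, hour, day-of-month, month, day-of-week (cron: 0=Sunday)
    values = [dt.minute, dt.hour, dt.day, dt.month, (dt.weekday() + 1) % 7]
    for field, value in zip(expr.split(), values):
        if field == "*":
            continue
        if field.startswith("*/"):
            if value % int(field[2:]) != 0:
                return False
        elif int(field) != value:
            return False
    return True

# "*/5 * * * *" -> every 5 minutes
print(matches_simple("*/5 * * * *", datetime(2024, 1, 1, 12, 25)))  # True
# "0 9 * * 1" -> Monday 9:00 AM (2024-01-01 was a Monday)
print(matches_simple("0 9 * * 1", datetime(2024, 1, 1, 9, 0)))      # True
# "30 14 * * *" -> daily 2:30 PM, so 9:00 does not match
print(matches_simple("30 14 * * *", datetime(2024, 1, 1, 9, 0)))    # False
```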
+""" + +import json +import os +import subprocess +import threading +import time +import uuid +from datetime import datetime, timedelta +from pathlib import Path +from queue import Queue, Empty + +from anthropic import Anthropic +from dotenv import load_dotenv + +load_dotenv(override=True) + +if os.getenv("ANTHROPIC_BASE_URL"): + os.environ.pop("ANTHROPIC_AUTH_TOKEN", None) + +WORKDIR = Path.cwd() +client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL")) +MODEL = os.environ["MODEL_ID"] + +SCHEDULED_TASKS_FILE = WORKDIR / ".claude" / "scheduled_tasks.json" +CRON_LOCK_FILE = WORKDIR / ".claude" / "cron.lock" +AUTO_EXPIRY_DAYS = 7 +JITTER_MINUTES = [0, 30] # avoid these exact minutes for recurring tasks +JITTER_OFFSET_MAX = 4 # offset range in minutes +# Teaching version: use a simple 1-4 minute offset when needed. + + +class CronLock: + """ + PID-file-based lock to prevent multiple sessions from firing the same cron job. + """ + + def __init__(self, lock_path: Path = None): + self._lock_path = lock_path or CRON_LOCK_FILE + + def acquire(self) -> bool: + """ + Try to acquire the cron lock. Returns True on success. + + If a lock file exists, check whether the PID inside is still alive. + If the process is dead the lock is stale and we can take over. 
+    """
+        if self._lock_path.exists():
+            try:
+                stored_pid = int(self._lock_path.read_text().strip())
+                # PID liveness probe: send signal 0 (no-op) to check existence
+                os.kill(stored_pid, 0)
+                # Process is alive -- lock is held by another session
+                return False
+            except PermissionError:
+                # Process exists but is owned by another user -- lock is held
+                return False
+            except (ValueError, ProcessLookupError, OSError):
+                # Stale lock (process dead or PID unparseable) -- remove it
+                pass
+        self._lock_path.parent.mkdir(parents=True, exist_ok=True)
+        self._lock_path.write_text(str(os.getpid()))
+        return True
+
+    def release(self):
+        """Remove the lock file if it belongs to this process."""
+        try:
+            if self._lock_path.exists():
+                stored_pid = int(self._lock_path.read_text().strip())
+                if stored_pid == os.getpid():
+                    self._lock_path.unlink()
+        except (ValueError, OSError):
+            pass
+
+
+def cron_matches(expr: str, dt: datetime) -> bool:
+    """
+    Check if a 5-field cron expression matches a given datetime.
+
+    Fields: minute hour day-of-month month day-of-week
+    Supports: * (any), */N (every N), N (exact), N-M (range), N,M (list)
+
+    No external dependencies -- simple manual matching.
+    """
+    fields = expr.strip().split()
+    if len(fields) != 5:
+        return False
+
+    values = [dt.minute, dt.hour, dt.day, dt.month, dt.weekday()]
+    # Python weekday: 0=Monday; cron: 0=Sunday. Convert.
+ cron_dow = (dt.weekday() + 1) % 7 + values[4] = cron_dow + ranges = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 6)] + + for field, value, (lo, hi) in zip(fields, values, ranges): + if not _field_matches(field, value, lo, hi): + return False + return True + + +def _field_matches(field: str, value: int, lo: int, hi: int) -> bool: + """Match a single cron field against a value.""" + if field == "*": + return True + + for part in field.split(","): + # Handle step: */N or N-M/S + step = 1 + if "/" in part: + part, step_str = part.split("/", 1) + step = int(step_str) + + if part == "*": + # */N -- check if value is on the step grid + if (value - lo) % step == 0: + return True + elif "-" in part: + # Range: N-M + start, end = part.split("-", 1) + start, end = int(start), int(end) + if start <= value <= end and (value - start) % step == 0: + return True + else: + # Exact value + if int(part) == value: + return True + + return False + + +class CronScheduler: + """ + Manage scheduled tasks with background checking. + + Teaching version keeps only the core pieces: schedule records, a + minute checker, optional persistence, and a notification queue. + """ + + def __init__(self): + self.tasks = [] # list of task dicts + self.queue = Queue() # notification queue + self._stop_event = threading.Event() + self._thread = None + self._last_check_minute = -1 # avoid double-firing within same minute + + def start(self): + """Load durable tasks and start the background check thread.""" + self._load_durable() + self._thread = threading.Thread(target=self._check_loop, daemon=True) + self._thread.start() + count = len(self.tasks) + if count: + print(f"[Cron] Loaded {count} scheduled tasks") + + def stop(self): + """Stop the background thread.""" + self._stop_event.set() + if self._thread: + self._thread.join(timeout=2) + + def create(self, cron_expr: str, prompt: str, + recurring: bool = True, durable: bool = False) -> str: + """Create a new scheduled task. 
Returns the task ID.""" + task_id = str(uuid.uuid4())[:8] + now = time.time() + + task = { + "id": task_id, + "cron": cron_expr, + "prompt": prompt, + "recurring": recurring, + "durable": durable, + "createdAt": now, + } + + # Jitter for recurring tasks: if the cron fires on :00 or :30, + # note it so we can offset the check slightly + if recurring: + task["jitter_offset"] = self._compute_jitter(cron_expr) + + self.tasks.append(task) + if durable: + self._save_durable() + + mode = "recurring" if recurring else "one-shot" + store = "durable" if durable else "session-only" + return f"Created task {task_id} ({mode}, {store}): cron={cron_expr}" + + def delete(self, task_id: str) -> str: + """Delete a scheduled task by ID.""" + before = len(self.tasks) + self.tasks = [t for t in self.tasks if t["id"] != task_id] + if len(self.tasks) < before: + self._save_durable() + return f"Deleted task {task_id}" + return f"Task {task_id} not found" + + def list_tasks(self) -> str: + """List all scheduled tasks.""" + if not self.tasks: + return "No scheduled tasks." 
+ lines = [] + for t in self.tasks: + mode = "recurring" if t["recurring"] else "one-shot" + store = "durable" if t["durable"] else "session" + age_hours = (time.time() - t["createdAt"]) / 3600 + lines.append( + f" {t['id']} {t['cron']} [{mode}/{store}] " + f"({age_hours:.1f}h old): {t['prompt'][:60]}" + ) + return "\n".join(lines) + + def drain_notifications(self) -> list[str]: + """Drain all pending notifications from the queue.""" + notifications = [] + while True: + try: + notifications.append(self.queue.get_nowait()) + except Empty: + break + return notifications + + def _compute_jitter(self, cron_expr: str) -> int: + """If cron targets :00 or :30, return a small offset (1-4 minutes).""" + fields = cron_expr.strip().split() + if len(fields) < 1: + return 0 + minute_field = fields[0] + try: + minute_val = int(minute_field) + if minute_val in JITTER_MINUTES: + # Deterministic jitter based on the expression hash + return (hash(cron_expr) % JITTER_OFFSET_MAX) + 1 + except ValueError: + pass + return 0 + + def _check_loop(self): + """Background thread: check every second if any task is due.""" + while not self._stop_event.is_set(): + now = datetime.now() + current_minute = now.hour * 60 + now.minute + + # Only check once per minute to avoid double-firing + if current_minute != self._last_check_minute: + self._last_check_minute = current_minute + self._check_tasks(now) + + self._stop_event.wait(timeout=1) + + def _check_tasks(self, now: datetime): + """Check all tasks against current time, fire matches.""" + expired = [] + fired_oneshots = [] + + for task in self.tasks: + # Auto-expiry: recurring tasks older than 7 days + age_days = (time.time() - task["createdAt"]) / 86400 + if task["recurring"] and age_days > AUTO_EXPIRY_DAYS: + expired.append(task["id"]) + continue + + # Apply jitter offset for the match check + check_time = now + jitter = task.get("jitter_offset", 0) + if jitter: + check_time = now - timedelta(minutes=jitter) + + if cron_matches(task["cron"], 
check_time): + notification = ( + f"[Scheduled task {task['id']}]: {task['prompt']}" + ) + self.queue.put(notification) + task["last_fired"] = time.time() + print(f"[Cron] Fired: {task['id']}") + + if not task["recurring"]: + fired_oneshots.append(task["id"]) + + # Clean up expired and one-shot tasks + if expired or fired_oneshots: + remove_ids = set(expired) | set(fired_oneshots) + self.tasks = [t for t in self.tasks if t["id"] not in remove_ids] + for tid in expired: + print(f"[Cron] Auto-expired: {tid} (older than {AUTO_EXPIRY_DAYS} days)") + for tid in fired_oneshots: + print(f"[Cron] One-shot completed and removed: {tid}") + self._save_durable() + + def _load_durable(self): + """Load durable tasks from .claude/scheduled_tasks.json.""" + if not SCHEDULED_TASKS_FILE.exists(): + return + try: + data = json.loads(SCHEDULED_TASKS_FILE.read_text()) + # Only load durable tasks + self.tasks = [t for t in data if t.get("durable")] + except Exception as e: + print(f"[Cron] Error loading tasks: {e}") + + def detect_missed_tasks(self) -> list[dict]: + """ + On startup, check each durable task's last_fired time. + + If a task should have fired while the session was closed (i.e. + the gap between last_fired and now contains at least one cron match), + flag it as missed. The caller can then let the user decide whether + to run or discard each missed task. 
+ + """ + now = datetime.now() + missed = [] + for task in self.tasks: + last_fired = task.get("last_fired") + if last_fired is None: + continue + last_dt = datetime.fromtimestamp(last_fired) + # Walk forward minute-by-minute from last_fired to now (cap at 24h) + check = last_dt + timedelta(minutes=1) + cap = min(now, last_dt + timedelta(hours=24)) + while check <= cap: + if cron_matches(task["cron"], check): + missed.append({ + "id": task["id"], + "cron": task["cron"], + "prompt": task["prompt"], + "missed_at": check.isoformat(), + }) + break # one miss is enough to flag it + check += timedelta(minutes=1) + return missed + + def _save_durable(self): + """Save durable tasks to disk.""" + durable = [t for t in self.tasks if t.get("durable")] + SCHEDULED_TASKS_FILE.parent.mkdir(parents=True, exist_ok=True) + SCHEDULED_TASKS_FILE.write_text( + json.dumps(durable, indent=2) + "\n" + ) + + +# Global scheduler +scheduler = CronScheduler() + + +# -- Tool implementations -- +def safe_path(p: str) -> Path: + path = (WORKDIR / p).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes workspace: {p}") + return path + + +def run_bash(command: str) -> str: + dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] + if any(d in command for d in dangerous): + return "Error: Dangerous command blocked" + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) + out = (r.stdout + r.stderr).strip() + return out[:50000] if out else "(no output)" + except subprocess.TimeoutExpired: + return "Error: Timeout (120s)" + + +def run_read(path: str, limit: int = None) -> str: + try: + lines = safe_path(path).read_text().splitlines() + if limit and limit < len(lines): + lines = lines[:limit] + [f"... 
({len(lines) - limit} more)"] + return "\n".join(lines)[:50000] + except Exception as e: + return f"Error: {e}" + + +def run_write(path: str, content: str) -> str: + try: + fp = safe_path(path) + fp.parent.mkdir(parents=True, exist_ok=True) + fp.write_text(content) + return f"Wrote {len(content)} bytes" + except Exception as e: + return f"Error: {e}" + + +def run_edit(path: str, old_text: str, new_text: str) -> str: + try: + fp = safe_path(path) + content = fp.read_text() + if old_text not in content: + return f"Error: Text not found in {path}" + fp.write_text(content.replace(old_text, new_text, 1)) + return f"Edited {path}" + except Exception as e: + return f"Error: {e}" + + +TOOL_HANDLERS = { + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "write_file": lambda **kw: run_write(kw["path"], kw["content"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), + "cron_create": lambda **kw: scheduler.create( + kw["cron"], kw["prompt"], kw.get("recurring", True), kw.get("durable", False)), + "cron_delete": lambda **kw: scheduler.delete(kw["id"]), + "cron_list": lambda **kw: scheduler.list_tasks(), +} + +TOOLS = [ + {"name": "bash", "description": "Run a shell command.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": 
{"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, + {"name": "cron_create", "description": "Schedule a recurring or one-shot task with a cron expression.", + "input_schema": {"type": "object", "properties": { + "cron": {"type": "string", "description": "5-field cron expression: 'min hour dom month dow'"}, + "prompt": {"type": "string", "description": "The prompt to inject when the task fires"}, + "recurring": {"type": "boolean", "description": "true=repeat, false=fire once then delete. Default true."}, + "durable": {"type": "boolean", "description": "true=persist to disk, false=session-only. Default false."}, + }, "required": ["cron", "prompt"]}}, + {"name": "cron_delete", "description": "Delete a scheduled task by ID.", + "input_schema": {"type": "object", "properties": { + "id": {"type": "string", "description": "Task ID to delete"}, + }, "required": ["id"]}}, + {"name": "cron_list", "description": "List all scheduled tasks.", + "input_schema": {"type": "object", "properties": {}}}, +] + +SYSTEM = f"You are a coding agent at {WORKDIR}. Use tools to solve tasks.\n\nYou can schedule future work with cron_create. Tasks fire automatically and their prompts are injected into the conversation." + + +def agent_loop(messages: list): + """ + Cron-aware agent loop. + + Before each LLM call, drain the notification queue and inject any + fired task prompts as user messages. This is how the agent "wakes up" + to handle scheduled work. 
+ """ + while True: + # Drain scheduled task notifications + notifications = scheduler.drain_notifications() + for note in notifications: + print(f"[Cron notification] {note[:100]}") + messages.append({"role": "user", "content": note}) + + response = client.messages.create( + model=MODEL, system=SYSTEM, messages=messages, + tools=TOOLS, max_tokens=8000, + ) + messages.append({"role": "assistant", "content": response.content}) + + if response.stop_reason != "tool_use": + return + + results = [] + for block in response.content: + if block.type != "tool_use": + continue + handler = TOOL_HANDLERS.get(block.name) + try: + output = handler(**(block.input or {})) if handler else f"Unknown: {block.name}" + except Exception as e: + output = f"Error: {e}" + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": str(output), + }) + + messages.append({"role": "user", "content": results}) + + +if __name__ == "__main__": + scheduler.start() + print("[Cron scheduler running. Background checks every second.]") + print("[Commands: /cron to list tasks, /test to fire a test notification]") + + history = [] + while True: + try: + query = input("\033[36ms14 >> \033[0m") + except (EOFError, KeyboardInterrupt): + scheduler.stop() + break + if query.strip().lower() in ("q", "exit", ""): + scheduler.stop() + break + + if query.strip() == "/cron": + print(scheduler.list_tasks()) + continue + + if query.strip() == "/test": + # Manually enqueue a test notification for demonstration + scheduler.queue.put("[Scheduled task test-0000]: This is a test notification.") + print("[Test notification enqueued. 
It will be injected on your next message.]") + continue + + history.append({"role": "user", "content": query}) + agent_loop(history) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + print(block.text) + print() diff --git a/agents/s09_agent_teams.py b/agents/s15_agent_teams.py similarity index 94% rename from agents/s09_agent_teams.py rename to agents/s15_agent_teams.py index 90f6760df..8ec640baa 100644 --- a/agents/s09_agent_teams.py +++ b/agents/s15_agent_teams.py @@ -1,13 +1,14 @@ #!/usr/bin/env python3 # Harness: team mailboxes -- multiple models, coordinated through files. """ -s09_agent_teams.py - Agent Teams +s15_agent_teams.py - Agent Teams Persistent named agents with file-based JSONL inboxes. Each teammate runs -its own agent loop in a separate thread. Communication via append-only inboxes. +its own agent loop in a separate thread. Communication happens through +append-only inbox files. Subagent (s04): spawn -> execute -> return summary -> destroyed - Teammate (s09): spawn -> work -> idle -> work -> ... -> shutdown + Teammate (s15): spawn -> work -> idle -> work -> ... -> shutdown .team/config.json .team/inbox/ +----------------------------+ +------------------+ @@ -31,16 +32,20 @@ | status -> idle | | | +------------------+ +------------------+ - 5 message types (all declared, not all handled here): - +-------------------------+-----------------------------------+ - | message | Normal text message | - | broadcast | Sent to all teammates | - | shutdown_request | Request graceful shutdown (s10) | - | shutdown_response | Approve/reject shutdown (s10) | - | plan_approval_response | Approve/reject plan (s10) | - +-------------------------+-----------------------------------+ +Key idea: teammates have names, inboxes, and independent loops. -Key insight: "Teammates that can talk to each other." +Read this file in this order: +1. 
MessageBus: how messages are queued and drained. +2. TeammateManager: what persistent teammate state looks like. +3. _teammate_loop / TOOL_HANDLERS: how each named teammate keeps re-entering the same tool loop. + +Most common confusion: +- a teammate is not a one-shot subagent +- an inbox message is not yet a full protocol request + +Teaching boundary: +this file teaches persistent named workers plus mailboxes. +Approval protocols and autonomous policies are added in later chapters. """ import json @@ -70,6 +75,7 @@ "broadcast", "shutdown_request", "shutdown_response", + "plan_approval", "plan_approval_response", } @@ -122,6 +128,8 @@ def broadcast(self, sender: str, content: str, teammates: list) -> str: # -- TeammateManager: persistent named agents with config.json -- class TeammateManager: + """Persistent teammate registry plus worker-loop launcher.""" + def __init__(self, team_dir: Path): self.dir = team_dir self.dir.mkdir(exist_ok=True) @@ -382,7 +390,7 @@ def agent_loop(messages: list): history = [] while True: try: - query = input("\033[36ms09 >> \033[0m") + query = input("\033[36ms15 >> \033[0m") except (EOFError, KeyboardInterrupt): break if query.strip().lower() in ("q", "exit", ""): diff --git a/agents/s10_team_protocols.py b/agents/s16_team_protocols.py similarity index 83% rename from agents/s10_team_protocols.py rename to agents/s16_team_protocols.py index d5475359c..384b086ce 100644 --- a/agents/s10_team_protocols.py +++ b/agents/s16_team_protocols.py @@ -1,10 +1,10 @@ #!/usr/bin/env python3 # Harness: protocols -- structured handshakes between models. """ -s10_team_protocols.py - Team Protocols +s16_team_protocols.py - Team Protocols Shutdown protocol and plan approval protocol, both using the same -request_id correlation pattern. Builds on s09's team messaging. +request_id correlation pattern. Builds on s15's mailbox-based team messaging. 
Shutdown FSM: pending -> approved | rejected @@ -37,14 +37,28 @@ +---------------------+ | +---------------------+ +-------v-------------+ - | plan_approval_resp | <------- | plan_approval | + | plan_approval_response| <------ | plan_approval | | {approve: true} | | review: {req_id, | +---------------------+ | approve: true} | +---------------------+ - Trackers: {request_id: {"target|from": name, "status": "pending|..."}} + Request store: .team/requests/{request_id}.json -Key insight: "Same request_id correlation pattern, two domains." +Key idea: one request/response shape can support multiple kinds of team workflow. +Protocol requests are structured workflow objects, not normal free-form chat. + +Read this file in this order: +1. MessageBus: how protocol envelopes still travel through the same inbox surface. +2. Request files under .team/requests: how a request keeps durable status after the message is sent. +3. Protocol handlers: how shutdown and plan approval reuse the same correlation pattern. + +Most common confusion: +- a protocol request is not a normal teammate chat message +- a request record is not a task record + +Teaching boundary: +this file teaches durable handshakes first. +Autonomous claiming, task selection, and worktree assignment stay in later chapters. """ import json @@ -67,6 +81,7 @@ MODEL = os.environ["MODEL_ID"] TEAM_DIR = WORKDIR / ".team" INBOX_DIR = TEAM_DIR / "inbox" +REQUESTS_DIR = TEAM_DIR / "requests" SYSTEM = f"You are a team lead at {WORKDIR}. Manage teammates with shutdown and plan approval protocols." 
@@ -75,15 +90,10 @@ "broadcast", "shutdown_request", "shutdown_response", + "plan_approval", "plan_approval_response", } -# -- Request trackers: correlate by request_id -- -shutdown_requests = {} -plan_requests = {} -_tracker_lock = threading.Lock() - - # -- MessageBus: JSONL inbox per teammate -- class MessageBus: def __init__(self, inbox_dir: Path): @@ -130,6 +140,48 @@ def broadcast(self, sender: str, content: str, teammates: list) -> str: BUS = MessageBus(INBOX_DIR) +class RequestStore: + """ + Durable request records for protocol workflows. + + Protocol state should survive long enough to inspect, resume, or reconcile. + This store keeps one JSON file per request_id under .team/requests/. + """ + + def __init__(self, base_dir: Path): + self.dir = base_dir + self.dir.mkdir(parents=True, exist_ok=True) + self._lock = threading.Lock() + + def _path(self, request_id: str) -> Path: + return self.dir / f"{request_id}.json" + + def create(self, record: dict) -> dict: + request_id = record["request_id"] + with self._lock: + self._path(request_id).write_text(json.dumps(record, indent=2)) + return record + + def get(self, request_id: str) -> dict | None: + path = self._path(request_id) + if not path.exists(): + return None + return json.loads(path.read_text()) + + def update(self, request_id: str, **changes) -> dict | None: + with self._lock: + record = self.get(request_id) + if not record: + return None + record.update(changes) + record["updated_at"] = time.time() + self._path(request_id).write_text(json.dumps(record, indent=2)) + return record + + +REQUEST_STORE = RequestStore(REQUESTS_DIR) + + # -- TeammateManager with shutdown + plan approval -- class TeammateManager: def __init__(self, team_dir: Path): @@ -236,9 +288,15 @@ def _exec(self, sender: str, tool_name: str, args: dict) -> str: if tool_name == "shutdown_response": req_id = args["request_id"] approve = args["approve"] - with _tracker_lock: - if req_id in shutdown_requests: - 
shutdown_requests[req_id]["status"] = "approved" if approve else "rejected" + updated = REQUEST_STORE.update( + req_id, + status="approved" if approve else "rejected", + resolved_by=sender, + resolved_at=time.time(), + response={"approve": approve, "reason": args.get("reason", "")}, + ) + if not updated: + return f"Error: Unknown shutdown request {req_id}" BUS.send( sender, "lead", args.get("reason", ""), "shutdown_response", {"request_id": req_id, "approve": approve}, @@ -247,10 +305,18 @@ def _exec(self, sender: str, tool_name: str, args: dict) -> str: if tool_name == "plan_approval": plan_text = args.get("plan", "") req_id = str(uuid.uuid4())[:8] - with _tracker_lock: - plan_requests[req_id] = {"from": sender, "plan": plan_text, "status": "pending"} + REQUEST_STORE.create({ + "request_id": req_id, + "kind": "plan_approval", + "from": sender, + "to": "lead", + "status": "pending", + "plan": plan_text, + "created_at": time.time(), + "updated_at": time.time(), + }) BUS.send( - sender, "lead", plan_text, "plan_approval_response", + sender, "lead", plan_text, "plan_approval", {"request_id": req_id, "plan": plan_text}, ) return f"Plan submitted (request_id={req_id}). Waiting for lead approval." 
@@ -350,8 +416,15 @@ def _run_edit(path: str, old_text: str, new_text: str) -> str: # -- Lead-specific protocol handlers -- def handle_shutdown_request(teammate: str) -> str: req_id = str(uuid.uuid4())[:8] - with _tracker_lock: - shutdown_requests[req_id] = {"target": teammate, "status": "pending"} + REQUEST_STORE.create({ + "request_id": req_id, + "kind": "shutdown", + "from": "lead", + "to": teammate, + "status": "pending", + "created_at": time.time(), + "updated_at": time.time(), + }) BUS.send( "lead", teammate, "Please shut down gracefully.", "shutdown_request", {"request_id": req_id}, @@ -360,22 +433,25 @@ def handle_shutdown_request(teammate: str) -> str: def handle_plan_review(request_id: str, approve: bool, feedback: str = "") -> str: - with _tracker_lock: - req = plan_requests.get(request_id) + req = REQUEST_STORE.get(request_id) if not req: return f"Error: Unknown plan request_id '{request_id}'" - with _tracker_lock: - req["status"] = "approved" if approve else "rejected" + REQUEST_STORE.update( + request_id, + status="approved" if approve else "rejected", + reviewed_by="lead", + resolved_at=time.time(), + feedback=feedback, + ) BUS.send( "lead", req["from"], feedback, "plan_approval_response", {"request_id": request_id, "approve": approve, "feedback": feedback}, ) - return f"Plan {req['status']} for '{req['from']}'" + return f"Plan {'approved' if approve else 'rejected'} for '{req['from']}'" def _check_shutdown_status(request_id: str) -> str: - with _tracker_lock: - return json.dumps(shutdown_requests.get(request_id, {"error": "not found"})) + return json.dumps(REQUEST_STORE.get(request_id) or {"error": "not found"}) # -- Lead tool dispatch (12 tools) -- @@ -463,7 +539,7 @@ def agent_loop(messages: list): history = [] while True: try: - query = input("\033[36ms10 >> \033[0m") + query = input("\033[36ms16 >> \033[0m") except (EOFError, KeyboardInterrupt): break if query.strip().lower() in ("q", "exit", ""): diff --git a/agents/s11_autonomous_agents.py 
b/agents/s17_autonomous_agents.py similarity index 79% rename from agents/s11_autonomous_agents.py rename to agents/s17_autonomous_agents.py index 3aec416b8..272bc336d 100644 --- a/agents/s11_autonomous_agents.py +++ b/agents/s17_autonomous_agents.py @@ -1,10 +1,11 @@ #!/usr/bin/env python3 # Harness: autonomy -- models that find work without being told. """ -s11_autonomous_agents.py - Autonomous Agents +s17_autonomous_agents.py - Autonomous Agents Idle cycle with task board polling, auto-claiming unclaimed tasks, and -identity re-injection after context compression. Builds on s10's protocols. +identity re-injection after context compression. Builds on task boards, +team mailboxes, and protocol support from earlier chapters. Teammate lifecycle: +-------+ @@ -32,7 +33,10 @@ messages = [identity_block, ...remaining...] "You are 'coder', role: backend, team: my-team" -Key insight: "The agent finds work itself." +Key idea: an idle teammate can safely claim ready work instead of waiting +for every assignment from the lead. +A teammate here is a long-lived worker, not a one-shot subagent that only +returns a single summary. """ import json @@ -56,6 +60,8 @@ TEAM_DIR = WORKDIR / ".team" INBOX_DIR = TEAM_DIR / "inbox" TASKS_DIR = WORKDIR / ".tasks" +REQUESTS_DIR = TEAM_DIR / "requests" +CLAIM_EVENTS_PATH = TASKS_DIR / "claim_events.jsonl" POLL_INTERVAL = 5 IDLE_TIMEOUT = 60 @@ -67,13 +73,10 @@ "broadcast", "shutdown_request", "shutdown_response", + "plan_approval", "plan_approval_response", } -# -- Request trackers -- -shutdown_requests = {} -plan_requests = {} -_tracker_lock = threading.Lock() _claim_lock = threading.Lock() @@ -123,37 +126,108 @@ def broadcast(self, sender: str, content: str, teammates: list) -> str: BUS = MessageBus(INBOX_DIR) +class RequestStore: + """ + Durable protocol request records. + + s17 should not regress from s16 back to in-memory trackers. These request + files let autonomous teammates inspect or resume protocol state later. 
+ """ + + def __init__(self, base_dir: Path): + self.dir = base_dir + self.dir.mkdir(parents=True, exist_ok=True) + self._lock = threading.Lock() + + def _path(self, request_id: str) -> Path: + return self.dir / f"{request_id}.json" + + def create(self, record: dict) -> dict: + request_id = record["request_id"] + with self._lock: + self._path(request_id).write_text(json.dumps(record, indent=2)) + return record + + def get(self, request_id: str) -> dict | None: + path = self._path(request_id) + if not path.exists(): + return None + return json.loads(path.read_text()) + + def update(self, request_id: str, **changes) -> dict | None: + with self._lock: + record = self.get(request_id) + if not record: + return None + record.update(changes) + record["updated_at"] = time.time() + self._path(request_id).write_text(json.dumps(record, indent=2)) + return record + + +REQUEST_STORE = RequestStore(REQUESTS_DIR) + + # -- Task board scanning -- -def scan_unclaimed_tasks() -> list: +def _append_claim_event(payload: dict): + TASKS_DIR.mkdir(parents=True, exist_ok=True) + with CLAIM_EVENTS_PATH.open("a", encoding="utf-8") as f: + f.write(json.dumps(payload) + "\n") + + +def _task_allows_role(task: dict, role: str | None) -> bool: + required_role = task.get("claim_role") or task.get("required_role") or "" + if not required_role: + return True + return bool(role) and role == required_role + + +def is_claimable_task(task: dict, role: str | None = None) -> bool: + return ( + task.get("status") == "pending" + and not task.get("owner") + and not task.get("blockedBy") + and _task_allows_role(task, role) + ) + + +def scan_unclaimed_tasks(role: str | None = None) -> list: TASKS_DIR.mkdir(exist_ok=True) unclaimed = [] for f in sorted(TASKS_DIR.glob("task_*.json")): task = json.loads(f.read_text()) - if (task.get("status") == "pending" - and not task.get("owner") - and not task.get("blockedBy")): + if is_claimable_task(task, role): unclaimed.append(task) return unclaimed -def 
claim_task(task_id: int, owner: str) -> str:
+def claim_task(
+    task_id: int,
+    owner: str,
+    role: str | None = None,
+    source: str = "manual",
+) -> str:
     with _claim_lock:
         path = TASKS_DIR / f"task_{task_id}.json"
         if not path.exists():
             return f"Error: Task {task_id} not found"
         task = json.loads(path.read_text())
-        if task.get("owner"):
-            existing_owner = task.get("owner") or "someone else"
-            return f"Error: Task {task_id} has already been claimed by {existing_owner}"
-        if task.get("status") != "pending":
-            status = task.get("status")
-            return f"Error: Task {task_id} cannot be claimed because its status is '{status}'"
-        if task.get("blockedBy"):
-            return f"Error: Task {task_id} is blocked by other task(s) and cannot be claimed yet"
+        if not is_claimable_task(task, role):
+            return f"Error: Task {task_id} is not claimable for role={role or '(any)'}"
         task["owner"] = owner
         task["status"] = "in_progress"
+        task["claimed_at"] = time.time()
+        task["claim_source"] = source
         path.write_text(json.dumps(task, indent=2))
-        return f"Claimed task #{task_id} for {owner}"
+    _append_claim_event({
+        "event": "task.claimed",
+        "task_id": task_id,
+        "owner": owner,
+        "role": role,
+        "source": source,
+        "ts": time.time(),
+    })
+    return f"Claimed task #{task_id} for {owner} via {source}"
 
 
 # -- Identity re-injection after compression --
@@ -164,6 +238,14 @@ def make_identity_block(name: str, role: str, team_name: str) -> dict:
     }
 
 
+def ensure_identity_context(messages: list, name: str, role: str, team_name: str):
+    # The identity block's content begins "You are '<name>', ..."; skip if already injected.
+    if messages and "You are '" in str(messages[0].get("content", "")):
+        return
+    messages.insert(0, make_identity_block(name, role, team_name))
+    messages.insert(1, {"role": "assistant", "content": f"I am {name}. 
Continuing."}) + + # -- Autonomous TeammateManager -- class TeammateManager: def __init__(self, team_dir: Path): @@ -272,6 +353,7 @@ def _loop(self, name: str, role: str, prompt: str): time.sleep(POLL_INTERVAL) inbox = BUS.read_inbox(name) if inbox: + ensure_identity_context(messages, name, role, team_name) for msg in inbox: if msg.get("type") == "shutdown_request": self._set_status(name, "shutdown") @@ -279,21 +361,21 @@ def _loop(self, name: str, role: str, prompt: str): messages.append({"role": "user", "content": json.dumps(msg)}) resume = True break - unclaimed = scan_unclaimed_tasks() + unclaimed = scan_unclaimed_tasks(role) if unclaimed: task = unclaimed[0] - result = claim_task(task["id"], name) - if result.startswith("Error:"): + claim_result = claim_task( + task["id"], name, role=role, source="auto" + ) + if claim_result.startswith("Error:"): continue task_prompt = ( f"Task #{task['id']}: {task['subject']}\n" f"{task.get('description', '')}" ) - if len(messages) <= 3: - messages.insert(0, make_identity_block(name, role, team_name)) - messages.insert(1, {"role": "assistant", "content": f"I am {name}. Continuing."}) + ensure_identity_context(messages, name, role, team_name) messages.append({"role": "user", "content": task_prompt}) - messages.append({"role": "assistant", "content": f"Claimed task #{task['id']}. Working on it."}) + messages.append({"role": "assistant", "content": f"{claim_result}. 
Working on it."}) resume = True break @@ -318,9 +400,15 @@ def _exec(self, sender: str, tool_name: str, args: dict) -> str: return json.dumps(BUS.read_inbox(sender), indent=2) if tool_name == "shutdown_response": req_id = args["request_id"] - with _tracker_lock: - if req_id in shutdown_requests: - shutdown_requests[req_id]["status"] = "approved" if args["approve"] else "rejected" + updated = REQUEST_STORE.update( + req_id, + status="approved" if args["approve"] else "rejected", + resolved_by=sender, + resolved_at=time.time(), + response={"approve": args["approve"], "reason": args.get("reason", "")}, + ) + if not updated: + return f"Error: Unknown shutdown request {req_id}" BUS.send( sender, "lead", args.get("reason", ""), "shutdown_response", {"request_id": req_id, "approve": args["approve"]}, @@ -329,15 +417,28 @@ def _exec(self, sender: str, tool_name: str, args: dict) -> str: if tool_name == "plan_approval": plan_text = args.get("plan", "") req_id = str(uuid.uuid4())[:8] - with _tracker_lock: - plan_requests[req_id] = {"from": sender, "plan": plan_text, "status": "pending"} + REQUEST_STORE.create({ + "request_id": req_id, + "kind": "plan_approval", + "from": sender, + "to": "lead", + "status": "pending", + "plan": plan_text, + "created_at": time.time(), + "updated_at": time.time(), + }) BUS.send( - sender, "lead", plan_text, "plan_approval_response", + sender, "lead", plan_text, "plan_approval", {"request_id": req_id, "plan": plan_text}, ) return f"Plan submitted (request_id={req_id}). Waiting for approval." 
if tool_name == "claim_task": - return claim_task(args["task_id"], sender) + return claim_task( + args["task_id"], + sender, + role=self._find_member(sender).get("role") if self._find_member(sender) else None, + source="manual", + ) return f"Unknown tool: {tool_name}" def _teammate_tools(self) -> list: @@ -438,8 +539,15 @@ def _run_edit(path: str, old_text: str, new_text: str) -> str: # -- Lead-specific protocol handlers -- def handle_shutdown_request(teammate: str) -> str: req_id = str(uuid.uuid4())[:8] - with _tracker_lock: - shutdown_requests[req_id] = {"target": teammate, "status": "pending"} + REQUEST_STORE.create({ + "request_id": req_id, + "kind": "shutdown", + "from": "lead", + "to": teammate, + "status": "pending", + "created_at": time.time(), + "updated_at": time.time(), + }) BUS.send( "lead", teammate, "Please shut down gracefully.", "shutdown_request", {"request_id": req_id}, @@ -448,22 +556,25 @@ def handle_shutdown_request(teammate: str) -> str: def handle_plan_review(request_id: str, approve: bool, feedback: str = "") -> str: - with _tracker_lock: - req = plan_requests.get(request_id) + req = REQUEST_STORE.get(request_id) if not req: return f"Error: Unknown plan request_id '{request_id}'" - with _tracker_lock: - req["status"] = "approved" if approve else "rejected" + REQUEST_STORE.update( + request_id, + status="approved" if approve else "rejected", + reviewed_by="lead", + resolved_at=time.time(), + feedback=feedback, + ) BUS.send( "lead", req["from"], feedback, "plan_approval_response", {"request_id": request_id, "approve": approve, "feedback": feedback}, ) - return f"Plan {req['status']} for '{req['from']}'" + return f"Plan {'approved' if approve else 'rejected'} for '{req['from']}'" def _check_shutdown_status(request_id: str) -> str: - with _tracker_lock: - return json.dumps(shutdown_requests.get(request_id, {"error": "not found"})) + return json.dumps(REQUEST_STORE.get(request_id) or {"error": "not found"}) # -- Lead tool dispatch (14 tools) -- 
@@ -525,6 +636,10 @@ def agent_loop(messages: list): "role": "user", "content": f"{json.dumps(inbox, indent=2)}", }) + messages.append({ + "role": "assistant", + "content": "Noted inbox messages.", + }) response = client.messages.create( model=MODEL, system=SYSTEM, @@ -543,8 +658,7 @@ def agent_loop(messages: list): output = handler(**block.input) if handler else f"Unknown tool: {block.name}" except Exception as e: output = f"Error: {e}" - print(f"> {block.name}:") - print(str(output)[:200]) + print(f"> {block.name}: {str(output)[:200]}") results.append({ "type": "tool_result", "tool_use_id": block.id, @@ -557,7 +671,7 @@ def agent_loop(messages: list): history = [] while True: try: - query = input("\033[36ms11 >> \033[0m") + query = input("\033[36ms17 >> \033[0m") except (EOFError, KeyboardInterrupt): break if query.strip().lower() in ("q", "exit", ""): diff --git a/agents/s12_worktree_task_isolation.py b/agents/s18_worktree_task_isolation.py similarity index 51% rename from agents/s12_worktree_task_isolation.py rename to agents/s18_worktree_task_isolation.py index 09f905253..deac23bf7 100644 --- a/agents/s12_worktree_task_isolation.py +++ b/agents/s18_worktree_task_isolation.py @@ -1,7 +1,7 @@ #!/usr/bin/env python3 # Harness: directory isolation -- parallel execution lanes that never collide. """ -s12_worktree_task_isolation.py - Worktree + Task Isolation +s18_worktree_task_isolation.py - Worktree + Task Isolation Directory-level isolation for parallel task execution. Tasks are the control plane and worktrees are the execution plane. @@ -28,6 +28,19 @@ } Key insight: "Isolate by directory, coordinate by task ID." + +Read this file in this order: +1. EventBus: how worktree lifecycle stays observable. +2. TaskManager: how a task binds to an execution lane without becoming the lane itself. +3. Worktree registry / closeout helpers: how directory state is created, tracked, and cleaned up. 
+ +Most common confusion: +- a worktree is not the task itself +- a worktree record is not just a path string + +Teaching boundary: +this file teaches isolated execution lanes first. +Cross-machine execution, merge automation, and enterprise policy glue are intentionally out of scope. """ import json @@ -51,19 +64,13 @@ def detect_repo_root(cwd: Path) -> Path | None: - """Return git repo root if cwd is inside a repo, else None.""" try: r = subprocess.run( ["git", "rev-parse", "--show-toplevel"], - cwd=cwd, - capture_output=True, - text=True, - timeout=10, + cwd=cwd, capture_output=True, text=True, timeout=10, ) - if r.returncode != 0: - return None root = Path(r.stdout.strip()) - return root if root.exists() else None + return root if r.returncode == 0 and root.exists() else None except Exception: return None @@ -74,8 +81,7 @@ def detect_repo_root(cwd: Path) -> Path | None: f"You are a coding agent at {WORKDIR}. " "Use task + worktree tools for multi-task work. " "For parallel or risky changes: create tasks, allocate worktree lanes, " - "run commands in those lanes, then choose keep/remove for closeout. " - "Use worktree_events when you need lifecycle visibility." + "run commands in those lanes, then choose keep/remove for closeout." 
) @@ -87,30 +93,23 @@ def __init__(self, event_log_path: Path): if not self.path.exists(): self.path.write_text("") - def emit( - self, - event: str, - task: dict | None = None, - worktree: dict | None = None, - error: str | None = None, - ): - payload = { - "event": event, - "ts": time.time(), - "task": task or {}, - "worktree": worktree or {}, - } + def emit(self, event: str, task_id=None, wt_name=None, error=None, **extra): + payload = {"event": event, "ts": time.time()} + if task_id is not None: + payload["task_id"] = task_id + if wt_name: + payload["worktree"] = wt_name if error: payload["error"] = error + payload.update(extra) with self.path.open("a", encoding="utf-8") as f: f.write(json.dumps(payload) + "\n") def list_recent(self, limit: int = 20) -> str: n = max(1, min(int(limit or 20), 200)) lines = self.path.read_text(encoding="utf-8").splitlines() - recent = lines[-n:] items = [] - for line in recent: + for line in lines[-n:]: try: items.append(json.loads(line)) except Exception: @@ -148,15 +147,11 @@ def _save(self, task: dict): def create(self, subject: str, description: str = "") -> str: task = { - "id": self._next_id, - "subject": subject, - "description": description, - "status": "pending", - "owner": "", - "worktree": "", - "blockedBy": [], - "created_at": time.time(), - "updated_at": time.time(), + "id": self._next_id, "subject": subject, "description": description, + "status": "pending", "owner": "", "worktree": "", + "worktree_state": "unbound", "last_worktree": "", + "closeout": None, "blockedBy": [], + "created_at": time.time(), "updated_at": time.time(), } self._save(task) self._next_id += 1 @@ -171,7 +166,7 @@ def exists(self, task_id: int) -> bool: def update(self, task_id: int, status: str = None, owner: str = None) -> str: task = self._load(task_id) if status: - if status not in ("pending", "in_progress", "completed"): + if status not in ("pending", "in_progress", "completed", "deleted"): raise ValueError(f"Invalid status: {status}") 
task["status"] = status if owner is not None: @@ -183,6 +178,8 @@ def update(self, task_id: int, status: str = None, owner: str = None) -> str: def bind_worktree(self, task_id: int, worktree: str, owner: str = "") -> str: task = self._load(task_id) task["worktree"] = worktree + task["last_worktree"] = worktree + task["worktree_state"] = "active" if owner: task["owner"] = owner if task["status"] == "pending": @@ -194,6 +191,21 @@ def bind_worktree(self, task_id: int, worktree: str, owner: str = "") -> str: def unbind_worktree(self, task_id: int) -> str: task = self._load(task_id) task["worktree"] = "" + task["worktree_state"] = "unbound" + task["updated_at"] = time.time() + self._save(task) + return json.dumps(task, indent=2) + + def record_closeout(self, task_id: int, action: str, reason: str = "", keep_binding: bool = False) -> str: + task = self._load(task_id) + task["closeout"] = { + "action": action, + "reason": reason, + "at": time.time(), + } + task["worktree_state"] = action + if not keep_binding: + task["worktree"] = "" task["updated_at"] = time.time() self._save(task) return json.dumps(task, indent=2) @@ -206,11 +218,7 @@ def list_all(self) -> str: return "No tasks." 
lines = [] for t in tasks: - marker = { - "pending": "[ ]", - "in_progress": "[>]", - "completed": "[x]", - }.get(t["status"], "[?]") + marker = {"pending": "[ ]", "in_progress": "[>]", "completed": "[x]", "deleted": "[-]"}.get(t["status"], "[?]") owner = f" owner={t['owner']}" if t.get("owner") else "" wt = f" wt={t['worktree']}" if t.get("worktree") else "" lines.append(f"{marker} #{t['id']}: {t['subject']}{owner}{wt}") @@ -221,7 +229,7 @@ def list_all(self) -> str: EVENTS = EventBus(REPO_ROOT / ".worktrees" / "events.jsonl") -# -- WorktreeManager: create/list/run/remove git worktrees + lifecycle index -- +# -- WorktreeManager: create/list/run/remove git worktrees -- class WorktreeManager: def __init__(self, repo_root: Path, tasks: TaskManager, events: EventBus): self.repo_root = repo_root @@ -232,16 +240,13 @@ def __init__(self, repo_root: Path, tasks: TaskManager, events: EventBus): self.index_path = self.dir / "index.json" if not self.index_path.exists(): self.index_path.write_text(json.dumps({"worktrees": []}, indent=2)) - self.git_available = self._is_git_repo() + self.git_available = self._check_git() - def _is_git_repo(self) -> bool: + def _check_git(self) -> bool: try: r = subprocess.run( ["git", "rev-parse", "--is-inside-work-tree"], - cwd=self.repo_root, - capture_output=True, - text=True, - timeout=10, + cwd=self.repo_root, capture_output=True, text=True, timeout=10, ) return r.returncode == 0 except Exception: @@ -249,17 +254,13 @@ def _is_git_repo(self) -> bool: def _run_git(self, args: list[str]) -> str: if not self.git_available: - raise RuntimeError("Not in a git repository. 
worktree tools require git.") + raise RuntimeError("Not in a git repository.") r = subprocess.run( - ["git", *args], - cwd=self.repo_root, - capture_output=True, - text=True, - timeout=120, + ["git", *args], cwd=self.repo_root, + capture_output=True, text=True, timeout=120, ) if r.returncode != 0: - msg = (r.stdout + r.stderr).strip() - raise RuntimeError(msg or f"git {' '.join(args)} failed") + raise RuntimeError((r.stdout + r.stderr).strip() or f"git {' '.join(args)} failed") return (r.stdout + r.stderr).strip() or "(no output)" def _load_index(self) -> dict: @@ -269,83 +270,63 @@ def _save_index(self, data: dict): self.index_path.write_text(json.dumps(data, indent=2)) def _find(self, name: str) -> dict | None: - idx = self._load_index() - for wt in idx.get("worktrees", []): + for wt in self._load_index().get("worktrees", []): if wt.get("name") == name: return wt return None + def _update_entry(self, name: str, **changes) -> dict: + idx = self._load_index() + updated = None + for item in idx.get("worktrees", []): + if item.get("name") == name: + item.update(changes) + updated = item + break + self._save_index(idx) + if not updated: + raise ValueError(f"Worktree '{name}' not found in index") + return updated + def _validate_name(self, name: str): if not re.fullmatch(r"[A-Za-z0-9._-]{1,40}", name or ""): - raise ValueError( - "Invalid worktree name. Use 1-40 chars: letters, numbers, ., _, -" - ) + raise ValueError("Invalid worktree name. 
Use 1-40 chars: letters, digits, ., _, -") def create(self, name: str, task_id: int = None, base_ref: str = "HEAD") -> str: self._validate_name(name) if self._find(name): - raise ValueError(f"Worktree '{name}' already exists in index") + raise ValueError(f"Worktree '{name}' already exists") if task_id is not None and not self.tasks.exists(task_id): raise ValueError(f"Task {task_id} not found") path = self.dir / name branch = f"wt/{name}" - self.events.emit( - "worktree.create.before", - task={"id": task_id} if task_id is not None else {}, - worktree={"name": name, "base_ref": base_ref}, - ) + self.events.emit("worktree.create.before", task_id=task_id, wt_name=name) try: self._run_git(["worktree", "add", "-b", branch, str(path), base_ref]) - entry = { - "name": name, - "path": str(path), - "branch": branch, - "task_id": task_id, - "status": "active", - "created_at": time.time(), + "name": name, "path": str(path), "branch": branch, + "task_id": task_id, "status": "active", "created_at": time.time(), } - idx = self._load_index() idx["worktrees"].append(entry) self._save_index(idx) - if task_id is not None: self.tasks.bind_worktree(task_id, name) - - self.events.emit( - "worktree.create.after", - task={"id": task_id} if task_id is not None else {}, - worktree={ - "name": name, - "path": str(path), - "branch": branch, - "status": "active", - }, - ) + self.events.emit("worktree.create.after", task_id=task_id, wt_name=name) return json.dumps(entry, indent=2) except Exception as e: - self.events.emit( - "worktree.create.failed", - task={"id": task_id} if task_id is not None else {}, - worktree={"name": name, "base_ref": base_ref}, - error=str(e), - ) + self.events.emit("worktree.create.failed", task_id=task_id, wt_name=name, error=str(e)) raise def list_all(self) -> str: - idx = self._load_index() - wts = idx.get("worktrees", []) + wts = self._load_index().get("worktrees", []) if not wts: return "No worktrees in index." 
         lines = []
         for wt in wts:
             suffix = f" task={wt['task_id']}" if wt.get("task_id") else ""
-            lines.append(
-                f"[{wt.get('status', 'unknown')}] {wt['name']} -> "
-                f"{wt['path']} ({wt.get('branch', '-')}){suffix}"
-            )
+            lines.append(f"[{wt.get('status', '?')}] {wt['name']} -> {wt['path']} ({wt.get('branch', '-')}){suffix}")
         return "\n".join(lines)
 
     def status(self, name: str) -> str:
@@ -357,150 +338,162 @@ def status(self, name: str) -> str:
             return f"Error: Worktree path missing: {path}"
         r = subprocess.run(
             ["git", "status", "--short", "--branch"],
-            cwd=path,
-            capture_output=True,
-            text=True,
-            timeout=60,
+            cwd=path, capture_output=True, text=True, timeout=60,
         )
-        text = (r.stdout + r.stderr).strip()
-        return text or "Clean worktree"
+        return (r.stdout + r.stderr).strip() or "Clean worktree"
+
+    def enter(self, name: str) -> str:
+        wt = self._find(name)
+        if not wt:
+            return f"Error: Unknown worktree '{name}'"
+        path = Path(wt["path"])
+        if not path.exists():
+            return f"Error: Worktree path missing: {path}"
+        updated = self._update_entry(name, last_entered_at=time.time())
+        self.events.emit("worktree.enter", task_id=wt.get("task_id"), wt_name=name, path=str(path))
+        return json.dumps(updated, indent=2)
 
     def run(self, name: str, command: str) -> str:
        dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"]
         if any(d in command for d in dangerous):
             return "Error: Dangerous command blocked"
-
         wt = self._find(name)
         if not wt:
             return f"Error: Unknown worktree '{name}'"
         path = Path(wt["path"])
         if not path.exists():
             return f"Error: Worktree path missing: {path}"
+        self._update_entry(
+            name,
+            last_entered_at=time.time(),
+            last_command_at=time.time(),
+            last_command_preview=command[:120],
+        )
+        self.events.emit("worktree.run.before", task_id=wt.get("task_id"), wt_name=name, command=command[:120])
         try:
-            r = subprocess.run(
-                command,
-                shell=True,
-                cwd=path,
-                capture_output=True,
-                text=True,
-                timeout=300,
-            )
+            r = subprocess.run(command, shell=True, cwd=path,
+ 
capture_output=True, text=True, timeout=300) out = (r.stdout + r.stderr).strip() + self.events.emit("worktree.run.after", task_id=wt.get("task_id"), wt_name=name) return out[:50000] if out else "(no output)" except subprocess.TimeoutExpired: + self.events.emit("worktree.run.timeout", task_id=wt.get("task_id"), wt_name=name) return "Error: Timeout (300s)" - def remove(self, name: str, force: bool = False, complete_task: bool = False) -> str: + def remove( + self, + name: str, + force: bool = False, + complete_task: bool = False, + reason: str = "", + ) -> str: wt = self._find(name) if not wt: return f"Error: Unknown worktree '{name}'" - - self.events.emit( - "worktree.remove.before", - task={"id": wt.get("task_id")} if wt.get("task_id") is not None else {}, - worktree={"name": name, "path": wt.get("path")}, - ) + task_id = wt.get("task_id") + self.events.emit("worktree.remove.before", task_id=task_id, wt_name=name) try: args = ["worktree", "remove"] if force: args.append("--force") args.append(wt["path"]) self._run_git(args) - - if complete_task and wt.get("task_id") is not None: - task_id = wt["task_id"] - before = json.loads(self.tasks.get(task_id)) + if complete_task and task_id is not None: self.tasks.update(task_id, status="completed") - self.tasks.unbind_worktree(task_id) - self.events.emit( - "task.completed", - task={ - "id": task_id, - "subject": before.get("subject", ""), - "status": "completed", - }, - worktree={"name": name}, - ) - - idx = self._load_index() - for item in idx.get("worktrees", []): - if item.get("name") == name: - item["status"] = "removed" - item["removed_at"] = time.time() - self._save_index(idx) - - self.events.emit( - "worktree.remove.after", - task={"id": wt.get("task_id")} if wt.get("task_id") is not None else {}, - worktree={"name": name, "path": wt.get("path"), "status": "removed"}, + self.events.emit("task.completed", task_id=task_id, wt_name=name) + if task_id is not None: + self.tasks.record_closeout(task_id, "removed", reason, 
keep_binding=False) + self._update_entry( + name, + status="removed", + removed_at=time.time(), + closeout={"action": "remove", "reason": reason, "at": time.time()}, ) + self.events.emit("worktree.remove.after", task_id=task_id, wt_name=name) return f"Removed worktree '{name}'" except Exception as e: - self.events.emit( - "worktree.remove.failed", - task={"id": wt.get("task_id")} if wt.get("task_id") is not None else {}, - worktree={"name": name, "path": wt.get("path")}, - error=str(e), - ) + self.events.emit("worktree.remove.failed", task_id=task_id, wt_name=name, error=str(e)) raise def keep(self, name: str) -> str: wt = self._find(name) if not wt: return f"Error: Unknown worktree '{name}'" - - idx = self._load_index() - kept = None - for item in idx.get("worktrees", []): - if item.get("name") == name: - item["status"] = "kept" - item["kept_at"] = time.time() - kept = item - self._save_index(idx) - - self.events.emit( - "worktree.keep", - task={"id": wt.get("task_id")} if wt.get("task_id") is not None else {}, - worktree={ - "name": name, - "path": wt.get("path"), - "status": "kept", - }, + if wt.get("task_id") is not None: + self.tasks.record_closeout(wt["task_id"], "kept", "", keep_binding=True) + self._update_entry( + name, + status="kept", + kept_at=time.time(), + closeout={"action": "keep", "reason": "", "at": time.time()}, ) - return json.dumps(kept, indent=2) if kept else f"Error: Unknown worktree '{name}'" + self.events.emit("worktree.keep", task_id=wt.get("task_id"), wt_name=name) + return json.dumps(self._find(name), indent=2) + + def closeout( + self, + name: str, + action: str, + reason: str = "", + force: bool = False, + complete_task: bool = False, + ) -> str: + if action == "keep": + wt = self._find(name) + if not wt: + return f"Error: Unknown worktree '{name}'" + if wt.get("task_id") is not None: + self.tasks.record_closeout( + wt["task_id"], "kept", reason, keep_binding=True + ) + if complete_task: + self.tasks.update(wt["task_id"], 
status="completed") + self._update_entry( + name, + status="kept", + kept_at=time.time(), + closeout={"action": "keep", "reason": reason, "at": time.time()}, + ) + self.events.emit( + "worktree.closeout.keep", + task_id=wt.get("task_id"), + wt_name=name, + reason=reason, + ) + return json.dumps(self._find(name), indent=2) + if action == "remove": + self.events.emit("worktree.closeout.remove", wt_name=name, reason=reason) + return self.remove( + name, + force=force, + complete_task=complete_task, + reason=reason, + ) + raise ValueError("action must be 'keep' or 'remove'") WORKTREES = WorktreeManager(REPO_ROOT, TASKS, EVENTS) -# -- Base tools (kept minimal, same style as previous sessions) -- +# -- Base tools (same as previous sessions, kept minimal) -- def safe_path(p: str) -> Path: path = (WORKDIR / p).resolve() if not path.is_relative_to(WORKDIR): raise ValueError(f"Path escapes workspace: {p}") return path - def run_bash(command: str) -> str: dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] if any(d in command for d in dangerous): return "Error: Dangerous command blocked" try: - r = subprocess.run( - command, - shell=True, - cwd=WORKDIR, - capture_output=True, - text=True, - timeout=120, - ) + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) out = (r.stdout + r.stderr).strip() return out[:50000] if out else "(no output)" except subprocess.TimeoutExpired: return "Error: Timeout (120s)" - def run_read(path: str, limit: int = None) -> str: try: lines = safe_path(path).read_text().splitlines() @@ -510,7 +503,6 @@ def run_read(path: str, limit: int = None) -> str: except Exception as e: return f"Error: {e}" - def run_write(path: str, content: str) -> str: try: fp = safe_path(path) @@ -520,7 +512,6 @@ def run_write(path: str, content: str) -> str: except Exception as e: return f"Error: {e}" - def run_edit(path: str, old_text: str, new_text: str) -> str: try: fp = safe_path(path) @@ -545,200 +536,76 @@ 
def run_edit(path: str, old_text: str, new_text: str) -> str: "task_bind_worktree": lambda **kw: TASKS.bind_worktree(kw["task_id"], kw["worktree"], kw.get("owner", "")), "worktree_create": lambda **kw: WORKTREES.create(kw["name"], kw.get("task_id"), kw.get("base_ref", "HEAD")), "worktree_list": lambda **kw: WORKTREES.list_all(), + "worktree_enter": lambda **kw: WORKTREES.enter(kw["name"]), "worktree_status": lambda **kw: WORKTREES.status(kw["name"]), "worktree_run": lambda **kw: WORKTREES.run(kw["name"], kw["command"]), + "worktree_closeout": lambda **kw: WORKTREES.closeout( + kw["name"], + kw["action"], + kw.get("reason", ""), + kw.get("force", False), + kw.get("complete_task", False), + ), "worktree_keep": lambda **kw: WORKTREES.keep(kw["name"]), - "worktree_remove": lambda **kw: WORKTREES.remove(kw["name"], kw.get("force", False), kw.get("complete_task", False)), + "worktree_remove": lambda **kw: WORKTREES.remove( + kw["name"], + kw.get("force", False), + kw.get("complete_task", False), + kw.get("reason", ""), + ), "worktree_events": lambda **kw: EVENTS.list_recent(kw.get("limit", 20)), } +# Compact tool definitions -- same schema, less vertical space TOOLS = [ - { - "name": "bash", - "description": "Run a shell command in the current workspace (blocking).", - "input_schema": { - "type": "object", - "properties": {"command": {"type": "string"}}, - "required": ["command"], - }, - }, - { - "name": "read_file", - "description": "Read file contents.", - "input_schema": { - "type": "object", - "properties": { - "path": {"type": "string"}, - "limit": {"type": "integer"}, - }, - "required": ["path"], - }, - }, - { - "name": "write_file", - "description": "Write content to file.", - "input_schema": { - "type": "object", - "properties": { - "path": {"type": "string"}, - "content": {"type": "string"}, - }, - "required": ["path", "content"], - }, - }, - { - "name": "edit_file", - "description": "Replace exact text in file.", - "input_schema": { - "type": "object", - 
"properties": { - "path": {"type": "string"}, - "old_text": {"type": "string"}, - "new_text": {"type": "string"}, - }, - "required": ["path", "old_text", "new_text"], - }, - }, - { - "name": "task_create", - "description": "Create a new task on the shared task board.", - "input_schema": { - "type": "object", - "properties": { - "subject": {"type": "string"}, - "description": {"type": "string"}, - }, - "required": ["subject"], - }, - }, - { - "name": "task_list", - "description": "List all tasks with status, owner, and worktree binding.", - "input_schema": {"type": "object", "properties": {}}, - }, - { - "name": "task_get", - "description": "Get task details by ID.", - "input_schema": { - "type": "object", - "properties": {"task_id": {"type": "integer"}}, - "required": ["task_id"], - }, - }, - { - "name": "task_update", - "description": "Update task status or owner.", - "input_schema": { - "type": "object", - "properties": { - "task_id": {"type": "integer"}, - "status": { - "type": "string", - "enum": ["pending", "in_progress", "completed"], - }, - "owner": {"type": "string"}, - }, - "required": ["task_id"], - }, - }, - { - "name": "task_bind_worktree", - "description": "Bind a task to a worktree name.", - "input_schema": { - "type": "object", - "properties": { - "task_id": {"type": "integer"}, - "worktree": {"type": "string"}, - "owner": {"type": "string"}, - }, - "required": ["task_id", "worktree"], - }, - }, - { - "name": "worktree_create", - "description": "Create a git worktree and optionally bind it to a task.", - "input_schema": { - "type": "object", - "properties": { - "name": {"type": "string"}, - "task_id": {"type": "integer"}, - "base_ref": {"type": "string"}, - }, - "required": ["name"], - }, - }, - { - "name": "worktree_list", - "description": "List worktrees tracked in .worktrees/index.json.", - "input_schema": {"type": "object", "properties": {}}, - }, - { - "name": "worktree_status", - "description": "Show git status for one worktree.", - 
"input_schema": { - "type": "object", - "properties": {"name": {"type": "string"}}, - "required": ["name"], - }, - }, - { - "name": "worktree_run", - "description": "Run a shell command in a named worktree directory.", - "input_schema": { - "type": "object", - "properties": { - "name": {"type": "string"}, - "command": {"type": "string"}, - }, - "required": ["name", "command"], - }, - }, - { - "name": "worktree_remove", - "description": "Remove a worktree and optionally mark its bound task completed.", - "input_schema": { - "type": "object", - "properties": { - "name": {"type": "string"}, - "force": {"type": "boolean"}, - "complete_task": {"type": "boolean"}, - }, - "required": ["name"], - }, - }, - { - "name": "worktree_keep", - "description": "Mark a worktree as kept in lifecycle state without removing it.", - "input_schema": { - "type": "object", - "properties": {"name": {"type": "string"}}, - "required": ["name"], - }, - }, - { - "name": "worktree_events", - "description": "List recent worktree/task lifecycle events from .worktrees/events.jsonl.", - "input_schema": { - "type": "object", - "properties": {"limit": {"type": "integer"}}, - }, - }, + {"name": "bash", "description": "Run a shell command in the current workspace.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "limit": {"type": "integer"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": 
["path", "old_text", "new_text"]}}, + {"name": "task_create", "description": "Create a new task on the shared task board.", + "input_schema": {"type": "object", "properties": {"subject": {"type": "string"}, "description": {"type": "string"}}, "required": ["subject"]}}, + {"name": "task_list", "description": "List all tasks with status, owner, and worktree binding.", + "input_schema": {"type": "object", "properties": {}}}, + {"name": "task_get", "description": "Get task details by ID.", + "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}}, "required": ["task_id"]}}, + {"name": "task_update", "description": "Update task status or owner.", + "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}, "status": {"type": "string", "enum": ["pending", "in_progress", "completed", "deleted"]}, "owner": {"type": "string"}}, "required": ["task_id"]}}, + {"name": "task_bind_worktree", "description": "Bind a task to a worktree name.", + "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}, "worktree": {"type": "string"}, "owner": {"type": "string"}}, "required": ["task_id", "worktree"]}}, + {"name": "worktree_create", "description": "Create a git worktree and optionally bind it to a task.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}, "task_id": {"type": "integer"}, "base_ref": {"type": "string"}}, "required": ["name"]}}, + {"name": "worktree_list", "description": "List worktrees tracked in .worktrees/index.json.", + "input_schema": {"type": "object", "properties": {}}}, + {"name": "worktree_enter", "description": "Enter or reopen a worktree lane before working in it.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}}, + {"name": "worktree_status", "description": "Show git status for one worktree.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}}, + 
{"name": "worktree_run", "description": "Run a shell command in a named worktree directory.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}, "command": {"type": "string"}}, "required": ["name", "command"]}}, + {"name": "worktree_closeout", "description": "Close out a lane by keeping it for follow-up or removing it.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}, "action": {"type": "string", "enum": ["keep", "remove"]}, "reason": {"type": "string"}, "force": {"type": "boolean"}, "complete_task": {"type": "boolean"}}, "required": ["name", "action"]}}, + {"name": "worktree_remove", "description": "Remove a worktree and optionally mark its bound task completed.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}, "force": {"type": "boolean"}, "complete_task": {"type": "boolean"}, "reason": {"type": "string"}}, "required": ["name"]}}, + {"name": "worktree_keep", "description": "Mark a worktree as kept without removing it.", + "input_schema": {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}}, + {"name": "worktree_events", "description": "List recent lifecycle events.", + "input_schema": {"type": "object", "properties": {"limit": {"type": "integer"}}}}, ] def agent_loop(messages: list): while True: response = client.messages.create( - model=MODEL, - system=SYSTEM, - messages=messages, - tools=TOOLS, - max_tokens=8000, + model=MODEL, system=SYSTEM, messages=messages, + tools=TOOLS, max_tokens=8000, ) messages.append({"role": "assistant", "content": response.content}) if response.stop_reason != "tool_use": return - results = [] for block in response.content: if block.type == "tool_use": @@ -747,27 +614,20 @@ def agent_loop(messages: list): output = handler(**block.input) if handler else f"Unknown tool: {block.name}" except Exception as e: output = f"Error: {e}" - print(f"> {block.name}:") - print(str(output)[:200]) - results.append( - { - 
"type": "tool_result", - "tool_use_id": block.id, - "content": str(output), - } - ) + print(f"> {block.name}: {str(output)[:200]}") + results.append({"type": "tool_result", "tool_use_id": block.id, "content": str(output)}) messages.append({"role": "user", "content": results}) if __name__ == "__main__": - print(f"Repo root for s12: {REPO_ROOT}") + print(f"Repo root for s18: {REPO_ROOT}") if not WORKTREES.git_available: print("Note: Not in a git repo. worktree_* tools will return errors.") history = [] while True: try: - query = input("\033[36ms12 >> \033[0m") + query = input("\033[36ms18 >> \033[0m") except (EOFError, KeyboardInterrupt): break if query.strip().lower() in ("q", "exit", ""): diff --git a/agents/s19_mcp_plugin.py b/agents/s19_mcp_plugin.py new file mode 100644 index 000000000..d7dd0f953 --- /dev/null +++ b/agents/s19_mcp_plugin.py @@ -0,0 +1,567 @@ +#!/usr/bin/env python3 +# Harness: integration -- tools aren't just in your code. +""" +s19_mcp_plugin.py - MCP & Plugin System + +This teaching chapter focuses on the smallest useful idea: +external processes can expose tools, and your agent can treat them like +normal tools after a small amount of normalization. + +Minimal path: + 1. start an MCP server process + 2. ask it which tools it has + 3. prefix and register those tools + 4. route matching calls to that server + +Plugins add one more layer: discovery. A tiny manifest tells the agent which +external server to start. + +Key insight: "External tools should enter the same tool pipeline, not form a +completely separate world." In practice that means shared permission checks +and normalized tool_result payloads. + +Read this file in this order: +1. CapabilityPermissionGate: external tools still go through the same control gate. +2. MCPClient: how one server connection exposes tool specs and tool calls. +3. PluginLoader: how manifests declare external servers. +4. MCPToolRouter / build_tool_pool: how native and external tools merge into one pool. 
+
+Most common confusion:
+- a plugin manifest is not an MCP server
+- an MCP server is not a single MCP tool
+- external capability does not bypass the native permission path
+
+Teaching boundary:
+this file teaches the smallest useful stdio MCP path.
+Marketplace details, auth flows, reconnect logic, and non-tool capability layers
+are intentionally left to bridge docs and later extensions.
+"""
+
+import json
+import os
+import subprocess
+from pathlib import Path
+
+from anthropic import Anthropic
+from dotenv import load_dotenv
+
+load_dotenv(override=True)
+
+if os.getenv("ANTHROPIC_BASE_URL"):
+    os.environ.pop("ANTHROPIC_AUTH_TOKEN", None)
+
+WORKDIR = Path.cwd()
+client = Anthropic(base_url=os.getenv("ANTHROPIC_BASE_URL"))
+MODEL = os.environ["MODEL_ID"]
+PERMISSION_MODES = ("default", "auto")
+
+
+class CapabilityPermissionGate:
+    """
+    Shared permission gate for native tools and external capabilities.
+
+    The teaching goal is simple: MCP does not bypass the control plane.
+    Native tools and MCP tools both become normalized capability intents first,
+    then pass through the same allow / ask policy.
+ """ + + READ_PREFIXES = ("read", "list", "get", "show", "search", "query", "inspect") + HIGH_RISK_PREFIXES = ("delete", "remove", "drop", "shutdown") + + def __init__(self, mode: str = "default"): + self.mode = mode if mode in PERMISSION_MODES else "default" + + def normalize(self, tool_name: str, tool_input: dict) -> dict: + if tool_name.startswith("mcp__"): + _, server_name, actual_tool = tool_name.split("__", 2) + source = "mcp" + else: + server_name = None + actual_tool = tool_name + source = "native" + + lowered = actual_tool.lower() + if actual_tool == "read_file" or lowered.startswith(self.READ_PREFIXES): + risk = "read" + elif actual_tool == "bash": + command = tool_input.get("command", "") + risk = "high" if any( + token in command for token in ("rm -rf", "sudo", "shutdown", "reboot") + ) else "write" + elif lowered.startswith(self.HIGH_RISK_PREFIXES): + risk = "high" + else: + risk = "write" + + return { + "source": source, + "server": server_name, + "tool": actual_tool, + "risk": risk, + } + + def check(self, tool_name: str, tool_input: dict) -> dict: + intent = self.normalize(tool_name, tool_input) + + if intent["risk"] == "read": + return {"behavior": "allow", "reason": "Read capability", "intent": intent} + + if self.mode == "auto" and intent["risk"] != "high": + return { + "behavior": "allow", + "reason": "Auto mode for non-high-risk capability", + "intent": intent, + } + + if intent["risk"] == "high": + return { + "behavior": "ask", + "reason": "High-risk capability requires confirmation", + "intent": intent, + } + + return { + "behavior": "ask", + "reason": "State-changing capability requires confirmation", + "intent": intent, + } + + def ask_user(self, intent: dict, tool_input: dict) -> bool: + preview = json.dumps(tool_input, ensure_ascii=False)[:200] + source = ( + f"{intent['source']}:{intent['server']}/{intent['tool']}" + if intent.get("server") + else f"{intent['source']}:{intent['tool']}" + ) + print(f"\n [Permission] {source} 
risk={intent['risk']}: {preview}") + try: + answer = input(" Allow? (y/n): ").strip().lower() + except (EOFError, KeyboardInterrupt): + return False + return answer in ("y", "yes") + + +permission_gate = CapabilityPermissionGate() + + +class MCPClient: + """ + Minimal MCP client over stdio. + + This is enough to teach the core architecture without dragging readers + through every transport, auth flow, or marketplace detail up front. + """ + + def __init__(self, server_name: str, command: str, args: list = None, env: dict = None): + self.server_name = server_name + self.command = command + self.args = args or [] + self.env = {**os.environ, **(env or {})} + self.process = None + self._request_id = 0 + self._tools = [] # cached tool list + + def connect(self): + """Start the MCP server process.""" + try: + self.process = subprocess.Popen( + [self.command] + self.args, + stdin=subprocess.PIPE, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + env=self.env, + text=True, + ) + # Send initialize request + self._send({"method": "initialize", "params": { + "protocolVersion": "2024-11-05", + "capabilities": {}, + "clientInfo": {"name": "teaching-agent", "version": "1.0"}, + }}) + response = self._recv() + if response and "result" in response: + # Send initialized notification + self._send({"method": "notifications/initialized"}) + return True + except FileNotFoundError: + print(f"[MCP] Server command not found: {self.command}") + except Exception as e: + print(f"[MCP] Connection failed: {e}") + return False + + def list_tools(self) -> list: + """Fetch available tools from the server.""" + self._send({"method": "tools/list", "params": {}}) + response = self._recv() + if response and "result" in response: + self._tools = response["result"].get("tools", []) + return self._tools + + def call_tool(self, tool_name: str, arguments: dict) -> str: + """Execute a tool on the server.""" + self._send({"method": "tools/call", "params": { + "name": tool_name, + "arguments": arguments, 
+ }}) + response = self._recv() + if response and "result" in response: + content = response["result"].get("content", []) + return "\n".join(c.get("text", str(c)) for c in content) + if response and "error" in response: + return f"MCP Error: {response['error'].get('message', 'unknown')}" + return "MCP Error: no response" + + def get_agent_tools(self) -> list: + """ + Convert MCP tools to agent tool format. + + Teaching version uses the same simple prefix idea: + mcp__{server_name}__{tool_name} + """ + agent_tools = [] + for tool in self._tools: + prefixed_name = f"mcp__{self.server_name}__{tool['name']}" + agent_tools.append({ + "name": prefixed_name, + "description": tool.get("description", ""), + "input_schema": tool.get("inputSchema", {"type": "object", "properties": {}}), + "_mcp_server": self.server_name, + "_mcp_tool": tool["name"], + }) + return agent_tools + + def disconnect(self): + """Shut down the server process.""" + if self.process: + try: + self._send({"method": "shutdown"}) + self.process.terminate() + self.process.wait(timeout=5) + except Exception: + self.process.kill() + self.process = None + + def _send(self, message: dict): + if not self.process or self.process.poll() is not None: + return + self._request_id += 1 + envelope = {"jsonrpc": "2.0", "id": self._request_id, **message} + line = json.dumps(envelope) + "\n" + try: + self.process.stdin.write(line) + self.process.stdin.flush() + except (BrokenPipeError, OSError): + pass + + def _recv(self) -> dict | None: + if not self.process or self.process.poll() is not None: + return None + try: + line = self.process.stdout.readline() + if line: + return json.loads(line) + except (json.JSONDecodeError, OSError): + pass + return None + + +class PluginLoader: + """ + Load plugins from .claude-plugin/ directories. + + Teaching version implements the smallest useful plugin flow: + read a manifest, discover MCP server configs, and register them. 
+ """ + + def __init__(self, search_dirs: list = None): + self.search_dirs = search_dirs or [WORKDIR] + self.plugins = {} # name -> manifest + + def scan(self) -> list: + """Scan directories for .claude-plugin/plugin.json manifests.""" + found = [] + for search_dir in self.search_dirs: + plugin_dir = Path(search_dir) / ".claude-plugin" + manifest_path = plugin_dir / "plugin.json" + if manifest_path.exists(): + try: + manifest = json.loads(manifest_path.read_text()) + name = manifest.get("name", plugin_dir.parent.name) + self.plugins[name] = manifest + found.append(name) + except (json.JSONDecodeError, OSError) as e: + print(f"[Plugin] Failed to load {manifest_path}: {e}") + return found + + def get_mcp_servers(self) -> dict: + """ + Extract MCP server configs from loaded plugins. + Returns {server_name: {command, args, env}}. + """ + servers = {} + for plugin_name, manifest in self.plugins.items(): + for server_name, config in manifest.get("mcpServers", {}).items(): + servers[f"{plugin_name}__{server_name}"] = config + return servers + + +class MCPToolRouter: + """ + Routes tool calls to the correct MCP server. + + MCP tools are prefixed mcp__{server}__{tool} and live alongside + native tools in the same tool pool. The router strips the prefix + and dispatches to the right MCPClient. 
+ """ + + def __init__(self): + self.clients = {} # server_name -> MCPClient + + def register_client(self, client: MCPClient): + self.clients[client.server_name] = client + + def is_mcp_tool(self, tool_name: str) -> bool: + return tool_name.startswith("mcp__") + + def call(self, tool_name: str, arguments: dict) -> str: + """Route an MCP tool call to the correct server.""" + parts = tool_name.split("__", 2) + if len(parts) != 3: + return f"Error: Invalid MCP tool name: {tool_name}" + _, server_name, actual_tool = parts + client = self.clients.get(server_name) + if not client: + return f"Error: MCP server not found: {server_name}" + return client.call_tool(actual_tool, arguments) + + def get_all_tools(self) -> list: + """Collect tools from all connected MCP servers.""" + tools = [] + for client in self.clients.values(): + tools.extend(client.get_agent_tools()) + return tools + + +# -- Native tool implementations (same as s02) -- +def safe_path(p: str) -> Path: + path = (WORKDIR / p).resolve() + if not path.is_relative_to(WORKDIR): + raise ValueError(f"Path escapes workspace: {p}") + return path + +def run_bash(command: str) -> str: + dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] + if any(d in command for d in dangerous): + return "Error: Dangerous command blocked" + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=120) + out = (r.stdout + r.stderr).strip() + return out[:50000] if out else "(no output)" + except subprocess.TimeoutExpired: + return "Error: Timeout (120s)" + +def run_read(path: str) -> str: + try: + return safe_path(path).read_text()[:50000] + except Exception as e: + return f"Error: {e}" + +def run_write(path: str, content: str) -> str: + try: + fp = safe_path(path) + fp.parent.mkdir(parents=True, exist_ok=True) + fp.write_text(content) + return f"Wrote {len(content)} bytes" + except Exception as e: + return f"Error: {e}" + +def run_edit(path: str, old_text: str, new_text: str) -> 
str: + try: + fp = safe_path(path) + content = fp.read_text() + if old_text not in content: + return f"Error: Text not found in {path}" + fp.write_text(content.replace(old_text, new_text, 1)) + return f"Edited {path}" + except Exception as e: + return f"Error: {e}" + + +NATIVE_HANDLERS = { + "bash": lambda **kw: run_bash(kw["command"]), + "read_file": lambda **kw: run_read(kw["path"]), + "write_file": lambda **kw: run_write(kw["path"], kw["content"]), + "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), +} + +NATIVE_TOOLS = [ + {"name": "bash", "description": "Run a shell command.", + "input_schema": {"type": "object", "properties": {"command": {"type": "string"}}, "required": ["command"]}}, + {"name": "read_file", "description": "Read file contents.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}}, "required": ["path"]}}, + {"name": "write_file", "description": "Write content to file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]}}, + {"name": "edit_file", "description": "Replace exact text in file.", + "input_schema": {"type": "object", "properties": {"path": {"type": "string"}, "old_text": {"type": "string"}, "new_text": {"type": "string"}}, "required": ["path", "old_text", "new_text"]}}, +] + + +# -- MCP Tool Router (global) -- +mcp_router = MCPToolRouter() +plugin_loader = PluginLoader() + + +def build_tool_pool() -> list: + """ + Assemble the complete tool pool: native + MCP tools. + + Native tools take precedence on name conflicts so the local core remains + predictable even after external tools are added. 
+    """
+    all_tools = list(NATIVE_TOOLS)
+    mcp_tools = mcp_router.get_all_tools()
+
+    native_names = {t["name"] for t in all_tools}
+    for tool in mcp_tools:
+        if tool["name"] not in native_names:
+            # Drop the internal "_mcp_*" routing keys so the spec sent to the
+            # model API carries only name / description / input_schema.
+            all_tools.append({k: v for k, v in tool.items() if not k.startswith("_")})
+
+    return all_tools
+
+
+def handle_tool_call(tool_name: str, tool_input: dict) -> str:
+    """Dispatch to native handler or MCP router."""
+    if mcp_router.is_mcp_tool(tool_name):
+        return mcp_router.call(tool_name, tool_input)
+    handler = NATIVE_HANDLERS.get(tool_name)
+    if handler:
+        return handler(**tool_input)
+    return f"Unknown tool: {tool_name}"
+
+
+def normalize_tool_result(tool_name: str, output: str, intent: dict | None = None) -> str:
+    intent = intent or permission_gate.normalize(tool_name, {})
+    status = "error" if "Error:" in output or "MCP Error:" in output else "ok"
+    payload = {
+        "source": intent["source"],
+        "server": intent.get("server"),
+        "tool": intent["tool"],
+        "risk": intent["risk"],
+        "status": status,
+        "preview": output[:500],
+    }
+    return json.dumps(payload, indent=2, ensure_ascii=False)
+
+
+def agent_loop(messages: list):
+    """Agent loop with unified native + MCP tool pool."""
+    tools = build_tool_pool()
+
+    while True:
+        system = (
+            f"You are a coding agent at {WORKDIR}. Use tools to solve tasks.\n"
+            "You have both native tools and MCP tools available.\n"
+            "MCP tools are prefixed with mcp__{server}__{tool}.\n"
+            "All capabilities pass through the same permission gate before execution."
+ ) + response = client.messages.create( + model=MODEL, system=system, messages=messages, + tools=tools, max_tokens=8000, + ) + messages.append({"role": "assistant", "content": response.content}) + + if response.stop_reason != "tool_use": + return + + results = [] + for block in response.content: + if block.type != "tool_use": + continue + decision = permission_gate.check(block.name, block.input or {}) + try: + if decision["behavior"] == "deny": + output = f"Permission denied: {decision['reason']}" + elif decision["behavior"] == "ask" and not permission_gate.ask_user( + decision["intent"], block.input or {} + ): + output = f"Permission denied by user: {decision['reason']}" + else: + output = handle_tool_call(block.name, block.input or {}) + except Exception as e: + output = f"Error: {e}" + print(f"> {block.name}: {str(output)[:200]}") + results.append({ + "type": "tool_result", + "tool_use_id": block.id, + "content": normalize_tool_result( + block.name, + str(output), + decision.get("intent"), + ), + }) + + messages.append({"role": "user", "content": results}) + + +# Further upgrades you can add later: +# - more transports +# - auth / approval flows +# - server reconnect and lifecycle management +# - filtering external tools before they reach the model +# - richer plugin installation and update handling + + +if __name__ == "__main__": + # Scan for plugins + found = plugin_loader.scan() + if found: + print(f"[Plugins loaded: {', '.join(found)}]") + for server_name, config in plugin_loader.get_mcp_servers().items(): + mcp_client = MCPClient(server_name, config.get("command", ""), config.get("args", [])) + if mcp_client.connect(): + mcp_client.list_tools() + mcp_router.register_client(mcp_client) + print(f"[MCP] Connected to {server_name}") + + tool_count = len(build_tool_pool()) + mcp_count = len(mcp_router.get_all_tools()) + print(f"[Tool pool: {tool_count} tools ({mcp_count} from MCP)]") + + history = [] + while True: + try: + query = input("\033[36ms19 >> 
\033[0m") + except (EOFError, KeyboardInterrupt): + break + if query.strip().lower() in ("q", "exit", ""): + break + + if query.strip() == "/tools": + for tool in build_tool_pool(): + prefix = "[MCP] " if tool["name"].startswith("mcp__") else " " + print(f" {prefix}{tool['name']}: {tool.get('description', '')[:60]}") + continue + + if query.strip() == "/mcp": + if mcp_router.clients: + for name, c in mcp_router.clients.items(): + tools = c.get_agent_tools() + print(f" {name}: {len(tools)} tools") + else: + print(" (no MCP servers connected)") + continue + + history.append({"role": "user", "content": query}) + agent_loop(history) + response_content = history[-1]["content"] + if isinstance(response_content, list): + for block in response_content: + if hasattr(block, "text"): + print(block.text) + print() + + # Cleanup MCP connections + for c in mcp_router.clients.values(): + c.disconnect() diff --git a/agents/s_full.py b/agents/s_full.py index e2f887b5c..ada23a39e 100644 --- a/agents/s_full.py +++ b/agents/s_full.py @@ -1,39 +1,36 @@ #!/usr/bin/env python3 # Harness: all mechanisms combined -- the complete cockpit for the model. """ -s_full.py - Full Reference Agent - -Capstone implementation combining every mechanism from s01-s11. -Session s12 (task-aware worktree isolation) is taught separately. -NOT a teaching session -- this is the "put it all together" reference. 
- - +------------------------------------------------------------------+ - | FULL AGENT | - | | - | System prompt (s05 skills, task-first + optional todo nag) | - | | - | Before each LLM call: | - | +--------------------+ +------------------+ +--------------+ | - | | Microcompact (s06) | | Drain bg (s08) | | Check inbox | | - | | Auto-compact (s06) | | notifications | | (s09) | | - | +--------------------+ +------------------+ +--------------+ | - | | - | Tool dispatch (s02 pattern): | - | +--------+----------+----------+---------+-----------+ | - | | bash | read | write | edit | TodoWrite | | - | | task | load_sk | compress | bg_run | bg_check | | - | | t_crt | t_get | t_upd | t_list | spawn_tm | | - | | list_tm| send_msg | rd_inbox | bcast | shutdown | | - | | plan | idle | claim | | | | - | +--------+----------+----------+---------+-----------+ | - | | - | Subagent (s04): spawn -> work -> return summary | - | Teammate (s09): spawn -> work -> idle -> auto-claim (s11) | - | Shutdown (s10): request_id handshake | - | Plan gate (s10): submit -> approve/reject | - +------------------------------------------------------------------+ - - REPL commands: /compact /tasks /team /inbox +s_full.py - Capstone Teaching Agent + +Capstone file that combines the core local mechanisms taught across +`s01-s18` into one runnable agent. + +`s19` (MCP / plugin integration) is still taught as a separate chapter, +because external tool connectivity is easier to understand after the local +core is already stable. 
+ +Chapter -> Class/Function mapping: + s01 Agent Loop -> agent_loop() + s02 Tool Dispatch -> TOOL_HANDLERS, normalize_messages() + s03 TodoWrite -> TodoManager + s04 Subagent -> run_subagent() + s05 Skill Loading -> SkillLoader + s06 Context Compact-> maybe_persist_output(), micro_compact(), auto_compact() + s07 Permissions -> PermissionManager + s08 Hooks -> HookManager + s09 Memory -> MemoryManager + s10 System Prompt -> build_system_prompt() + s11 Error Recovery -> recovery logic inside agent_loop() + s12 Task System -> TaskManager + s13 Background -> BackgroundManager + s14 Cron Scheduler -> CronScheduler + s15 Agent Teams -> TeammateManager, MessageBus + s16 Team Protocols -> shutdown_requests, plan_requests dicts + s17 Autonomous -> _idle_poll(), scan_unclaimed_tasks() + s18 Worktree -> WorktreeManager + +REPL commands: /compact /tasks /team /inbox """ import json @@ -66,10 +63,69 @@ POLL_INTERVAL = 5 IDLE_TIMEOUT = 60 +# Persisted-output: large tool outputs written to disk, replaced with preview marker +TASK_OUTPUT_DIR = WORKDIR / ".task_outputs" +TOOL_RESULTS_DIR = TASK_OUTPUT_DIR / "tool-results" +PERSIST_OUTPUT_TRIGGER_CHARS_DEFAULT = 50000 +PERSIST_OUTPUT_TRIGGER_CHARS_BASH = 30000 +CONTEXT_TRUNCATE_CHARS = 50000 +PERSISTED_OPEN = "" +PERSISTED_CLOSE = "" +PERSISTED_PREVIEW_CHARS = 2000 +KEEP_RECENT = 3 +PRESERVE_RESULT_TOOLS = {"read_file"} + VALID_MSG_TYPES = {"message", "broadcast", "shutdown_request", "shutdown_response", "plan_approval_response"} +# === SECTION: persisted_output (s06) === +def _persist_tool_result(tool_use_id: str, content: str) -> Path: + TOOL_RESULTS_DIR.mkdir(parents=True, exist_ok=True) + safe_id = re.sub(r"[^a-zA-Z0-9_.-]", "_", tool_use_id or "unknown") + path = TOOL_RESULTS_DIR / f"{safe_id}.txt" + if not path.exists(): + path.write_text(content) + return path.relative_to(WORKDIR) + +def _format_size(size: int) -> str: + if size < 1024: + return f"{size}B" + if size < 1024 * 1024: + return f"{size / 1024:.1f}KB" + return 
f"{size / (1024 * 1024):.1f}MB" + +def _preview_slice(text: str, limit: int) -> tuple[str, bool]: + if len(text) <= limit: + return text, False + idx = text[:limit].rfind("\n") + cut = idx if idx > (limit * 0.5) else limit + return text[:cut], True + +def _build_persisted_marker(stored_path: Path, content: str) -> str: + preview, has_more = _preview_slice(content, PERSISTED_PREVIEW_CHARS) + marker = ( + f"{PERSISTED_OPEN}\n" + f"Output too large ({_format_size(len(content))}). " + f"Full output saved to: {stored_path}\n\n" + f"Preview (first {_format_size(PERSISTED_PREVIEW_CHARS)}):\n" + f"{preview}" + ) + if has_more: + marker += "\n..." + marker += f"\n{PERSISTED_CLOSE}" + return marker + +def maybe_persist_output(tool_use_id: str, output: str, trigger_chars: int = None) -> str: + if not isinstance(output, str): + return str(output) + trigger = PERSIST_OUTPUT_TRIGGER_CHARS_DEFAULT if trigger_chars is None else int(trigger_chars) + if len(output) <= trigger: + return output + stored_path = _persist_tool_result(tool_use_id, output) + return _build_persisted_marker(stored_path, output) + + # === SECTION: base_tools === def safe_path(p: str) -> Path: path = (WORKDIR / p).resolve() @@ -77,7 +133,7 @@ def safe_path(p: str) -> Path: raise ValueError(f"Path escapes workspace: {p}") return path -def run_bash(command: str) -> str: +def run_bash(command: str, tool_use_id: str = "") -> str: dangerous = ["rm -rf /", "sudo", "shutdown", "reboot", "> /dev/"] if any(d in command for d in dangerous): return "Error: Dangerous command blocked" @@ -85,16 +141,21 @@ def run_bash(command: str) -> str: r = subprocess.run(command, shell=True, cwd=WORKDIR, capture_output=True, text=True, timeout=120) out = (r.stdout + r.stderr).strip() - return out[:50000] if out else "(no output)" + if not out: + return "(no output)" + out = maybe_persist_output(tool_use_id, out, trigger_chars=PERSIST_OUTPUT_TRIGGER_CHARS_BASH) + return out[:CONTEXT_TRUNCATE_CHARS] if isinstance(out, str) else 
str(out)[:CONTEXT_TRUNCATE_CHARS] except subprocess.TimeoutExpired: return "Error: Timeout (120s)" -def run_read(path: str, limit: int = None) -> str: +def run_read(path: str, tool_use_id: str = "", limit: int = None) -> str: try: lines = safe_path(path).read_text().splitlines() if limit and limit < len(lines): lines = lines[:limit] + [f"... ({len(lines) - limit} more)"] - return "\n".join(lines)[:50000] + out = "\n".join(lines) + out = maybe_persist_output(tool_use_id, out) + return out[:CONTEXT_TRUNCATE_CHARS] if isinstance(out, str) else str(out)[:CONTEXT_TRUNCATE_CHARS] except Exception as e: return f"Error: {e}" @@ -228,33 +289,64 @@ def estimate_tokens(messages: list) -> int: return len(json.dumps(messages, default=str)) // 4 def microcompact(messages: list): - indices = [] - for i, msg in enumerate(messages): + tool_results = [] + for msg in messages: if msg["role"] == "user" and isinstance(msg.get("content"), list): for part in msg["content"]: if isinstance(part, dict) and part.get("type") == "tool_result": - indices.append(part) - if len(indices) <= 3: + tool_results.append(part) + if len(tool_results) <= KEEP_RECENT: return - for part in indices[:-3]: - if isinstance(part.get("content"), str) and len(part["content"]) > 100: - part["content"] = "[cleared]" + tool_name_map = {} + for msg in messages: + if msg["role"] == "assistant": + content = msg.get("content", []) + if isinstance(content, list): + for block in content: + if hasattr(block, "type") and block.type == "tool_use": + tool_name_map[block.id] = block.name + for part in tool_results[:-KEEP_RECENT]: + if not isinstance(part.get("content"), str) or len(part["content"]) <= 100: + continue + tool_id = part.get("tool_use_id", "") + tool_name = tool_name_map.get(tool_id, "unknown") + if tool_name in PRESERVE_RESULT_TOOLS: + continue + part["content"] = f"[Previous: used {tool_name}]" -def auto_compact(messages: list) -> list: +def auto_compact(messages: list, focus: str = None) -> list: 
TRANSCRIPT_DIR.mkdir(exist_ok=True) path = TRANSCRIPT_DIR / f"transcript_{int(time.time())}.jsonl" with open(path, "w") as f: for msg in messages: f.write(json.dumps(msg, default=str) + "\n") - conv_text = json.dumps(messages, default=str)[-80000:] + conv_text = json.dumps(messages, default=str)[:80000] + prompt = ( + "Summarize this conversation for continuity. Structure your summary:\n" + "1) Task overview: core request, success criteria, constraints\n" + "2) Current state: completed work, files touched, artifacts created\n" + "3) Key decisions and discoveries: constraints, errors, failed approaches\n" + "4) Next steps: remaining actions, blockers, priority order\n" + "5) Context to preserve: user preferences, domain details, commitments\n" + "Be concise but preserve critical details.\n" + ) + if focus: + prompt += f"\nPay special attention to: {focus}\n" resp = client.messages.create( model=MODEL, - messages=[{"role": "user", "content": f"Summarize for continuity:\n{conv_text}"}], - max_tokens=2000, + messages=[{"role": "user", "content": prompt + "\n" + conv_text}], + max_tokens=4000, ) summary = resp.content[0].text + continuation = ( + "This session is being continued from a previous conversation that ran out " + "of context. The summary below covers the earlier portion of the conversation.\n\n" + f"{summary}\n\n" + "Please continue the conversation from where we left it off without asking " + "the user any further questions." + ) return [ - {"role": "user", "content": f"[Compressed. 
Transcript: {path}]\n{summary}"}, + {"role": "user", "content": continuation}, ] @@ -277,7 +369,7 @@ def _save(self, task: dict): def create(self, subject: str, description: str = "") -> str: task = {"id": self._next_id(), "subject": subject, "description": description, - "status": "pending", "owner": None, "blockedBy": []} + "status": "pending", "owner": None, "blockedBy": [], "blocks": []} self._save(task) return json.dumps(task, indent=2) @@ -285,7 +377,7 @@ def get(self, tid: int) -> str: return json.dumps(self._load(tid), indent=2) def update(self, tid: int, status: str = None, - add_blocked_by: list = None, remove_blocked_by: list = None) -> str: + add_blocked_by: list = None, add_blocks: list = None) -> str: task = self._load(tid) if status: task["status"] = status @@ -300,8 +392,8 @@ def update(self, tid: int, status: str = None, return f"Task {tid} deleted" if add_blocked_by: task["blockedBy"] = list(set(task["blockedBy"] + add_blocked_by)) - if remove_blocked_by: - task["blockedBy"] = [x for x in task["blockedBy"] if x not in remove_blocked_by] + if add_blocks: + task["blocks"] = list(set(task["blocks"] + add_blocks)) self._save(task) return json.dumps(task, indent=2) @@ -350,7 +442,7 @@ def _exec(self, tid: str, command: str, timeout: int): def check(self, tid: str = None) -> str: if tid: t = self.tasks.get(tid) - return f"[{t['status']}] {t.get('result') or '(running)'}" if t else f"Unknown: {tid}" + return f"[{t['status']}] {t.get('result', '(running)')}" if t else f"Unknown: {tid}" return "\n".join(f"{k}: [{v['status']}] {v['command'][:60]}" for k, v in self.tasks.items()) or "No bg tasks." 
def drain(self) -> list: @@ -575,8 +667,8 @@ def handle_plan_review(request_id: str, approve: bool, feedback: str = "") -> st # === SECTION: tool_dispatch (s02) === TOOL_HANDLERS = { - "bash": lambda **kw: run_bash(kw["command"]), - "read_file": lambda **kw: run_read(kw["path"], kw.get("limit")), + "bash": lambda **kw: run_bash(kw["command"], kw.get("tool_use_id", "")), + "read_file": lambda **kw: run_read(kw["path"], kw.get("tool_use_id", ""), kw.get("limit")), "write_file": lambda **kw: run_write(kw["path"], kw["content"]), "edit_file": lambda **kw: run_edit(kw["path"], kw["old_text"], kw["new_text"]), "TodoWrite": lambda **kw: TODO.update(kw["items"]), @@ -587,7 +679,7 @@ def handle_plan_review(request_id: str, approve: bool, feedback: str = "") -> st "check_background": lambda **kw: BG.check(kw.get("task_id")), "task_create": lambda **kw: TASK_MGR.create(kw["subject"], kw.get("description", "")), "task_get": lambda **kw: TASK_MGR.get(kw["task_id"]), - "task_update": lambda **kw: TASK_MGR.update(kw["task_id"], kw.get("status"), kw.get("add_blocked_by"), kw.get("remove_blocked_by")), + "task_update": lambda **kw: TASK_MGR.update(kw["task_id"], kw.get("status"), kw.get("add_blocked_by"), kw.get("add_blocks")), "task_list": lambda **kw: TASK_MGR.list_all(), "spawn_teammate": lambda **kw: TEAM.spawn(kw["name"], kw["role"], kw["prompt"]), "list_teammates": lambda **kw: TEAM.list_all(), @@ -626,7 +718,7 @@ def handle_plan_review(request_id: str, approve: bool, feedback: str = "") -> st {"name": "task_get", "description": "Get task details by ID.", "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}}, "required": ["task_id"]}}, {"name": "task_update", "description": "Update task status or dependencies.", - "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}, "status": {"type": "string", "enum": ["pending", "in_progress", "completed", "deleted"]}, "add_blocked_by": {"type": "array", "items": {"type": "integer"}}, 
"remove_blocked_by": {"type": "array", "items": {"type": "integer"}}}, "required": ["task_id"]}}, + "input_schema": {"type": "object", "properties": {"task_id": {"type": "integer"}, "status": {"type": "string", "enum": ["pending", "in_progress", "completed", "deleted"]}, "add_blocked_by": {"type": "array", "items": {"type": "integer"}}, "add_blocks": {"type": "array", "items": {"type": "integer"}}}, "required": ["task_id"]}}, {"name": "task_list", "description": "List all tasks.", "input_schema": {"type": "object", "properties": {}}}, {"name": "spawn_teammate", "description": "Spawn a persistent autonomous teammate.", @@ -650,6 +742,21 @@ def handle_plan_review(request_id: str, approve: bool, feedback: str = "") -> st ] +def inject_background_results(messages: list, notifs: list) -> bool: + if notifs: + txt = "\n".join( + f"[bg:{n['task_id']}] {n['status']}: {n['result']}" for n in notifs + ) + messages.append( + { + "role": "user", + "content": f"\n{txt}\n", + } + ) + return True + return False + + # === SECTION: agent_loop === def agent_loop(messages: list): rounds_without_todo = 0 @@ -660,14 +767,12 @@ def agent_loop(messages: list): print("[auto-compact triggered]") messages[:] = auto_compact(messages) # s08: drain background notifications - notifs = BG.drain() - if notifs: - txt = "\n".join(f"[bg:{n['task_id']}] {n['status']}: {n['result']}" for n in notifs) - messages.append({"role": "user", "content": f"\n{txt}\n"}) + inject_background_results(messages, BG.drain()) # s10: check lead inbox inbox = BUS.read_inbox("lead") if inbox: messages.append({"role": "user", "content": f"{json.dumps(inbox, indent=2)}"}) + messages.append({"role": "assistant", "content": "Noted inbox messages."}) # LLM call response = client.messages.create( model=MODEL, system=SYSTEM, messages=messages, @@ -675,35 +780,41 @@ def agent_loop(messages: list): ) messages.append({"role": "assistant", "content": response.content}) if response.stop_reason != "tool_use": + if 
BG.has_running_tasks() and inject_background_results( + messages, BG.wait_for_notifications() + ): + continue return # Tool execution results = [] used_todo = False manual_compress = False + compact_focus = None for block in response.content: if block.type == "tool_use": if block.name == "compress": manual_compress = True + compact_focus = (block.input or {}).get("focus") handler = TOOL_HANDLERS.get(block.name) try: - output = handler(**block.input) if handler else f"Unknown tool: {block.name}" + tool_input = dict(block.input or {}) + tool_input["tool_use_id"] = block.id + output = handler(**tool_input) if handler else f"Unknown tool: {block.name}" except Exception as e: output = f"Error: {e}" - print(f"> {block.name}:") - print(str(output)[:200]) + print(f"> {block.name}: {str(output)[:200]}") results.append({"type": "tool_result", "tool_use_id": block.id, "content": str(output)}) if block.name == "TodoWrite": used_todo = True # s03: nag reminder (only when todo workflow is active) rounds_without_todo = 0 if used_todo else rounds_without_todo + 1 if TODO.has_open_items() and rounds_without_todo >= 3: - results.append({"type": "text", "text": "Update your todos."}) + results.insert(0, {"type": "text", "text": "Update your todos."}) messages.append({"role": "user", "content": results}) # s06: manual compress if manual_compress: print("[manual compact]") - messages[:] = auto_compact(messages) - return + messages[:] = auto_compact(messages, focus=compact_focus) # === SECTION: repl === @@ -732,9 +843,4 @@ def agent_loop(messages: list): continue history.append({"role": "user", "content": query}) agent_loop(history) - response_content = history[-1]["content"] - if isinstance(response_content, list): - for block in response_content: - if hasattr(block, "text"): - print(block.text) print() diff --git a/docs/en/data-structures.md b/docs/en/data-structures.md new file mode 100644 index 000000000..5e9300f98 --- /dev/null +++ b/docs/en/data-structures.md @@ -0,0 +1,167 @@ +# 
Core Data Structures + +> **Reference** -- Use this when you lose track of where state lives. Each record has one clear job. + +The easiest way to get lost in an agent system is not feature count -- it is losing track of where the state actually lives. This document collects the core records that appear again and again across the mainline and bridge docs so you always have one place to look them up. + +## Recommended Reading Together + +- [`glossary.md`](./glossary.md) for term meanings +- [`entity-map.md`](./entity-map.md) for layer boundaries +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) for task vs runtime-slot separation +- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) for MCP beyond tools + +## Two Principles To Keep In Mind + +### Principle 1: separate content state from process-control state + +- `messages`, `tool_result`, and memory text are content state +- `turn_count`, `transition`, and retry flags are process-control state + +### Principle 2: separate durable state from runtime-only state + +- tasks, memory, and schedules are usually durable +- runtime slots, permission decisions, and live MCP connections are usually runtime state + +## Query And Conversation State + +### `Message` + +Stores conversation and tool round-trip history. + +### `NormalizedMessage` + +Stable message shape ready for the model API. + +### `QueryParams` + +External input used to start one query process. + +### `QueryState` + +Mutable state that changes across turns. + +### `TransitionReason` + +Explains why the next turn exists. + +### `CompactSummary` + +Compressed carry-forward summary when old context leaves the hot window. + +## Prompt And Input State + +### `SystemPromptBlock` + +One stable prompt fragment. + +### `PromptParts` + +Separated prompt fragments before final assembly. + +### `ReminderMessage` + +Temporary one-turn or one-mode injection. 
+ +## Tool And Control-Plane State + +### `ToolSpec` + +What the model knows about one tool. + +### `ToolDispatchMap` + +Name-to-handler routing table. + +### `ToolUseContext` + +Shared execution environment visible to tools. + +### `ToolResultEnvelope` + +Normalized result returned into the main loop. + +### `PermissionRule` + +Policy that decides allow / deny / ask. + +### `PermissionDecision` + +Structured output of the permission gate. + +### `HookEvent` + +Normalized lifecycle event emitted around the loop. + +## Durable Work State + +### `TaskRecord` + +Durable work-graph node with goal, status, and dependency edges. + +### `ScheduleRecord` + +Rule describing when work should trigger. + +### `MemoryEntry` + +Cross-session fact worth keeping. + +## Runtime Execution State + +### `RuntimeTaskState` + +Live execution-slot record for background or long-running work. + +### `Notification` + +Small result bridge that carries runtime outcomes back into the main loop. + +### `RecoveryState` + +State used to continue coherently after failures. + +## Team And Platform State + +### `TeamMember` + +Persistent teammate identity. + +### `MessageEnvelope` + +Structured message between teammates. + +### `RequestRecord` + +Durable record for approvals, shutdowns, handoffs, or other protocol workflows. + +### `WorktreeRecord` + +Record for one isolated execution lane. + +### `MCPServerConfig` + +Configuration for one external capability provider. + +### `CapabilityRoute` + +Routing decision for native, plugin, or MCP-backed capability. 
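As a rough sketch, a capability router of this kind can key off the `mcp__{server}__{tool}` naming convention used elsewhere in this repo. The function below is illustrative, not the actual implementation:

```python
def route_capability(tool_name: str) -> dict:
    """Decide whether a tool name resolves to a native handler or an MCP server.

    Assumes MCP tools are namespaced as mcp__{server}__{tool}.
    """
    if tool_name.startswith("mcp__"):
        # split("__", 2) keeps any remaining "__" inside the tool's own name
        _, server, tool = tool_name.split("__", 2)
        return {"source": "mcp", "server": server, "tool": tool}
    return {"source": "native", "server": None, "tool": tool_name}
```

A router like this is what lets native and external capabilities share one dispatch path and one permission gate.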
+ +## A Useful Quick Map + +| Record | Main Job | Usually Lives In | +|---|---|---| +| `Message` | conversation history | `messages[]` | +| `QueryState` | turn-by-turn control | query engine | +| `ToolUseContext` | tool execution environment | tool control plane | +| `PermissionDecision` | execution gate outcome | permission layer | +| `TaskRecord` | durable work goal | task board | +| `RuntimeTaskState` | live execution slot | runtime manager | +| `TeamMember` | persistent teammate | team config | +| `RequestRecord` | protocol state | request tracker | +| `WorktreeRecord` | isolated execution lane | worktree index | +| `MCPServerConfig` | external capability config | settings / plugin config | + +## Key Takeaway + +**High-completion systems become much easier to understand when every important record has one clear job and one clear layer.** diff --git a/docs/en/entity-map.md b/docs/en/entity-map.md new file mode 100644 index 000000000..7409b8f7a --- /dev/null +++ b/docs/en/entity-map.md @@ -0,0 +1,119 @@ +# Entity Map + +> **Reference** -- Use this when concepts start to blur together. It tells you which layer each thing belongs to. + +As you move into the second half of the repo, you will notice that the main source of confusion is often not code. It is the fact that many entities look similar while living on different layers. This map helps you keep them straight. 
+ +## How This Map Differs From Other Docs + +- this map answers: **which layer does this thing belong to?** +- [`glossary.md`](./glossary.md) answers: **what does the word mean?** +- [`data-structures.md`](./data-structures.md) answers: **what does the state shape look like?** + +## A Fast Layered Picture + +```text +conversation layer + - message + - prompt block + - reminder + +action layer + - tool call + - tool result + - hook event + +work layer + - work-graph task + - runtime task + - protocol request + +execution layer + - subagent + - teammate + - worktree lane + +platform layer + - MCP server + - memory record + - capability router +``` + +## The Most Commonly Confused Pairs + +### `Message` vs `PromptBlock` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| `Message` | conversational content in history | not a stable system rule | +| `PromptBlock` | stable prompt instruction fragment | not one turn's latest event | + +### `Todo / Plan` vs `Task` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| `todo / plan` | temporary session guidance | not a durable work graph | +| `task` | durable work node | not one turn's local thought | + +### `Work-Graph Task` vs `RuntimeTaskState` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| work-graph task | durable goal and dependency node | not the live executor | +| runtime task | currently running execution slot | not the durable dependency node | + +### `Subagent` vs `Teammate` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| subagent | one-shot delegated worker | not a long-lived team member | +| teammate | persistent collaborator with identity and inbox | not a disposable summary tool | + +### `ProtocolRequest` vs normal message + +| Entity | What It Is | What It Is Not | +|---|---|---| +| normal message | free-form communication | not a traceable approval workflow | +| protocol request | structured request with `request_id` | not casual chat text | + +### `Task` vs 
`Worktree` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| task | what should be done | not a directory | +| worktree | where isolated execution happens | not the goal itself | + +### `Memory` vs `CLAUDE.md` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| memory | durable cross-session facts | not the project rule file | +| `CLAUDE.md` | stable local rule / instruction surface | not user-specific long-term fact storage | + +### `MCPServer` vs `MCPTool` + +| Entity | What It Is | What It Is Not | +|---|---|---| +| MCP server | external capability provider | not one specific tool | +| MCP tool | one exposed capability | not the whole connection surface | + +## Quick "What / Where" Table + +| Entity | Main Job | Typical Place | +|---|---|---| +| `Message` | visible conversation context | `messages[]` | +| `PromptParts` | input assembly fragments | prompt builder | +| `PermissionRule` | execution decision rules | settings / session state | +| `HookEvent` | lifecycle extension point | hook system | +| `MemoryEntry` | durable fact | memory store | +| `TaskRecord` | work goal node | task board | +| `RuntimeTaskState` | live execution slot | runtime manager | +| `TeamMember` | persistent worker identity | team config | +| `MessageEnvelope` | structured teammate message | inbox | +| `RequestRecord` | protocol workflow state | request tracker | +| `WorktreeRecord` | isolated execution lane | worktree index | +| `MCPServerConfig` | external capability provider config | plugin / settings | + +## Key Takeaway + +**The more capable the system becomes, the more important clear entity boundaries become.** diff --git a/docs/en/glossary.md b/docs/en/glossary.md new file mode 100644 index 000000000..8abfc93f1 --- /dev/null +++ b/docs/en/glossary.md @@ -0,0 +1,141 @@ +# Glossary + +> **Reference** -- Bookmark this page. Come back whenever you hit an unfamiliar term. 
+ +This glossary collects the terms that matter most to the teaching mainline -- the ones that most often trip up beginners. If you find yourself staring at a word mid-chapter and thinking "wait, what does that mean again?", this is the page to return to. + +## Recommended Companion Docs + +- [`entity-map.md`](./entity-map.md) for layer boundaries +- [`data-structures.md`](./data-structures.md) for record shapes +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) if you keep mixing up different kinds of "task" + +## Agent + +A model that can reason over input and call tools to complete work. (Think of it as the "brain" that decides what to do next.) + +## Harness + +The working environment prepared around the model -- everything the model needs but cannot provide for itself: + +- tools +- filesystem +- permissions +- prompt assembly +- memory +- task runtime + +## Agent Loop + +The repeating core cycle that drives every agent session. Each iteration looks like this: + +1. send current input to the model +2. inspect whether it answered or asked for tools +3. execute tools if needed +4. write results back +5. continue or stop + +## Message / `messages[]` + +The visible conversation and tool-result history used as working context. (This is the rolling transcript the model sees on every turn.) + +## Tool + +An action the model may request, such as reading a file, writing a file, editing content, or running a shell command. + +## Tool Schema + +The description shown to the model: + +- name +- purpose +- input parameters +- input types + +## Dispatch Map + +A routing table from tool names to handlers. (Like a phone switchboard: the name comes in, and the map connects it to the right function.) + +## Stop Reason + +Why the current model turn ended. Common values: + +- `end_turn` +- `tool_use` +- `max_tokens` + +## Context + +The total information currently visible to the model. (Everything inside the model's "window" on a given turn.) 
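The Dispatch Map entry above can be reduced to a few lines of Python. The tool names here are made up for illustration:

```python
def echo(text: str) -> str:
    return text

def add(a: int, b: int) -> str:
    return str(a + b)

# Dispatch map: route a tool name requested by the model to its handler.
DISPATCH = {"echo": echo, "add": add}

def handle(name: str, args: dict) -> str:
    handler = DISPATCH.get(name)
    if handler is None:
        return f"Unknown tool: {name}"
    return handler(**args)
```

The unknown-tool branch matters: the model can request names that do not exist, and the loop must return an error string rather than crash.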
+ +## Compaction + +The process of shrinking active context while preserving the important storyline and next-step information. (Like summarizing meeting notes so you keep the action items but drop the small talk.) + +## Subagent + +A one-shot delegated worker that runs in a separate context and usually returns a summary. (A temporary helper spun up for one job, then discarded.) + +## Permission + +The decision layer that determines whether a requested action may execute. + +## Hook + +An extension point that lets the system observe or add side effects around the loop without rewriting the loop itself. (Like event listeners -- the loop fires a signal, and hooks respond.) + +## Memory + +Cross-session information worth keeping because it remains valuable later and is not cheap to re-derive. + +## System Prompt + +The stable system-level instruction surface that defines identity, rules, and long-lived constraints. + +## Query + +The full multi-turn process used to complete one user request. (One query may span many loop turns before the answer is ready.) + +## Transition Reason + +The reason the system continues into another turn. + +## Task + +A durable work goal node in the work graph. (Unlike a todo item that disappears when the session ends, a task persists.) + +## Runtime Task / Runtime Slot + +A live execution slot representing something currently running. (The task says "what should happen"; the runtime slot says "it is happening right now.") + +## Teammate + +A persistent collaborator inside a multi-agent system. (Unlike a subagent that is fire-and-forget, a teammate sticks around.) + +## Protocol Request + +A structured request with explicit identity, status, and tracking, usually backed by a `request_id`. (A formal envelope rather than a casual message.) + +## Worktree + +An isolated execution directory lane used so parallel work does not collide. (Each lane gets its own copy of the workspace, like separate desks for separate tasks.) 
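A minimal sketch of such a protocol envelope, with illustrative field names:

```python
import uuid

def make_request(sender: str, kind: str, payload: dict) -> dict:
    """Build a traceable protocol request.

    Unlike a casual message, every request carries a request_id and a
    status field so responses can be matched back to it later.
    """
    return {
        "request_id": uuid.uuid4().hex,
        "from": sender,
        "type": kind,
        "status": "pending",
        "payload": payload,
    }
```

A responder flips `status` to `approved` / `rejected` and echoes the same `request_id`, which is what makes the workflow auditable.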
+ +## MCP + +Model Context Protocol. In this repo it represents an external capability integration surface, not only a tool list. (The bridge that lets your agent talk to outside services.) + +## DAG + +Directed Acyclic Graph. A set of nodes connected by one-way edges with no cycles. (If you draw arrows between tasks showing "A must finish before B", and no arrow path ever loops back to where it started, you have a DAG.) Used in this repo for task dependency graphs. + +## FSM / State Machine + +Finite State Machine. A system that is always in exactly one state from a known set, and transitions between states based on defined events. (Think of a traffic light cycling through red, green, and yellow.) The agent loop's turn logic is modeled as a state machine. + +## Control Plane + +The layer that decides what should happen next, as opposed to the layer that actually does the work. (Air traffic control versus the airplane.) In this repo, the query engine and tool dispatch act as control planes. + +## Tokens + +The atomic units a language model reads and writes. One token is roughly 3/4 of an English word. Context limits and compaction thresholds are measured in tokens. diff --git a/docs/en/s00-architecture-overview.md b/docs/en/s00-architecture-overview.md new file mode 100644 index 000000000..ceb94acc1 --- /dev/null +++ b/docs/en/s00-architecture-overview.md @@ -0,0 +1,179 @@ +# s00: Architecture Overview + +Welcome to the map. Before diving into building piece by piece, it helps to see the whole picture from above. This document shows you what the full system contains, why the chapters are ordered this way, and what you will actually learn. + +## The Big Picture + +The mainline of this repo is reasonable because it grows the system in four dependency-driven stages: + +1. build a real single-agent loop +2. harden that loop with safety, memory, and recovery +3. turn temporary session work into durable runtime work +4. 
grow the single executor into a multi-agent platform with isolated lanes and external capability routing + +This order follows **mechanism dependencies**, not file order and not product glamour. + +If the learner does not already understand: + +`user input -> model -> tools -> write-back -> next turn` + +then permissions, hooks, memory, tasks, teams, worktrees, and MCP all become disconnected vocabulary. + +## What This Repo Is Trying To Reconstruct + +This repository is not trying to mirror a production codebase line by line. + +It is trying to reconstruct the parts that determine whether an agent system actually works: + +- what the main modules are +- how those modules cooperate +- what each module is responsible for +- where the important state lives +- how one request flows through the system + +That means the goal is: + +**high fidelity to the design backbone, not 1:1 fidelity to every outer implementation detail.** + +## Three Tips Before You Start + +### Tip 1: Learn the smallest correct version first + +For example, a subagent does not need every advanced capability on day one. + +The smallest correct version already teaches the core lesson: + +- the parent defines the subtask +- the child gets a separate `messages[]` +- the child returns a summary + +Only after that is stable should you add: + +- inherited context +- separate permissions +- background runtime +- worktree isolation + +### Tip 2: New terms should be explained before they are used + +This repo uses terms such as: + +- state machine +- dispatch map +- dependency graph +- worktree +- protocol envelope +- MCP + +If a term is unfamiliar, pause and check the reference docs rather than pushing forward blindly. 
+ +Recommended companions: + +- [`glossary.md`](./glossary.md) +- [`entity-map.md`](./entity-map.md) +- [`data-structures.md`](./data-structures.md) +- [`teaching-scope.md`](./teaching-scope.md) + +### Tip 3: Do not let peripheral complexity pretend to be core mechanism + +Good teaching does not try to include everything. + +It explains the important parts completely and keeps low-value complexity out of your way: + +- packaging and release flow +- enterprise integration glue +- telemetry +- product-specific compatibility branches +- file-name / line-number reverse-engineering trivia + +## Bridge Docs That Matter + +Treat these as cross-chapter maps: + +| Doc | What It Clarifies | +|---|---| +| [`s00d-chapter-order-rationale.md`](./s00d-chapter-order-rationale.md) (Deep Dive) | why the curriculum order is what it is | +| [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) (Deep Dive) | how the reference repo's real module clusters map onto the current curriculum | +| [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) (Deep Dive) | why a high-completion agent needs more than `messages[] + while True` | +| [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) (Deep Dive) | how one request moves through the full system | +| [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) (Deep Dive) | why tools become a control plane, not just a function table | +| [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) (Deep Dive) | why system prompt is only one input surface | +| [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) (Deep Dive) | why durable tasks and live runtime slots must split | +| [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) (Deep Dive) | why MCP is more than a remote tool list | + +## The Four Learning Stages + +### Stage 1: Core Single-Agent (`s01-s06`) + +Goal: build a single agent that can actually do work. 
+ +| Chapter | New Layer | +|---|---| +| `s01` | loop and write-back | +| `s02` | tools and dispatch | +| `s03` | session planning | +| `s04` | delegated subtask isolation | +| `s05` | skill discovery and loading | +| `s06` | context compaction | + +### Stage 2: Hardening (`s07-s11`) + +Goal: make the loop safer, more stable, and easier to extend. + +| Chapter | New Layer | +|---|---| +| `s07` | permission gate | +| `s08` | hooks and side effects | +| `s09` | durable memory | +| `s10` | prompt assembly | +| `s11` | recovery and continuation | + +### Stage 3: Runtime Work (`s12-s14`) + +Goal: upgrade session work into durable, background, and scheduled runtime work. + +| Chapter | New Layer | +|---|---| +| `s12` | persistent task graph | +| `s13` | runtime execution slots | +| `s14` | time-based triggers | + +### Stage 4: Platform (`s15-s19`) + +Goal: grow from one executor into a larger platform. + +| Chapter | New Layer | +|---|---| +| `s15` | persistent teammates | +| `s16` | structured team protocols | +| `s17` | autonomous claiming and resuming | +| `s18` | isolated execution lanes | +| `s19` | external capability routing | + +## Quick Reference: What Each Chapter Adds + +| Chapter | Core Structure | What You Should Be Able To Build | +|---|---|---| +| `s01` | `LoopState`, `tool_result` write-back | a minimal working agent loop | +| `s02` | `ToolSpec`, dispatch map | stable tool routing | +| `s03` | `TodoItem`, `PlanState` | visible session planning | +| `s04` | isolated child context | delegated subtasks without polluting the parent | +| `s05` | `SkillRegistry` | cheap discovery and deep on-demand loading | +| `s06` | compaction records | long sessions that stay usable | +| `s07` | permission decisions | execution behind a gate | +| `s08` | lifecycle events | extension without rewriting the loop | +| `s09` | memory records | selective long-term memory | +| `s10` | prompt parts | staged input assembly | +| `s11` | continuation reasons | recovery branches that 
stay legible | +| `s12` | `TaskRecord` | durable work graphs | +| `s13` | `RuntimeTaskState` | background execution with later write-back | +| `s14` | `ScheduleRecord` | time-triggered work | +| `s15` | `TeamMember`, inboxes | persistent teammates | +| `s16` | protocol envelopes | structured request / response coordination | +| `s17` | claim policy | self-claim and self-resume | +| `s18` | `WorktreeRecord` | isolated execution lanes | +| `s19` | capability routing | unified native + plugin + MCP routing | + +## Key Takeaway + +**A good chapter order is not a list of features. It is a path where each mechanism grows naturally out of the last one.** diff --git a/docs/en/s00a-query-control-plane.md b/docs/en/s00a-query-control-plane.md new file mode 100644 index 000000000..29366128c --- /dev/null +++ b/docs/en/s00a-query-control-plane.md @@ -0,0 +1,207 @@ +# s00a: Query Control Plane + +> **Deep Dive** -- Best read after completing Stage 1 (s01-s06). It explains why the simple loop needs a coordination layer as the system grows. + +### When to Read This + +After you've built the basic loop and tools, and before you start Stage 2's hardening chapters. + +--- + +> This bridge document answers one foundational question: +> +> **Why is `messages[] + while True` not enough for a high-completion agent?** + +## Why This Document Exists + +`s01` correctly teaches the smallest working loop: + +```text +user input + -> +model response + -> +if tool_use then execute + -> +append result + -> +continue +``` + +That is the right starting point. + +But once the system grows, the harness needs a separate layer that manages the **query process itself**. 
A "control plane" (the part of a system that coordinates behavior rather than performing the work directly) sits above the data path and decides when, why, and how the loop should keep running: + +- current turn +- continuation reason +- recovery state +- compaction state +- budget changes +- hook-driven continuation + +That layer is the **query control plane**. + +## Terms First + +### What is a query? + +Here, a query is not a database lookup. + +It means: + +> the full multi-turn process the system runs in order to finish one user request + +### What is a control plane? + +A control plane does not perform the business action itself. + +It coordinates: + +- when execution continues +- why it continues +- what state is patched before the next turn + +If you have worked with networking or infrastructure, the term is familiar -- the control plane decides where traffic goes, while the data plane carries the actual packets. The same idea applies here: the control plane decides whether the loop should keep running and why, while the execution layer does the actual model calls and tool work. + +### What is a transition? + +A transition explains: + +> why the previous turn did not end and why the next turn exists + +Common reasons: + +- tool result write-back +- truncated output recovery +- retry after compaction +- retry after transport failure + +## The Smallest Useful Mental Model + +Think of the query path in three layers: + +```text +1. Input layer + - messages + - system prompt + - user/system context + +2. Control layer + - query state + - turn count + - transition reason + - recovery / compaction / budget flags + +3. Execution layer + - model call + - tool execution + - write-back +``` + +The control plane does not replace the loop. + +It makes the loop capable of handling more than one happy-path branch. + +## Why `messages[]` Alone Stops Being Enough + +At demo scale, many learners put everything into `messages[]`. 
+ +That breaks down once the system needs to know: + +- whether reactive compaction already ran +- how many continuation attempts happened +- whether this turn is a retry or a normal write-back +- whether a temporary output budget is active + +Those are not conversation contents. + +They are **process-control state**. + +## Core Structures + +### `QueryParams` + +External input passed into the query engine: + +```python +params = { + "messages": [...], + "system_prompt": "...", + "tool_use_context": {...}, + "max_output_tokens_override": None, + "max_turns": None, +} +``` + +### `QueryState` + +Mutable state that changes across turns: + +```python +state = { + "messages": [...], + "tool_use_context": {...}, + "turn_count": 1, + "continuation_count": 0, + "has_attempted_compact": False, + "max_output_tokens_override": None, + "transition": None, +} +``` + +### `TransitionReason` + +An explicit reason for continuing: + +```python +TRANSITIONS = ( + "tool_result_continuation", + "max_tokens_recovery", + "compact_retry", + "transport_retry", +) +``` + +This is not ceremony. It makes logs, testing, debugging, and teaching much clearer. + +## Minimal Implementation Pattern + +### 1. Split entry params from live state + +```python +def query(params): + state = { + "messages": params["messages"], + "tool_use_context": params["tool_use_context"], + "turn_count": 1, + "transition": None, + } +``` + +### 2. Let every continue-site patch state explicitly + +```python +state["transition"] = "tool_result_continuation" +state["turn_count"] += 1 +``` + +### 3. Make the next turn enter with a reason + +The next loop iteration should know whether it exists because of: + +- normal write-back +- retry +- compaction +- continuation after truncated output + +## What This Changes For You + +Once you see the query control plane clearly, later chapters stop feeling like random features. 
+ +- `s06` compaction becomes a state patch, not a magic jump +- `s11` recovery becomes structured continuation, not just `try/except` +- `s17` autonomy becomes another controlled continuation path, not a separate mystery loop + +## Key Takeaway + +**A query is not just messages flowing through a loop. It is a controlled process with explicit continuation state.** diff --git a/docs/en/s00b-one-request-lifecycle.md b/docs/en/s00b-one-request-lifecycle.md new file mode 100644 index 000000000..77bb89f56 --- /dev/null +++ b/docs/en/s00b-one-request-lifecycle.md @@ -0,0 +1,226 @@ +# s00b: One Request Lifecycle + +> **Deep Dive** -- Best read after Stage 2 (s07-s11) when you want to see how all the pieces connect end-to-end. + +### When to Read This + +When you've learned several subsystems and want to see the full vertical flow of a single request. + +--- + +> This bridge document connects the whole system into one continuous execution chain. +> +> It answers: +> +> **What really happens after one user message enters the system?** + +## Why This Document Exists + +When you read chapter by chapter, you can understand each mechanism in isolation: + +- `s01` loop +- `s02` tools +- `s07` permissions +- `s09` memory +- `s12-s19` tasks, teams, worktrees, MCP + +But implementation gets difficult when you cannot answer: + +- what comes first? +- when do memory and prompt assembly happen? +- where do permissions sit relative to tools? +- when do tasks, runtime slots, teammates, worktrees, and MCP enter? + +This document gives you the vertical flow. 
+ +## The Most Important Full Picture + +```text +user request + | + v +initialize query state + | + v +assemble system prompt / messages / reminders + | + v +call model + | + +-- normal answer --------------------------> finish request + | + +-- tool_use + | + v + tool router + | + +-- permission gate + +-- hooks + +-- native tool / MCP / agent / task / team + | + v + execution result + | + +-- may update task / runtime / memory / worktree state + | + v + write tool_result back to messages + | + v + patch query state + | + v + continue next turn +``` + +## Segment 1: A User Request Becomes Query State + +The system does not treat one user request as one API call. + +It first creates a query state for a process that may span many turns: + +```python +query_state = { + "messages": [{"role": "user", "content": user_text}], + "turn_count": 1, + "transition": None, + "tool_use_context": {...}, +} +``` + +The key mental shift: + +**a request is a multi-turn runtime process, not a single model response.** + +Related reading: + +- [`s01-the-agent-loop.md`](./s01-the-agent-loop.md) +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) + +## Segment 2: The Real Model Input Is Assembled + +The harness usually does not send raw `messages` directly. + +It assembles: + +- system prompt blocks +- normalized messages +- memory attachments +- reminders +- tool definitions + +So the actual payload is closer to: + +```text +system prompt ++ normalized messages ++ tools ++ optional reminders and attachments +``` + +Related chapters: + +- `s09` +- `s10` +- [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) + +## Segment 3: The Model Produces Either an Answer or an Action Intent + +There are two important output classes. + +### Normal answer + +The request may end here. 
+ +### Action intent + +This usually means a tool call, for example: + +- `read_file(...)` +- `bash(...)` +- `task_create(...)` +- `mcp__server__tool(...)` + +The system is no longer receiving only text. + +It is receiving an instruction that should affect the real world. + +## Segment 4: The Tool Control Plane Takes Over + +Once `tool_use` appears, the system enters the tool control plane (the layer that decides how a tool call gets routed, checked, and executed). + +It answers: + +1. which tool is this? +2. where should it route? +3. should it pass a permission gate? +4. do hooks observe or modify the action? +5. what shared runtime context can it access? + +Minimal picture: + +```text +tool_use + | + v +tool router + | + +-- native handler + +-- MCP client + +-- agent / team / task runtime +``` + +Related reading: + +- [`s02-tool-use.md`](./s02-tool-use.md) +- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) + +## Segment 5: Execution May Update More Than Messages + +A tool result does not only return text. + +Execution may also update: + +- task board state +- runtime task state +- memory records +- request records +- worktree records + +That is why middle and late chapters are not optional side features. They become part of the request lifecycle. + +## Segment 6: Results Rejoin the Main Loop + +The crucial step is always the same: + +```text +real execution result + -> +tool_result or structured write-back + -> +messages / query state updated + -> +next turn +``` + +If the result never re-enters the loop, the model cannot reason over reality. + +## A Useful Compression + +When you get lost, compress the whole lifecycle into three layers: + +### Query loop + +Owns the multi-turn request process. + +### Tool control plane + +Owns routing, permissions, hooks, and execution context. + +### Platform state + +Owns durable records such as tasks, runtime slots, teammates, worktrees, and external capability configuration. 
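The three-layer compression above can be reduced to a runnable skeleton. This is a sketch under stated assumptions -- `query_loop`, `permission_gate`, `route_tool`, and the stub response shape are all illustrative names invented for this example, not the reference implementation's API:

```python
# Minimal sketch of the three layers: query loop, tool control plane,
# and platform state. All names here are illustrative, not real APIs.

PLATFORM_STATE = {"tasks": [], "memory": []}  # platform state: outlives any turn


def permission_gate(tool_name):
    # tool control plane: decide whether the requested action may run at all
    return tool_name in {"read_file", "task_create"}


def route_tool(tool_name, args):
    # tool control plane: dispatch to a handler; handlers may touch platform state
    if tool_name == "task_create":
        PLATFORM_STATE["tasks"].append(args["title"])
        return f"created task: {args['title']}"
    return f"read: {args['path']}"


def query_loop(model, user_text):
    # query loop: owns the multi-turn request process and its state
    state = {"messages": [{"role": "user", "content": user_text}], "turn_count": 1}
    while True:
        response = model(state["messages"])
        if response["type"] == "answer":
            return response["text"]  # normal answer: the request ends here
        if permission_gate(response["tool"]):
            result = route_tool(response["tool"], response["args"])
        else:
            result = "denied by permission gate"
        # the crucial step: results always rejoin the loop as visible context
        state["messages"].append({"role": "tool_result", "content": result})
        state["turn_count"] += 1
```

The point of the sketch is the shape, not the handlers: the query loop owns turn state, the control plane owns the gate and routing, and platform state persists across turns.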
+ +## Key Takeaway + +**A user request enters as query state, moves through assembled input, becomes action intent, crosses the tool control plane, touches platform state, and then returns to the loop as new visible context.** diff --git a/docs/en/s00c-query-transition-model.md b/docs/en/s00c-query-transition-model.md new file mode 100644 index 000000000..c4316638f --- /dev/null +++ b/docs/en/s00c-query-transition-model.md @@ -0,0 +1,268 @@ +# s00c: Query Transition Model + +> **Deep Dive** -- Best read alongside s11 (Error Recovery). It deepens the transition model introduced in s00a. + +### When to Read This + +When you're working on error recovery and want to understand why each continuation needs an explicit reason. + +--- + +> This bridge note answers one narrow but important question: +> +> **Why does a high-completion agent need to know _why_ a query continues into the next turn, instead of treating every `continue` as the same thing?** + +## Why This Note Exists + +The mainline already teaches: + +- `s01`: the smallest loop +- `s06`: compaction and context control +- `s11`: error recovery + +That sequence is correct. + +The problem is what you often carry in your head after reading those chapters separately: + +> "The loop continues because it continues." + +That is enough for a toy demo, but it breaks down quickly in a larger system. 
+ +A query can continue for very different reasons: + +- a tool just finished and the model needs the result +- the output hit a token limit and the model should continue +- compaction changed the active context and the system should retry +- the transport layer failed and backoff says "try again" +- a stop hook said the turn should not fully end yet +- a budget policy still allows the system to keep going + +If all of those collapse into one vague `continue`, three things get worse fast: + +- logs stop being readable +- tests stop being precise +- the teaching mental model becomes blurry + +## Terms First + +### What is a transition + +Here, a transition means: + +> the reason the previous turn became the next turn + +It is not the message content itself. It is the control-flow cause. + +### What is a continuation + +A continuation means: + +> this query is still alive and should keep advancing + +But continuation is not one thing. It is a family of reasons. + +### What is a query boundary + +A query boundary is the edge between one turn and the next. + +Whenever the system crosses that boundary, it should know: + +- why it is crossing +- what state was changed before the crossing +- how the next turn should interpret that change + +## The Minimum Mental Model + +Do not picture a query as a single straight line. + +A better mental model is: + +```text +one query + = a chain of state transitions + with explicit continuation reasons +``` + +For example: + +```text +user input + -> +model emits tool_use + -> +tool finishes + -> +tool_result_continuation + -> +model output is truncated + -> +max_tokens_recovery + -> +compaction happens + -> +compact_retry + -> +final completion +``` + +That is why the real lesson is not: + +> "the loop keeps spinning" + +The real lesson is: + +> "the system is advancing through typed transition reasons" + +## Core Records + +### 1. 
`transition` inside query state + +Even a teaching implementation should carry an explicit transition field: + +```python +state = { + "messages": [...], + "turn_count": 3, + "continuation_count": 1, + "has_attempted_compact": False, + "transition": None, +} +``` + +This field is not decoration. + +It tells you: + +- why this turn exists +- how the log should explain it +- what path a test should assert + +### 2. `TransitionReason` + +A minimal teaching set can look like this: + +```python +TRANSITIONS = ( + "tool_result_continuation", + "max_tokens_recovery", + "compact_retry", + "transport_retry", + "stop_hook_continuation", + "budget_continuation", +) +``` + +These reasons are not equivalent: + +- `tool_result_continuation` + is normal loop progress +- `max_tokens_recovery` + is continuation after truncated output +- `compact_retry` + is continuation after context reshaping +- `transport_retry` + is continuation after infrastructure failure +- `stop_hook_continuation` + is continuation forced by external control logic +- `budget_continuation` + is continuation allowed by policy and remaining budget + +### 3. Continuation budget + +High-completion systems do not just continue. They limit continuation. + +Typical fields look like: + +```python +state = { + "max_output_tokens_recovery_count": 2, + "has_attempted_reactive_compact": True, +} +``` + +The principle is: + +> continuation is a controlled resource, not an infinite escape hatch + +## Minimum Implementation Steps + +### Step 1: make every continue site explicit + +Many beginner loops still look like this: + +```python +continue +``` + +Move one step forward: + +```python +state["transition"] = "tool_result_continuation" +continue +``` + +### Step 2: pair each continuation with its state patch + +```python +if response.stop_reason == "tool_use": + state["messages"] = append_tool_results(...) 
+ state["turn_count"] += 1 + state["transition"] = "tool_result_continuation" + continue + +if response.stop_reason == "max_tokens": + state["messages"].append({ + "role": "user", + "content": CONTINUE_MESSAGE, + }) + state["max_output_tokens_recovery_count"] += 1 + state["transition"] = "max_tokens_recovery" + continue +``` + +The important part is not "one more line of code." + +The important part is: + +> before every continuation, the system knows both the reason and the state mutation + +### Step 3: separate normal progress from recovery + +```python +if should_retry_transport(error): + time.sleep(backoff(...)) + state["transition"] = "transport_retry" + continue + +if should_recompact(error): + state["messages"] = compact_messages(state["messages"]) + state["transition"] = "compact_retry" + continue +``` + +Once you do this, "continue" stops being a vague action and becomes a typed control transition. + +## What to Test + +Your teaching repo should make these assertions straightforward: + +- a tool result writes `tool_result_continuation` +- a truncated model output writes `max_tokens_recovery` +- compaction retry does not silently reuse the old reason +- transport retry increments retry state and does not look like a normal turn + +If those paths are not easy to test, the model is probably still too implicit. + +## What Not to Over-Teach + +You do not need to bury yourself in vendor-specific transport details or every corner-case enum. + +For a teaching repo, the core lesson is narrower: + +> one query is a sequence of explicit transitions, and each transition should carry a reason, a state patch, and a budget rule + +That is the part you actually need if you want to rebuild a high-completion agent from zero. + +## Key Takeaway + +**Every continuation needs a typed reason. 
Without one, logs blur, tests weaken, and the mental model collapses into "the loop keeps spinning."** diff --git a/docs/en/s00d-chapter-order-rationale.md b/docs/en/s00d-chapter-order-rationale.md new file mode 100644 index 000000000..2c351a4c4 --- /dev/null +++ b/docs/en/s00d-chapter-order-rationale.md @@ -0,0 +1,292 @@ +# s00d: Chapter Order Rationale + +> **Deep Dive** -- Read this after completing Stage 1 (s01-s06) or whenever you wonder "why is the course ordered this way?" + +This note is not about one mechanism. It answers a more basic teaching question: why does this curriculum teach the system in the current order instead of following source-file order, feature hype, or raw implementation complexity? + +## Conclusion First + +The current `s01 -> s19` order is structurally sound. + +Its strength is not just breadth. Its strength is that it grows the system in the same order you should understand it: + +1. Build the smallest working agent loop. +2. Add the control-plane and hardening layers around that loop. +3. Upgrade session planning into durable work and runtime state. +4. Only then expand into persistent teams, isolated execution lanes, and external capability buses. + +That is the right teaching order because it follows: + +**dependency order between mechanisms** + +not file order or product packaging order. + +## The Four Dependency Lines + +This curriculum is really organized by four dependency lines: + +1. `core loop dependency` +2. `control-plane dependency` +3. `work-state dependency` +4. 
`platform-boundary dependency` + +In plain English: + +```text +first make the agent run + -> then make it run safely + -> then make it run durably + -> then make it run as a platform +``` + +## The Real Shape of the Sequence + +```text +s01-s06 + build one working single-agent system + +s07-s11 + harden and control that system + +s12-s14 + turn temporary planning into durable work + runtime + +s15-s19 + expand into teammates, protocols, autonomy, isolated lanes, and external capability +``` + +After each stage, you should be able to say: + +- after `s06`: "I can build one real single-agent harness" +- after `s11`: "I can make that harness safer, steadier, and easier to extend" +- after `s14`: "I can manage durable work, background execution, and time-triggered starts" +- after `s19`: "I understand the platform boundary of a high-completion agent system" + +## Why The Early Chapters Must Stay In Their Current Order + +### `s01` must stay first + +Because it establishes: + +- the minimal entry point +- the turn-by-turn loop +- why tool results must flow back into the next model call + +Without this, everything later becomes disconnected feature talk. + +### `s02` must immediately follow `s01` + +Because an agent that cannot route intent into tools is still only talking, not acting. + +`s02` is where learners first see the harness become real: + +- model emits `tool_use` +- the system dispatches to a handler +- the tool executes +- `tool_result` flows back into the loop + +### `s03` should stay before `s04` + +This is an important guardrail. + +You should first understand: + +- how the current agent organizes its own work + +before learning: + +- when to delegate work into a separate sub-context + +If `s04` comes too early, subagents become an escape hatch instead of a clear isolation mechanism. 
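That isolation mechanism is small enough to sketch directly. Assuming hypothetical helper names (`run_subagent`, `parent_turn`) and a plain callable standing in for the model, the smallest correct subagent looks like this:

```python
# Minimal sketch of subagent isolation: fresh context in, summary out.
# Names are illustrative; a real harness would run a full child loop here.

def run_subagent(model, subtask):
    # the child gets a completely fresh messages[] -- no parent history leaks in
    child_messages = [{"role": "user", "content": subtask}]
    result = model(child_messages)
    # only a bounded summary crosses back to the parent
    return {"role": "user", "content": f"[subagent summary] {result}"}


def parent_turn(model, parent_messages, subtask):
    # the parent decides what to delegate, then appends only the summary
    summary = run_subagent(model, subtask)
    parent_messages.append(summary)
    return parent_messages
```

The design choice worth noticing: the parent never shares its transcript, and the child never returns one -- only the bounded summary crosses the boundary, which is exactly what keeps delegation from becoming an escape hatch.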
+ +### `s05` should stay before `s06` + +These two chapters solve two halves of the same problem: + +- `s05`: prevent unnecessary knowledge from entering the context +- `s06`: manage the context that still must remain active + +That order matters. A good system first avoids bloat, then compacts what is still necessary. + +## Why `s07-s11` Form One Hardening Block + +These chapters all answer the same larger question: + +**the loop already works, so how does it become stable, safe, and legible as a real system?** + +### `s07` should stay before `s08` + +Permission comes first because the system must first answer: + +- may this action happen at all +- should it be denied +- should it ask the user first + +Only after that should you teach hooks, which answer: + +- what extra behavior attaches around the loop + +So the correct teaching order is: + +**gate first, extend second** + +### `s09` should stay before `s10` + +This is another very important ordering decision. + +`s09` teaches: + +- what durable information exists +- which facts deserve long-term storage + +`s10` teaches: + +- how multiple information sources are assembled into model input + +That means: + +- memory defines one content source +- prompt assembly explains how all content sources are combined + +If you reverse them, prompt construction starts to feel arbitrary and mysterious. + +### `s11` is the right closing chapter for this block + +Error recovery is not an isolated feature. + +It is where the system finally needs to explain: + +- why it is continuing +- why it is retrying +- why it is stopping + +That only becomes legible after the input path, tool path, state path, and control path already exist. + +## Why `s12-s14` Must Stay Goal -> Runtime -> Schedule + +This is the easiest part of the curriculum to teach badly if the order is wrong. 
+ +### `s12` must stay before `s13` + +`s12` teaches: + +- what work exists +- dependency relations between work nodes +- when downstream work unlocks + +`s13` teaches: + +- what live execution is currently running +- where background results go +- how runtime state writes back + +That is the crucial distinction: + +- `task` is the durable work goal +- `runtime task` is the live execution slot + +If `s13` comes first, you will almost certainly collapse those two into one concept. + +### `s14` must stay after `s13` + +Cron does not add another kind of task. + +It adds a new start condition: + +**time becomes one more way to launch work into the runtime** + +So the right order is: + +`durable task graph -> runtime slot -> schedule trigger` + +## Why `s15-s19` Should Stay Team -> Protocol -> Autonomy -> Worktree -> Capability Bus + +### `s15` defines who persists in the system + +Before protocols or autonomy make sense, the system needs durable actors: + +- who teammates are +- what identity they carry +- how they persist across work + +### `s16` then defines how those actors coordinate + +Protocols should not come before actors. + +Protocols exist to structure: + +- who requests +- who approves +- who responds +- how requests remain traceable + +### `s17` only makes sense after both + +Autonomy is easy to teach vaguely. + +But in a real system it only becomes clear after: + +- persistent teammates exist +- structured coordination already exists + +Otherwise "autonomous claiming" sounds like magic instead of the bounded mechanism it really is. + +### `s18` should stay before `s19` + +Worktree isolation is a local execution-boundary problem: + +- where parallel work actually runs +- how one work lane stays isolated from another + +That should become clear before moving outward into: + +- plugins +- MCP servers +- external capability routing + +Otherwise you risk over-focusing on external capability and under-learning the local platform boundary. 
+ +### `s19` is correctly last + +It is the outer platform boundary. + +It only becomes clean once you already understand: + +- local actors +- local work lanes +- local durable work +- local runtime execution +- then external capability providers + +## Five Reorders That Would Make The Course Worse + +1. Moving `s04` before `s03` + This teaches delegation before local planning. + +2. Moving `s10` before `s09` + This teaches prompt assembly before the learner understands one of its core inputs. + +3. Moving `s13` before `s12` + This collapses durable goals and live runtime slots into one confused idea. + +4. Moving `s17` before `s15` or `s16` + This turns autonomy into vague polling magic. + +5. Moving `s19` before `s18` + This makes the external platform look more important than the local execution boundary. + +## A Good Maintainer Check Before Reordering + +Before moving chapters around, ask: + +1. Does the learner already understand the prerequisite concept? +2. Will this reorder blur two concepts that should stay separate? +3. Is this chapter mainly about goals, runtime state, actors, or capability boundaries? +4. If I move it earlier, will the reader still be able to build the minimal correct version? +5. Am I optimizing for understanding, or merely copying source-file order? + +If the honest answer to the last question is "source-file order", the reorder is probably a mistake. + +## Key Takeaway + +**A good chapter order is not just a list of mechanisms. It is a sequence where each chapter feels like the next natural layer grown from the previous one.** diff --git a/docs/en/s00e-reference-module-map.md b/docs/en/s00e-reference-module-map.md new file mode 100644 index 000000000..0b548f50b --- /dev/null +++ b/docs/en/s00e-reference-module-map.md @@ -0,0 +1,214 @@ +# s00e: Reference Module Map + +> **Deep Dive** -- Read this when you want to verify how the teaching chapters map to the real production codebase. 
+ +This is a calibration note for maintainers and serious learners. It does not turn the reverse-engineered source into required reading. Instead, it answers one narrow but important question: if you compare the high-signal module clusters in the reference repo with this teaching repo, is the current chapter order actually rational? + +## Verdict First + +Yes. + +The current `s01 -> s19` order is broadly correct, and it is closer to the real design backbone than any naive "follow the source tree" order would be. + +The reason is simple: + +- the reference repo contains many surface-level directories +- but the real design weight is concentrated in a smaller set of control, state, task, team, worktree, and capability modules +- those modules line up with the current four-stage teaching path + +So the right move is **not** to flatten the teaching repo into source-tree order. + +The right move is: + +- keep the current dependency-driven order +- make the mapping to the reference repo explicit +- keep removing low-value product detail from the mainline + +## How This Comparison Was Done + +The comparison was based on the reference repo's higher-signal clusters, especially modules around: + +- `Tool.ts` +- `state/AppStateStore.ts` +- `coordinator/coordinatorMode.ts` +- `memdir/*` +- `services/SessionMemory/*` +- `services/toolUseSummary/*` +- `constants/prompts.ts` +- `tasks/*` +- `tools/TodoWriteTool/*` +- `tools/AgentTool/*` +- `tools/ScheduleCronTool/*` +- `tools/EnterWorktreeTool/*` +- `tools/ExitWorktreeTool/*` +- `tools/MCPTool/*` +- `services/mcp/*` +- `plugins/*` +- `hooks/toolPermission/*` + +This is enough to judge the backbone without dragging you through every product-facing command, compatibility branch, or UI detail. 
+ +## The Real Mapping + +| Reference repo cluster | Typical examples | Teaching chapter(s) | Why this placement is right | +|---|---|---|---| +| Query loop + control state | `Tool.ts`, `AppStateStore.ts`, query/coordinator state | `s00`, `s00a`, `s00b`, `s01`, `s11` | The real system is not just `messages[] + while True`. The teaching repo is right to start with the tiny loop first, then add the control plane later. | +| Tool routing and execution plane | `Tool.ts`, native tools, tool context, execution helpers | `s02`, `s02a`, `s02b` | The source clearly treats tools as a shared execution surface, not a toy dispatch table. The teaching split is correct. | +| Session planning | `TodoWriteTool` | `s03` | Session planning is a small but central layer. It belongs early, before durable tasks. | +| One-shot delegation | `AgentTool` in its simplest form | `s04` | The reference repo's agent spawning machinery is large, but the teaching repo is right to teach the smallest clean subagent first: fresh context, bounded task, summary return. | +| Skill discovery and loading | `DiscoverSkillsTool`, `skills/*`, prompt sections | `s05` | Skills are not random extras. They are a selective knowledge-loading layer, so they belong before prompt and context pressure become severe. | +| Context pressure and collapse | `services/toolUseSummary/*`, `services/contextCollapse/*`, compact logic | `s06` | The reference repo clearly has explicit compaction machinery. Teaching this before later platform features is correct. | +| Permission gate | `types/permissions.ts`, `hooks/toolPermission/*`, approval handlers | `s07` | Execution safety is a distinct gate, not "just another hook". Keeping it before hooks is the right teaching choice. | +| Hooks and side effects | `types/hooks.ts`, hook runners, lifecycle integrations | `s08` | The source separates extension points from the primary gate. Teaching them after permissions preserves that boundary. 
| +| Durable memory selection | `memdir/*`, `services/SessionMemory/*`, extract/select memory helpers | `s09` | The source makes memory a selective cross-session layer, not a generic notebook. Teaching this before prompt assembly is correct. | +| Prompt assembly | `constants/prompts.ts`, prompt sections, memory prompt loading | `s10`, `s10a` | The source builds inputs from many sections. The teaching repo is right to present prompt assembly as a pipeline instead of one giant string. | +| Recovery and continuation | query transition reasons, retry branches, compaction retry, token recovery | `s11`, `s00c` | The reference repo has explicit continuation logic. This belongs after loop, tools, compaction, permissions, memory, and prompt assembly already exist. | +| Durable work graph | task records, task board concepts, dependency unlocks | `s12` | The teaching repo correctly separates durable work goals from temporary session planning. | +| Live runtime tasks | `tasks/types.ts`, `LocalShellTask`, `LocalAgentTask`, `RemoteAgentTask`, `MonitorMcpTask` | `s13`, `s13a` | The source has a clear runtime-task union. This strongly validates the teaching split between `TaskRecord` and `RuntimeTaskState`. | +| Scheduled triggers | `ScheduleCronTool/*`, `useScheduledTasks` | `s14` | Scheduling appears after runtime work exists, which is exactly the correct dependency order. | +| Persistent teammates | `InProcessTeammateTask`, team tools, agent registries | `s15` | The source clearly grows from one-shot subagents into durable actors. Teaching teammates later is correct. | +| Structured team coordination | message envelopes, send-message flows, request tracking, coordinator mode | `s16` | Protocols make sense only after durable actors exist. The current order matches the real dependency. | +| Autonomous claiming and resuming | coordinator mode, task claiming, async worker lifecycle, resume logic | `s17` | Autonomy in the source is not magic. 
It is layered on top of actors, tasks, and coordination rules. The current placement is correct. | +| Worktree execution lanes | `EnterWorktreeTool`, `ExitWorktreeTool`, agent worktree helpers | `s18` | The reference repo treats worktree as an execution-lane boundary with closeout logic. Teaching it after tasks and teammates prevents concept collapse. | +| External capability bus | `MCPTool`, `services/mcp/*`, `plugins/*`, MCP resources/prompts/tools | `s19`, `s19a` | The source clearly places MCP and plugins at the outer platform boundary. Keeping this last is the right teaching choice. | + +## The Most Important Validation Points + +The reference repo strongly confirms five teaching choices. + +### 1. `s03` should stay before `s12` + +The source contains both: + +- small session planning +- larger durable task/runtime machinery + +Those are not the same thing. + +The teaching repo is correct to teach: + +`session planning first -> durable tasks later` + +### 2. `s09` should stay before `s10` + +The source builds the model input from multiple sources, including memory. + +That means: + +- memory is one input source +- prompt assembly is the pipeline that combines sources + +So memory should be explained before prompt assembly. + +### 3. `s12` must stay before `s13` + +The runtime-task union in the reference repo is one of the strongest pieces of evidence in the whole comparison. + +It shows that: + +- durable work definitions +- live running executions + +must stay conceptually separate. + +If `s13` came first, you would almost certainly merge those two layers. + +### 4. `s15 -> s16 -> s17` is the right order + +The source has: + +- durable actors +- structured coordination +- autonomous resume / claiming behavior + +Autonomy depends on the first two. So the current order is correct. + +### 5. `s18` should stay before `s19` + +The reference repo treats worktree isolation as a local execution-boundary mechanism. 
+ +That should be understood before you are asked to reason about: + +- external capability providers +- MCP servers +- plugin-installed surfaces + +Otherwise external capability looks more central than it really is. + +## What This Teaching Repo Should Still Avoid Copying + +The reference repo contains many things that are real, but should still not dominate the teaching mainline: + +- CLI command surface area +- UI rendering details +- telemetry and analytics branches +- product integration glue +- remote and enterprise wiring +- platform-specific compatibility code +- line-by-line naming trivia + +These are valid implementation details. + +They are not the right center of a 0-to-1 teaching path. + +## Where The Teaching Repo Must Be Extra Careful + +The mapping also reveals several places where things can easily drift into confusion. + +### 1. Do not merge subagents and teammates into one vague concept + +The reference repo's `AgentTool` spans: + +- one-shot delegation +- async/background workers +- teammate-like persistent workers +- worktree-isolated workers + +That is exactly why the teaching repo should split the story across: + +- `s04` +- `s15` +- `s17` +- `s18` + +### 2. Do not teach worktree as "just a git trick" + +The source shows closeout, resume, cleanup, and isolation state around worktrees. + +So `s18` should keep teaching: + +- lane identity +- task binding +- keep/remove closeout +- resume and cleanup concerns + +not just `git worktree add`. + +### 3. Do not reduce MCP to "remote tools" + +The source includes: + +- tools +- resources +- prompts +- elicitation / connection state +- plugin mediation + +So `s19` should keep a tools-first teaching path, but still explain the wider capability-bus boundary. + +## Final Judgment + +Compared against the high-signal module clusters in the reference repo, the current chapter order is sound. + +The biggest remaining quality gains do **not** come from another major reorder. 
+ +They come from: + +- cleaner bridge docs +- stronger entity-boundary explanations +- tighter multilingual consistency +- web pages that expose the same learning map clearly + +## Key Takeaway + +**The best teaching order is not the order files appear in a repo. It is the order in which dependencies become understandable to a learner who wants to rebuild the system.** diff --git a/docs/en/s00f-code-reading-order.md b/docs/en/s00f-code-reading-order.md new file mode 100644 index 000000000..dc8587ce5 --- /dev/null +++ b/docs/en/s00f-code-reading-order.md @@ -0,0 +1,142 @@ +# s00f: Code Reading Order + +> **Deep Dive** -- Read this when you're about to open the Python agent files and want a strategy for reading them. + +This page is not about reading more code. It answers a narrower question: once the chapter order is stable, what is the cleanest order for reading this repository's code without scrambling your mental model again? + +## Conclusion First + +Do not read the code like this: + +- do not start with the longest file +- do not jump straight into the most "advanced" chapter +- do not open `web/` first and then guess the mainline +- do not treat all `agents/*.py` files like one flat source pool + +The stable rule is simple: + +**read the code in the same order as the curriculum.** + +Inside each chapter file, keep the same reading order: + +1. state structures +2. tool definitions or registries +3. the function that advances one turn +4. the CLI entry last + +## Why This Page Exists + +You will probably not get lost in the prose first. You will get lost when you finally open the code and immediately start scanning the wrong things. 
+ +Typical mistakes: + +- staring at the bottom half of a long file first +- reading a pile of `run_*` helpers before knowing where they connect +- jumping into late platform chapters and treating early chapters as "too simple" +- collapsing `task`, `runtime task`, `teammate`, and `worktree` back into one vague idea + +## Use The Same Reading Template For Every Agent File + +For any `agents/sXX_*.py`, read in this order: + +### 1. File header + +Answer two questions before anything else: + +- what is this chapter teaching +- what is it intentionally not teaching yet + +### 2. State structures or manager classes + +Look for things like: + +- `LoopState` +- `PlanningState` +- `CompactState` +- `TaskManager` +- `BackgroundManager` +- `TeammateManager` +- `WorktreeManager` + +### 3. Tool list or registry + +Look for: + +- `TOOLS` +- `TOOL_HANDLERS` +- `build_tool_pool()` +- the important `run_*` entrypoints + +### 4. The turn-advancing function + +Usually this is one of: + +- `run_one_turn(...)` +- `agent_loop(...)` +- a chapter-specific `handle_*` + +### 5. CLI entry last + +`if __name__ == "__main__"` matters, but it should not be the first thing you study. + +## Stage 1: `s01-s06` + +This stage is the single-agent backbone taking shape. 
+ +| Chapter | File | Read First | Then Read | Confirm Before Moving On | +|---|---|---|---|---| +| `s01` | `agents/s01_agent_loop.py` | `LoopState` | `TOOLS` -> `execute_tool_calls()` -> `run_one_turn()` -> `agent_loop()` | You can trace `messages -> model -> tool_result -> next turn` | +| `s02` | `agents/s02_tool_use.py` | `safe_path()` | tool handlers -> `TOOL_HANDLERS` -> `agent_loop()` | You understand how tools grow without rewriting the loop | +| `s03` | `agents/s03_todo_write.py` | planning state types | todo handler path -> reminder injection -> `agent_loop()` | You understand visible session planning state | +| `s04` | `agents/s04_subagent.py` | `AgentTemplate` | `run_subagent()` -> parent `agent_loop()` | You understand that subagents are mainly context isolation | +| `s05` | `agents/s05_skill_loading.py` | skill registry types | registry methods -> `agent_loop()` | You understand discover light, load deep | +| `s06` | `agents/s06_context_compact.py` | `CompactState` | persist / micro compact / history compact -> `agent_loop()` | You understand that compaction relocates detail instead of deleting continuity | + +## Stage 2: `s07-s11` + +This stage hardens the control plane around a working single agent. 
+ +| Chapter | File | Read First | Then Read | Confirm Before Moving On | +|---|---|---|---|---| +| `s07` | `agents/s07_permission_system.py` | validator / manager | permission path -> `run_bash()` -> `agent_loop()` | You understand gate before execute | +| `s08` | `agents/s08_hook_system.py` | `HookManager` | hook registration and dispatch -> `agent_loop()` | You understand fixed extension points | +| `s09` | `agents/s09_memory_system.py` | memory managers | save path -> prompt build -> `agent_loop()` | You understand memory as a long-term information layer | +| `s10` | `agents/s10_system_prompt.py` | `SystemPromptBuilder` | reminder builder -> `agent_loop()` | You understand input assembly as a pipeline | +| `s11` | `agents/s11_error_recovery.py` | compact / backoff helpers | recovery branches -> `agent_loop()` | You understand continuation after failure | + +## Stage 3: `s12-s14` + +This stage turns the harness into a work runtime. + +| Chapter | File | Read First | Then Read | Confirm Before Moving On | +|---|---|---|---|---| +| `s12` | `agents/s12_task_system.py` | `TaskManager` | task create / dependency / unlock -> `agent_loop()` | You understand durable work goals | +| `s13` | `agents/s13_background_tasks.py` | `NotificationQueue` / `BackgroundManager` | background registration -> notification drain -> `agent_loop()` | You understand runtime slots | +| `s14` | `agents/s14_cron_scheduler.py` | `CronLock` / `CronScheduler` | cron match -> trigger -> `agent_loop()` | You understand future start conditions | + +## Stage 4: `s15-s19` + +This stage is about platform boundaries. 
+ +| Chapter | File | Read First | Then Read | Confirm Before Moving On | +|---|---|---|---|---| +| `s15` | `agents/s15_agent_teams.py` | `MessageBus` / `TeammateManager` | roster / inbox / loop -> `agent_loop()` | You understand persistent teammates | +| `s16` | `agents/s16_team_protocols.py` | `RequestStore` / `TeammateManager` | request handlers -> `agent_loop()` | You understand request-response plus `request_id` | +| `s17` | `agents/s17_autonomous_agents.py` | claim and identity helpers | claim path -> resume path -> `agent_loop()` | You understand idle check -> safe claim -> resume work | +| `s18` | `agents/s18_worktree_task_isolation.py` | `TaskManager` / `WorktreeManager` / `EventBus` | worktree lifecycle -> `agent_loop()` | You understand goals versus execution lanes | +| `s19` | `agents/s19_mcp_plugin.py` | capability gate / MCP client / plugin loader / router | tool pool build -> route -> normalize -> `agent_loop()` | You understand how external capability enters the same control plane | + +## Best Doc + Code Loop + +For each chapter: + +1. read the chapter prose +2. read the bridge note for that chapter +3. open the matching `agents/sXX_*.py` +4. follow the order: state -> tools -> turn driver -> CLI entry +5. run the demo once +6. rewrite the smallest version from scratch + +## Key Takeaway + +**Code reading order must obey teaching order: read boundaries first, then state, then the path that advances the loop.** diff --git a/docs/en/s01-the-agent-loop.md b/docs/en/s01-the-agent-loop.md index 405646869..67b3700dc 100644 --- a/docs/en/s01-the-agent-loop.md +++ b/docs/en/s01-the-agent-loop.md @@ -1,16 +1,24 @@ # s01: The Agent Loop -`[ s01 ] s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`[ s01 ] > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"One loop & Bash is all you need"* -- one tool + one loop = an agent. 
-> -> **Harness layer**: The loop -- the model's first connection to the real world. +## What You'll Learn -## Problem +- How the core agent loop works: send messages, run tools, feed results back +- Why the "write-back" step is the single most important idea in agent design +- How to build a working agent in under 30 lines of Python -A language model can reason about code, but it can't *touch* the real world -- can't read files, run tests, or check errors. Without a loop, every tool call requires you to manually copy-paste results back. You become the loop. +Imagine you have a brilliant assistant who can reason about code, plan solutions, and write great answers -- but cannot touch anything. Every time it suggests running a command, you have to copy it, run it yourself, paste the output back, and wait for the next suggestion. You are the loop. This chapter removes you from that loop. -## Solution +## The Problem + +Without a loop, every tool call requires a human in the middle. The model says "run this test." You run it. You paste the output. The model says "now fix line 12." You fix it. You tell the model what happened. This manual back-and-forth might work for a single question, but it falls apart completely when a task requires 10, 20, or 50 tool calls in a row. + +The solution is simple: let the code do the looping. + +## The Solution + +Here's the entire system in one picture: ``` +--------+ +-------+ +---------+ @@ -20,20 +28,20 @@ A language model can reason about code, but it can't *touch* the real world -- c ^ | | tool_result | +----------------+ - (loop until stop_reason != "tool_use") + (loop until the model stops calling tools) ``` -One exit condition controls the entire flow. The loop runs until the model stops calling tools. +The model talks, the harness (the code wrapping the model) executes tools, and the results go right back into the conversation. The loop keeps spinning until the model decides it's done. ## How It Works -1. 
User prompt becomes the first message. +**Step 1.** The user's prompt becomes the first message. ```python messages.append({"role": "user", "content": query}) ``` -2. Send messages + tool definitions to the LLM. +**Step 2.** Send the conversation to the model, along with tool definitions. ```python response = client.messages.create( @@ -42,15 +50,17 @@ response = client.messages.create( ) ``` -3. Append the assistant response. Check `stop_reason` -- if the model didn't call a tool, we're done. +**Step 3.** Add the model's response to the conversation. Then check: did it call a tool, or is it done? ```python messages.append({"role": "assistant", "content": response.content}) + +# If the model didn't call a tool, the task is finished if response.stop_reason != "tool_use": return ``` -4. Execute each tool call, collect results, append as a user message. Loop back to step 2. +**Step 4.** Execute each tool call, collect the results, and put them back into the conversation as a new message. Then loop back to Step 2. ```python results = [] @@ -59,13 +69,14 @@ for block in response.content: output = run_bash(block.input["command"]) results.append({ "type": "tool_result", - "tool_use_id": block.id, + "tool_use_id": block.id, # links result to the tool call "content": output, }) +# This is the "write-back" -- the model can now see the real-world result messages.append({"role": "user", "content": results}) ``` -Assembled into one function: +Put it all together, and the entire agent fits in one function: ```python def agent_loop(query): @@ -78,7 +89,7 @@ def agent_loop(query): messages.append({"role": "assistant", "content": response.content}) if response.stop_reason != "tool_use": - return + return # model is done results = [] for block in response.content: @@ -92,7 +103,9 @@ def agent_loop(query): messages.append({"role": "user", "content": results}) ``` -That's the entire agent in under 30 lines. Everything else in this course layers on top -- without changing the loop. 
+That's the entire agent in under 30 lines. Everything else in this course layers on top of this loop -- without changing its core shape. + +> **A note about real systems:** Production agents typically use streaming responses, where the model's output arrives token by token instead of all at once. That changes the user experience (you see text appearing in real time), but the fundamental loop -- send, execute, write back -- stays exactly the same. We skip streaming here to keep the core idea crystal clear. ## What Changed @@ -114,3 +127,19 @@ python agents/s01_agent_loop.py 2. `List all Python files in this directory` 3. `What is the current git branch?` 4. `Create a directory called test_output and write 3 files in it` + +## What You've Mastered + +At this point, you can: + +- Build a working agent loop from scratch +- Explain why tool results must flow back into the conversation (the "write-back") +- Redraw the loop from memory: messages -> model -> tool execution -> write-back -> next turn + +## What's Next + +Right now, the agent can only run bash commands. That means every file read uses `cat`, every edit uses `sed`, and there's no safety boundary at all. In the next chapter, you'll add dedicated tools with a clean routing system -- and the loop itself won't need to change at all. + +## Key Takeaway + +> An agent is just a loop: send messages to the model, execute the tools it asks for, feed the results back, and repeat until it's done. diff --git a/docs/en/s02-tool-use.md b/docs/en/s02-tool-use.md index 279774b82..2e4b76ec1 100644 --- a/docs/en/s02-tool-use.md +++ b/docs/en/s02-tool-use.md @@ -1,18 +1,22 @@ # s02: Tool Use -`s01 > [ s02 ] s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > [ s02 ] > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"Adding a tool means adding one handler"* -- the loop stays the same; new tools register into the dispatch map. 
-> -> **Harness layer**: Tool dispatch -- expanding what the model can reach. +## What You'll Learn -## Problem +- How to build a dispatch map (a routing table that maps tool names to handler functions) +- How path sandboxing prevents the model from escaping its workspace +- How to add new tools without touching the agent loop -With only `bash`, the agent shells out for everything. `cat` truncates unpredictably, `sed` fails on special characters, and every bash call is an unconstrained security surface. Dedicated tools like `read_file` and `write_file` let you enforce path sandboxing at the tool level. +If you ran the s01 agent for more than a few minutes, you probably noticed the cracks. `cat` silently truncates long files. `sed` chokes on special characters. Every bash command is an open door -- nothing stops the model from running `rm -rf /` or reading your SSH keys. You need dedicated tools with guardrails, and you need a clean way to add them. -The key insight: adding tools does not require changing the loop. +## The Problem -## Solution +With only `bash`, the agent shells out for everything. There is no way to limit what it reads, where it writes, or how much output it returns. A single bad command can corrupt files, leak secrets, or blow past your token budget with a massive stdout dump. What you really want is a small set of purpose-built tools -- `read_file`, `write_file`, `edit_file` -- each with its own safety checks. The question is: how do you wire them in without rewriting the loop every time? + +## The Solution + +The answer is a dispatch map -- one dictionary that routes tool names to handler functions. Adding a tool means adding one entry. The loop itself never changes. ``` +--------+ +-------+ +------------------+ @@ -31,7 +35,7 @@ One lookup replaces any if/elif chain. ## How It Works -1. Each tool gets a handler function. Path sandboxing prevents workspace escape. +**Step 1.** Each tool gets a handler function. 
Path sandboxing prevents the model from escaping the workspace -- every requested path is resolved and checked against the working directory before any I/O happens. ```python def safe_path(p: str) -> Path: @@ -45,10 +49,10 @@ def run_read(path: str, limit: int = None) -> str: lines = text.splitlines() if limit and limit < len(lines): lines = lines[:limit] - return "\n".join(lines)[:50000] + return "\n".join(lines)[:50000] # hard cap to avoid blowing up the context ``` -2. The dispatch map links tool names to handlers. +**Step 2.** The dispatch map links tool names to handlers. This is the entire routing layer -- no if/elif chain, no class hierarchy, just a dictionary. ```python TOOL_HANDLERS = { @@ -60,7 +64,7 @@ TOOL_HANDLERS = { } ``` -3. In the loop, look up the handler by name. The loop body itself is unchanged from s01. +**Step 3.** In the loop, look up the handler by name. The loop body itself is unchanged from s01 -- only the dispatch line is new. ```python for block in response.content: @@ -97,3 +101,21 @@ python agents/s02_tool_use.py 2. `Create a file called greet.py with a greet(name) function` 3. `Edit greet.py to add a docstring to the function` 4. `Read greet.py to verify the edit worked` + +## What You've Mastered + +At this point, you can: + +- Wire any new tool into the agent by adding one handler and one schema entry -- without touching the loop. +- Enforce path sandboxing so the model cannot read or write outside its workspace. +- Explain why a dispatch map scales better than an if/elif chain. + +Keep the boundary clean: a tool schema is enough for now. You do not need policy layers, approval UIs, or plugin ecosystems yet. If you can add one new tool without rewriting the loop, you have the core pattern down. + +## What's Next + +Your agent can now read, write, and edit files safely. But what happens when you ask it to do a 10-step refactoring? It finishes steps 1 through 3 and then starts improvising because it forgot the rest. 
In s03, you will give the agent a session plan -- a structured todo list that keeps it on track through complex, multi-step tasks. + +## Key Takeaway + +> The loop should not care how a tool works internally. It only needs a reliable route from tool name to handler. diff --git a/docs/en/s02a-tool-control-plane.md b/docs/en/s02a-tool-control-plane.md new file mode 100644 index 000000000..e5108226b --- /dev/null +++ b/docs/en/s02a-tool-control-plane.md @@ -0,0 +1,214 @@ +# s02a: Tool Control Plane + +> **Deep Dive** -- Best read after s02 and before s07. It shows why tools become more than a simple lookup table. + +### When to Read This + +After you understand basic tool dispatch and before you add permissions. + +--- + +> This bridge document answers another key question: +> +> **Why is a tool system more than a `tool_name -> handler` table?** + +## Why This Document Exists + +`s02` correctly teaches tool registration and dispatch first. + +That is the right teaching move because you should first understand how the model turns intent into action. + +But later the tool layer starts carrying much more responsibility: + +- permission checks +- MCP routing +- notifications +- shared runtime state +- message access +- app state +- capability-specific restrictions + +At that point, the tool layer is no longer just a function table. + +It becomes a control plane (the coordination layer that decides *how* each tool call gets routed and executed, rather than performing the tool work itself). 
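To make that shift concrete, here is a minimal sketch of a dispatch function that already carries control-plane concerns. Everything in it is illustrative, not a fixed API: the `denied_tools` field, the `mcp__` prefix convention for external tools, and the `call_mcp` stub are teaching assumptions.

```python
def call_mcp(tool_name: str, tool_input: dict, ctx: dict) -> dict:
    # stand-in for a real MCP round-trip (illustrative only)
    return {"ok": True, "content": f"mcp handled {tool_name}", "is_error": False}

def dispatch(tool_name: str, tool_input: dict, ctx: dict) -> dict:
    # control-plane concerns run before any handler does real work
    if tool_name in ctx["denied_tools"]:
        return {"ok": False, "content": f"{tool_name} not allowed", "is_error": True}
    # route by capability source: external MCP tools versus native handlers
    if tool_name.startswith("mcp__"):
        return call_mcp(tool_name, tool_input, ctx)
    output = ctx["handlers"][tool_name](tool_input, ctx)
    return {"ok": True, "content": output, "is_error": False}

ctx = {
    "handlers": {"echo": lambda tool_input, ctx: tool_input["text"]},
    "denied_tools": {"bash"},
}
```

The point is not this exact shape. The point is that permission checks and capability routing live in the dispatch path, not inside individual handlers.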
+ +## Terms First + +### Tool control plane + +The part of the system that decides **how** a tool call executes: + +- where it runs +- whether it is allowed +- what state it can access +- whether it is native or external + +### Execution context + +The runtime environment visible to the tool: + +- current working directory +- current permission mode +- current messages +- available MCP clients +- app state and notification channels + +### Capability source + +Not every tool comes from the same place. Common sources: + +- native local tools +- MCP tools +- agent/team/task/worktree platform tools + +## The Smallest Useful Mental Model + +Think of the tool system as four layers: + +```text +1. ToolSpec + what the model sees + +2. Tool Router + where the request gets sent + +3. ToolUseContext + what environment the tool can access + +4. Tool Result Envelope + how the output returns to the main loop +``` + +The biggest step up is layer 3: + +**high-completion systems are defined less by the dispatch table and more by the shared execution context.** + +## Core Structures + +### `ToolSpec` + +```python +tool = { + "name": "read_file", + "description": "Read file contents.", + "input_schema": {...}, +} +``` + +### `ToolDispatchMap` + +```python +handlers = { + "read_file": read_file, + "write_file": write_file, + "bash": run_bash, +} +``` + +Necessary, but not sufficient. + +### `ToolUseContext` + +```python +tool_use_context = { + "tools": handlers, + "permission_context": {...}, + "mcp_clients": {}, + "messages": [...], + "app_state": {...}, + "notifications": [], + "cwd": "...", +} +``` + +The key point: + +Tools stop receiving only input parameters. +They start receiving a shared runtime environment. 
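As a concrete illustration of that upgrade, here is a hypothetical context-aware handler. The field names follow the `tool_use_context` sketch above, and the command execution is stubbed out; only the pattern of reading and writing shared state is the point.

```python
def run_bash(tool_input: dict, ctx: dict) -> str:
    # the handler consults shared runtime state, not just its own arguments
    if ctx["permission_context"].get("mode") == "read_only":
        return "bash denied: session is read-only"
    # write back into shared context so the rest of the system can observe the call
    ctx["notifications"].append({"tool": "bash", "command": tool_input["command"]})
    # stand-in for real subprocess execution
    return f"ran: {tool_input['command']}"

ctx = {"permission_context": {"mode": "read_only"}, "notifications": []}
```

The same handler behaves differently depending on what the surrounding system put into the context. That is the shift from a pure function table to an execution plane.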
+ +### `ToolResultEnvelope` + +```python +result = { + "ok": True, + "content": "...", + "is_error": False, + "attachments": [], +} +``` + +This makes it easier to support: + +- plain text output +- structured output +- error output +- attachment-like results + +## Why `ToolUseContext` Eventually Becomes Necessary + +Compare two systems. + +### System A: dispatch map only + +```python +output = handlers[tool_name](**tool_input) +``` + +Fine for a demo. + +### System B: dispatch map plus execution context + +```python +output = handlers[tool_name](tool_input, tool_use_context) +``` + +Closer to a real platform. + +Why? + +Because now: + +- `bash` needs permissions +- `mcp__...` needs a client +- `agent` tools need execution environment setup +- `task_output` may need file writes plus notification write-back + +## Minimal Implementation Path + +### 1. Keep `ToolSpec` and handlers + +Do not throw away the simple model. + +### 2. Introduce one shared context object + +```python +class ToolUseContext: + def __init__(self): + self.handlers = {} + self.permission_context = {} + self.mcp_clients = {} + self.messages = [] + self.app_state = {} + self.notifications = [] +``` + +### 3. Let all handlers receive the context + +```python +def run_tool(tool_name: str, tool_input: dict, ctx: ToolUseContext): + handler = ctx.handlers[tool_name] + return handler(tool_input, ctx) +``` + +### 4. Route by capability source + +```python +def route_tool(tool_name: str, tool_input: dict, ctx: ToolUseContext): + if tool_name.startswith("mcp__"): + return run_mcp_tool(tool_name, tool_input, ctx) + return run_native_tool(tool_name, tool_input, ctx) +``` + +## Key Takeaway + +**A mature tool system is not just a name-to-function map. 
It is a shared execution plane that decides how model action intent becomes real work.** diff --git a/docs/en/s02b-tool-execution-runtime.md b/docs/en/s02b-tool-execution-runtime.md new file mode 100644 index 000000000..aa43438d9 --- /dev/null +++ b/docs/en/s02b-tool-execution-runtime.md @@ -0,0 +1,287 @@ +# s02b: Tool Execution Runtime + +> **Deep Dive** -- Best read after s02, when you want to understand concurrent tool execution. + +### When to Read This + +When you start wondering how multiple tool calls in one turn get executed safely. + +--- + +> This bridge note is not about how tools are registered. +> +> It is about a deeper question: +> +> **When the model emits multiple tool calls, what rules decide concurrency, progress updates, result ordering, and context merging?** + +## Why This Note Exists + +`s02` correctly teaches: + +- tool schema +- dispatch map +- `tool_result` flowing back into the loop + +That is the right starting point. + +But once the system grows, the hard questions move one layer deeper: + +- which tools can run in parallel +- which tools should stay serial +- whether long-running tools should emit progress first +- whether concurrent results should write back in completion order or original order +- whether tool execution mutates shared context +- how concurrent mutations should merge safely + +Those questions are not about registration anymore. + +They belong to the **tool execution runtime** -- the set of rules the system follows once tool calls actually start executing, including scheduling, tracking, yielding progress, and merging results. + +## Terms First + +### What "tool execution runtime" means here + +This is not the programming language runtime. + +Here it means: + +> the rules the system uses once tool calls actually start executing + +Those rules include scheduling, tracking, yielding progress, and merging results. 
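
To make "scheduling" concrete early, here is a hedged sketch of its first rule: grouping consecutive tool calls by whether they may run together. The safe-tool set is an illustrative assumption; real systems classify per tool and per input.

```python
def is_concurrency_safe(name: str) -> bool:
    # Assumed classification: read-only tools may run side by side.
    return name in {"read_file", "search_files"}

def partition_tool_calls(tool_uses: list) -> list:
    """Group consecutive tool_use blocks by safety class so each
    batch can later run fully concurrent or fully serial."""
    batches = []
    for block in tool_uses:
        safe = is_concurrency_safe(block["name"])
        if batches and batches[-1]["is_concurrency_safe"] == safe:
            batches[-1]["blocks"].append(block)
        else:
            batches.append({"is_concurrency_safe": safe, "blocks": [block]})
    return batches
```

Ordering is preserved: batches still execute in the order the model emitted the calls, which matters later for stable result write-back.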
+ +### What "concurrency safe" means + +A tool is concurrency safe when: + +> it can run alongside similar work without corrupting shared state + +Typical read-only tools are often safe: + +- `read_file` +- some search tools +- query-only MCP tools + +Many write tools are not: + +- `write_file` +- `edit_file` +- tools that modify shared application state + +### What a progress message is + +A progress message means: + +> the tool is not done yet, but the system already surfaces what it is doing + +This keeps the user informed during long-running operations rather than leaving them staring at silence. + +### What a context modifier is + +Some tools do more than return text. + +They also modify shared runtime context, for example: + +- update a notification queue +- record active tools +- mutate app state + +That shared-state mutation is called a context modifier. + +## The Minimum Mental Model + +Do not flatten tool execution into: + +```text +tool_use -> handler -> result +``` + +A better mental model is: + +```text +tool_use blocks + -> +partition by concurrency safety + -> +choose concurrent or serial execution + -> +emit progress if needed + -> +write results back in stable order + -> +merge queued context modifiers +``` + +Two upgrades matter most: + +- concurrency is not "all tools run together" +- shared context should not be mutated in random completion order + +## Core Records + +### 1. `ToolExecutionBatch` + +A minimal teaching batch can look like: + +```python +batch = { + "is_concurrency_safe": True, + "blocks": [tool_use_1, tool_use_2, tool_use_3], +} +``` + +The point is simple: + +- tools are not always handled one by one +- the runtime groups them into execution batches first + +### 2. 
`TrackedTool` + +If you want a higher-completion execution layer, track each tool explicitly: + +```python +tracked_tool = { + "id": "toolu_01", + "name": "read_file", + "status": "queued", # queued / executing / completed / yielded + "is_concurrency_safe": True, + "pending_progress": [], + "results": [], + "context_modifiers": [], +} +``` + +This makes the runtime able to answer: + +- what is still waiting +- what is already running +- what has completed +- what has already yielded progress + +### 3. `MessageUpdate` + +Tool execution may produce more than one final result. + +A minimal update can be treated as: + +```python +update = { + "message": maybe_message, + "new_context": current_context, +} +``` + +In a larger runtime, updates usually split into two channels: + +- messages that should surface upstream immediately +- context changes that should stay internal until merge time + +### 4. Queued context modifiers + +This is easy to skip, but it is one of the most important ideas. + +In a concurrent batch, the safer strategy is not: + +> "whichever tool finishes first mutates shared context first" + +The safer strategy is: + +> queue context modifiers first, then merge them later in the original tool order + +For example: + +```python +queued_context_modifiers = { + "toolu_01": [modify_ctx_a], + "toolu_02": [modify_ctx_b], +} +``` + +## Minimum Implementation Steps + +### Step 1: classify concurrency safety + +```python +def is_concurrency_safe(tool_name: str, tool_input: dict) -> bool: + return tool_name in {"read_file", "search_files"} +``` + +### Step 2: partition before execution + +```python +batches = partition_tool_calls(tool_uses) + +for batch in batches: + if batch["is_concurrency_safe"]: + run_concurrently(batch["blocks"]) + else: + run_serially(batch["blocks"]) +``` + +### Step 3: let concurrent batches emit progress + +```python +for update in run_concurrently(...): + if update.get("message"): + yield update["message"] +``` + +### Step 4: merge 
context in stable order
+
+```python
+queued_modifiers = {}
+
+for update in concurrent_updates:
+    if update.get("context_modifier"):
+        queued_modifiers.setdefault(update["tool_id"], []).append(update["context_modifier"])
+
+for tool in original_batch_order:
+    for modifier in queued_modifiers.get(tool["id"], []):
+        context = modifier(context)
+```
+
+This is one of the places where a teaching repo can still stay simple while remaining honest about the real system shape.
+
+## The Picture You Should Hold
+
+```text
+tool_use blocks
+      |
+      v
+partition by concurrency safety
+      |
+      +-- safe batch ----------> concurrent execution
+      |                              |
+      |                              +-- progress updates
+      |                              +-- final results
+      |                              +-- queued context modifiers
+      |
+      +-- exclusive batch -----> serial execution
+                                     |
+                                     +-- direct result
+                                     +-- direct context update
+```
+
+## Why This Matters More Than the Dispatch Map
+
+In a tiny demo:
+
+```python
+handlers[tool_name](tool_input)
+```
+
+is enough.
+
+But in a higher-completion agent, the hard part is no longer calling the right handler.
+
+The hard part is:
+
+- scheduling multiple tools safely
+- keeping progress visible
+- making result ordering stable
+- preventing shared context from becoming nondeterministic
+
+That is why tool execution runtime deserves its own deep dive.
+
+## Key Takeaway
+
+**Once the model emits multiple tool calls per turn, the hard problem shifts from dispatch to safe concurrent execution with stable result ordering.**
diff --git a/docs/en/s03-todo-write.md
index e44611475..5b6beba07 100644
--- a/docs/en/s03-todo-write.md
+++ b/docs/en/s03-todo-write.md
@@ -1,16 +1,22 @@
 # s03: TodoWrite
 
-`s01 > s02 > [ s03 ] s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12`
+`s01 > s02 > [ s03 ] > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19`
 
-> *"An agent without a plan drifts"* -- list the steps first, then execute.
-> -> **Harness layer**: Planning -- keeping the model on course without scripting the route. +## What You'll Learn -## Problem +- How session planning keeps the model on track during multi-step tasks +- How a structured todo list with status tracking replaces fragile free-form plans +- How gentle reminders (nag injection) pull the model back when it drifts -On multi-step tasks, the model loses track. It repeats work, skips steps, or wanders off. Long conversations make this worse -- the system prompt fades as tool results fill the context. A 10-step refactoring might complete steps 1-3, then the model starts improvising because it forgot steps 4-10. +Have you ever asked an AI to do a complex task and watched it lose track halfway through? You say "refactor this module: add type hints, docstrings, tests, and a main guard" and it nails the first two steps, then wanders off into something you never asked for. This is not a model intelligence problem -- it is a working memory problem. As tool results pile up in the conversation, the original plan fades. By step 4, the model has effectively forgotten steps 5 through 10. You need a way to keep the plan visible. -## Solution +## The Problem + +On multi-step tasks, the model drifts. It repeats work, skips steps, or improvises once the system prompt fades behind pages of tool output. The context window (the total amount of text the model can hold in working memory at once) is finite, and earlier instructions get pushed further away with every tool call. A 10-step refactoring might complete steps 1-3, then the model starts making things up because it simply cannot "see" steps 4-10 anymore. + +## The Solution + +Give the model a `todo` tool that maintains a structured checklist. Then inject gentle reminders when the model goes too long without updating its plan. ``` +--------+ +-------+ +---------+ @@ -34,7 +40,7 @@ On multi-step tasks, the model loses track. It repeats work, skips steps, or wan ## How It Works -1. 
TodoManager stores items with statuses. Only one item can be `in_progress` at a time. +**Step 1.** TodoManager stores items with statuses. The "one `in_progress` at a time" constraint forces the model to finish what it started before moving on. ```python class TodoManager: @@ -49,10 +55,10 @@ class TodoManager: if in_progress_count > 1: raise ValueError("Only one task can be in_progress") self.items = validated - return self.render() + return self.render() # returns the checklist as formatted text ``` -2. The `todo` tool goes into the dispatch map like any other tool. +**Step 2.** The `todo` tool goes into the dispatch map like any other tool -- no special wiring needed, just one more entry in the dictionary you built in s02. ```python TOOL_HANDLERS = { @@ -61,19 +67,18 @@ TOOL_HANDLERS = { } ``` -3. A nag reminder injects a nudge if the model goes 3+ rounds without calling `todo`. +**Step 3.** A nag reminder injects a nudge if the model goes 3+ rounds without calling `todo`. This is the write-back trick (feeding tool results back into the conversation) used for a new purpose: the harness (the code wrapping around the model) quietly inserts a reminder into the results payload before it is appended to messages. ```python -if rounds_since_todo >= 3 and messages: - last = messages[-1] - if last["role"] == "user" and isinstance(last.get("content"), list): - last["content"].insert(0, { - "type": "text", - "text": "Update your todos.", - }) +if rounds_since_todo >= 3: + results.insert(0, { + "type": "text", + "text": "Update your todos.", + }) +messages.append({"role": "user", "content": results}) ``` -The "one in_progress at a time" constraint forces sequential focus. The nag reminder creates accountability. +The "one in_progress at a time" constraint forces sequential focus. The nag reminder creates accountability. Together, they keep the model working through its plan instead of drifting. ## What Changed From s02 @@ -94,3 +99,24 @@ python agents/s03_todo_write.py 1. 
`Refactor the file hello.py: add type hints, docstrings, and a main guard` 2. `Create a Python package with __init__.py, utils.py, and tests/test_utils.py` 3. `Review all Python files and fix any style issues` + +Watch the model create a plan, work through it step by step, and check off items as it goes. If it forgets to update the plan for a few rounds, you will see the `` nudge appear in the conversation. + +## What You've Mastered + +At this point, you can: + +- Add session planning to any agent by dropping a `todo` tool into the dispatch map. +- Enforce sequential focus with the "one in_progress at a time" constraint. +- Use nag injection to pull the model back on track when it drifts. +- Explain why structured state beats free-form prose for multi-step plans. + +Keep three boundaries in mind: `todo` here means "plan for the current conversation", not a durable task database. The tiny schema `{id, text, status}` is enough. A direct reminder is enough -- you do not need a sophisticated planning UI yet. + +## What's Next + +Your agent can now plan its work and stay on track. But every file it reads, every bash output it produces -- all of it stays in the conversation forever, eating into the context window. A five-file investigation might burn thousands of tokens (roughly word-sized pieces -- a 1000-line file uses about 4000 tokens) that the parent conversation never needs again. In s04, you will learn how to spin up subagents with fresh, isolated context -- so the parent stays clean and the model stays sharp. + +## Key Takeaway + +> Once the plan lives in structured state instead of free-form prose, the agent drifts much less. 
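
If you want to run the planning layer standalone, the `TodoManager` fragment above can be rounded out into a self-contained sketch. The checkbox rendering is an illustrative choice, not necessarily the repo's exact format.

```python
class TodoManager:
    VALID_STATUSES = {"pending", "in_progress", "completed"}

    def __init__(self):
        self.items = []

    def write(self, items: list) -> str:
        validated = []
        for item in items:
            if item.get("status") not in self.VALID_STATUSES:
                raise ValueError(f"Bad status: {item.get('status')}")
            validated.append(
                {"id": item["id"], "text": item["text"], "status": item["status"]}
            )
        # Enforce sequential focus: at most one task may be in_progress.
        if sum(1 for i in validated if i["status"] == "in_progress") > 1:
            raise ValueError("Only one task can be in_progress")
        self.items = validated
        return self.render()

    def render(self) -> str:
        # Return the checklist as formatted text for the tool_result.
        marks = {"pending": "[ ]", "in_progress": "[>]", "completed": "[x]"}
        return "\n".join(f"{marks[i['status']]} {i['text']}" for i in self.items)
```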
diff --git a/docs/en/s04-subagent.md b/docs/en/s04-subagent.md index 8a6ff2a6e..37ba0adf4 100644 --- a/docs/en/s04-subagent.md +++ b/docs/en/s04-subagent.md @@ -1,16 +1,22 @@ # s04: Subagents -`s01 > s02 > s03 > [ s04 ] s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > s02 > s03 > [ s04 ] > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"Break big tasks down; each subtask gets a clean context"* -- subagents use independent messages[], keeping the main conversation clean. -> -> **Harness layer**: Context isolation -- protecting the model's clarity of thought. +## What You'll Learn +- Why exploring a side question can pollute the parent agent's context +- How a subagent gets a fresh, empty message history +- How only a short summary travels back to the parent +- Why the child's full message history is discarded after use -## Problem +Imagine you ask your agent "What testing framework does this project use?" To answer, it reads five files, parses config blocks, and compares import statements. All of that exploration is useful for a moment -- but once the answer is "pytest," you really don't want those five file dumps sitting in the conversation forever. Every future API call now carries that dead weight, burning tokens and distracting the model. You need a way to ask a side question in a clean room and bring back only the answer. -As the agent works, its messages array grows. Every file read, every bash output stays in context permanently. "What testing framework does this project use?" might require reading 5 files, but the parent only needs the answer: "pytest." +## The Problem -## Solution +As the agent works, its `messages` array grows. Every file read, every bash output stays in context permanently. A simple question like "what testing framework is this?" might require reading five files, but the parent only needs one word back: "pytest." 
Without isolation, those intermediate artifacts stay in context for the rest of the session, wasting tokens on every subsequent API call and muddying the model's attention. The longer a session runs, the worse this gets -- context fills with exploration debris that has nothing to do with the current task. + +## The Solution + +The parent agent delegates side tasks to a child agent that starts with an empty `messages=[]`. The child does all the messy exploration, then only its final text summary travels back. The child's full history is discarded. ``` Parent agent Subagent @@ -28,7 +34,7 @@ Parent context stays clean. Subagent context is discarded. ## How It Works -1. The parent gets a `task` tool. The child gets all base tools except `task` (no recursive spawning). +**Step 1.** The parent gets a `task` tool that the child does not. This prevents recursive spawning -- a child cannot create its own children. ```python PARENT_TOOLS = CHILD_TOOLS + [ @@ -42,7 +48,7 @@ PARENT_TOOLS = CHILD_TOOLS + [ ] ``` -2. The subagent starts with `messages=[]` and runs its own loop. Only the final text returns to the parent. +**Step 2.** The subagent starts with `messages=[]` and runs its own agent loop. Only the final text block returns to the parent as a `tool_result`. ```python def run_subagent(prompt: str) -> str: @@ -66,12 +72,13 @@ def run_subagent(prompt: str) -> str: "tool_use_id": block.id, "content": str(output)[:50000]}) sub_messages.append({"role": "user", "content": results}) + # Extract only the final text -- everything else is thrown away return "".join( b.text for b in response.content if hasattr(b, "text") ) or "(no summary)" ``` -The child's entire message history (possibly 30+ tool calls) is discarded. The parent receives a one-paragraph summary as a normal `tool_result`. +The child's entire message history (possibly 30+ tool calls worth of file reads and bash outputs) is discarded the moment `run_subagent` returns. 
The parent receives a one-paragraph summary as a normal `tool_result`, keeping its own context clean. ## What Changed From s03 @@ -92,3 +99,22 @@ python agents/s04_subagent.py 1. `Use a subtask to find what testing framework this project uses` 2. `Delegate: read all .py files and summarize what each one does` 3. `Use a task to create a new module, then verify it from here` + +## What You've Mastered + +At this point, you can: + +- Explain why a subagent is primarily a **context boundary**, not a process trick +- Spawn a one-shot child agent with a fresh `messages=[]` +- Return only a summary to the parent, discarding all intermediate exploration +- Decide which tools the child should and should not have access to + +You don't need long-lived workers, resumable sessions, or worktree isolation yet. The core idea is simple: give the subtask a clean workspace in memory, then bring back only the answer the parent still needs. + +## What's Next + +So far you've learned to keep context clean by isolating side tasks. But what about the knowledge the agent carries in the first place? In s05, you'll see how to avoid bloating the system prompt with domain expertise the model might never use -- loading skills on demand instead of upfront. + +## Key Takeaway + +> A subagent is a disposable scratch pad: fresh context in, short summary out, everything else discarded. diff --git a/docs/en/s05-skill-loading.md b/docs/en/s05-skill-loading.md index 0cf193850..96bcbacf1 100644 --- a/docs/en/s05-skill-loading.md +++ b/docs/en/s05-skill-loading.md @@ -1,16 +1,22 @@ # s05: Skills -`s01 > s02 > s03 > s04 > [ s05 ] s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > s02 > s03 > s04 > [ s05 ] > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"Load knowledge when you need it, not upfront"* -- inject via tool_result, not the system prompt. -> -> **Harness layer**: On-demand knowledge -- domain expertise, loaded when the model asks. 
+## What You'll Learn +- Why stuffing all domain knowledge into the system prompt wastes tokens +- The two-layer loading pattern: cheap names up front, expensive bodies on demand +- How frontmatter (YAML metadata at the top of a file) gives each skill a name and description +- How the model decides for itself which skill to load and when -## Problem +You don't memorize every recipe in every cookbook you own. You know which shelf each cookbook sits on, and you pull one down only when you're actually cooking that dish. An agent's domain knowledge works the same way. You might have expertise files for git workflows, testing patterns, code review checklists, PDF processing -- dozens of topics. Loading all of them into the system prompt on every request is like reading every cookbook cover to cover before cracking a single egg. Most of that knowledge is irrelevant to any given task. -You want the agent to follow domain-specific workflows: git conventions, testing patterns, code review checklists. Putting everything in the system prompt wastes tokens on unused skills. 10 skills at 2000 tokens each = 20,000 tokens, most of which are irrelevant to any given task. +## The Problem -## Solution +You want your agent to follow domain-specific workflows: git conventions, testing best practices, code review checklists. The naive approach is to put everything in the system prompt. But 10 skills at 2,000 tokens each means 20,000 tokens of instructions on every API call -- most of which have nothing to do with the current question. You pay for those tokens every turn, and worse, all that irrelevant text competes for the model's attention with the content that actually matters. + +## The Solution + +Split knowledge into two layers. Layer 1 lives in the system prompt and is cheap: just skill names and one-line descriptions (~100 tokens per skill). Layer 2 is the full skill body, loaded on demand through a tool call only when the model decides it needs that knowledge. 
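
Before walking through the flow, it helps to pin down how frontmatter gets split from the body. Below is a minimal parser sketch, assuming flat `key: value` pairs rather than full YAML (real skills may warrant a proper YAML library); the function name is illustrative.

```python
def parse_frontmatter(text: str) -> tuple:
    """Split a SKILL.md into (metadata, body).
    Assumes frontmatter is delimited by `---` lines."""
    if not text.startswith("---"):
        return {}, text
    _, raw_meta, body = text.split("---", 2)
    meta = {}
    for line in raw_meta.strip().splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            meta[key.strip()] = value.strip()
    return meta, body.strip()
```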
``` System prompt (Layer 1 -- always present): @@ -31,11 +37,9 @@ When model calls load_skill("git"): +--------------------------------------+ ``` -Layer 1: skill *names* in system prompt (cheap). Layer 2: full *body* via tool_result (on demand). - ## How It Works -1. Each skill is a directory containing a `SKILL.md` with YAML frontmatter. +**Step 1.** Each skill is a directory containing a `SKILL.md` file. The file starts with YAML frontmatter (a metadata block delimited by `---` lines) that declares the skill's name and description, followed by the full instruction body. ``` skills/ @@ -45,7 +49,7 @@ skills/ SKILL.md # ---\n name: code-review\n description: Review code\n ---\n ... ``` -2. SkillLoader scans for `SKILL.md` files, uses the directory name as the skill identifier. +**Step 2.** `SkillLoader` scans for all `SKILL.md` files at startup. It parses the frontmatter to extract names and descriptions, and stores the full body for later retrieval. ```python class SkillLoader: @@ -54,10 +58,12 @@ class SkillLoader: for f in sorted(skills_dir.rglob("SKILL.md")): text = f.read_text() meta, body = self._parse_frontmatter(text) + # Use the frontmatter name, or fall back to the directory name name = meta.get("name", f.parent.name) self.skills[name] = {"meta": meta, "body": body} def get_descriptions(self) -> str: + """Layer 1: cheap one-liners for the system prompt.""" lines = [] for name, skill in self.skills.items(): desc = skill["meta"].get("description", "") @@ -65,13 +71,14 @@ class SkillLoader: return "\n".join(lines) def get_content(self, name: str) -> str: + """Layer 2: full body, returned as a tool_result.""" skill = self.skills.get(name) if not skill: return f"Error: Unknown skill '{name}'." return f"\n{skill['body']}\n" ``` -3. Layer 1 goes into the system prompt. Layer 2 is just another tool handler. +**Step 3.** Layer 1 goes into the system prompt so the model always knows what skills exist. 
Layer 2 is wired up as a normal tool handler -- the model calls `load_skill` when it decides it needs the full instructions. ```python SYSTEM = f"""You are a coding agent at {WORKDIR}. @@ -84,7 +91,7 @@ TOOL_HANDLERS = { } ``` -The model learns what skills exist (cheap) and loads them when relevant (expensive). +The model learns what skills exist (cheap, ~100 tokens each) and loads them only when relevant (expensive, ~2000 tokens each). On a typical turn, only one skill is loaded instead of all ten. ## What Changed From s04 @@ -106,3 +113,22 @@ python agents/s05_skill_loading.py 2. `Load the agent-builder skill and follow its instructions` 3. `I need to do a code review -- load the relevant skill first` 4. `Build an MCP server using the mcp-builder skill` + +## What You've Mastered + +At this point, you can: + +- Explain why "list first, load later" beats stuffing everything into the system prompt +- Write a `SKILL.md` with YAML frontmatter that a `SkillLoader` can discover +- Wire up two-layer loading: cheap descriptions in the system prompt, full bodies via `tool_result` +- Let the model decide for itself when domain knowledge is worth loading + +You don't need skill ranking systems, multi-provider merging, parameterized templates, or recovery-time restoration rules. The core pattern is simple: advertise cheaply, load on demand. + +## What's Next + +You now know how to keep knowledge out of context until it's needed. But what happens when context grows large anyway -- after dozens of turns of real work? In s06, you'll learn how to compress a long conversation down to its essentials so the agent can keep working without hitting token limits. + +## Key Takeaway + +> Advertise skill names cheaply in the system prompt; load the full body through a tool call only when the model actually needs it. 
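
To see both layers end to end without touching the filesystem, here is a condensed, file-free variant. Holding skills in a plain dict is a simplification for illustration; the `<skill>` wrapper follows the chapter's `get_content` example.

```python
SKILLS = {
    "git": {"description": "Git workflow conventions",
            "body": "Always rebase before merging."},
    "code-review": {"description": "Review code",
                    "body": "Check naming, tests, and error handling."},
}

def layer1_descriptions() -> str:
    # Layer 1: one cheap line per skill, always in the system prompt.
    return "\n".join(f"- {n}: {s['description']}" for n, s in SKILLS.items())

def load_skill(name: str) -> str:
    # Layer 2: the full body, returned as a tool_result only on demand.
    skill = SKILLS.get(name)
    if not skill:
        return f"Error: Unknown skill '{name}'."
    return f"<skill name=\"{name}\">\n{skill['body']}\n</skill>"

SYSTEM = f"You are a coding agent.\n\nAvailable skills:\n{layer1_descriptions()}"
```

The system prompt carries only the one-liners; a skill body enters context only after the model explicitly calls `load_skill`.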
diff --git a/docs/en/s06-context-compact.md b/docs/en/s06-context-compact.md index 2fbef2ec1..f51df3aab 100644 --- a/docs/en/s06-context-compact.md +++ b/docs/en/s06-context-compact.md @@ -1,29 +1,42 @@ # s06: Context Compact -`s01 > s02 > s03 > s04 > s05 > [ s06 ] | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > s02 > s03 > s04 > s05 > [ s06 ] > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"Context will fill up; you need a way to make room"* -- three-layer compression strategy for infinite sessions. -> -> **Harness layer**: Compression -- clean memory for infinite sessions. +## What You'll Learn -## Problem +- Why long sessions inevitably run out of context space, and what happens when they do +- A four-lever compression strategy: persisted output, micro-compact, auto-compact, and manual compact +- How to move detail out of active memory without losing it +- How to keep a session alive indefinitely by summarizing and continuing -The context window is finite. A single `read_file` on a 1000-line file costs ~4000 tokens. After reading 30 files and running 20 bash commands, you hit 100,000+ tokens. The agent cannot work on large codebases without compression. +Your agent from s05 is capable. It reads files, runs commands, edits code, and delegates subtasks. But try something ambitious -- ask it to refactor a module that touches 30 files. After reading all of them and running 20 shell commands, you will notice the responses get worse. The model starts forgetting what it already read. It repeats work. Eventually the API rejects your request entirely. You have hit the context window limit, and without a plan for that, your agent is stuck. -## Solution +## The Problem -Three layers, increasing in aggressiveness: +Every API call to the model includes the entire conversation so far: every user message, every assistant response, every tool call and its result. 
The model's context window (the total amount of text it can hold in working memory at once) is finite. A single `read_file` on a 1000-line source file costs roughly 4,000 tokens (roughly word-sized pieces -- a 1,000-line file uses about 4,000 tokens). Read 30 files and run 20 bash commands, and you have burned through 100,000+ tokens. The context is full, but the work is only half done. + +The naive fix -- just truncating old messages -- throws away information the agent might need later. A smarter approach compresses strategically: keep the important bits, move the bulky details to disk, and summarize when the conversation gets too long. That is what this chapter builds. + +## The Solution + +We use four levers, each working at a different stage of the pipeline, from output-time filtering to full conversation summarization. ``` -Every turn: +Every tool call: +------------------+ | Tool call result | +------------------+ | v -[Layer 1: micro_compact] (silent, every turn) +[Lever 0: persisted-output] (at tool execution time) + Large outputs (>50KB, bash >30KB) are written to disk + and replaced with a preview marker. + | + v +[Lever 1: micro_compact] (silent, every turn) Replace tool_result > 3 turns old with "[Previous: used {tool_name}]" + (preserves read_file results as reference material) | v [Check: tokens > 50000?] @@ -31,38 +44,62 @@ Every turn: no yes | | v v -continue [Layer 2: auto_compact] +continue [Lever 2: auto_compact] Save transcript to .transcripts/ LLM summarizes conversation. Replace all messages with [summary]. | v - [Layer 3: compact tool] + [Lever 3: compact tool] Model calls compact explicitly. Same summarization as auto_compact. ``` ## How It Works -1. **Layer 1 -- micro_compact**: Before each LLM call, replace old tool results with placeholders. +### Step 1: Lever 0 -- Persisted Output + +The first line of defense runs at tool execution time, before a result even enters the conversation. 
When a tool result exceeds a size threshold, we write the full output to disk and replace it with a short preview. This prevents a single giant command output from consuming half the context window.
+
+```python
+PERSIST_OUTPUT_TRIGGER_CHARS_DEFAULT = 50000
+PERSIST_OUTPUT_TRIGGER_CHARS_BASH = 30000  # bash uses a lower threshold
+
+def maybe_persist_output(tool_use_id, output, trigger_chars=None):
+    trigger = trigger_chars or PERSIST_OUTPUT_TRIGGER_CHARS_DEFAULT
+    if len(output) <= trigger:
+        return output  # small enough -- keep inline
+    stored_path = _persist_tool_result(tool_use_id, output)
+    return _build_persisted_marker(stored_path, output)  # swap in a compact preview
+    # Returns:
+    # Output too large (48.8KB). Full output saved to: .task_outputs/tool-results/abc123.txt
+    # Preview (first 2.0KB):
+    # ... first 2000 chars ...
+    #
+```
+
+The model can later `read_file` the stored path to access the full content if needed. Nothing is lost -- the detail just lives on disk instead of in the conversation.
+
+### Step 2: Lever 1 -- Micro-Compact
+
+Before each LLM call, we scan for old tool results and replace them with one-line placeholders. This is invisible to the user and runs every turn. The key subtlety: we preserve `read_file` results because those serve as reference material the model often needs to look back at.
+
+```python
+PRESERVE_RESULT_TOOLS = {"read_file"}
+
 def micro_compact(messages: list) -> list:
-    tool_results = []
-    for i, msg in enumerate(messages):
-        if msg["role"] == "user" and isinstance(msg.get("content"), list):
-            for j, part in enumerate(msg["content"]):
-                if isinstance(part, dict) and part.get("type") == "tool_result":
-                    tool_results.append((i, j, part))
+    tool_results = [...]
# collect all tool_result entries if len(tool_results) <= KEEP_RECENT: - return messages - for _, _, part in tool_results[:-KEEP_RECENT]: - if len(part.get("content", "")) > 100: - part["content"] = f"[Previous: used {tool_name}]" + return messages # not enough results to compact yet + for part in tool_results[:-KEEP_RECENT]: + if tool_name in PRESERVE_RESULT_TOOLS: + continue # keep reference material + part["content"] = f"[Previous: used {tool_name}]" # replace with short placeholder return messages ``` -2. **Layer 2 -- auto_compact**: When tokens exceed threshold, save full transcript to disk, then ask the LLM to summarize. +### Step 3: Lever 2 -- Auto-Compact + +When micro-compaction is not enough and the token count crosses a threshold, the harness takes a bigger step: it saves the full transcript to disk for recovery, asks the LLM to summarize the entire conversation, and then replaces all messages with that summary. The agent continues from the summary as if nothing happened. ```python def auto_compact(messages: list) -> list: @@ -76,7 +113,7 @@ def auto_compact(messages: list) -> list: model=MODEL, messages=[{"role": "user", "content": "Summarize this conversation for continuity..." - + json.dumps(messages, default=str)[:80000]}], + + json.dumps(messages, default=str)[:80000]}], # cap at 80K chars for the summary call max_tokens=2000, ) return [ @@ -84,33 +121,38 @@ def auto_compact(messages: list) -> list: ] ``` -3. **Layer 3 -- manual compact**: The `compact` tool triggers the same summarization on demand. +### Step 4: Lever 3 -- Manual Compact + +The `compact` tool lets the model itself trigger summarization on demand. It uses exactly the same mechanism as auto-compact. The difference is who decides: auto-compact fires on a threshold, manual compact fires when the agent judges it is the right time to compress. + +### Step 5: Integration in the Agent Loop -4. 
The loop integrates all three: +All four levers compose naturally inside the main loop: ```python def agent_loop(messages: list): while True: - micro_compact(messages) # Layer 1 + micro_compact(messages) # Lever 1 if estimate_tokens(messages) > THRESHOLD: - messages[:] = auto_compact(messages) # Layer 2 + messages[:] = auto_compact(messages) # Lever 2 response = client.messages.create(...) - # ... tool execution ... + # ... tool execution with persisted-output ... # Lever 0 if manual_compact: - messages[:] = auto_compact(messages) # Layer 3 + messages[:] = auto_compact(messages) # Lever 3 ``` -Transcripts preserve full history on disk. Nothing is truly lost -- just moved out of active context. +Transcripts preserve full history on disk. Large outputs are saved to `.task_outputs/tool-results/`. Nothing is truly lost -- just moved out of active context. ## What Changed From s05 -| Component | Before (s05) | After (s06) | -|----------------|------------------|----------------------------| -| Tools | 5 | 5 (base + compact) | -| Context mgmt | None | Three-layer compression | -| Micro-compact | None | Old results -> placeholders| -| Auto-compact | None | Token threshold trigger | -| Transcripts | None | Saved to .transcripts/ | +| Component | Before (s05) | After (s06) | +|-------------------|------------------|----------------------------| +| Tools | 5 | 5 (base + compact) | +| Context mgmt | None | Four-lever compression | +| Persisted-output | None | Large outputs -> disk + preview | +| Micro-compact | None | Old results -> placeholders| +| Auto-compact | None | Token threshold trigger | +| Transcripts | None | Saved to .transcripts/ | ## Try It @@ -122,3 +164,25 @@ python agents/s06_context_compact.py 1. `Read every Python file in the agents/ directory one by one` (watch micro-compact replace old results) 2. `Keep reading files until compression triggers automatically` 3. 
`Use the compact tool to manually compress the conversation` + +## What You've Mastered + +At this point, you can: + +- Explain why a long agent session degrades and eventually fails without compression +- Intercept oversized tool outputs before they enter the context window +- Silently replace stale tool results with lightweight placeholders each turn +- Trigger a full conversation summarization -- automatically on a threshold or manually via a tool call +- Preserve full transcripts on disk so nothing is permanently lost + +## Stage 1 Complete + +You now have a complete single-agent system. Starting from a bare API call in s01, you have built up tool use, structured planning, sub-agent delegation, dynamic skill loading, and context compression. Your agent can read, write, execute, plan, delegate, and work indefinitely without running out of memory. That is a real coding agent. + +Before moving on, consider going back to s01 and rebuilding the whole stack from scratch without looking at the code. If you can write all six layers from memory, you truly own the ideas -- not just the implementation. + +Stage 2 begins with s07 and hardens this foundation. You will add permission controls, hook systems, persistent memory, error recovery, and more. The single agent you built here becomes the kernel that everything else wraps around. + +## Key Takeaway + +> Compaction is not deleting history -- it is relocating detail so the agent can keep working. 
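One helper the loop sketch above leaves undefined is `estimate_tokens`. It does not need to be exact -- it only gates a compression trigger -- so a character-count heuristic is enough. A minimal sketch; the 4-characters-per-token ratio and the `THRESHOLD` value here are assumptions, not the repo's actual implementation:

```python
import json

THRESHOLD = 100_000  # assumed trigger, in estimated tokens

def estimate_tokens(messages: list) -> int:
    # Rough heuristic: ~4 characters per token for English text and code.
    # Serializing the whole message list also counts tool results and metadata.
    return len(json.dumps(messages, default=str)) // 4

# A single 8000-character tool result estimates to roughly 2000 tokens,
# so it does not trip the threshold on its own.
messages = [{"role": "user", "content": "x" * 8000}]
print(estimate_tokens(messages) > THRESHOLD)  # -> False
```

Overestimating slightly is fine here: triggering compaction a little early costs one extra summary call, while triggering late risks hitting the real context limit mid-task.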
diff --git a/docs/en/s07-permission-system.md b/docs/en/s07-permission-system.md new file mode 100644 index 000000000..92a625f7b --- /dev/null +++ b/docs/en/s07-permission-system.md @@ -0,0 +1,157 @@ +# s07: Permission System + +`s01 > s02 > s03 > s04 > s05 > s06 > [ s07 ] > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +## What You'll Learn + +- A four-stage permission pipeline that every tool call must pass through before execution +- Three permission modes that control how aggressively the agent auto-approves actions +- How deny and allow rules use pattern matching to create a first-match-wins policy +- Interactive approval with an "always" option that writes permanent allow rules at runtime + +Your agent from s06 is capable and long-lived. It reads files, writes code, runs shell commands, delegates subtasks, and compresses its own context to keep going. But there is no safety catch. Every tool call the model proposes goes straight to execution. Ask it to delete a directory and it will -- no questions asked. Before you give this agent access to anything that matters, you need a gate between "the model wants to do X" and "the system actually does X." + +## The Problem + +Imagine your agent is helping refactor a codebase. It reads a few files, proposes some edits, and then decides to run `rm -rf /tmp/old_build` to clean up. Except the model hallucinated the path -- the real directory is your home folder. Or it decides to `sudo` something because the model has seen that pattern in training data. Without a permission layer, intent becomes execution instantly. There is no moment where the system can say "wait, that looks dangerous" or where you can say "no, do not do that." The agent needs a checkpoint -- a pipeline (a sequence of stages that every request passes through) between what the model asks for and what actually happens. + +## The Solution + +Every tool call now passes through a four-stage permission pipeline before execution. 
The stages run in order, and the first one that produces a definitive answer wins. + +``` +tool_call from LLM + | + v +[1. Deny rules] -- blocklist: always block these + | + v +[2. Mode check] -- plan mode? auto mode? default? + | + v +[3. Allow rules] -- allowlist: always allow these + | + v +[4. Ask user] -- interactive y/n/always prompt + | + v +execute (or reject) +``` + +## Read Together + +- If you start blurring "the model proposed an action" with "the system actually executed an action," you might find it helpful to revisit [`s00a-query-control-plane.md`](./s00a-query-control-plane.md). +- If you are not yet clear on why tool requests should not drop straight into handlers, keeping [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) open beside this chapter may help. +- If `PermissionRule`, `PermissionDecision`, and `tool_result` start to collapse into one vague idea, [`data-structures.md`](./data-structures.md) can reset them. + +## How It Works + +**Step 1.** Define three permission modes. Each mode changes how the pipeline treats tool calls that do not match any explicit rule. "Default" mode is the safest -- it asks you about everything. "Plan" mode blocks all writes outright, useful when you want the agent to explore without touching anything. "Auto" mode lets reads through silently and only asks about writes, good for fast exploration. + +| Mode | Behavior | Use Case | +|------|----------|----------| +| `default` | Ask user for every unmatched tool call | Normal interactive use | +| `plan` | Block all writes, allow reads | Planning/review mode | +| `auto` | Auto-allow reads, ask for writes | Fast exploration mode | + +**Step 2.** Set up deny and allow rules with pattern matching. Rules are checked in order -- first match wins. Deny rules catch dangerous patterns that should never execute, regardless of mode. Allow rules let known-safe operations pass without asking. 
+ +```python +rules = [ + # Always deny dangerous patterns + {"tool": "bash", "content": "rm -rf /", "behavior": "deny"}, + {"tool": "bash", "content": "sudo *", "behavior": "deny"}, + # Allow reading anything + {"tool": "read_file", "path": "*", "behavior": "allow"}, +] +``` + +When the user answers "always" at the interactive prompt, a permanent allow rule is added at runtime. + +**Step 3.** Implement the four-stage check. This is the core of the permission system. Notice that deny rules run first and cannot be bypassed -- this is intentional. No matter what mode you are in or what allow rules exist, a deny rule always wins. + +```python +def check(self, tool_name, tool_input): + # Step 1: Deny rules (bypass-immune, always checked first) + for rule in self.rules: + if rule["behavior"] == "deny" and self._matches(rule, ...): + return {"behavior": "deny", "reason": "..."} + + # Step 2: Mode-based decisions + if self.mode == "plan" and tool_name in WRITE_TOOLS: + return {"behavior": "deny", "reason": "Plan mode: writes blocked"} + if self.mode == "auto" and tool_name in READ_ONLY_TOOLS: + return {"behavior": "allow", "reason": "Auto: read-only approved"} + + # Step 3: Allow rules + for rule in self.rules: + if rule["behavior"] == "allow" and self._matches(rule, ...): + return {"behavior": "allow", "reason": "..."} + + # Step 4: Fall through to ask user + return {"behavior": "ask", "reason": "..."} +``` + +**Step 4.** Integrate the permission check into the agent loop. Every tool call now goes through the pipeline before execution. The result is one of three outcomes: denied (with a reason), allowed (silently), or asked (interactively). 
+ +```python +for block in response.content: + if block.type == "tool_use": + decision = perms.check(block.name, block.input) + + if decision["behavior"] == "deny": + output = f"Permission denied: {decision['reason']}" + elif decision["behavior"] == "ask": + if perms.ask_user(block.name, block.input): + output = handler(**block.input) + else: + output = "Permission denied by user" + else: # allow + output = handler(**block.input) + + results.append({"type": "tool_result", ...}) +``` + +**Step 5.** Add denial tracking as a simple circuit breaker. The `PermissionManager` tracks consecutive denials. After 3 in a row, it suggests switching to plan mode -- this prevents the agent from repeatedly hitting the same wall and wasting turns. + +## What Changed From s06 + +| Component | Before (s06) | After (s07) | +|-----------|-------------|-------------| +| Safety | None | 4-stage permission pipeline | +| Modes | None | 3 modes: default, plan, auto | +| Rules | None | Deny/allow rules with pattern matching | +| User control | None | Interactive approval with "always" option | +| Denial tracking | None | Circuit breaker after 3 consecutive denials | + +## Try It + +```sh +cd learn-claude-code +python agents/s07_permission_system.py +``` + +1. Start in `default` mode -- every write tool asks for approval +2. Try `plan` mode -- all writes are blocked, reads pass through +3. Try `auto` mode -- reads auto-approved, writes still ask +4. Answer "always" to permanently allow a tool +5. Type `/mode plan` to switch modes at runtime +6. 
Type `/rules` to inspect current rule set + +## What You've Mastered + +At this point, you can: + +- Explain why model intent must pass through a decision pipeline before it becomes execution +- Build a four-stage permission check: deny, mode, allow, ask +- Configure three permission modes that give you different safety/speed tradeoffs +- Add rules dynamically at runtime when a user answers "always" +- Implement a simple circuit breaker that catches repeated denial loops + +## What's Next + +Your permission system controls what the agent is allowed to do, but it lives entirely inside the agent's own code. What if you want to extend behavior -- add logging, auditing, or custom validation -- without modifying the agent loop at all? That is what s08 introduces: a hook system that lets external shell scripts observe and influence every tool call. + +## Key Takeaway + +> Safety is a pipeline, not a boolean -- deny first, then consider mode, then check allow rules, then ask the user. diff --git a/docs/en/s07-task-system.md b/docs/en/s07-task-system.md deleted file mode 100644 index b110d0ca4..000000000 --- a/docs/en/s07-task-system.md +++ /dev/null @@ -1,131 +0,0 @@ -# s07: Task System - -`s01 > s02 > s03 > s04 > s05 > s06 | [ s07 ] s08 > s09 > s10 > s11 > s12` - -> *"Break big goals into small tasks, order them, persist to disk"* -- a file-based task graph with dependencies, laying the foundation for multi-agent collaboration. -> -> **Harness layer**: Persistent tasks -- goals that outlive any single conversation. - -## Problem - -s03's TodoManager is a flat checklist in memory: no ordering, no dependencies, no status beyond done-or-not. Real goals have structure -- task B depends on task A, tasks C and D can run in parallel, task E waits for both C and D. - -Without explicit relationships, the agent can't tell what's ready, what's blocked, or what can run concurrently. And because the list lives only in memory, context compression (s06) wipes it clean. 
- -## Solution - -Promote the checklist into a **task graph** persisted to disk. Each task is a JSON file with status, dependencies (`blockedBy`). The graph answers three questions at any moment: - -- **What's ready?** -- tasks with `pending` status and empty `blockedBy`. -- **What's blocked?** -- tasks waiting on unfinished dependencies. -- **What's done?** -- `completed` tasks, whose completion automatically unblocks dependents. - -``` -.tasks/ - task_1.json {"id":1, "status":"completed"} - task_2.json {"id":2, "blockedBy":[1], "status":"pending"} - task_3.json {"id":3, "blockedBy":[1], "status":"pending"} - task_4.json {"id":4, "blockedBy":[2,3], "status":"pending"} - -Task graph (DAG): - +----------+ - +--> | task 2 | --+ - | | pending | | -+----------+ +----------+ +--> +----------+ -| task 1 | | task 4 | -| completed| --> +----------+ +--> | blocked | -+----------+ | task 3 | --+ +----------+ - | pending | - +----------+ - -Ordering: task 1 must finish before 2 and 3 -Parallelism: tasks 2 and 3 can run at the same time -Dependencies: task 4 waits for both 2 and 3 -Status: pending -> in_progress -> completed -``` - -This task graph becomes the coordination backbone for everything after s07: background execution (s08), multi-agent teams (s09+), and worktree isolation (s12) all read from and write to this same structure. - -## How It Works - -1. **TaskManager**: one JSON file per task, CRUD with dependency graph. - -```python -class TaskManager: - def __init__(self, tasks_dir: Path): - self.dir = tasks_dir - self.dir.mkdir(exist_ok=True) - self._next_id = self._max_id() + 1 - - def create(self, subject, description=""): - task = {"id": self._next_id, "subject": subject, - "status": "pending", "blockedBy": [], - "owner": ""} - self._save(task) - self._next_id += 1 - return json.dumps(task, indent=2) -``` - -2. **Dependency resolution**: completing a task clears its ID from every other task's `blockedBy` list, automatically unblocking dependents. 
- -```python -def _clear_dependency(self, completed_id): - for f in self.dir.glob("task_*.json"): - task = json.loads(f.read_text()) - if completed_id in task.get("blockedBy", []): - task["blockedBy"].remove(completed_id) - self._save(task) -``` - -3. **Status + dependency wiring**: `update` handles transitions and dependency edges. - -```python -def update(self, task_id, status=None, - add_blocked_by=None, remove_blocked_by=None): - task = self._load(task_id) - if status: - task["status"] = status - if status == "completed": - self._clear_dependency(task_id) - if add_blocked_by: - task["blockedBy"] = list(set(task["blockedBy"] + add_blocked_by)) - if remove_blocked_by: - task["blockedBy"] = [x for x in task["blockedBy"] if x not in remove_blocked_by] - self._save(task) -``` - -4. Four task tools go into the dispatch map. - -```python -TOOL_HANDLERS = { - # ...base tools... - "task_create": lambda **kw: TASKS.create(kw["subject"]), - "task_update": lambda **kw: TASKS.update(kw["task_id"], kw.get("status")), - "task_list": lambda **kw: TASKS.list_all(), - "task_get": lambda **kw: TASKS.get(kw["task_id"]), -} -``` - -From s07 onward, the task graph is the default for multi-step work. s03's Todo remains for quick single-session checklists. - -## What Changed From s06 - -| Component | Before (s06) | After (s07) | -|---|---|---| -| Tools | 5 | 8 (`task_create/update/list/get`) | -| Planning model | Flat checklist (in-memory) | Task graph with dependencies (on disk) | -| Relationships | None | `blockedBy` edges | -| Status tracking | Done or not | `pending` -> `in_progress` -> `completed` | -| Persistence | Lost on compression | Survives compression and restarts | - -## Try It - -```sh -cd learn-claude-code -python agents/s07_task_system.py -``` - -1. `Create 3 tasks: "Setup project", "Write code", "Write tests". Make them depend on each other in order.` -2. `List all tasks and show the dependency graph` -3. 
`Complete task 1 and then list tasks to see task 2 unblocked` -4. `Create a task board for refactoring: parse -> transform -> emit -> test, where transform and emit can run in parallel after parse` diff --git a/docs/en/s08-background-tasks.md b/docs/en/s08-background-tasks.md deleted file mode 100644 index 5a98f2126..000000000 --- a/docs/en/s08-background-tasks.md +++ /dev/null @@ -1,107 +0,0 @@ -# s08: Background Tasks - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > [ s08 ] s09 > s10 > s11 > s12` - -> *"Run slow operations in the background; the agent keeps thinking"* -- daemon threads run commands, inject notifications on completion. -> -> **Harness layer**: Background execution -- the model thinks while the harness waits. - -## Problem - -Some commands take minutes: `npm install`, `pytest`, `docker build`. With a blocking loop, the model sits idle waiting. If the user asks "install dependencies and while that runs, create the config file," the agent does them sequentially, not in parallel. - -## Solution - -``` -Main thread Background thread -+-----------------+ +-----------------+ -| agent loop | | subprocess runs | -| ... | | ... | -| [LLM call] <---+------- | enqueue(result) | -| ^drain queue | +-----------------+ -+-----------------+ - -Timeline: -Agent --[spawn A]--[spawn B]--[other work]---- - | | - v v - [A runs] [B runs] (parallel) - | | - +-- results injected before next LLM call --+ -``` - -## How It Works - -1. BackgroundManager tracks tasks with a thread-safe notification queue. - -```python -class BackgroundManager: - def __init__(self): - self.tasks = {} - self._notification_queue = [] - self._lock = threading.Lock() -``` - -2. `run()` starts a daemon thread and returns immediately. 
- -```python -def run(self, command: str) -> str: - task_id = str(uuid.uuid4())[:8] - self.tasks[task_id] = {"status": "running", "command": command} - thread = threading.Thread( - target=self._execute, args=(task_id, command), daemon=True) - thread.start() - return f"Background task {task_id} started" -``` - -3. When the subprocess finishes, its result goes into the notification queue. - -```python -def _execute(self, task_id, command): - try: - r = subprocess.run(command, shell=True, cwd=WORKDIR, - capture_output=True, text=True, timeout=300) - output = (r.stdout + r.stderr).strip()[:50000] - except subprocess.TimeoutExpired: - output = "Error: Timeout (300s)" - with self._lock: - self._notification_queue.append({ - "task_id": task_id, "result": output[:500]}) -``` - -4. The agent loop drains notifications before each LLM call. - -```python -def agent_loop(messages: list): - while True: - notifs = BG.drain_notifications() - if notifs: - notif_text = "\n".join( - f"[bg:{n['task_id']}] {n['result']}" for n in notifs) - messages.append({"role": "user", - "content": f"\n{notif_text}\n" - f""}) - response = client.messages.create(...) -``` - -The loop stays single-threaded. Only subprocess I/O is parallelized. - -## What Changed From s07 - -| Component | Before (s07) | After (s08) | -|----------------|------------------|----------------------------| -| Tools | 8 | 6 (base + background_run + check)| -| Execution | Blocking only | Blocking + background threads| -| Notification | None | Queue drained per loop | -| Concurrency | None | Daemon threads | - -## Try It - -```sh -cd learn-claude-code -python agents/s08_background_tasks.py -``` - -1. `Run "sleep 5 && echo done" in the background, then create a file while it runs` -2. `Start 3 background tasks: "sleep 2", "sleep 4", "sleep 6". Check their status.` -3. 
`Run pytest in the background and keep working on other things` diff --git a/docs/en/s08-hook-system.md b/docs/en/s08-hook-system.md new file mode 100644 index 000000000..7575391f9 --- /dev/null +++ b/docs/en/s08-hook-system.md @@ -0,0 +1,163 @@ +# s08: Hook System + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > [ s08 ] > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +## What You'll Learn + +- Three lifecycle events that let external code observe and influence the agent loop +- How shell-based hooks run as subprocesses with full context about the current tool call +- The exit code protocol: 0 means continue, 1 means block, 2 means inject a message +- How to configure hooks in an external JSON file so you never touch the main loop code + +Your agent from s07 has a permission system that controls what it is allowed to do. But permissions are a yes/no gate -- they do not let you add new behavior. Suppose you want every bash command to be logged to an audit file, or you want a linter to run automatically after every file write, or you want a custom security scanner to inspect tool inputs before they execute. You could add if/else branches inside the main loop for each of these, but that turns your clean loop into a tangle of special cases. What you really want is a way to extend the agent's behavior from the outside, without modifying the loop itself. + +## The Problem + +You are running your agent in a team environment. Different teams want different behaviors: the security team wants to scan every bash command, the QA team wants to auto-run tests after file edits, and the ops team wants an audit trail of every tool call. If each of these requires code changes to the agent loop, you end up with a mess of conditionals that nobody can maintain. Worse, every new requirement means redeploying the agent. You need a way for teams to plug in their own logic at well-defined moments -- without touching the core code. 
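To see the tangle concretely, here is a hypothetical version of that loop with all three team requirements wired in directly. Every name below is illustrative, not taken from the repo:

```python
# Hypothetical stand-ins for each team's logic.
def audit_log(name, args):        # ops team: audit trail
    print(f"[audit] {name}: {args}")

def security_scan(command):       # security team: inspect bash input
    pass

def run_linter(path):             # QA team: lint after edits
    pass

TOOL_HANDLERS = {
    "bash": lambda command: f"ran: {command}",
    "write_file": lambda path, content="": f"wrote: {path}",
}

def run_tool(tool_name, tool_input):
    # Every team's requirement lands as another branch inside the core loop.
    if tool_name == "bash":
        audit_log(tool_name, tool_input)
        security_scan(tool_input["command"])
    output = TOOL_HANDLERS[tool_name](**tool_input)
    if tool_name == "write_file":
        run_linter(tool_input["path"])
    return output
```

Each new requirement means editing and redeploying this function. The hook approach inverts that: the loop keeps a few fixed extension points, and the per-team logic moves into external scripts.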
+ +## The Solution + +The agent loop exposes three fixed extension points (lifecycle events). At each point, it runs external shell commands called hooks. Each hook communicates its intent through its exit code: continue silently, block the operation, or inject a message into the conversation. + +``` +tool_call from LLM + | + v +[PreToolUse hooks] + | exit 0 -> continue + | exit 1 -> block tool, return stderr as error + | exit 2 -> inject stderr into conversation, continue + | + v +[execute tool] + | + v +[PostToolUse hooks] + | exit 0 -> continue + | exit 2 -> append stderr to result + | + v +return result +``` + +## Read Together + +- If you still picture hooks as "more if/else branches inside the main loop," you might find it helpful to revisit [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) first. +- If the main loop, the tool handler, and hook side effects start to blur together, [`entity-map.md`](./entity-map.md) can help you separate who advances core state and who only watches from the side. +- If you plan to continue into prompt assembly, recovery, or teams, keeping [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) nearby is useful because this "core loop plus sidecar extension" pattern returns repeatedly. + +## How It Works + +**Step 1.** Define three lifecycle events. `SessionStart` fires once when the agent starts up -- useful for initialization, logging, or environment checks. `PreToolUse` fires before every tool call and is the only event that can block execution. `PostToolUse` fires after every tool call and can annotate the result but cannot undo it. + +| Event | When | Can Block? | +|-------|------|-----------| +| `SessionStart` | Once at session start | No | +| `PreToolUse` | Before each tool call | Yes (exit 1) | +| `PostToolUse` | After each tool call | No | + +**Step 2.** Configure hooks in an external `.hooks.json` file at the workspace root. Each hook specifies a shell command to run. 
An optional `matcher` field filters by tool name -- without a matcher, the hook fires for every tool. + +```json +{ + "hooks": { + "PreToolUse": [ + {"matcher": "bash", "command": "echo 'Checking bash command...'"}, + {"matcher": "write_file", "command": "/path/to/lint-check.sh"} + ], + "PostToolUse": [ + {"command": "echo 'Tool finished'"} + ], + "SessionStart": [ + {"command": "echo 'Session started at $(date)'"} + ] + } +} +``` + +**Step 3.** Implement the exit code protocol. This is the heart of the hook system -- three exit codes, three meanings. The protocol is deliberately simple so that any language or script can participate. Write your hook in bash, Python, Ruby, whatever -- as long as it exits with the right code. + +| Exit Code | Meaning | PreToolUse | PostToolUse | +|-----------|---------|-----------|------------| +| 0 | Success | Continue to execute tool | Continue normally | +| 1 | Block | Tool NOT executed, stderr returned as error | Warning logged | +| 2 | Inject | stderr injected as message, tool still executes | stderr appended to result | + +**Step 4.** Pass context to hooks via environment variables. Hooks need to know what is happening -- which event triggered them, which tool is being called, and what the input looks like. For `PostToolUse` hooks, the tool output is also available. + +``` +HOOK_EVENT=PreToolUse +HOOK_TOOL_NAME=bash +HOOK_TOOL_INPUT={"command": "npm test"} +HOOK_TOOL_OUTPUT=... (PostToolUse only) +``` + +**Step 5.** Integrate hooks into the agent loop. The integration is clean: run pre-hooks before execution, check if any blocked, execute the tool, run post-hooks, and collect any injected messages. The loop still owns control flow -- hooks only observe, block, or annotate at named moments. 
+ +```python +# Before tool execution +pre_result = hooks.run_hooks("PreToolUse", ctx) +if pre_result["blocked"]: + output = f"Blocked by hook: {pre_result['block_reason']}" + continue + +# Execute tool +output = handler(**tool_input) + +# After tool execution +post_result = hooks.run_hooks("PostToolUse", ctx) +for msg in post_result["messages"]: + output += f"\n[Hook note]: {msg}" +``` + +## What Changed From s07 + +| Component | Before (s07) | After (s08) | +|-----------|-------------|-------------| +| Extensibility | None | Shell-based hook system | +| Events | None | PreToolUse, PostToolUse, SessionStart | +| Control flow | Permission pipeline only | Permission + hooks | +| Configuration | In-code rules | External `.hooks.json` file | + +## Try It + +```sh +cd learn-claude-code +# Create a hook config +cat > .hooks.json << 'EOF' +{ + "hooks": { + "PreToolUse": [ + {"matcher": "bash", "command": "echo 'Auditing bash command' >&2; exit 0"} + ], + "SessionStart": [ + {"command": "echo 'Agent session started'"} + ] + } +} +EOF +python agents/s08_hook_system.py +``` + +1. Watch SessionStart hook fire at startup +2. Ask the agent to run a bash command -- see PreToolUse hook fire +3. Create a blocking hook (exit 1) and watch it prevent tool execution +4. Create an injecting hook (exit 2) and watch it add messages to the conversation + +## What You've Mastered + +At this point, you can: + +- Explain why extension points are better than in-loop conditionals for adding new behavior +- Define lifecycle events at the right moments in the agent loop +- Write shell hooks that communicate intent through a three-code exit protocol +- Configure hooks externally so different teams can customize behavior without touching the agent code +- Maintain the boundary: the loop owns control flow, the handler owns execution, hooks only observe, block, or annotate + +## What's Next + +Your agent can now execute tools safely (s07) and be extended without code changes (s08). 
But it still has amnesia -- every new session starts from zero. The user's preferences, corrections, and project context are forgotten the moment the session ends. In s09, you will build a memory system that lets the agent carry durable facts across sessions. + +## Key Takeaway + +> The main loop can expose fixed extension points without giving up ownership of control flow -- hooks observe, block, or annotate, but the loop still decides what happens next. diff --git a/docs/en/s09-agent-teams.md b/docs/en/s09-agent-teams.md deleted file mode 100644 index 9f19723aa..000000000 --- a/docs/en/s09-agent-teams.md +++ /dev/null @@ -1,125 +0,0 @@ -# s09: Agent Teams - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > [ s09 ] s10 > s11 > s12` - -> *"When the task is too big for one, delegate to teammates"* -- persistent teammates + async mailboxes. -> -> **Harness layer**: Team mailboxes -- multiple models, coordinated through files. - -## Problem - -Subagents (s04) are disposable: spawn, work, return summary, die. No identity, no memory between invocations. Background tasks (s08) run shell commands but can't make LLM-guided decisions. - -Real teamwork needs: (1) persistent agents that outlive a single prompt, (2) identity and lifecycle management, (3) a communication channel between agents. - -## Solution - -``` -Teammate lifecycle: - spawn -> WORKING -> IDLE -> WORKING -> ... -> SHUTDOWN - -Communication: - .team/ - config.json <- team roster + statuses - inbox/ - alice.jsonl <- append-only, drain-on-read - bob.jsonl - lead.jsonl - - +--------+ send("alice","bob","...") +--------+ - | alice | -----------------------------> | bob | - | loop | bob.jsonl << {json_line} | loop | - +--------+ +--------+ - ^ | - | BUS.read_inbox("alice") | - +---- alice.jsonl -> read + drain ---------+ -``` - -## How It Works - -1. TeammateManager maintains config.json with the team roster. 
- -```python -class TeammateManager: - def __init__(self, team_dir: Path): - self.dir = team_dir - self.dir.mkdir(exist_ok=True) - self.config_path = self.dir / "config.json" - self.config = self._load_config() - self.threads = {} -``` - -2. `spawn()` creates a teammate and starts its agent loop in a thread. - -```python -def spawn(self, name: str, role: str, prompt: str) -> str: - member = {"name": name, "role": role, "status": "working"} - self.config["members"].append(member) - self._save_config() - thread = threading.Thread( - target=self._teammate_loop, - args=(name, role, prompt), daemon=True) - thread.start() - return f"Spawned teammate '{name}' (role: {role})" -``` - -3. MessageBus: append-only JSONL inboxes. `send()` appends a JSON line; `read_inbox()` reads all and drains. - -```python -class MessageBus: - def send(self, sender, to, content, msg_type="message", extra=None): - msg = {"type": msg_type, "from": sender, - "content": content, "timestamp": time.time()} - if extra: - msg.update(extra) - with open(self.dir / f"{to}.jsonl", "a") as f: - f.write(json.dumps(msg) + "\n") - - def read_inbox(self, name): - path = self.dir / f"{name}.jsonl" - if not path.exists(): return "[]" - msgs = [json.loads(l) for l in path.read_text().strip().splitlines() if l] - path.write_text("") # drain - return json.dumps(msgs, indent=2) -``` - -4. Each teammate checks its inbox before every LLM call, injecting received messages into context. - -```python -def _teammate_loop(self, name, role, prompt): - messages = [{"role": "user", "content": prompt}] - for _ in range(50): - inbox = BUS.read_inbox(name) - if inbox != "[]": - messages.append({"role": "user", - "content": f"{inbox}"}) - response = client.messages.create(...) - if response.stop_reason != "tool_use": - break - # execute tools, append results... 
-    self._find_member(name)["status"] = "idle"
-```
-
-## What Changed From s08
-
-| Component | Before (s08) | After (s09) |
-|----------------|------------------|----------------------------|
-| Tools | 6 | 9 (+spawn/send/read_inbox) |
-| Agents | Single | Lead + N teammates |
-| Persistence | None | config.json + JSONL inboxes|
-| Threads | Background cmds | Full agent loops per thread|
-| Lifecycle | Fire-and-forget | idle -> working -> idle |
-| Communication | None | message + broadcast |
-
-## Try It
-
-```sh
-cd learn-claude-code
-python agents/s09_agent_teams.py
-```
-
-1. `Spawn alice (coder) and bob (tester). Have alice send bob a message.`
-2. `Broadcast "status update: phase 1 complete" to all teammates`
-3. `Check the lead inbox for any messages`
-4. Type `/team` to see the team roster with statuses
-5. Type `/inbox` to manually check the lead's inbox
diff --git a/docs/en/s09-memory-system.md b/docs/en/s09-memory-system.md
new file mode 100644
index 000000000..39bdc8d79
--- /dev/null
+++ b/docs/en/s09-memory-system.md
@@ -0,0 +1,176 @@
+# s09: Memory System
+
+`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > [ s09 ] > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19`
+
+## What You'll Learn
+
+- Four memory categories that cover what is worth remembering: user preferences, feedback, project facts, and references
+- How YAML frontmatter files give each memory record a name, type, and description
+- What should NOT go into memory -- and why getting this boundary wrong is the most common mistake
+- The difference between memory, tasks, plans, and CLAUDE.md
+
+Your agent from s08 is powerful and extensible. It can execute tools safely, be extended through hooks, and work for long sessions thanks to context compression. But it has amnesia. Every time you start a new session, the agent meets you for the first time. It does not remember that you prefer pnpm over npm, that you told it three times to stop modifying test snapshots, or that the legacy directory cannot be deleted because deployment depends on it. You end up repeating yourself every session. The fix is a small, durable memory store -- not a dump of everything the agent has seen, but a curated set of facts that should still matter next time.
+
+## The Problem
+
+Without memory, a new session starts from zero. The agent keeps forgetting things like long-term user preferences, corrections you have repeated multiple times, project constraints that are not obvious from the code itself, and external references the project depends on. The result is an agent that always feels like it is meeting you for the first time. You waste time re-establishing context that should have been saved once and loaded automatically.
+
+## The Solution
+
+A small file-based memory store saves durable facts as individual markdown files with YAML frontmatter (a metadata block at the top of each file, delimited by `---` lines). At the start of each session, relevant memories are loaded and injected into the model's context.
+
+```text
+conversation
+    |
+    | durable fact appears
+    v
+save_memory
+    |
+    v
+.memory/
+  ├── MEMORY.md
+  ├── prefer_pnpm.md
+  ├── ask_before_codegen.md
+  └── incident_dashboard.md
+    |
+    v
+next session loads relevant entries
+```
+
+## Read Together
+
+- If you still think memory is just "a longer context window," you might find it helpful to revisit [`s06-context-compact.md`](./s06-context-compact.md) and re-separate compaction from durable memory.
+- If `messages[]`, summary blocks, and the memory store start to blend together, keeping [`data-structures.md`](./data-structures.md) open while reading can help.
+- If you are about to continue into s10, reading [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) alongside this chapter is useful because memory matters most when it re-enters the next model input.
+
+## How It Works
+
+**Step 1.** Define four memory categories. These are the types of facts worth keeping across sessions. Each category has a clear purpose -- if a fact does not fit one of these, it probably should not be in memory.
+
+### 1. `user` -- Stable user preferences
+
+Examples: prefers `pnpm`, wants concise answers, dislikes large refactors without a plan.
+
+### 2. `feedback` -- Corrections the user wants enforced
+
+Examples: "do not change test snapshots unless I ask", "ask before modifying generated files."
+
+### 3. `project` -- Durable project facts not obvious from the repo
+
+Examples: "this old directory still cannot be deleted because deployment depends on it", "this service exists because of a compliance requirement, not technical preference."
+
+### 4. `reference` -- Pointers to external resources
+
+Examples: incident board URL, monitoring dashboard location, spec document location.
+
+```python
+MEMORY_TYPES = ("user", "feedback", "project", "reference")
+```
+
+**Step 2.** Save one record per file using frontmatter. Each memory is a markdown file with YAML frontmatter that tells the system what the memory is called, what kind it is, and what it is roughly about.
+
+```md
+---
+name: prefer_pnpm
+description: User prefers pnpm over npm
+type: user
+---
+The user explicitly prefers pnpm for package management commands.
+```
+
+```python
+def save_memory(name, description, mem_type, content):
+    path = memory_dir / f"{slugify(name)}.md"
+    path.write_text(render_frontmatter(name, description, mem_type) + content)
+    rebuild_index()
+```
+
+**Step 3.** Build a small index so the system knows what memories exist without reading every file.
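The `rebuild_index()` call in Step 2 is left abstract above. One minimal way it could be implemented -- a sketch, assuming one `.md` file per memory whose frontmatter carries `name` and `type` lines, with `parse_frontmatter` being a hypothetical helper rather than part of the harness:

```python
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    # Read the "key: value" pairs between the leading "---" lines.
    meta = {}
    lines = text.splitlines()
    if lines and lines[0] == "---":
        for line in lines[1:]:
            if line == "---":
                break
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def rebuild_index(memory_dir: Path) -> str:
    # Regenerate MEMORY.md as a one-line-per-record map of the store.
    entries = []
    for path in sorted(memory_dir.glob("*.md")):
        if path.name == "MEMORY.md":
            continue  # never index the index itself
        meta = parse_frontmatter(path.read_text())
        entries.append(f"- {meta.get('name', path.stem)} [{meta.get('type', 'unknown')}]")
    index = "# Memory Index\n\n" + "\n".join(entries) + "\n"
    (memory_dir / "MEMORY.md").write_text(index)
    return index
```

With the `prefer_pnpm.md` record from Step 2 in the directory, the generated index would contain the line `- prefer_pnpm [user]`.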
+
+```md
+# Memory Index
+
+- prefer_pnpm [user]
+- ask_before_codegen [feedback]
+- incident_dashboard [reference]
+```
+
+The index is not the memory itself -- it is a quick map of what exists.
+
+**Step 4.** Load relevant memory at session start and turn it into a prompt section. Memory becomes useful only when it is fed back into the model input. This is why s09 naturally connects into s10.
+
+```python
+memories = memory_store.load_all()
+```
+
+**Step 5.** Know what should NOT go into memory. This boundary is the most important part of the chapter, and the place where most beginners go wrong.
+
+| Do not store | Why |
+|---|---|
+| file tree layout | can be re-read from the repo |
+| function names and signatures | code is the source of truth |
+| current task status | belongs to task / plan, not memory |
+| temporary branch names or PR numbers | gets stale quickly |
+| secrets or credentials | security risk |
+
+The right rule is: only keep information that still matters across sessions and cannot be cheaply re-derived from the current workspace.
+
+**Step 6.** Understand the boundaries against neighbor concepts. These four things sound similar but serve different purposes.
+
+| Concept | Purpose | Lifetime |
+|---------|---------|----------|
+| Memory | Facts that should survive across sessions | Persistent |
+| Task | What the system is trying to finish right now | One task |
+| Plan | How this turn or session intends to proceed | One session |
+| CLAUDE.md | Stable instruction documents and project-level standing rules | Persistent |
+
+Short rule of thumb: only useful for this task -- use `task` or `plan`. Useful next session too -- use `memory`. Long-lived instruction text -- use `CLAUDE.md`.
+
+## Common Mistakes
+
+**Mistake 1: Storing things the repo can tell you.** If the code can answer it, memory should not duplicate it. You will just end up with stale copies that conflict with reality.
+
+**Mistake 2: Storing live task progress.** "Currently fixing auth" is not memory. That belongs to plan or task state. When the task is done, the memory is meaningless.
+
+**Mistake 3: Treating memory as absolute truth.** Memory can be stale. The safer rule is: memory gives direction, current observation gives truth.
+
+## What Changed From s08
+
+| Component | Before (s08) | After (s09) |
+|-----------|-------------|-------------|
+| Cross-session state | None | File-based memory store |
+| Memory types | None | user, feedback, project, reference |
+| Storage format | None | YAML frontmatter markdown files |
+| Session start | Cold start | Loads relevant memories |
+| Durability | Everything forgotten | Key facts persist |
+
+## Try It
+
+```sh
+cd learn-claude-code
+python agents/s09_memory_system.py
+```
+
+Try asking it to remember:
+
+- a user preference
+- a correction you want enforced later
+- a project fact that is not obvious from the repository
+
+## What You've Mastered
+
+At this point, you can:
+
+- Explain why memory is a curated store of durable facts, not a dump of everything the agent has seen
+- Categorize facts into four types: user preferences, feedback, project knowledge, and references
+- Store and retrieve memories using frontmatter-based markdown files
+- Draw a clear line between what belongs in memory and what belongs in task state, plans, or CLAUDE.md
+- Avoid the three most common mistakes: duplicating the repo, storing transient state, and treating memories as ground truth
+
+## What's Next
+
+Your agent now remembers things across sessions, but those memories just sit in a file until session start. In s10, you will build the system prompt assembly pipeline -- the mechanism that takes memories, skills, permissions, and other context and weaves them into the prompt that the model actually sees on every turn.
+
+## Key Takeaway
+
+> Memory is not a dump of everything the agent has seen -- it is a small store of durable facts that should still matter next session.
diff --git a/docs/en/s10-system-prompt.md b/docs/en/s10-system-prompt.md
new file mode 100644
index 000000000..e0bfdfb4c
--- /dev/null
+++ b/docs/en/s10-system-prompt.md
@@ -0,0 +1,158 @@
+# s10: System Prompt
+
+`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > [ s10 ] > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19`
+
+## What You'll Learn
+
+- How to assemble the system prompt from independent sections instead of one hardcoded string
+- The boundary between stable content (role, rules) and dynamic content (date, cwd, per-turn reminders)
+- How CLAUDE.md files layer instructions without overwriting each other
+- Why memory must be re-injected through the prompt pipeline to actually guide the agent
+
+When your agent had one tool and one job, a single hardcoded prompt string worked fine. But look at everything your harness has accumulated by now: a role description, tool definitions, loaded skills, saved memory, CLAUDE.md instruction files, and per-turn runtime context. If you keep cramming all of that into one big string, nobody -- including you -- can tell where each piece came from, why it is there, or how to change it safely. The fix is to stop treating the prompt as a blob and start treating it as an assembly pipeline.
+
+## The Problem
+
+Imagine you want to add a new tool to your agent. You open the system prompt, scroll past the role paragraph, past the safety rules, past the three skill descriptions, past the memory block, and paste a tool description somewhere in the middle. Next week someone else adds a CLAUDE.md loader and appends its output to the same string. A month later the prompt is 6,000 characters long, half of it is stale, and nobody remembers which lines are supposed to change per turn and which should stay fixed across the entire session.
+
+This is not a hypothetical scenario -- it is the natural trajectory of every agent that keeps its prompt in a single variable.
+
+## The Solution
+
+Turn prompt construction into a pipeline. Each section has one source and one responsibility. A builder object assembles them in a fixed order, with a clear boundary between parts that stay stable and parts that change every turn.
+
+```text
+1. core identity and rules
+2. tool catalog
+3. skills
+4. memory
+5. CLAUDE.md instruction chain
+6. dynamic runtime context
+```
+
+Then assemble:
+
+```text
+core
++ tools
++ skills
++ memory
++ claude_md
++ dynamic_context
+= final model input
+```
+
+## How It Works
+
+**Step 1. Define the builder.** Each method owns exactly one source of content.
+
+```python
+class SystemPromptBuilder:
+    def build(self) -> str:
+        parts = []
+        parts.append(self._build_core())
+        parts.append(self._build_tools())
+        parts.append(self._build_skills())
+        parts.append(self._build_memory())
+        parts.append(self._build_claude_md())
+        parts.append(self._build_dynamic())
+        return "\n\n".join(p for p in parts if p)
+```
+
+That is the central idea of the chapter. Each `_build_*` method pulls from one source only: `_build_tools()` reads the tool list, `_build_memory()` reads the memory store, and so on. If you want to know where a line in the prompt came from, you check the one method responsible for it.
+
+**Step 2. Separate stable content from dynamic content.** This is the most important boundary in the entire pipeline.
+
+Stable content changes rarely or never during a session:
+
+- role description
+- tool contract (the list of tools and their schemas)
+- long-lived safety rules
+- project instruction chain (CLAUDE.md files)
+
+Dynamic content changes every turn or every few turns:
+
+- current date
+- current working directory
+- per-turn warnings or reminders + +Mixing these together means the model re-reads thousands of tokens of stable text that have not changed, while the few tokens that did change are buried somewhere in the middle. A real system separates them with a boundary marker so the stable prefix can be cached across turns to save prompt tokens. + +**Step 3. Layer CLAUDE.md instructions.** `CLAUDE.md` is not the same as memory and not the same as a skill. It is a layered instruction source -- meaning multiple files contribute, and later layers add to earlier ones rather than replacing them: + +1. user-level instruction file (`~/.claude/CLAUDE.md`) +2. project-root instruction file (`/CLAUDE.md`) +3. deeper subdirectory instruction files + +The important point is not the filename itself. The important point is that instruction sources can be layered instead of overwritten. + +**Step 4. Re-inject memory.** Saving memory (in s09) is only half the mechanism. If memory never re-enters the model input, it is not actually guiding the agent. So memory naturally belongs in the prompt pipeline: + +- save durable facts in `s09` +- re-inject them through the prompt builder in `s10` + +**Step 5. Attach per-turn reminders separately.** Some information is even more short-lived than "dynamic context" -- it only matters for this one turn and should not pollute the stable system prompt. 
A `system-reminder` user message keeps these transient signals outside the builder entirely: + +- this-turn-only instructions +- temporary notices +- transient recovery guidance + +## What Changed from s09 + +| Aspect | s09: Memory System | s10: System Prompt | +|--------|--------------------|--------------------| +| Core concern | Persist durable facts across sessions | Assemble all sources into model input | +| Memory's role | Write and store | Read and inject | +| Prompt structure | Assumed but not managed | Explicit pipeline with sections | +| Instruction files | Not addressed | CLAUDE.md layering introduced | +| Dynamic context | Not addressed | Separated from stable content | + +## Read Together + +- If you still treat the prompt as one mysterious blob of text, revisit [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) to see what reaches the model and through which control layers. +- If you want to stabilize the order of assembly, keep [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) beside this chapter -- it is the key bridge note for `s10`. +- If system rules, tool docs, memory, and runtime state start to collapse into one big input lump, reset with [`data-structures.md`](./data-structures.md). + +## Common Beginner Mistakes + +**Mistake 1: teaching the prompt as one fixed string.** That hides how the system really grows. A fixed string is fine for a demo; it stops being fine the moment you add a second capability. + +**Mistake 2: putting every changing detail into the same prompt block.** That mixes durable rules with per-turn noise. When you update one, you risk breaking the other. 
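One way to avoid that mix is to build the stable half and the dynamic half separately and join them at a marked boundary, as described in Step 2. A minimal sketch -- the marker string and function names here are illustrative choices, not fixed conventions of the harness:

```python
from datetime import date

# Illustrative marker; any string the two halves never contain would do.
STABLE_BOUNDARY = "<!-- dynamic-context-below -->"

def build_stable_prefix(role: str, rules: list[str]) -> str:
    # Changes rarely or never during a session; safe to cache across turns.
    return role + "\n\n## Rules\n" + "\n".join(f"- {r}" for r in rules)

def build_dynamic_suffix(cwd: str, mode: str) -> str:
    # Changes every turn; kept after the boundary so the prefix stays byte-identical.
    return f"Date: {date.today().isoformat()}\nWorking directory: {cwd}\nMode: {mode}"

def assemble(role: str, rules: list[str], cwd: str, mode: str) -> str:
    return "\n\n".join([
        build_stable_prefix(role, rules),
        STABLE_BOUNDARY,
        build_dynamic_suffix(cwd, mode),
    ])
```

Because the prefix stays byte-identical from turn to turn, a provider-side prompt cache can reuse it; only the suffix after the marker is rebuilt.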
+
+**Mistake 3: treating skills, memory, and CLAUDE.md as the same thing.** They may all become prompt sections, but their source and purpose are different:
+
+- `skills`: optional capability packages loaded on demand
+- `memory`: durable cross-session facts about the user or project
+- `CLAUDE.md`: standing instruction documents that layer without overwriting
+
+## Try It
+
+```sh
+cd learn-claude-code
+python agents/s10_system_prompt.py
+```
+
+Look for these three things:
+
+1. where each section comes from
+2. which parts are stable
+3. which parts are generated dynamically each turn
+
+## What You've Mastered
+
+At this point, you can:
+
+- Build a system prompt from independent, testable sections instead of one opaque string
+- Draw a clear line between stable content and dynamic content
+- Layer instruction files so that project-level and directory-level rules coexist without overwriting
+- Re-inject memory into the prompt pipeline so saved facts actually influence the model
+- Attach per-turn reminders separately from the main system prompt
+
+## What's Next
+
+The prompt assembly pipeline means your agent now enters each turn with the right instructions, the right tools, and the right context. But real work produces real failures -- output gets cut off, the prompt grows too large, the API times out. In [s11: Error Recovery](./s11-error-recovery.md), you will teach the harness to classify those failures and choose a recovery path instead of crashing.
+
+## Key Takeaway
+
+> The system prompt is an assembly pipeline with clear sections and clear boundaries, not one big mysterious string.
diff --git a/docs/en/s10-team-protocols.md b/docs/en/s10-team-protocols.md
deleted file mode 100644
index e784e5ee0..000000000
--- a/docs/en/s10-team-protocols.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# s10: Team Protocols
-
-`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > [ s10 ] s11 > s12`
-
-> *"Teammates need shared communication rules"* -- one request-response pattern drives all negotiation.
->
-> **Harness layer**: Protocols -- structured handshakes between models.
-
-## Problem
-
-In s09, teammates work and communicate but lack structured coordination:
-
-**Shutdown**: Killing a thread leaves files half-written and config.json stale. You need a handshake: the lead requests, the teammate approves (finish and exit) or rejects (keep working).
-
-**Plan approval**: When the lead says "refactor the auth module," the teammate starts immediately. For high-risk changes, the lead should review the plan first.
-
-Both share the same structure: one side sends a request with a unique ID, the other responds referencing that ID.
-
-## Solution
-
-```
-Shutdown Protocol             Plan Approval Protocol
-==================            ======================
-
-Lead          Teammate        Teammate          Lead
-  |                |             |                 |
-  |--shutdown_req->|             |--plan_req------>|
-  | {req_id:"abc"} |             | {req_id:"xyz"}  |
-  |                |             |                 |
-  |<-shutdown_resp-|             |<--plan_resp-----|
-  | {req_id:"abc", |             | {req_id:"xyz",  |
-  |  approve:true} |             |  approve:true}  |
-
-Shared FSM:
-  [pending] --approve--> [approved]
-  [pending] --reject---> [rejected]
-
-Trackers:
-  shutdown_requests = {req_id: {target, status}}
-  plan_requests     = {req_id: {from, plan, status}}
-```
-
-## How It Works
-
-1. The lead initiates shutdown by generating a request_id and sending through the inbox.
-
-```python
-shutdown_requests = {}
-
-def handle_shutdown_request(teammate: str) -> str:
-    req_id = str(uuid.uuid4())[:8]
-    shutdown_requests[req_id] = {"target": teammate, "status": "pending"}
-    BUS.send("lead", teammate, "Please shut down gracefully.",
-             "shutdown_request", {"request_id": req_id})
-    return f"Shutdown request {req_id} sent (status: pending)"
-```
-
-2. The teammate receives the request and responds with approve/reject.
-
-```python
-if tool_name == "shutdown_response":
-    req_id = args["request_id"]
-    approve = args["approve"]
-    shutdown_requests[req_id]["status"] = "approved" if approve else "rejected"
-    BUS.send(sender, "lead", args.get("reason", ""),
-             "shutdown_response",
-             {"request_id": req_id, "approve": approve})
-```
-
-3. Plan approval follows the identical pattern. The teammate submits a plan (generating a request_id), the lead reviews (referencing the same request_id).
-
-```python
-plan_requests = {}
-
-def handle_plan_review(request_id, approve, feedback=""):
-    req = plan_requests[request_id]
-    req["status"] = "approved" if approve else "rejected"
-    BUS.send("lead", req["from"], feedback,
-             "plan_approval_response",
-             {"request_id": request_id, "approve": approve})
-```
-
-One FSM, two applications. The same `pending -> approved | rejected` state machine handles any request-response protocol.
-
-## What Changed From s09
-
-| Component | Before (s09) | After (s10) |
-|----------------|------------------|------------------------------|
-| Tools | 9 | 12 (+shutdown_req/resp +plan)|
-| Shutdown | Natural exit only| Request-response handshake |
-| Plan gating | None | Submit/review with approval |
-| Correlation | None | request_id per request |
-| FSM | None | pending -> approved/rejected |
-
-## Try It
-
-```sh
-cd learn-claude-code
-python agents/s10_team_protocols.py
-```
-
-1. `Spawn alice as a coder. Then request her shutdown.`
-2. `List teammates to see alice's status after shutdown approval`
-3. `Spawn bob with a risky refactoring task. Review and reject his plan.`
-4. `Spawn charlie, have him submit a plan, then approve it.`
-5. Type `/team` to monitor statuses
diff --git a/docs/en/s10a-message-prompt-pipeline.md b/docs/en/s10a-message-prompt-pipeline.md
new file mode 100644
index 000000000..6143537db
--- /dev/null
+++ b/docs/en/s10a-message-prompt-pipeline.md
@@ -0,0 +1,188 @@
+# s10a: Message & Prompt Pipeline
+
+> **Deep Dive** -- Best read alongside s10. It shows why the system prompt is only one piece of the model's full input.
+
+### When to Read This
+
+When you're working on prompt assembly and want to see the complete input pipeline.
+
+---
+
+> This bridge document extends `s10`.
+>
+> It exists to make one crucial idea explicit:
+>
+> **the system prompt matters, but it is not the whole model input.**
+
+## Why This Document Exists
+
+`s10` already upgrades the system prompt from one giant string into a maintainable assembly process.
+
+That is important.
+
+But a more complete system goes one step further and treats the whole model input as a pipeline made from multiple sources:
+
+- system prompt blocks
+- normalized messages
+- memory attachments
+- reminder injections
+- dynamic runtime context
+
+So the true structure is:
+
+**a prompt pipeline, not only a prompt builder.**
+
+## Terms First
+
+### Prompt block
+
+A structured piece inside the system prompt, such as:
+
+- core identity
+- tool instructions
+- memory section
+- CLAUDE.md section
+
+### Normalized message
+
+A message that has already been converted into a stable shape suitable for the model API.
+
+This is necessary because the raw system may contain:
+
+- user messages
+- assistant replies
+- tool results
+- reminder injections
+- attachment-like content
+
+Normalization ensures all of these fit the same structural contract before they reach the API.
+
+### System reminder
+
+A small temporary instruction injected for the current turn or current mode.
+
+Unlike a long-lived prompt block, a reminder is usually short-lived and situational -- for example, telling the model it is currently in "plan mode" or that a certain tool is temporarily unavailable.
+
+## The Smallest Useful Mental Model
+
+Think of the full input as a pipeline:
+
+```text
+multiple sources
+   |
+   +-- system prompt blocks
+   +-- messages
+   +-- attachments
+   +-- reminders
+   |
+   v
+normalize
+   |
+   v
+final API payload
+```
+
+The key teaching point is:
+
+**separate the sources first, then normalize them into one stable input.**
+
+## Why System Prompt Is Not Everything
+
+The system prompt is the right place for:
+
+- identity
+- stable rules
+- long-lived constraints
+- tool capability descriptions
+
+But it is usually the wrong place for:
+
+- the latest `tool_result`
+- one-turn hook injections
+- temporary reminders
+- dynamic memory attachments
+
+Those belong in the message stream or in adjacent input surfaces.
+
+## Core Structures
+
+### `SystemPromptBlock`
+
+```python
+block = {
+    "text": "...",
+    "cache_scope": None,
+}
+```
+
+### `PromptParts`
+
+```python
+parts = {
+    "core": "...",
+    "tools": "...",
+    "skills": "...",
+    "memory": "...",
+    "claude_md": "...",
+    "dynamic": "...",
+}
+```
+
+### `NormalizedMessage`
+
+```python
+message = {
+    "role": "user" | "assistant",
+    "content": [...],
+}
+```
+
+Treat `content` as a list of blocks, not just one string.
+
+### `ReminderMessage`
+
+```python
+reminder = {
+    "role": "system",
+    "content": "Current mode: plan",
+}
+```
+
+Even if your teaching implementation does not literally use `role="system"` here, you should still keep the mental split:
+
+- long-lived prompt block
+- short-lived reminder
+
+## Minimal Implementation Path
+
+### 1. Keep a `SystemPromptBuilder`
+
+Do not throw away the prompt-builder step.
+
+### 2. Make messages a separate pipeline
+
+```python
+def build_messages(raw_messages, attachments, reminders):
+    messages = normalize_messages(raw_messages)
+    messages = attach_memory(messages, attachments)
+    messages = append_reminders(messages, reminders)
+    return messages
+```
+
+### 3. Assemble the final payload only at the end
+
+```python
+payload = {
+    "system": build_system_prompt(),
+    "messages": build_messages(...),
+    "tools": build_tools(...),
+}
+```
+
+This is the important mental upgrade:
+
+**system prompt, messages, and tools are parallel input surfaces, not replacements for one another.**
+
+## Key Takeaway
+
+**The model input is a pipeline of sources that are normalized late, not one mystical prompt blob. System prompt, messages, and tools are parallel surfaces that converge only at send time.**
diff --git a/docs/en/s11-autonomous-agents.md b/docs/en/s11-autonomous-agents.md
deleted file mode 100644
index a3c283675..000000000
--- a/docs/en/s11-autonomous-agents.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# s11: Autonomous Agents
-
-`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > [ s11 ] s12`
-
-> *"Teammates scan the board and claim tasks themselves"* -- no need for the lead to assign each one.
->
-> **Harness layer**: Autonomy -- models that find work without being told.
-
-## Problem
-
-In s09-s10, teammates only work when explicitly told to. The lead must spawn each one with a specific prompt. 10 unclaimed tasks on the board? The lead assigns each one manually. Doesn't scale.
-
-True autonomy: teammates scan the task board themselves, claim unclaimed tasks, work on them, then look for more.
-
-One subtlety: after context compression (s06), the agent might forget who it is. Identity re-injection fixes this.
-
-## Solution
-
-```
-Teammate lifecycle with idle cycle:
-
-+-------+
-| spawn |
-+---+---+
-    |
-    v
-+-------+    tool_use    +-------+
-| WORK  | <------------- |  LLM  |
-+---+---+                +-------+
-    |
-    | stop_reason != tool_use (or idle tool called)
-    v
-+--------+
-|  IDLE  | poll every 5s for up to 60s
-+---+----+
-    |
-    +---> check inbox --> message? ----------> WORK
-    |
-    +---> scan .tasks/ --> unclaimed? -------> claim -> WORK
-    |
-    +---> 60s timeout ----------------------> SHUTDOWN
-
-Identity re-injection after compression:
-    if len(messages) <= 3:
-        messages.insert(0, identity_block)
-```
-
-## How It Works
-
-1. The teammate loop has two phases: WORK and IDLE. When the LLM stops calling tools (or calls `idle`), the teammate enters IDLE.
-
-```python
-def _loop(self, name, role, prompt):
-    while True:
-        # -- WORK PHASE --
-        messages = [{"role": "user", "content": prompt}]
-        for _ in range(50):
-            response = client.messages.create(...)
-            if response.stop_reason != "tool_use":
-                break
-            # execute tools...
-            if idle_requested:
-                break
-
-        # -- IDLE PHASE --
-        self._set_status(name, "idle")
-        resume = self._idle_poll(name, messages)
-        if not resume:
-            self._set_status(name, "shutdown")
-            return
-        self._set_status(name, "working")
-```
-
-2. The idle phase polls inbox and task board in a loop.
-
-```python
-def _idle_poll(self, name, messages):
-    for _ in range(IDLE_TIMEOUT // POLL_INTERVAL):  # 60s / 5s = 12
-        time.sleep(POLL_INTERVAL)
-        inbox = BUS.read_inbox(name)
-        if inbox:
-            messages.append({"role": "user",
-                             "content": f"{inbox}"})
-            return True
-        unclaimed = scan_unclaimed_tasks()
-        if unclaimed:
-            claim_task(unclaimed[0]["id"], name)
-            messages.append({"role": "user",
-                             "content": f"Task #{unclaimed[0]['id']}: "
-                                        f"{unclaimed[0]['subject']}"})
-            return True
-    return False  # timeout -> shutdown
-```
-
-3. Task board scanning: find pending, unowned, unblocked tasks.
-
-```python
-def scan_unclaimed_tasks() -> list:
-    unclaimed = []
-    for f in sorted(TASKS_DIR.glob("task_*.json")):
-        task = json.loads(f.read_text())
-        if (task.get("status") == "pending"
-                and not task.get("owner")
-                and not task.get("blockedBy")):
-            unclaimed.append(task)
-    return unclaimed
-```
-
-4. Identity re-injection: when context is too short (compression happened), insert an identity block.
-
-```python
-if len(messages) <= 3:
-    messages.insert(0, {"role": "user",
-                        "content": f"You are '{name}', role: {role}, "
-                                   f"team: {team_name}. Continue your work."})
-    messages.insert(1, {"role": "assistant",
-                        "content": f"I am {name}. Continuing."})
-```
-
-## What Changed From s10
-
-| Component | Before (s10) | After (s11) |
-|----------------|------------------|----------------------------|
-| Tools | 12 | 14 (+idle, +claim_task) |
-| Autonomy | Lead-directed | Self-organizing |
-| Idle phase | None | Poll inbox + task board |
-| Task claiming | Manual only | Auto-claim unclaimed tasks |
-| Identity | System prompt | + re-injection after compress|
-| Timeout | None | 60s idle -> auto shutdown |
-
-## Try It
-
-```sh
-cd learn-claude-code
-python agents/s11_autonomous_agents.py
-```
-
-1. `Create 3 tasks on the board, then spawn alice and bob. Watch them auto-claim.`
-2. `Spawn a coder teammate and let it find work from the task board itself`
-3. `Create tasks with dependencies. Watch teammates respect the blocked order.`
-4. Type `/tasks` to see the task board with owners
-5. Type `/team` to monitor who is working vs idle
diff --git a/docs/en/s11-error-recovery.md b/docs/en/s11-error-recovery.md
new file mode 100644
index 000000000..9fe7dcaaf
--- /dev/null
+++ b/docs/en/s11-error-recovery.md
@@ -0,0 +1,204 @@
+# s11: Error Recovery
+
+`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > [ s11 ] > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19`
+
+## What You'll Learn
+
+- Three categories of recoverable failure: truncation, context overflow, and transient transport errors
+- How to route each failure to the right recovery branch (continuation, compaction, or backoff)
+- Why retry budgets prevent infinite loops
+- How recovery state keeps the "why" visible instead of burying it in a catch block
+
+Your agent is doing real work now -- reading files, writing code, calling tools across multiple turns. And real work produces real failures. Output gets cut off mid-sentence. The prompt grows past the model's context window. The API times out or hits a rate limit. If every one of these failures ends the run immediately, your system feels brittle and your users learn not to trust it. But here is the key insight: most of these failures are not true task failure. They are signals that the next step needs a different continuation path.
+
+## The Problem
+
+Your user asks the agent to refactor a large file. The model starts writing the new version, but the output hits `max_tokens` and stops mid-function. Without recovery, the agent just halts with a half-written file. The user has to notice, re-prompt, and hope the model picks up where it left off.
+
+Or: the conversation has been running for 40 turns. The accumulated messages push the prompt past the model's context limit. The API returns an error. Without recovery, the entire session is lost.
+
+Or: a momentary network hiccup drops the connection. Without recovery, the agent crashes even though the same request would succeed one second later.
+
+Each of these is a different kind of failure, and each needs a different recovery action. A single catch-all retry cannot handle all three correctly.
+
+## The Solution
+
+Classify the failure first, choose the recovery branch second, and enforce a retry budget so the system cannot loop forever.
+
+```text
+LLM call
+  |
+  +-- stop_reason == "max_tokens"
+  |     -> append continuation reminder
+  |     -> retry
+  |
+  +-- prompt too long
+  |     -> compact context
+  |     -> retry
+  |
+  +-- timeout / rate limit / connection error
+        -> back off
+        -> retry
+```
+
+## How It Works
+
+**Step 1. Track recovery state.** Before you can recover, you need to know how many times you have already tried. A simple counter per category prevents infinite loops:
+
+```python
+recovery_state = {
+    "continuation_attempts": 0,
+    "compact_attempts": 0,
+    "transport_attempts": 0,
+}
+```
+
+**Step 2. Classify the failure.** Each failure maps to exactly one recovery kind. The classifier examines the stop reason and error text, then returns a structured decision:
+
+```python
+def choose_recovery(stop_reason: str | None, error_text: str | None) -> dict:
+    if stop_reason == "max_tokens":
+        return {"kind": "continue", "reason": "output truncated"}
+
+    if error_text and "prompt" in error_text and "long" in error_text:
+        return {"kind": "compact", "reason": "context too large"}
+
+    if error_text and any(word in error_text for word in [
+        "timeout", "rate", "unavailable", "connection"
+    ]):
+        return {"kind": "backoff", "reason": "transient transport failure"}
+
+    return {"kind": "fail", "reason": "unknown or non-recoverable error"}
+```
+
+The separation matters: classify first, act second. That way the recovery reason stays visible in state instead of disappearing inside a catch block.
+
+**Step 3. Handle continuation (truncated output).** When the model runs out of output space, the task did not fail -- the turn just ended too early.
You inject a continuation reminder and retry: + +```python +CONTINUE_MESSAGE = ( + "Output limit hit. Continue directly from where you stopped. " + "Do not restart or repeat." +) +``` + +Without this reminder, models tend to restart from the beginning or repeat what they already wrote. The explicit instruction to "continue directly" keeps the output flowing forward. + +**Step 4. Handle compaction (context overflow).** When the prompt becomes too large, the problem is not the task itself -- the accumulated context needs to shrink before the next turn can proceed. You call the same `auto_compact` mechanism from s06 to summarize history, then retry: + +```python +if decision["kind"] == "compact": + messages = auto_compact(messages) + continue +``` + +**Step 5. Handle backoff (transient errors).** When the error is probably temporary -- a timeout, a rate limit, a brief outage -- you wait and try again. Exponential backoff (doubling the delay each attempt, plus random jitter to avoid thundering-herd problems where many clients retry at the same instant) keeps the system from hammering a struggling server: + +```python +def backoff_delay(attempt: int) -> float: + delay = min(BACKOFF_BASE_DELAY * (2 ** attempt), BACKOFF_MAX_DELAY) + jitter = random.uniform(0, 1) + return delay + jitter +``` + +**Step 6. Wire it into the loop.** The recovery logic sits right inside the agent loop. Each branch either adjusts the messages and continues, or gives up: + +```python +while True: + try: + response = client.messages.create(...) 
+ decision = choose_recovery(response.stop_reason, None) + except Exception as e: + response = None + decision = choose_recovery(None, str(e).lower()) + + if decision["kind"] == "continue": + messages.append({"role": "user", "content": CONTINUE_MESSAGE}) + continue + + if decision["kind"] == "compact": + messages = auto_compact(messages) + continue + + if decision["kind"] == "backoff": + time.sleep(backoff_delay(...)) + continue + + if decision["kind"] == "fail": + break +``` + +The point is not clever code. The point is: classify, choose, retry with a budget. + +## What Changed from s10 + +| Aspect | s10: System Prompt | s11: Error Recovery | +|--------|--------------------|--------------------| +| Core concern | Assemble model input from sections | Handle failures without crashing | +| Loop behavior | Runs until end_turn or tool_use | Adds recovery branches before giving up | +| Compaction | Not addressed | Triggered reactively on context overflow | +| Retry logic | Not addressed | Budgeted per failure category | +| State tracking | Prompt sections | Recovery counters | + +## A Note on Real Systems + +Real agent systems also persist session state to disk, so that a crash does not destroy a long-running conversation. Session persistence, checkpointing, and resumption are separate concerns from error recovery -- but they complement it. Recovery handles the failures you can retry in-process; persistence handles the failures you cannot. This teaching harness focuses on the in-process recovery paths, but keep in mind that production systems need both layers. + +## Read Together + +- If you start losing track of why the current query is still continuing, go back to [`s00c-query-transition-model.md`](./s00c-query-transition-model.md). +- If context compaction and error recovery are starting to look like the same mechanism, reread [`s06-context-compact.md`](./s06-context-compact.md) to separate "shrink context" from "recover after failure." 
+- If you are about to move into `s12`, keep [`data-structures.md`](./data-structures.md) nearby because the task system adds a new durable work layer on top of recovery state. + +## Common Beginner Mistakes + +**Mistake 1: using one retry rule for every error.** Different failures need different recovery actions. Retrying a context-overflow error without compacting first will just produce the same error again. + +**Mistake 2: no retry budget.** Without budgets, the system can loop forever. Each recovery category needs its own counter and its own maximum. + +**Mistake 3: hiding the recovery reason.** The system should know *why* it is retrying. That reason should stay visible in state -- as a structured decision object -- not disappear inside a catch block. + +## Try It + +```sh +cd learn-claude-code +python agents/s11_error_recovery.py +``` + +Try forcing: + +- a long response (to trigger max_tokens continuation) +- a large context (to trigger compaction) +- a temporary timeout (to trigger backoff) + +Then observe which recovery branch the system chooses and how the retry counter increments. + +## What You've Mastered + +At this point, you can: + +- Classify agent failures into three recoverable categories and one terminal category +- Route each failure to the correct recovery branch: continuation, compaction, or backoff +- Enforce retry budgets so the system never loops forever +- Keep recovery decisions visible as structured state instead of burying them in exception handlers +- Explain why different failure types need different recovery actions + +## Stage 2 Complete + +You have finished Stage 2 of the harness. 
Look at what you have built since Stage 1: + +- **s07 Permission System** -- the harness asks before acting, and the user controls what gets auto-approved +- **s08 Hook System** -- external scripts run at lifecycle points without touching the agent loop +- **s09 Memory System** -- durable facts survive across sessions +- **s10 System Prompt** -- the prompt is an assembly pipeline with clear sections, not one big string +- **s11 Error Recovery** -- failures route to the right recovery path instead of crashing + +Your agent started Stage 2 as a working loop that could call tools and manage context. It finishes Stage 2 as a system that governs itself: it checks permissions, runs hooks, remembers what matters, assembles its own instructions, and recovers from failures without human intervention. + +That is a real agent harness. If you stopped here and built a product on top of it, you would have something genuinely useful. + +But there is more to build. Stage 3 introduces structured work management -- task lists, background execution, and scheduled jobs. The agent stops being purely reactive and starts organizing its own work across time. See you in [s12: Task System](./s12-task-system.md). + +## Key Takeaway + +> Most agent failures are not true task failure -- they are signals to try a different continuation path, and the harness should classify them and recover automatically. 
diff --git a/docs/en/s12-task-system.md b/docs/en/s12-task-system.md new file mode 100644 index 000000000..3be263481 --- /dev/null +++ b/docs/en/s12-task-system.md @@ -0,0 +1,149 @@ +# s12: Task System + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > [ s12 ] > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +## What You'll Learn + +- How to promote a flat checklist into a task graph with explicit dependencies +- How `blockedBy` and `blocks` edges express ordering and parallelism +- How status transitions (`pending` -> `in_progress` -> `completed`) drive automatic unblocking +- How persisting tasks to disk makes them survive compression and restarts + +Back in s03 you gave the agent a TodoWrite tool -- a flat checklist that tracks what is done and what is not. That works well for a single focused session. But real work has structure. Task B depends on task A. Tasks C and D can run in parallel. Task E waits for both C and D. A flat list cannot express any of that. And because the checklist lives only in memory, context compression (s06) wipes it clean. In this chapter you will replace the checklist with a proper task graph that understands dependencies, persists to disk, and becomes the coordination backbone for everything that follows. + +## The Problem + +Imagine you ask your agent to refactor a codebase: parse the AST, transform the nodes, emit the new code, and run the tests. The parse step must finish before transform and emit can begin. Transform and emit can run in parallel. Tests must wait for both. With s03's flat TodoWrite, the agent has no way to express these relationships. It might attempt the transform before the parse is done, or run the tests before anything is ready. There is no ordering, no dependency tracking, and no status beyond "done or not." Worse, if the context window fills up and compression kicks in, the entire plan vanishes. + +## The Solution + +Promote the checklist into a task graph persisted to disk. 
Each task is a JSON file with status, dependencies (`blockedBy`), and dependents (`blocks`). The graph answers three questions at any moment: what is ready, what is blocked, and what is done. + +``` +.tasks/ + task_1.json {"id":1, "status":"completed"} + task_2.json {"id":2, "blockedBy":[1], "status":"pending"} + task_3.json {"id":3, "blockedBy":[1], "status":"pending"} + task_4.json {"id":4, "blockedBy":[2,3], "status":"pending"} + +Task graph (DAG): + +----------+ + +--> | task 2 | --+ + | | pending | | ++----------+ +----------+ +--> +----------+ +| task 1 | | task 4 | +| completed| --> +----------+ +--> | blocked | ++----------+ | task 3 | --+ +----------+ + | pending | + +----------+ + +Ordering: task 1 must finish before 2 and 3 +Parallelism: tasks 2 and 3 can run at the same time +Dependencies: task 4 waits for both 2 and 3 +Status: pending -> in_progress -> completed +``` + +The structure above is a DAG -- a directed acyclic graph, meaning tasks flow forward and never loop back. This task graph becomes the coordination backbone for the later chapters: background execution (s13), agent teams (s15+), and worktree isolation (s18) all build on the same durable task structure. + +## How It Works + +**Step 1.** Create a `TaskManager` that stores one JSON file per task, with CRUD operations and a dependency graph. + +```python +class TaskManager: + def __init__(self, tasks_dir: Path): + self.dir = tasks_dir + self.dir.mkdir(exist_ok=True) + self._next_id = self._max_id() + 1 + + def create(self, subject, description=""): + task = {"id": self._next_id, "subject": subject, + "status": "pending", "blockedBy": [], + "blocks": [], "owner": ""} + self._save(task) + self._next_id += 1 + return json.dumps(task, indent=2) +``` + +**Step 2.** Implement dependency resolution. When a task completes, clear its ID from every other task's `blockedBy` list, automatically unblocking dependents. 
+ +```python +def _clear_dependency(self, completed_id): + for f in self.dir.glob("task_*.json"): + task = json.loads(f.read_text()) + if completed_id in task.get("blockedBy", []): + task["blockedBy"].remove(completed_id) + self._save(task) +``` + +**Step 3.** Wire up status transitions and dependency edges in the `update` method. When a task's status changes to `completed`, the dependency-clearing logic from Step 2 fires automatically. + +```python +def update(self, task_id, status=None, + add_blocked_by=None, add_blocks=None): + task = self._load(task_id) + if status: + task["status"] = status + if status == "completed": + self._clear_dependency(task_id) + self._save(task) +``` + +**Step 4.** Register four task tools in the dispatch map, giving the agent full control over creating, updating, listing, and inspecting tasks. + +```python +TOOL_HANDLERS = { + # ...base tools... + "task_create": lambda **kw: TASKS.create(kw["subject"]), + "task_update": lambda **kw: TASKS.update(kw["task_id"], kw.get("status")), + "task_list": lambda **kw: TASKS.list_all(), + "task_get": lambda **kw: TASKS.get(kw["task_id"]), +} +``` + +From s12 onward, the task graph becomes the default for durable multi-step work. s03's Todo remains useful for quick single-session checklists, but anything that needs ordering, parallelism, or persistence belongs here. + +## Read Together + +- If you are coming straight from s03, revisit [`data-structures.md`](./data-structures.md) to separate `TodoItem` / `PlanState` from `TaskRecord` -- they look similar but serve different purposes. +- If object boundaries start to blur, reset with [`entity-map.md`](./entity-map.md) before you mix messages, tasks, runtime tasks, and teammates into one layer. +- If you plan to continue into s13, keep [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) beside this chapter because durable tasks and runtime tasks are the easiest pair to confuse next. 
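The steps above can be condensed into one self-contained sketch. Method names loosely mirror the snippets, but the `complete()` / `ready()` helpers and the constructor details are illustrative assumptions, not the repo's exact API:

```python
import json
import tempfile
from pathlib import Path

class TaskManager:
    """Minimal file-backed task graph: one JSON file per task."""

    def __init__(self, tasks_dir: Path):
        self.dir = tasks_dir
        self.dir.mkdir(exist_ok=True)
        self._next_id = 1

    def _save(self, task):
        (self.dir / f"task_{task['id']}.json").write_text(
            json.dumps(task, indent=2))

    def _load(self, task_id):
        return json.loads((self.dir / f"task_{task_id}.json").read_text())

    def create(self, subject, blocked_by=None):
        task = {"id": self._next_id, "subject": subject,
                "status": "pending", "blockedBy": list(blocked_by or []),
                "blocks": []}
        self._save(task)
        self._next_id += 1
        return task["id"]

    def complete(self, task_id):
        task = self._load(task_id)
        task["status"] = "completed"
        self._save(task)
        # Clear this ID from every other task's blockedBy list,
        # automatically unblocking dependents.
        for f in self.dir.glob("task_*.json"):
            other = json.loads(f.read_text())
            if task_id in other.get("blockedBy", []):
                other["blockedBy"].remove(task_id)
                self._save(other)

    def ready(self):
        """IDs of tasks that are pending and no longer blocked."""
        return sorted(
            t["id"] for f in self.dir.glob("task_*.json")
            if (t := json.loads(f.read_text()))["status"] == "pending"
            and not t["blockedBy"])

tasks = TaskManager(Path(tempfile.mkdtemp()) / ".tasks")
parse = tasks.create("parse AST")
transform = tasks.create("transform nodes", blocked_by=[parse])
emit = tasks.create("emit code", blocked_by=[parse])
test = tasks.create("run tests", blocked_by=[transform, emit])

print(tasks.ready())   # only the parse step is ready
tasks.complete(parse)
print(tasks.ready())   # transform and emit unblock in parallel
```

This is the refactoring board from the Try It prompts: completing `parse` unblocks `transform` and `emit` simultaneously, while `test` stays blocked until both finish.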
+ +## What Changed + +| Component | Before (s06) | After (s12) | +|---|---|---| +| Tools | 5 | 8 (`task_create/update/list/get`) | +| Planning model | Flat checklist (in-memory) | Task graph with dependencies (on disk) | +| Relationships | None | `blockedBy` + `blocks` edges | +| Status tracking | Done or not | `pending` -> `in_progress` -> `completed` | +| Persistence | Lost on compression | Survives compression and restarts | + +## Try It + +```sh +cd learn-claude-code +python agents/s12_task_system.py +``` + +1. `Create 3 tasks: "Setup project", "Write code", "Write tests". Make them depend on each other in order.` +2. `List all tasks and show the dependency graph` +3. `Complete task 1 and then list tasks to see task 2 unblocked` +4. `Create a task board for refactoring: parse -> transform -> emit -> test, where transform and emit can run in parallel after parse` + +## What You've Mastered + +At this point, you can: + +- Build a file-based task graph where each task is a self-contained JSON record +- Express ordering and parallelism through `blockedBy` and `blocks` dependency edges +- Implement automatic unblocking when upstream tasks complete +- Persist planning state so it survives context compression and process restarts + +## What's Next + +Tasks now have structure and live on disk. But every tool call still blocks the main loop -- if a task involves a slow subprocess like `npm install` or `pytest`, the agent sits idle waiting. In s13 you will add background execution so slow work runs in parallel while the agent keeps thinking. + +## Key Takeaway + +> A task graph with explicit dependencies turns a flat checklist into a coordination structure that knows what is ready, what is blocked, and what can run in parallel. 
diff --git a/docs/en/s12-worktree-task-isolation.md b/docs/en/s12-worktree-task-isolation.md deleted file mode 100644 index a54282aca..000000000 --- a/docs/en/s12-worktree-task-isolation.md +++ /dev/null @@ -1,121 +0,0 @@ -# s12: Worktree + Task Isolation - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > [ s12 ]` - -> *"Each works in its own directory, no interference"* -- tasks manage goals, worktrees manage directories, bound by ID. -> -> **Harness layer**: Directory isolation -- parallel execution lanes that never collide. - -## Problem - -By s11, agents can claim and complete tasks autonomously. But every task runs in one shared directory. Two agents refactoring different modules at the same time will collide: agent A edits `config.py`, agent B edits `config.py`, unstaged changes mix, and neither can roll back cleanly. - -The task board tracks *what to do* but has no opinion about *where to do it*. The fix: give each task its own git worktree directory. Tasks manage goals, worktrees manage execution context. Bind them by task ID. - -## Solution - -``` -Control plane (.tasks/) Execution plane (.worktrees/) -+------------------+ +------------------------+ -| task_1.json | | auth-refactor/ | -| status: in_progress <------> branch: wt/auth-refactor -| worktree: "auth-refactor" | task_id: 1 | -+------------------+ +------------------------+ -| task_2.json | | ui-login/ | -| status: pending <------> branch: wt/ui-login -| worktree: "ui-login" | task_id: 2 | -+------------------+ +------------------------+ - | - index.json (worktree registry) - events.jsonl (lifecycle log) - -State machines: - Task: pending -> in_progress -> completed - Worktree: absent -> active -> removed | kept -``` - -## How It Works - -1. **Create a task.** Persist the goal first. - -```python -TASKS.create("Implement auth refactor") -# -> .tasks/task_1.json status=pending worktree="" -``` - -2. 
**Create a worktree and bind to the task.** Passing `task_id` auto-advances the task to `in_progress`. - -```python -WORKTREES.create("auth-refactor", task_id=1) -# -> git worktree add -b wt/auth-refactor .worktrees/auth-refactor HEAD -# -> index.json gets new entry, task_1.json gets worktree="auth-refactor" -``` - -The binding writes state to both sides: - -```python -def bind_worktree(self, task_id, worktree): - task = self._load(task_id) - task["worktree"] = worktree - if task["status"] == "pending": - task["status"] = "in_progress" - self._save(task) -``` - -3. **Run commands in the worktree.** `cwd` points to the isolated directory. - -```python -subprocess.run(command, shell=True, cwd=worktree_path, - capture_output=True, text=True, timeout=300) -``` - -4. **Close out.** Two choices: - - `worktree_keep(name)` -- preserve the directory for later. - - `worktree_remove(name, complete_task=True)` -- remove directory, complete the bound task, emit event. One call handles teardown + completion. - -```python -def remove(self, name, force=False, complete_task=False): - self._run_git(["worktree", "remove", wt["path"]]) - if complete_task and wt.get("task_id") is not None: - self.tasks.update(wt["task_id"], status="completed") - self.tasks.unbind_worktree(wt["task_id"]) - self.events.emit("task.completed", ...) -``` - -5. **Event stream.** Every lifecycle step emits to `.worktrees/events.jsonl`: - -```json -{ - "event": "worktree.remove.after", - "task": {"id": 1, "status": "completed"}, - "worktree": {"name": "auth-refactor", "status": "removed"}, - "ts": 1730000000 -} -``` - -Events emitted: `worktree.create.before/after/failed`, `worktree.remove.before/after/failed`, `worktree.keep`, `task.completed`. - -After a crash, state reconstructs from `.tasks/` + `.worktrees/index.json` on disk. Conversation memory is volatile; file state is durable. 
- -## What Changed From s11 - -| Component | Before (s11) | After (s12) | -|--------------------|----------------------------|----------------------------------------------| -| Coordination | Task board (owner/status) | Task board + explicit worktree binding | -| Execution scope | Shared directory | Task-scoped isolated directory | -| Recoverability | Task status only | Task status + worktree index | -| Teardown | Task completion | Task completion + explicit keep/remove | -| Lifecycle visibility | Implicit in logs | Explicit events in `.worktrees/events.jsonl` | - -## Try It - -```sh -cd learn-claude-code -python agents/s12_worktree_task_isolation.py -``` - -1. `Create tasks for backend auth and frontend login page, then list tasks.` -2. `Create worktree "auth-refactor" for task 1, then bind task 2 to a new worktree "ui-login".` -3. `Run "git status --short" in worktree "auth-refactor".` -4. `Keep worktree "ui-login", then list worktrees and inspect events.` -5. `Remove worktree "auth-refactor" with complete_task=true, then list tasks/worktrees/events.` diff --git a/docs/en/s13-background-tasks.md b/docs/en/s13-background-tasks.md new file mode 100644 index 000000000..b2ce326dc --- /dev/null +++ b/docs/en/s13-background-tasks.md @@ -0,0 +1,139 @@ +# s13: Background Tasks + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > [ s13 ] > s14 > s15 > s16 > s17 > s18 > s19` + +## What You'll Learn + +- How to run slow commands in background threads while the main loop stays responsive +- How a thread-safe notification queue delivers results back to the agent +- How daemon threads keep the process clean on exit +- How the drain-before-call pattern injects background results at exactly the right moment + +You have a task graph now, and every task can express what it depends on. But there is a practical problem: some tasks involve commands that take minutes. 
`npm install`, `pytest`, `docker build` -- these block the main loop, and while the agent waits, the user waits too. If the user says "install dependencies and while that runs, create the config file," your agent from s12 does them sequentially because it has no way to start something and come back to it later. This chapter fixes that by adding background execution. + +## The Problem + +Consider a realistic workflow: the user asks the agent to run a full test suite (which takes 90 seconds) and then set up a configuration file. With a blocking loop, the agent submits the test command, stares at a spinning subprocess for 90 seconds, gets the result, and only then starts the config file. The user watches all of this happen serially. Worse, if there are three slow commands, total wall-clock time is the sum of all three -- even though they could have run in parallel. The agent needs a way to start slow work, give control back to the main loop immediately, and pick up the results later. + +## The Solution + +Keep the main loop single-threaded, but run slow subprocesses on background daemon threads. When a background command finishes, its result goes into a thread-safe notification queue. Before each LLM call, the main loop drains that queue and injects any completed results into the conversation. + +``` +Main thread Background thread ++-----------------+ +-----------------+ +| agent loop | | subprocess runs | +| ... | | ... | +| [LLM call] <---+------- | enqueue(result) | +| ^drain queue | +-----------------+ ++-----------------+ + +Timeline: +Agent --[spawn A]--[spawn B]--[other work]---- + | | + v v + [A runs] [B runs] (parallel) + | | + +-- results injected before next LLM call --+ +``` + +## How It Works + +**Step 1.** Create a `BackgroundManager` that tracks running tasks with a thread-safe notification queue. The lock ensures that the main thread and background threads never corrupt the queue simultaneously. 
+ +```python +class BackgroundManager: + def __init__(self): + self.tasks = {} + self._notification_queue = [] + self._lock = threading.Lock() +``` + +**Step 2.** The `run()` method starts a daemon thread and returns immediately. A daemon thread is one that the Python runtime kills automatically when the main program exits -- you do not need to join it or clean it up. + +```python +def run(self, command: str) -> str: + task_id = str(uuid.uuid4())[:8] + self.tasks[task_id] = {"status": "running", "command": command} + thread = threading.Thread( + target=self._execute, args=(task_id, command), daemon=True) + thread.start() + return f"Background task {task_id} started" +``` + +**Step 3.** When the subprocess finishes, the background thread puts its result into the notification queue. The lock makes this safe even if the main thread is draining the queue at the same time. + +```python +def _execute(self, task_id, command): + try: + r = subprocess.run(command, shell=True, cwd=WORKDIR, + capture_output=True, text=True, timeout=300) + output = (r.stdout + r.stderr).strip()[:50000] + except subprocess.TimeoutExpired: + output = "Error: Timeout (300s)" + with self._lock: + self._notification_queue.append({ + "task_id": task_id, "result": output[:500]}) +``` + +**Step 4.** The agent loop drains notifications before each LLM call. This is the drain-before-call pattern: right before you ask the model to think, sweep up any background results and add them to the conversation so the model sees them in its next turn. + +```python +def agent_loop(messages: list): + while True: + notifs = BG.drain_notifications() + if notifs: + notif_text = "\n".join( + f"[bg:{n['task_id']}] {n['result']}" for n in notifs) + messages.append({"role": "user", + "content": f"\n{notif_text}\n" + f""}) + messages.append({"role": "assistant", + "content": "Noted background results."}) + response = client.messages.create(...) 
+``` + +This teaching demo keeps the core loop single-threaded; only subprocess waiting is parallelized. A production system would typically split background work into several runtime lanes, but starting with one clean pattern makes the mechanics easy to follow. + +## Read Together + +- If you have not fully separated "task goal" from "running execution slot," read [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) first -- it clarifies why a task record and a runtime record are different objects. +- If you are unsure which state belongs in `RuntimeTaskRecord` and which still belongs on the task board, keep [`data-structures.md`](./data-structures.md) nearby. +- If background execution starts to feel like "another main loop," go back to [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md) and reset the boundary: execution and waiting can run in parallel, but the main loop is still one mainline. + +## What Changed + +| Component | Before (s12) | After (s13) | +|----------------|------------------|----------------------------| +| Tools | 8 | 6 (base + background_run + check)| +| Execution | Blocking only | Blocking + background threads| +| Notification | None | Queue drained per loop | +| Concurrency | None | Daemon threads | + +## Try It + +```sh +cd learn-claude-code +python agents/s13_background_tasks.py +``` + +1. `Run "sleep 5 && echo done" in the background, then create a file while it runs` +2. `Start 3 background tasks: "sleep 2", "sleep 4", "sleep 6". Check their status.` +3. 
`Run pytest in the background and keep working on other things` + +## What You've Mastered + +At this point, you can: + +- Run slow subprocesses on daemon threads without blocking the main agent loop +- Collect results through a thread-safe notification queue +- Inject background results into the conversation using the drain-before-call pattern +- Let the agent work on other things while long-running commands finish in parallel + +## What's Next + +Background tasks solve the problem of slow work that starts now. But what about work that should start later -- "run this every night" or "remind me in 30 minutes"? In s14 you will add a cron scheduler that stores future intent and triggers it when the time comes. + +## Key Takeaway + +> Background execution is a runtime lane, not a second main loop -- slow work runs on daemon threads and feeds results back through a single notification queue. diff --git a/docs/en/s13a-runtime-task-model.md b/docs/en/s13a-runtime-task-model.md new file mode 100644 index 000000000..7ae7cf850 --- /dev/null +++ b/docs/en/s13a-runtime-task-model.md @@ -0,0 +1,273 @@ +# s13a: Runtime Task Model + +> **Deep Dive** -- Best read between s12 and s13. It prevents the most common confusion in Stage 3. + +### When to Read This + +Right after s12 (Task System), before you start s13 (Background Tasks). This note separates two meanings of "task" that beginners frequently collapse into one. 
+ +--- + +> This bridge note resolves one confusion that becomes expensive very quickly: +> +> **the task in the work graph is not the same thing as the task that is currently running** + +## How to Read This with the Mainline + +This note works best between these documents: + +- read [`s12-task-system.md`](./s12-task-system.md) first to lock in the durable work graph +- then read [`s13-background-tasks.md`](./s13-background-tasks.md) to see background execution +- if the terms begin to blur, you might find it helpful to revisit [`glossary.md`](./glossary.md) +- if you want the fields to line up exactly, you might find it helpful to revisit [`data-structures.md`](./data-structures.md) and [`entity-map.md`](./entity-map.md) + +## Why This Deserves Its Own Bridge Note + +The mainline is still correct: + +- `s12` teaches the task system +- `s13` teaches background tasks + +But without one more bridge layer, you can easily start collapsing two different meanings of "task" into one bucket. + +For example: + +- a work-graph task such as "implement auth module" +- a background execution such as "run pytest" +- a teammate execution such as "alice is editing files" + +All three can be casually called tasks, but they do not live on the same layer. + +## Two Very Different Kinds of Task + +### 1. Work-graph task + +This is the durable node introduced in `s12`. + +It answers: + +- what should be done +- which work depends on which other work +- who owns it +- what the progress status is + +It is best understood as: + +> a durable unit of planned work + +### 2. 
Runtime task + +This layer answers: + +- what execution unit is alive right now +- what kind of execution it is +- whether it is running, completed, failed, or killed +- where its output lives + +It is best understood as: + +> a live execution slot inside the runtime + +## The Minimum Mental Model + +Treat these as two separate tables: + +```text +work-graph task + - durable + - goal and dependency oriented + - longer lifecycle + +runtime task + - execution oriented + - output and status oriented + - shorter lifecycle +``` + +Their relationship is not "pick one." + +It is: + +```text +one work-graph task + can spawn +one or more runtime tasks +``` + +For example: + +```text +work-graph task: + "Implement auth module" + +runtime tasks: + 1. run tests in the background + 2. launch a coder teammate + 3. monitor an external service +``` + +## Why the Distinction Matters + +If you do not keep these layers separate, the later chapters start tangling together: + +- `s13` background execution blurs into the `s12` task board +- `s15-s17` teammate work has nowhere clean to attach +- `s18` worktrees become unclear because you no longer know what layer they belong to + +The shortest correct summary is: + +**work-graph tasks manage goals; runtime tasks manage execution** + +## Core Records + +### 1. `WorkGraphTaskRecord` + +This is the durable task from `s12`. + +```python +task = { + "id": 12, + "subject": "Implement auth module", + "status": "in_progress", + "blockedBy": [], + "blocks": [13], + "owner": "alice", + "worktree": "auth-refactor", +} +``` + +### 2. 
`RuntimeTaskState` + +A minimal teaching shape can look like this: + +```python +runtime_task = { + "id": "b8k2m1qz", + "type": "local_bash", + "status": "running", + "description": "Run pytest", + "start_time": 1710000000.0, + "end_time": None, + "output_file": ".task_outputs/b8k2m1qz.txt", + "notified": False, +} +``` + +The key fields are: + +- `type`: what execution unit this is +- `status`: whether it is active or terminal +- `output_file`: where the result is stored +- `notified`: whether the system already surfaced the result + +### 3. `RuntimeTaskType` + +You do not need to implement every type in the teaching repo immediately. + +But you should still know that runtime task is a family, not just one shell command type. + +A minimal table: + +```text +local_bash +local_agent +remote_agent +in_process_teammate +monitor +workflow +``` + +## Minimum Implementation Steps + +### Step 1: keep the `s12` task board intact + +Do not overload it. + +### Step 2: add a separate runtime task manager + +```python +class RuntimeTaskManager: + def __init__(self): + self.tasks = {} +``` + +### Step 3: create runtime tasks when background work starts + +```python +def spawn_bash_task(command: str): + task_id = new_runtime_id() + runtime_tasks[task_id] = { + "id": task_id, + "type": "local_bash", + "status": "running", + "description": command, + } +``` + +### Step 4: optionally link runtime execution back to the work graph + +```python +runtime_tasks[task_id]["work_graph_task_id"] = 12 +``` + +You do not need that field on day one, but it becomes increasingly important once the system reaches teams and worktrees. 
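The four steps can be folded into one minimal runnable sketch. The record fields follow `RuntimeTaskState` above, but the `spawn` / `finish` / `alive` method names and the in-memory dict are assumptions made for illustration:

```python
import time
import uuid

class RuntimeTaskManager:
    """Tracks live execution slots, separate from the durable task board."""

    def __init__(self):
        self.tasks = {}

    def spawn(self, task_type, description, work_graph_task_id=None):
        task_id = str(uuid.uuid4())[:8]
        self.tasks[task_id] = {
            "id": task_id,
            "type": task_type,            # e.g. local_bash, monitor
            "status": "running",
            "description": description,
            "start_time": time.time(),
            "end_time": None,
            "output_file": f".task_outputs/{task_id}.txt",
            "notified": False,
            # Optional back-link into the durable work graph (Step 4).
            "work_graph_task_id": work_graph_task_id,
        }
        return task_id

    def finish(self, task_id, status="completed"):
        slot = self.tasks[task_id]
        slot["status"] = status   # runtime vocabulary, not pending/in_progress
        slot["end_time"] = time.time()

    def alive(self):
        return [t for t in self.tasks.values() if t["status"] == "running"]

runtime = RuntimeTaskManager()

# One durable goal (work-graph task #12) spawns two runtime tasks.
a = runtime.spawn("local_bash", "run pytest", work_graph_task_id=12)
b = runtime.spawn("monitor", "watch service status", work_graph_task_id=12)

print(len(runtime.alive()))   # 2 -- both execution slots are live
runtime.finish(a)
print(len(runtime.alive()))   # 1 -- the monitor is still running
```

Note that the durable task board never sees `running` or `killed`; those states live entirely in this layer, and only the `work_graph_task_id` back-link ties an execution slot to its goal.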
+ +## The Picture You Should Hold + +```text +Work Graph + task #12: Implement auth module + | + +-- runtime task A: local_bash (pytest) + +-- runtime task B: local_agent (coder worker) + +-- runtime task C: monitor (watch service status) + +Runtime Task Layer + A/B/C each have: + - their own runtime ID + - their own status + - their own output + - their own lifecycle +``` + +## How This Connects to Later Chapters + +Once this layer is clear, the rest of the runtime and platform chapters become much easier: + +- `s13` background commands are runtime tasks +- `s15-s17` teammates can also be understood as runtime task variants +- `s18` worktrees mostly bind to durable work, but still affect runtime execution +- `s19` some monitoring or async external work can also land in the runtime layer + +Whenever you see "something is alive in the background and advancing work," ask two questions: + +- is this a durable goal from the work graph? +- or is this a live execution slot in the runtime? + +## Common Beginner Mistakes + +### 1. Putting background shell state directly into the task board + +That mixes durable task state and runtime execution state. + +### 2. Assuming one work-graph task can only have one runtime task + +In real systems, one goal often spawns multiple execution units. + +### 3. Reusing the same status vocabulary for both layers + +For example: + +- durable tasks: `pending / in_progress / completed` +- runtime tasks: `running / completed / failed / killed` + +Those should stay distinct when possible. + +### 4. Ignoring runtime-only fields such as `output_file` and `notified` + +The durable task board does not care much about them. +The runtime layer cares a lot. + +## Key Takeaway + +**"Task" means two different things: a durable goal in the work graph (what should be done) and a live execution slot in the runtime (what is running right now). 
Keep them on separate layers.** diff --git a/docs/en/s14-cron-scheduler.md b/docs/en/s14-cron-scheduler.md new file mode 100644 index 000000000..97b03fbf6 --- /dev/null +++ b/docs/en/s14-cron-scheduler.md @@ -0,0 +1,158 @@ +# s14: Cron Scheduler + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > [ s14 ] > s15 > s16 > s17 > s18 > s19` + +## What You'll Learn + +- How schedule records store future intent as durable data +- How a time-based checker turns cron expressions into triggered notifications +- The difference between durable jobs (survive restarts) and session-only jobs (die with the process) +- How scheduled work re-enters the agent system through the same notification queue from s13 + +In s13 you learned to run slow work in the background so the agent does not block. But that work still starts immediately -- the user says "run this" and it runs now. Real workflows often need work that starts later: "run this every night," "generate the report every Monday morning," "remind me to check this again in 30 minutes." Without scheduling, the user has to re-issue the same request every time. This chapter adds one new idea: store future intent now, trigger it later. And it closes out Stage 3 by completing the progression from durable tasks (s12) to background execution (s13) to time-based triggers (s14). + +## The Problem + +Your agent can now manage a task graph and run commands in the background. But every piece of work begins with the user explicitly asking for it. If the user wants a nightly test run, they have to remember to type "run the tests" every evening. If they want a weekly status report, they have to open a session every Monday morning. The agent has no concept of future time -- it reacts to what you say right now, and it cannot act on something you want to happen tomorrow. You need a way to record "do X at time Y" and have the system trigger it automatically. 
+ +## The Solution + +Add three moving parts: schedule records that describe when and what, a time checker that runs in the background and tests whether any schedule matches the current time, and the same notification queue from s13 to feed triggered work back into the main loop. + +```text +schedule_create(...) + -> +write a durable schedule record + -> +time checker wakes up and tests "does this rule match now?" + -> +if yes, enqueue a scheduled notification + -> +main loop injects that notification as new work +``` + +The key insight is that the scheduler is not a second agent loop. It feeds triggered prompts into the same system the agent already uses. The main loop does not know or care whether a piece of work came from the user typing it or from a cron trigger -- it processes both the same way. + +## How It Works + +**Step 1.** Define the schedule record. Each job stores a cron expression (a compact time-matching syntax like `0 9 * * 1` meaning "9:00 AM every Monday"), the prompt to execute, whether it recurs or fires once, and a `last_fired_at` timestamp to prevent double-firing. + +```python +schedule = { + "id": "job_001", + "cron": "0 9 * * 1", + "prompt": "Run the weekly status report.", + "recurring": True, + "durable": True, + "created_at": 1710000000.0, + "last_fired_at": None, +} +``` + +A durable job is written to disk and survives process restarts. A session-only job lives in memory and dies when the agent exits. One-shot jobs (`recurring: False`) fire once and then delete themselves. + +**Step 2.** Create a schedule through a tool call. The method stores the record and returns it so the model can confirm what was scheduled. 
+
+```python
+def create(self, cron_expr: str, prompt: str, recurring: bool = True):
+    job = {
+        "id": new_id(),
+        "cron": cron_expr,
+        "prompt": prompt,
+        "recurring": recurring,
+        "created_at": time.time(),
+        "last_fired_at": None,
+    }
+    self.jobs.append(job)
+    return job
+```
+
+**Step 3.** Run a background checker loop that wakes up every 60 seconds and tests each schedule against the current time.
+
+```python
+def check_loop(self):
+    while True:
+        now = datetime.now()
+        self.check_jobs(now)
+        time.sleep(60)
+```
+
+**Step 4.** When a schedule matches, enqueue a notification. Updating `last_fired_at` is not enough on its own -- the checker must also read it, skipping any job that already fired during the current minute. One-shot jobs (`recurring: False`) are removed after they fire.
+
+```python
+def check_jobs(self, now):
+    current_minute = int(now.timestamp()) // 60
+    for job in list(self.jobs):  # copy: one-shot jobs are removed mid-loop
+        if not cron_matches(job["cron"], now):
+            continue
+        # skip jobs that already fired during this minute
+        if (job["last_fired_at"] is not None
+                and int(job["last_fired_at"]) // 60 == current_minute):
+            continue
+        self.queue.put({
+            "type": "scheduled_prompt",
+            "schedule_id": job["id"],
+            "prompt": job["prompt"],
+        })
+        job["last_fired_at"] = now.timestamp()
+        if not job["recurring"]:
+            self.jobs.remove(job)  # one-shot: fire once, then delete
+```
+
+**Step 5.** Feed scheduled notifications back into the main loop using the same drain pattern from s13. From the agent's perspective, a scheduled prompt looks just like a user message.
+
+```python
+notifications = scheduler.drain()
+for item in notifications:
+    messages.append({
+        "role": "user",
+        "content": f"[scheduled:{item['schedule_id']}] {item['prompt']}",
+    })
+```
+
+## Read Together
+
+- If `schedule`, `task`, and `runtime task` still feel like the same object, reread [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) -- it draws the boundary between planning records, execution records, and schedule records.
+- If you want to see how one trigger eventually returns to the mainline, pair this chapter with [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md).
+- If future triggers start to feel like a whole second execution system, reset with [`data-structures.md`](./data-structures.md) and separate schedule records from runtime records.
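The checker code leans on a `cron_matches` helper that this chapter never defines. Here is a minimal sketch supporting only `*`, plain numbers, and `*/n` steps (real cron also allows ranges, lists, and names, which this deliberately skips); `_field_matches` is a hypothetical helper name introduced here:

```python
from datetime import datetime


def _field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if field.startswith("*/"):          # step values, e.g. */5
        return value % int(field[2:]) == 0
    return value == int(field)          # plain number


def cron_matches(expr: str, now: datetime) -> bool:
    """Check a five-field cron expression (minute hour dom month dow)."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, now.minute)
            and _field_matches(hour, now.hour)
            and _field_matches(dom, now.day)
            and _field_matches(month, now.month)
            # cron uses 0=Sunday..6=Saturday; Python's weekday() uses 0=Monday
            and _field_matches(dow, (now.weekday() + 1) % 7))
```

Note that the matcher itself is stateless: deciding whether a job already fired during the current minute is the caller's concern, tracked through `last_fired_at`.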
+
+## What Changed
+
+| Mechanism | Main question |
+|---|---|
+| Background tasks (s13) | "How does slow work continue without blocking?" |
+| Scheduling (s14) | "When should future work begin?" |
+
+| Component | Before (s13) | After (s14) |
+|---|---|---|
+| Tools | 6 (base + background) | 9 (+ schedule_create, schedule_list, schedule_delete) |
+| Time awareness | None | Cron-based future triggers |
+| Persistence | Background tasks in memory | Durable schedules survive restarts |
+| Trigger model | User-initiated only | User-initiated + time-triggered |
+
+## Try It
+
+```sh
+cd learn-claude-code
+python agents/s14_cron_scheduler.py
+```
+
+1. Create a repeating schedule: `Schedule "echo hello" to run every 2 minutes`
+2. Create a one-shot reminder: `Remind me in 1 minute to check the build`
+3. Create a delayed follow-up: `In 5 minutes, run the test suite and report results`
+
+## What You've Mastered
+
+At this point, you can:
+
+- Define schedule records that store future intent as durable data
+- Run a background time checker that matches cron expressions to the current clock
+- Distinguish durable jobs (persist to disk) from session-only jobs (in-memory)
+- Feed scheduled triggers back into the main loop through the same notification queue used by background tasks
+- Prevent double-firing with `last_fired_at` tracking
+
+## Stage 3 Complete
+
+You have finished Stage 3: the execution and scheduling layer. Looking back at the three chapters together:
+
+- **s12** gave the agent a task graph with dependencies and persistence -- it can plan structured work that survives restarts.
+- **s13** added background execution -- slow work runs in parallel instead of blocking the loop.
+- **s14** added time-based triggers -- the agent can schedule future work without the user having to remember.
+ +Together, these three chapters transform the agent from something that only reacts to what you type right now into something that can plan ahead, work in parallel, and act on its own schedule. In Stage 4 (s15-s18), you will use this foundation to coordinate multiple agents working as a team. + +## Key Takeaway + +> A scheduler stores future intent as a record, checks it against the clock in a background loop, and feeds triggered work back into the same agent system -- no second loop needed. diff --git a/docs/en/s15-agent-teams.md b/docs/en/s15-agent-teams.md new file mode 100644 index 000000000..61075a198 --- /dev/null +++ b/docs/en/s15-agent-teams.md @@ -0,0 +1,192 @@ +# s15: Agent Teams + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > [ s15 ] > s16 > s17 > s18 > s19` + +## What You'll Learn +- How persistent teammates differ from disposable subagents +- How JSONL-based inboxes give agents a durable communication channel +- How the team lifecycle moves through spawn, working, idle, and shutdown +- How file-based coordination lets multiple agent loops run side by side + +Sometimes one agent is not enough. A complex project -- say, building a feature that involves frontend, backend, and tests -- needs multiple workers running in parallel, each with its own identity and memory. In this chapter you will build a team system where agents persist beyond a single prompt, communicate through file-based mailboxes, and coordinate without sharing a single conversation thread. + +## The Problem + +Subagents from s04 are disposable: you spawn one, it works, it returns a summary, and it dies. It has no identity and no memory between invocations. Background tasks from s13 can keep work running in the background, but they are not persistent teammates making their own LLM-guided decisions. 
+ +Real teamwork needs three things: (1) persistent agents that outlive a single prompt, (2) identity and lifecycle management so you know who is doing what, and (3) a communication channel between agents so they can exchange information without the lead manually relaying every message. + +## The Solution + +The harness maintains a team roster in a shared config file and gives each teammate an append-only JSONL inbox. When one agent sends a message to another, it simply appends a JSON line to the recipient's inbox file. The recipient drains that file before every LLM call. + +``` +Teammate lifecycle: + spawn -> WORKING -> IDLE -> WORKING -> ... -> SHUTDOWN + +Communication: + .team/ + config.json <- team roster + statuses + inbox/ + alice.jsonl <- append-only, drain-on-read + bob.jsonl + lead.jsonl + + +--------+ send("alice","bob","...") +--------+ + | alice | -----------------------------> | bob | + | loop | bob.jsonl << {json_line} | loop | + +--------+ +--------+ + ^ | + | BUS.read_inbox("alice") | + +---- alice.jsonl -> read + drain ---------+ +``` + +## How It Works + +**Step 1.** `TeammateManager` maintains `config.json` with the team roster. It tracks every teammate's name, role, and current status. + +```python +class TeammateManager: + def __init__(self, team_dir: Path): + self.dir = team_dir + self.dir.mkdir(exist_ok=True) + self.config_path = self.dir / "config.json" + self.config = self._load_config() + self.threads = {} +``` + +**Step 2.** `spawn()` creates a teammate entry in the roster and starts its agent loop in a separate thread. From this point on, the teammate runs independently -- it has its own conversation history, its own tool calls, and its own LLM interactions. 
+ +```python +def spawn(self, name: str, role: str, prompt: str) -> str: + member = {"name": name, "role": role, "status": "working"} + self.config["members"].append(member) + self._save_config() + thread = threading.Thread( + target=self._teammate_loop, + args=(name, role, prompt), daemon=True) + thread.start() + return f"Spawned teammate '{name}' (role: {role})" +``` + +**Step 3.** `MessageBus` provides append-only JSONL inboxes. `send()` appends a single JSON line to the recipient's file; `read_inbox()` reads all accumulated messages and then empties the file ("drains" it). The storage format is intentionally simple -- the teaching focus here is the mailbox boundary, not storage cleverness. + +```python +class MessageBus: + def send(self, sender, to, content, msg_type="message", extra=None): + msg = {"type": msg_type, "from": sender, + "content": content, "timestamp": time.time()} + if extra: + msg.update(extra) + with open(self.dir / f"{to}.jsonl", "a") as f: + f.write(json.dumps(msg) + "\n") + + def read_inbox(self, name): + path = self.dir / f"{name}.jsonl" + if not path.exists(): return "[]" + msgs = [json.loads(l) for l in path.read_text().strip().splitlines() if l] + path.write_text("") # drain + return json.dumps(msgs, indent=2) +``` + +**Step 4.** Each teammate checks its inbox before every LLM call. Any received messages get injected into the conversation context so the model can see and respond to them. + +```python +def _teammate_loop(self, name, role, prompt): + messages = [{"role": "user", "content": prompt}] + for _ in range(50): + inbox = BUS.read_inbox(name) + if inbox != "[]": + messages.append({"role": "user", + "content": f"{inbox}"}) + messages.append({"role": "assistant", + "content": "Noted inbox messages."}) + response = client.messages.create(...) + if response.stop_reason != "tool_use": + break + # execute tools, append results... 
+ self._find_member(name)["status"] = "idle" +``` + +## Read Together + +- If you still treat a teammate like s04's disposable subagent, revisit [`entity-map.md`](./entity-map.md) to see how they differ. +- If you plan to continue into s16-s18, keep [`team-task-lane-model.md`](./team-task-lane-model.md) open -- it separates teammate, protocol request, task, runtime slot, and worktree lane into distinct concepts. +- If you are unsure how a long-lived teammate differs from a live runtime slot, pair this chapter with [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md). + +## How It Plugs Into The Earlier System + +This chapter is not just "more model calls." It adds durable executors on top of work structures you already built in s12-s14. + +```text +lead identifies work that needs a long-lived worker + -> +spawn teammate + -> +write roster entry in .team/config.json + -> +send inbox message / task hint + -> +teammate drains inbox before its next loop + -> +teammate runs its own agent loop and tools + -> +result returns through team messages or task updates +``` + +Keep the boundary straight: + +- s12-s14 gave you tasks, runtime slots, and schedules +- s15 adds durable named workers +- s15 is still mostly lead-assigned work +- structured protocols arrive in s16 +- autonomous claiming arrives in s17 + +## Teammate vs Subagent vs Runtime Slot + +| Mechanism | Think of it as | Lifecycle | Main boundary | +|---|---|---|---| +| subagent | a disposable helper | spawn -> work -> summary -> gone | isolates one exploratory branch | +| runtime slot | a live execution slot | exists while background work is running | tracks long-running execution, not identity | +| teammate | a durable worker | can go idle, resume, and keep receiving work | has a name, inbox, and independent loop | + +## What Changed From s14 + +| Component | Before (s14) | After (s15) | +|----------------|------------------|----------------------------| +| Tools | 6 | 9 (+spawn/send/read_inbox) | +| 
Agents | Single | Lead + N teammates | +| Persistence | None | config.json + JSONL inboxes| +| Threads | Background cmds | Full agent loops per thread| +| Lifecycle | Fire-and-forget | idle -> working -> idle | +| Communication | None | message + broadcast | + +## Try It + +```sh +cd learn-claude-code +python agents/s15_agent_teams.py +``` + +1. `Spawn alice (coder) and bob (tester). Have alice send bob a message.` +2. `Broadcast "status update: phase 1 complete" to all teammates` +3. `Check the lead inbox for any messages` +4. Type `/team` to see the team roster with statuses +5. Type `/inbox` to manually check the lead's inbox + +## What You've Mastered + +At this point, you can: + +- Spawn persistent teammates that each run their own independent agent loop +- Send messages between agents through durable JSONL inboxes +- Track teammate status through a shared config file +- Coordinate multiple agents without funneling everything through a single conversation + +## What's Next + +Your teammates can now communicate freely, but they lack coordination rules. What happens when you need to shut a teammate down cleanly, or review a risky plan before it executes? In s16, you will add structured protocols -- request-response handshakes that bring order to multi-agent negotiation. + +## Key Takeaway + +> Teammates persist beyond one prompt, each with identity, lifecycle, and a durable mailbox -- coordination is no longer limited to a single parent loop. 
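The change table above lists `broadcast` alongside point-to-point messages. Broadcast needs no new storage: it is just `send` fanned out over the roster. A self-contained sketch follows, with a trimmed-down bus (the real s15 bus carries more fields such as `type`); the `broadcast` method itself is the illustrative addition:

```python
import json
import time
from pathlib import Path


class MessageBus:
    def __init__(self, inbox_dir: Path):
        self.dir = inbox_dir
        self.dir.mkdir(parents=True, exist_ok=True)

    def send(self, sender: str, to: str, content: str) -> None:
        msg = {"from": sender, "content": content, "timestamp": time.time()}
        with open(self.dir / f"{to}.jsonl", "a") as f:
            f.write(json.dumps(msg) + "\n")

    def broadcast(self, sender: str, members: list, content: str) -> None:
        # fan out to every roster entry except the sender itself
        for member in members:
            if member["name"] != sender:
                self.send(sender, member["name"], content)
```

Because every recipient still drains its own inbox file on its next loop, broadcast inherits the same delivery semantics as a direct message.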
diff --git a/docs/en/s16-team-protocols.md b/docs/en/s16-team-protocols.md new file mode 100644 index 000000000..8b1ab1f7d --- /dev/null +++ b/docs/en/s16-team-protocols.md @@ -0,0 +1,173 @@ +# s16: Team Protocols + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > [ s16 ] > s17 > s18 > s19` + +## What You'll Learn +- How a request-response pattern with a tracking ID structures multi-agent negotiation +- How the shutdown protocol lets a lead gracefully stop a teammate +- How plan approval gates risky work behind a review step +- How one reusable FSM (a simple status tracker with defined transitions) covers both protocols + +In s15 your teammates can send messages freely, but that freedom comes with chaos. One agent tells another "please stop," and the other ignores it. A teammate starts a risky database migration without asking first. The problem is not communication itself -- you solved that with inboxes -- but the lack of coordination rules. In this chapter you will add structured protocols: a standardized message wrapper with a tracking ID that turns loose messages into reliable handshakes. + +## The Problem + +Two coordination gaps become obvious once your team grows past toy examples: + +**Shutdown.** Killing a teammate's thread leaves files half-written and the config roster stale. You need a handshake: the lead requests shutdown, and the teammate approves (finishes current work and exits cleanly) or rejects (keeps working because it has unfinished obligations). + +**Plan approval.** When the lead says "refactor the auth module," the teammate starts immediately. But for high-risk changes, the lead should review the plan before any code gets written. + +Both scenarios share an identical structure: one side sends a request carrying a unique ID, the other side responds referencing that same ID. That single pattern is enough to build any coordination protocol you need. 
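Before any harness code, the pattern can be shown with two plain dictionaries. The only thing binding them is the shared `request_id`; the field names below are illustrative, chosen to mirror the shutdown messages used later in the chapter:

```python
import uuid

# the requester mints an ID and attaches it to the outgoing request
request = {
    "type": "shutdown_request",
    "request_id": uuid.uuid4().hex[:8],
    "from": "lead",
    "to": "alice",
    "content": "Please shut down gracefully.",
}

# the responder copies the same ID back -- that copy is the whole
# correlation trick; without it, two in-flight requests are ambiguous
response = {
    "type": "shutdown_response",
    "request_id": request["request_id"],
    "from": "alice",
    "approve": True,
}

assert response["request_id"] == request["request_id"]
```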
+ +## The Solution + +Both shutdown and plan approval follow one shape: send a request with a `request_id`, receive a response referencing that same `request_id`, and track the outcome through a simple status machine (`pending -> approved` or `pending -> rejected`). + +``` +Shutdown Protocol Plan Approval Protocol +================== ====================== + +Lead Teammate Teammate Lead + | | | | + |--shutdown_req-->| |--plan_req------>| + | {req_id:"abc"} | | {req_id:"xyz"} | + | | | | + |<--shutdown_resp-| |<--plan_resp-----| + | {req_id:"abc", | | {req_id:"xyz", | + | approve:true} | | approve:true} | + +Shared FSM: + [pending] --approve--> [approved] + [pending] --reject---> [rejected] + +Trackers: + shutdown_requests = {req_id: {target, status}} + plan_requests = {req_id: {from, plan, status}} +``` + +## How It Works + +**Step 1.** The lead initiates shutdown by generating a unique `request_id` and sending the request through the teammate's inbox. The request is tracked in a dictionary so the lead can check its status later. + +```python +shutdown_requests = {} + +def handle_shutdown_request(teammate: str) -> str: + req_id = str(uuid.uuid4())[:8] + shutdown_requests[req_id] = {"target": teammate, "status": "pending"} + BUS.send("lead", teammate, "Please shut down gracefully.", + "shutdown_request", {"request_id": req_id}) + return f"Shutdown request {req_id} sent (status: pending)" +``` + +**Step 2.** The teammate receives the request in its inbox and responds with approve or reject. The response carries the same `request_id` so the lead can match it to the original request -- this is the correlation that makes the protocol reliable. 
+ +```python +if tool_name == "shutdown_response": + req_id = args["request_id"] + approve = args["approve"] + shutdown_requests[req_id]["status"] = "approved" if approve else "rejected" + BUS.send(sender, "lead", args.get("reason", ""), + "shutdown_response", + {"request_id": req_id, "approve": approve}) +``` + +**Step 3.** Plan approval follows the identical pattern but in the opposite direction. The teammate submits a plan (generating a `request_id`), and the lead reviews it (referencing the same `request_id` to approve or reject). + +```python +plan_requests = {} + +def handle_plan_review(request_id, approve, feedback=""): + req = plan_requests[request_id] + req["status"] = "approved" if approve else "rejected" + BUS.send("lead", req["from"], feedback, + "plan_approval_response", + {"request_id": request_id, "approve": approve}) +``` + +In this teaching demo, one FSM shape covers both protocols. A production system might treat different protocol families differently, but the teaching version intentionally keeps one reusable template so you can see the shared structure clearly. + +## Read Together + +- If plain messages and protocol requests are starting to blur together, revisit [`glossary.md`](./glossary.md) and [`entity-map.md`](./entity-map.md) to see how they differ. +- If you plan to continue into s17 and s18, read [`team-task-lane-model.md`](./team-task-lane-model.md) first so autonomy and worktree lanes do not collapse into one idea. +- If you want to trace how a protocol request returns to the main system, pair this chapter with [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md). + +## How It Plugs Into The Team System + +The real upgrade in s16 is not "two new message types." 
It is a durable coordination path: + +```text +requester starts a protocol action + -> +write RequestRecord + -> +send ProtocolEnvelope through inbox + -> +receiver drains inbox on its next loop + -> +update request status by request_id + -> +send structured response + -> +requester continues based on approved / rejected +``` + +That is the missing layer between "agents can chat" and "agents can coordinate reliably." + +## Message vs Protocol vs Request vs Task + +| Object | What question it answers | Typical fields | +|---|---|---| +| `MessageEnvelope` | who said what to whom | `from`, `to`, `content` | +| `ProtocolEnvelope` | is this a structured request / response | `type`, `request_id`, `payload` | +| `RequestRecord` | where is this coordination flow now | `kind`, `status`, `from`, `to` | +| `TaskRecord` | what actual work item is being advanced | `subject`, `status`, `blockedBy`, `owner` | + +Do not collapse them: + +- a protocol request is not the task itself +- the request store is not the task board +- protocols track coordination flow +- tasks track work progression + +## What Changed From s15 + +| Component | Before (s15) | After (s16) | +|----------------|------------------|------------------------------| +| Tools | 9 | 12 (+shutdown_req/resp +plan)| +| Shutdown | Natural exit only| Request-response handshake | +| Plan gating | None | Submit/review with approval | +| Correlation | None | request_id per request | +| FSM | None | pending -> approved/rejected | + +## Try It + +```sh +cd learn-claude-code +python agents/s16_team_protocols.py +``` + +1. `Spawn alice as a coder. Then request her shutdown.` +2. `List teammates to see alice's status after shutdown approval` +3. `Spawn bob with a risky refactoring task. Review and reject his plan.` +4. `Spawn charlie, have him submit a plan, then approve it.` +5. 
Type `/team` to monitor statuses + +## What You've Mastered + +At this point, you can: + +- Build request-response protocols that use a unique ID for correlation +- Implement graceful shutdown through a two-step handshake +- Gate risky work behind a plan approval step +- Reuse a single FSM pattern (`pending -> approved/rejected`) for any new protocol you invent + +## What's Next + +Your team now has structure and rules, but the lead still has to babysit every teammate -- assigning tasks one by one, nudging idle workers. In s17, you will make teammates autonomous: they scan the task board themselves, claim unclaimed work, and resume after context compression without losing their identity. + +## Key Takeaway + +> A protocol request is a structured message with a tracking ID, and the response must reference that same ID -- that single pattern is enough to build any coordination handshake. diff --git a/docs/en/s17-autonomous-agents.md b/docs/en/s17-autonomous-agents.md new file mode 100644 index 000000000..e39a3e36f --- /dev/null +++ b/docs/en/s17-autonomous-agents.md @@ -0,0 +1,171 @@ +# s17: Autonomous Agents + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > [ s17 ] > s18 > s19` + +## What You'll Learn +- How idle polling lets a teammate find new work without being told +- How auto-claim turns the task board into a self-service work queue +- How identity re-injection restores a teammate's sense of self after context compression +- How a timeout-based shutdown prevents idle agents from running forever + +Manual assignment does not scale. With ten unclaimed tasks on the board, the lead has to pick one, find an idle teammate, craft a prompt, and hand it off -- ten times. The lead becomes a bottleneck, spending more time dispatching than thinking. 
In this chapter you will remove that bottleneck by making teammates autonomous: they scan the task board themselves, claim unclaimed work, and shut down gracefully when there is nothing left to do. + +## The Problem + +In s15-s16, teammates only work when explicitly told to. The lead must spawn each one with a specific prompt. If ten tasks sit unclaimed on the board, the lead assigns each one manually. This creates a coordination bottleneck that gets worse as the team grows. + +True autonomy means teammates scan the task board themselves, claim unclaimed tasks, work on them, then look for more -- all without the lead lifting a finger. + +One subtlety makes this harder than it sounds: after context compression (which you built in s06), an agent's conversation history gets truncated. The agent might forget who it is. Identity re-injection fixes this by restoring the agent's name and role when its context gets too short. + +## The Solution + +Each teammate alternates between two phases: WORK (calling the LLM and executing tools) and IDLE (polling for new messages or unclaimed tasks). If the idle phase times out with nothing to do, the teammate shuts itself down. + +``` +Teammate lifecycle with idle cycle: + ++-------+ +| spawn | ++---+---+ + | + v ++-------+ tool_use +-------+ +| WORK | <------------- | LLM | ++---+---+ +-------+ + | + | stop_reason != tool_use (or idle tool called) + v ++--------+ +| IDLE | poll every 5s for up to 60s ++---+----+ + | + +---> check inbox --> message? ----------> WORK + | + +---> scan .tasks/ --> unclaimed? -------> claim -> WORK + | + +---> 60s timeout ----------------------> SHUTDOWN + +Identity re-injection after compression: + if len(messages) <= 3: + messages.insert(0, identity_block) +``` + +## How It Works + +**Step 1.** The teammate loop has two phases: WORK and IDLE. During the work phase, the teammate calls the LLM repeatedly and executes tools. 
When the LLM stops calling tools (or the teammate explicitly calls the `idle` tool), it transitions to the idle phase. Note that the conversation history is built once, before the `while` loop -- re-creating it on every cycle would discard the inbox and task messages that the idle phase appends.
+
+```python
+def _loop(self, name, role, prompt):
+    messages = [{"role": "user", "content": prompt}]
+    while True:
+        # -- WORK PHASE --
+        for _ in range(50):
+            response = client.messages.create(...)
+            if response.stop_reason != "tool_use":
+                break
+            # execute tools...
+            if idle_requested:
+                break
+
+        # -- IDLE PHASE --
+        self._set_status(name, "idle")
+        resume = self._idle_poll(name, messages)
+        if not resume:
+            self._set_status(name, "shutdown")
+            return
+        self._set_status(name, "working")
+```
+
+**Step 2.** The idle phase polls for two things in a loop: inbox messages and unclaimed tasks. It checks every 5 seconds for up to 60 seconds. If a message arrives, the teammate wakes up. If an unclaimed task appears on the board, the teammate claims it and gets back to work. If neither happens within the timeout window, the teammate shuts itself down. (As in s15, `read_inbox` returns the string `"[]"` when the inbox is empty, so the emptiness check must compare against that rather than rely on truthiness.)
+
+```python
+def _idle_poll(self, name, messages):
+    for _ in range(IDLE_TIMEOUT // POLL_INTERVAL):  # 60s / 5s = 12
+        time.sleep(POLL_INTERVAL)
+        inbox = BUS.read_inbox(name)
+        if inbox != "[]":
+            messages.append({"role": "user",
+                             "content": f"{inbox}"})
+            return True
+        unclaimed = scan_unclaimed_tasks()
+        if unclaimed:
+            claim_task(unclaimed[0]["id"], name)
+            messages.append({"role": "user",
+                             "content": f"Task #{unclaimed[0]['id']}: "
+                                        f"{unclaimed[0]['subject']}"})
+            return True
+    return False  # timeout -> shutdown
+```
+
+**Step 3.** Task board scanning finds pending, unowned, unblocked tasks. The scan reads task files from disk and filters for tasks that are available to claim -- no owner, no blocking dependencies, and still in `pending` status.
+ +```python +def scan_unclaimed_tasks() -> list: + unclaimed = [] + for f in sorted(TASKS_DIR.glob("task_*.json")): + task = json.loads(f.read_text()) + if (task.get("status") == "pending" + and not task.get("owner") + and not task.get("blockedBy")): + unclaimed.append(task) + return unclaimed +``` + +**Step 4.** Identity re-injection handles a subtle problem. After context compression (s06), the conversation history might shrink to just a few messages -- and the agent forgets who it is. When the message list is suspiciously short (3 or fewer messages), the harness inserts an identity block at the beginning so the agent knows its name, role, and team. + +```python +if len(messages) <= 3: + messages.insert(0, {"role": "user", + "content": f"You are '{name}', role: {role}, " + f"team: {team_name}. Continue your work."}) + messages.insert(1, {"role": "assistant", + "content": f"I am {name}. Continuing."}) +``` + +## Read Together + +- If teammate, task, and runtime slot are starting to blur into one layer, revisit [`team-task-lane-model.md`](./team-task-lane-model.md) to separate them clearly. +- If auto-claim makes you wonder where the live execution slot actually lives, keep [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) nearby. +- If you are starting to forget the core difference between a persistent teammate and a one-shot subagent, revisit [`entity-map.md`](./entity-map.md). + +## What Changed From s16 + +| Component | Before (s16) | After (s17) | +|----------------|------------------|----------------------------| +| Tools | 12 | 14 (+idle, +claim_task) | +| Autonomy | Lead-directed | Self-organizing | +| Idle phase | None | Poll inbox + task board | +| Task claiming | Manual only | Auto-claim unclaimed tasks | +| Identity | System prompt | + re-injection after compress| +| Timeout | None | 60s idle -> auto shutdown | + +## Try It + +```sh +cd learn-claude-code +python agents/s17_autonomous_agents.py +``` + +1. 
`Create 3 tasks on the board, then spawn alice and bob. Watch them auto-claim.` +2. `Spawn a coder teammate and let it find work from the task board itself` +3. `Create tasks with dependencies. Watch teammates respect the blocked order.` +4. Type `/tasks` to see the task board with owners +5. Type `/team` to monitor who is working vs idle + +## What You've Mastered + +At this point, you can: + +- Build teammates that find and claim work from a shared task board without lead intervention +- Implement an idle polling loop that balances responsiveness with resource efficiency +- Restore agent identity after context compression so long-running teammates stay coherent +- Use timeout-based shutdown to prevent abandoned agents from running indefinitely + +## What's Next + +Your teammates now organize themselves, but they all share the same working directory. When two agents edit the same file at the same time, things break. In s18, you will give each teammate its own isolated worktree -- a separate copy of the codebase where it can work without stepping on anyone else's changes. + +## Key Takeaway + +> Autonomous teammates scan the task board, claim unclaimed work, and shut down when idle -- removing the lead as a coordination bottleneck. 
diff --git a/docs/en/s18-worktree-task-isolation.md b/docs/en/s18-worktree-task-isolation.md new file mode 100644 index 000000000..529cbea67 --- /dev/null +++ b/docs/en/s18-worktree-task-isolation.md @@ -0,0 +1,151 @@ +# s18: Worktree + Task Isolation + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > [ s18 ] > s19` + +## What You'll Learn +- How git worktrees (isolated copies of your project directory, managed by git) prevent file conflicts between parallel agents +- How to bind a task to a dedicated worktree so that "what to do" and "where to do it" stay cleanly separated +- How lifecycle events give you an observable record of every create, keep, and remove action +- How parallel execution lanes let multiple agents work on different tasks without ever stepping on each other's files + +When two agents both need to edit the same codebase at the same time, you have a problem. Everything you have built so far -- task boards, autonomous agents, team protocols -- assumes that agents work in a single shared directory. That works fine until it does not. This chapter gives every task its own directory, so parallel work stays parallel. + +## The Problem + +By s17, your agents can claim tasks, coordinate through team protocols, and complete work autonomously. But all of them run in the same project directory. Imagine agent A is refactoring the authentication module, and agent B is building a new login page. Both need to touch `config.py`. Agent A stages its changes, agent B stages different changes to the same file, and now you have a tangled mess of unstaged edits that neither agent can roll back cleanly. + +The task board tracks *what to do* but has no opinion about *where to do it*. You need a way to give each task its own isolated working directory, so that file-level operations never collide. 
The fix is straightforward: pair each task with a git worktree -- a separate checkout of the same repository on its own branch. Tasks manage goals; worktrees manage execution context. Bind them by task ID. + +## Read Together + +- If task, runtime slot, and worktree lane are blurring together in your head, [`team-task-lane-model.md`](./team-task-lane-model.md) separates them clearly. +- If you want to confirm which fields belong on task records versus worktree records, [`data-structures.md`](./data-structures.md) has the full schema. +- If you want to see why this chapter comes after tasks and teams in the overall curriculum, [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) has the ordering rationale. + +## The Solution + +The system splits into two planes: a control plane (`.tasks/`) that tracks goals, and an execution plane (`.worktrees/`) that manages isolated directories. Each task points to its worktree by name, and each worktree points back to its task by ID. + +``` +Control plane (.tasks/) Execution plane (.worktrees/) ++------------------+ +------------------------+ +| task_1.json | | auth-refactor/ | +| status: in_progress <------> branch: wt/auth-refactor +| worktree: "auth-refactor" | task_id: 1 | ++------------------+ +------------------------+ +| task_2.json | | ui-login/ | +| status: pending <------> branch: wt/ui-login +| worktree: "ui-login" | task_id: 2 | ++------------------+ +------------------------+ + | + index.json (worktree registry) + events.jsonl (lifecycle log) + +State machines: + Task: pending -> in_progress -> completed + Worktree: absent -> active -> removed | kept +``` + +## How It Works + +**Step 1.** Create a task. The goal is recorded first, before any directory exists. + +```python +TASKS.create("Implement auth refactor") +# -> .tasks/task_1.json status=pending worktree="" +``` + +**Step 2.** Create a worktree and bind it to the task. 
Passing `task_id` automatically advances the task to `in_progress` -- you do not need to update the status separately. + +```python +WORKTREES.create("auth-refactor", task_id=1) +# -> git worktree add -b wt/auth-refactor .worktrees/auth-refactor HEAD +# -> index.json gets new entry, task_1.json gets worktree="auth-refactor" +``` + +The binding writes state to both sides so you can traverse the relationship from either direction: + +```python +def bind_worktree(self, task_id, worktree): + task = self._load(task_id) + task["worktree"] = worktree + if task["status"] == "pending": + task["status"] = "in_progress" + self._save(task) +``` + +**Step 3.** Run commands in the worktree. The key detail: `cwd` points to the isolated directory, not your main project root. Every file operation happens in a sandbox that cannot collide with other worktrees. + +```python +subprocess.run(command, shell=True, cwd=worktree_path, + capture_output=True, text=True, timeout=300) +``` + +**Step 4.** Close out the worktree. You have two choices, depending on whether the work is done: + +- `worktree_keep(name)` -- preserve the directory for later (useful when a task is paused or needs review). +- `worktree_remove(name, complete_task=True)` -- remove the directory, mark the bound task as completed, and emit an event. One call handles teardown and completion together. + +```python +def remove(self, name, force=False, complete_task=False): + self._run_git(["worktree", "remove", wt["path"]]) + if complete_task and wt.get("task_id") is not None: + self.tasks.update(wt["task_id"], status="completed") + self.tasks.unbind_worktree(wt["task_id"]) + self.events.emit("task.completed", ...) +``` + +**Step 5.** Observe the event stream. 
Every lifecycle step emits a structured event to `.worktrees/events.jsonl`, giving you a complete audit trail of what happened and when: + +```json +{ + "event": "worktree.remove.after", + "task": {"id": 1, "status": "completed"}, + "worktree": {"name": "auth-refactor", "status": "removed"}, + "ts": 1730000000 +} +``` + +Events emitted: `worktree.create.before/after/failed`, `worktree.remove.before/after/failed`, `worktree.keep`, `task.completed`. + +In the teaching version, `.tasks/` plus `.worktrees/index.json` are enough to reconstruct the visible control-plane state after a crash. The important lesson is not every production edge case. The important lesson is that goal state and execution-lane state must both stay legible on disk. + +## What Changed From s17 + +| Component | Before (s17) | After (s18) | +|--------------------|----------------------------|----------------------------------------------| +| Coordination | Task board (owner/status) | Task board + explicit worktree binding | +| Execution scope | Shared directory | Task-scoped isolated directory | +| Recoverability | Task status only | Task status + worktree index | +| Teardown | Task completion | Task completion + explicit keep/remove | +| Lifecycle visibility | Implicit in logs | Explicit events in `.worktrees/events.jsonl` | + +## Try It + +```sh +cd learn-claude-code +python agents/s18_worktree_task_isolation.py +``` + +1. `Create tasks for backend auth and frontend login page, then list tasks.` +2. `Create worktree "auth-refactor" for task 1, then bind task 2 to a new worktree "ui-login".` +3. `Run "git status --short" in worktree "auth-refactor".` +4. `Keep worktree "ui-login", then list worktrees and inspect events.` +5. 
`Remove worktree "auth-refactor" with complete_task=true, then list tasks/worktrees/events.` + +## What You've Mastered + +At this point, you can: + +- Create isolated git worktrees so that parallel agents never produce file conflicts +- Bind tasks to worktrees with a two-way reference (task points to worktree name, worktree points to task ID) +- Choose between keeping and removing a worktree at closeout, with automatic task status updates +- Read the event stream in `events.jsonl` to understand the full lifecycle of every worktree + +## What's Next + +You now have agents that can work in complete isolation, each in its own directory with its own branch. But every capability they use -- bash, read, write, edit -- is hard-coded into your Python harness. In s19, you will learn how external programs can provide new capabilities through MCP (Model Context Protocol), so your agent can grow without changing its core code. + +## Key Takeaway + +> Tasks answer *what work is being done*; worktrees answer *where that work runs*; keeping them separate makes parallel systems far easier to reason about and recover from. 
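
The recovery note above -- that `.tasks/` plus `.worktrees/index.json` are enough to rebuild the visible control-plane state -- can be sketched directly. Field names follow this chapter's examples; the exact `index.json` layout (a name-to-record mapping) and the `binding` consistency flag are illustrative assumptions:

```python
import json
from pathlib import Path

def reconstruct_state(root: Path) -> dict:
    # Rebuild the control-plane view from disk after a crash:
    # task records from .tasks/, the worktree registry from index.json.
    tasks = {}
    for f in sorted((root / ".tasks").glob("task_*.json")):
        task = json.loads(f.read_text())
        tasks[task["id"]] = task
    index_file = root / ".worktrees" / "index.json"
    worktrees = json.loads(index_file.read_text()) if index_file.exists() else {}
    # Cross-check the two-way binding: task -> worktree name, worktree -> task id.
    for name, wt in worktrees.items():
        tid = wt.get("task_id")
        if tid is not None and tasks.get(tid, {}).get("worktree") != name:
            wt["binding"] = "inconsistent"
    return {"tasks": tasks, "worktrees": worktrees}
```

Because both sides of the binding are written to disk, the cross-check can run in either direction; a mismatch means one plane was updated without the other, which is the first thing to look for after an interrupted teardown.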
diff --git a/docs/en/s19-mcp-plugin.md b/docs/en/s19-mcp-plugin.md new file mode 100644 index 000000000..628c7ef11 --- /dev/null +++ b/docs/en/s19-mcp-plugin.md @@ -0,0 +1,267 @@ +# s19: MCP & Plugin + +`s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > [ s19 ]` + +## What You'll Learn +- How MCP (Model Context Protocol -- a standard way for the agent to talk to external capability servers) lets your agent gain new tools without changing its core code +- How tool name normalization with a `mcp__{server}__{tool}` prefix keeps external tools from colliding with native ones +- How a unified router dispatches tool calls to local handlers or remote servers through the same path +- How plugin manifests let external capability servers be discovered and launched automatically + +Up to this point, every tool your agent uses -- bash, read, write, edit, tasks, worktrees -- lives inside your Python harness. You wrote each one by hand. That works well for a teaching codebase, but a real agent needs to talk to databases, browsers, cloud services, and tools that do not exist yet. Hard-coding every possible capability is not sustainable. This chapter shows how external programs can join your agent through the same tool-routing plane you already built. + +## The Problem + +Your agent is powerful, but its capabilities are frozen at build time. If you want it to query a Postgres database, you write a new Python handler. If you want it to control a browser, you write another handler. Every new capability means changing the core harness, re-testing the tool router, and redeploying. Meanwhile, other teams are building specialized servers that already know how to talk to these systems. You need a standard protocol so those external servers can expose their tools to your agent, and your agent can call them as naturally as it calls its own native tools -- without rewriting the core loop every time. 
+ +## The Solution + +MCP gives your agent a standard way to connect to external capability servers over stdio. The agent starts a server process, asks what tools it provides, normalizes their names with a prefix, and routes calls to that server -- all through the same tool pipeline that handles native tools. + +```text +LLM + | + | asks to call a tool + v +Agent tool router + | + +-- native tool -> local Python handler + | + +-- MCP tool -> external MCP server + | + v + return result +``` + +## Read Together + +- If you want to understand how MCP fits into the broader capability surface beyond just tools (resources, prompts, plugin discovery), [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) covers the full platform boundary. +- If you want to confirm that external capabilities still return through the same execution surface as native tools, pair this chapter with [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md). +- If query control and external capability routing are drifting apart in your mental model, [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) ties them together. + +## How It Works + +There are three essential pieces. Once you understand them, MCP stops being mysterious. + +**Step 1.** Build an `MCPClient` that manages the connection to one external server. It starts the server process over stdio, sends a handshake, and caches the list of available tools. 
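
The client code below leans on two wire helpers, `_send` and `_recv`, whose bodies are not shown. A minimal sketch of what they could look like, assuming newline-delimited JSON-RPC 2.0 over the child process's stdio (the framing MCP's stdio transport uses); request-id bookkeeping and error handling are deliberately simplified:

```python
import itertools
import json

class StdioJsonRpc:
    # Sketch of the `_send` / `_recv` pair the MCP client assumes.
    # One JSON-RPC 2.0 message per line; id matching is simplified.
    def __init__(self, process):
        self.process = process
        self._ids = itertools.count(1)

    def _send(self, message):
        msg = {"jsonrpc": "2.0", **message}
        if not message["method"].startswith("notifications/"):
            msg["id"] = next(self._ids)   # notifications carry no id
        self.process.stdin.write(json.dumps(msg) + "\n")
        self.process.stdin.flush()

    def _recv(self):
        line = self.process.stdout.readline()
        return json.loads(line) if line.strip() else None
```

In the teaching client these two methods would live on `MCPClient` itself; they are split out here only to keep the sketch self-contained.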
+ +```python +class MCPClient: + def __init__(self, server_name, command, args=None, env=None): + self.server_name = server_name + self.command = command + self.args = args or [] + self.process = None + self._tools = [] + + def connect(self): + self.process = subprocess.Popen( + [self.command] + self.args, + stdin=subprocess.PIPE, stdout=subprocess.PIPE, + stderr=subprocess.PIPE, text=True, + ) + self._send({"method": "initialize", "params": { + "protocolVersion": "2024-11-05", + "capabilities": {}, + "clientInfo": {"name": "teaching-agent", "version": "1.0"}, + }}) + response = self._recv() + if response and "result" in response: + self._send({"method": "notifications/initialized"}) + return True + return False + + def list_tools(self): + self._send({"method": "tools/list", "params": {}}) + response = self._recv() + if response and "result" in response: + self._tools = response["result"].get("tools", []) + return self._tools + + def call_tool(self, tool_name, arguments): + self._send({"method": "tools/call", "params": { + "name": tool_name, "arguments": arguments, + }}) + response = self._recv() + if response and "result" in response: + content = response["result"].get("content", []) + return "\n".join(c.get("text", str(c)) for c in content) + return "MCP Error: no response" +``` + +**Step 2.** Normalize external tool names with a prefix so they never collide with native tools. The convention is simple: `mcp__{server}__{tool}`. + +```text +mcp__postgres__query +mcp__browser__open_tab +``` + +This prefix serves double duty: it prevents name collisions, and it tells the router exactly which server should handle the call. 
+ +```python +def get_agent_tools(self): + agent_tools = [] + for tool in self._tools: + prefixed_name = f"mcp__{self.server_name}__{tool['name']}" + agent_tools.append({ + "name": prefixed_name, + "description": tool.get("description", ""), + "input_schema": tool.get("inputSchema", { + "type": "object", "properties": {} + }), + }) + return agent_tools +``` + +**Step 3.** Build one unified router. The router does not care whether a tool is native or external beyond the dispatch decision. If the name starts with `mcp__`, route to the MCP server; otherwise, call the local handler. This keeps the agent loop untouched -- it just sees a flat list of tools. + +```python +if tool_name.startswith("mcp__"): + return mcp_router.call(tool_name, arguments) +else: + return native_handler(arguments) +``` + +**Step 4.** Add plugin discovery. If MCP answers "how does the agent talk to an external capability server," plugins answer "how are those servers discovered and configured?" A minimal plugin is a manifest file that tells the harness which servers to launch: + +```json +{ + "name": "my-db-tools", + "version": "1.0.0", + "mcpServers": { + "postgres": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-postgres"] + } + } +} +``` + +This lives in `.claude-plugin/plugin.json`. The `PluginLoader` scans for these manifests, extracts the server configs, and hands them to the `MCPToolRouter` for connection. + +**Step 5.** Enforce the safety boundary. This is the most important rule of the entire chapter: external tools must still pass through the same permission gate as native tools. If MCP tools bypass permission checks, you have created a security backdoor at the edge of your system. + +```python +decision = permission_gate.check(block.name, block.input or {}) +# Same check for "bash", "read_file", and "mcp__postgres__query" +``` + +## How It Plugs Into The Full Harness + +MCP gets confusing when it is treated like a separate universe. 
The cleaner model is: + +```text +startup + -> +plugin loader finds manifests + -> +server configs are extracted + -> +MCP clients connect and list tools + -> +external tools are normalized into the same tool pool + +runtime + -> +LLM emits tool_use + -> +shared permission gate + -> +native route or MCP route + -> +result normalization + -> +tool_result returns to the same loop +``` + +Different entry point, same control plane and execution plane. + +## Plugin vs Server vs Tool + +| Layer | What it is | What it is for | +|---|---|---| +| plugin manifest | a config declaration | tells the harness which servers to discover and launch | +| MCP server | an external process / connection | exposes a set of capabilities | +| MCP tool | one callable capability from that server | the concrete thing the model invokes | + +Shortest memory aid: + +- plugin = discovery +- server = connection +- tool = invocation + +## Key Data Structures + +### Server config + +```python +{ + "command": "npx", + "args": ["-y", "..."], + "env": {} +} +``` + +### Normalized external tool definition + +```python +{ + "name": "mcp__postgres__query", + "description": "Run a SQL query", + "input_schema": {...} +} +``` + +### Client registry + +```python +clients = { + "postgres": mcp_client_instance +} +``` + +## What Changed From s18 + +| Component | Before (s18) | After (s19) | +|--------------------|-----------------------------------|--------------------------------------------------| +| Tool sources | All native (local Python) | Native + external MCP servers | +| Tool naming | Flat names (`bash`, `read_file`) | Prefixed for externals (`mcp__postgres__query`) | +| Routing | Single handler map | Unified router: native dispatch + MCP dispatch | +| Capability growth | Edit harness code for each tool | Add a plugin manifest or connect a server | +| Permission scope | Native tools only | Native + external tools through same gate | + +## Try It + +```sh +cd learn-claude-code +python 
agents/s19_mcp_plugin.py +``` + +1. Watch how external tools are discovered from plugin manifests at startup. +2. Type `/tools` to see native and MCP tools listed side by side in one flat pool. +3. Type `/mcp` to see which MCP servers are connected and how many tools each provides. +4. Ask the agent to use a tool and notice how results return through the same loop as local tools. + +## What You've Mastered + +At this point, you can: + +- Connect to external capability servers using the MCP stdio protocol +- Normalize external tool names with a `mcp__{server}__{tool}` prefix to prevent collisions +- Route tool calls through a unified dispatcher that handles both native and MCP tools +- Discover and launch MCP servers automatically through plugin manifests +- Enforce the same permission checks on external tools as on native ones + +## The Full Picture + +You have now walked through the complete design backbone of a production coding agent, from s01 to s19. + +You started with a bare agent loop that calls an LLM and appends tool results. You added tool use, then a persistent task list, then subagents, skill loading, and context compaction. You built a permission system, a hook system, and a memory system. You constructed the system prompt pipeline, added error recovery, and gave agents a full task board with background execution and cron scheduling. You organized agents into teams with coordination protocols, made them autonomous, gave each task its own isolated worktree, and finally opened the door to external capabilities through MCP. + +Each chapter added exactly one idea to the system. None of them required you to throw away what came before. The agent you have now is not a toy -- it is a working model of the same architectural decisions that shape real production agents. + +If you want to test your understanding, try rebuilding the complete system from scratch. Start with the agent loop. Add tools. Add tasks. Keep going until you reach MCP. 
If you can do that without looking back at the chapters, you understand the design. And if you get stuck somewhere in the middle, the chapter that covers that idea will be waiting for you. + +## Key Takeaway + +> External capabilities should enter the same tool pipeline as native ones -- same naming, same routing, same permissions -- so the agent loop never needs to know the difference. diff --git a/docs/en/s19a-mcp-capability-layers.md b/docs/en/s19a-mcp-capability-layers.md new file mode 100644 index 000000000..cb094fe0a --- /dev/null +++ b/docs/en/s19a-mcp-capability-layers.md @@ -0,0 +1,265 @@ +# s19a: MCP Capability Layers + +> **Deep Dive** -- Best read alongside s19. It shows that MCP is more than just external tools. + +### When to Read This + +After reading s19's tools-first approach, when you're ready to see the full MCP capability stack. + +--- + +> `s19` should still keep a tools-first mainline. +> This bridge note adds the second mental model: +> +> **MCP is not only external tool access. It is a stack of capability layers.** + +## How to Read This with the Mainline + +If you want to study MCP without drifting away from the teaching goal: + +- read [`s19-mcp-plugin.md`](./s19-mcp-plugin.md) first and keep the tools-first path clear +- then you might find it helpful to revisit [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) to see how external capability routes back into the unified tool bus +- if state records begin to blur, you might find it helpful to revisit [`data-structures.md`](./data-structures.md) +- if concept boundaries blur, you might find it helpful to revisit [`glossary.md`](./glossary.md) and [`entity-map.md`](./entity-map.md) + +## Why This Deserves a Separate Bridge Note + +For a teaching repo, keeping the mainline focused on external tools first is correct. 
+ +That is the easiest entry: + +- connect an external server +- receive tool definitions +- call a tool +- bring the result back into the agent + +But if you want the system shape to approach real high-completion behavior, you quickly meet deeper questions: + +- is the server connected through stdio, HTTP, SSE, or WebSocket +- why are some servers `connected`, while others are `pending` or `needs-auth` +- where do resources and prompts fit relative to tools +- why does elicitation become a special kind of interaction +- where should OAuth or other auth flows be placed conceptually + +Without a capability-layer map, MCP starts to feel scattered. + +## Terms First + +### What capability layers means + +A capability layer is simply: + +> one responsibility slice in a larger system + +The point is to avoid mixing every MCP concern into one bag. + +### What transport means + +Transport is the connection channel between your agent and an MCP server: + +- stdio (standard input/output, good for local processes) +- HTTP +- SSE (Server-Sent Events, a one-way streaming protocol over HTTP) +- WebSocket + +### What elicitation means + +This is one of the less familiar terms. + +A simple teaching definition is: + +> an interaction where the MCP server asks the user for more input before it can continue + +So the system is no longer only: + +> agent calls tool -> tool returns result + +The server can also say: + +> I need more information before I can finish + +This turns a simple call-and-return into a multi-step conversation between the agent and the server. + +## The Minimum Mental Model + +A clear six-layer picture: + +```text +1. Config Layer + what the server configuration looks like + +2. Transport Layer + how the server connection is carried + +3. Connection State Layer + connected / pending / failed / needs-auth + +4. Capability Layer + tools / resources / prompts / elicitation + +5. Auth Layer + whether authentication is required and what state it is in + +6. 
Router Integration Layer + how MCP routes back into tool routing, permissions, and notifications +``` + +The key lesson is: + +**tools are one layer, not the whole MCP story** + +## Why the Mainline Should Still Stay Tools-First + +This matters a lot for teaching. + +Even though MCP contains multiple layers, the chapter mainline should still teach: + +### Step 1: external tools first + +Because that connects most naturally to everything you already learned: + +- local tools +- external tools +- one shared router + +### Step 2: show that more capability layers exist + +For example: + +- resources +- prompts +- elicitation +- auth + +### Step 3: decide which advanced layers the repo should actually implement + +That matches the teaching goal: + +**build the similar system first, then add the heavier platform layers** + +## Core Records + +### 1. `ScopedMcpServerConfig` + +Even a minimal teaching version should expose this idea: + +```python +config = { + "name": "postgres", + "type": "stdio", + "command": "npx", + "args": ["-y", "..."], + "scope": "project", +} +``` + +`scope` matters because server configuration may come from different places (global user settings, project-level settings, or even per-workspace overrides). + +### 2. MCP connection state + +```python +server_state = { + "name": "postgres", + "status": "connected", # pending / failed / needs-auth / disabled + "config": {...}, +} +``` + +### 3. `MCPToolSpec` + +```python +tool = { + "name": "mcp__postgres__query", + "description": "...", + "input_schema": {...}, +} +``` + +### 4. `ElicitationRequest` + +```python +request = { + "server_name": "some-server", + "message": "Please provide additional input", + "requested_schema": {...}, +} +``` + +The teaching point is not that you need to implement elicitation immediately. 
+ +The point is: + +**MCP is not guaranteed to stay a one-way tool invocation forever** + +## The Cleaner Platform Picture + +```text +MCP Config + | + v +Transport + | + v +Connection State + | + +-- connected + +-- pending + +-- needs-auth + +-- failed + | + v +Capabilities + +-- tools + +-- resources + +-- prompts + +-- elicitation + | + v +Router / Permission / Notification Integration +``` + +## Why Auth Should Not Dominate the Chapter Mainline + +Auth is a real layer in the full platform. + +But if the mainline falls into OAuth or vendor-specific auth flow details too early, beginners lose the actual system shape. + +A better teaching order is: + +- first explain that an auth layer exists +- then explain that `connected` and `needs-auth` are different connection states +- only later, in advanced platform work, expand the full auth state machine + +That keeps the repo honest without derailing your learning path. + +## How This Relates to `s19` and `s02a` + +- the `s19` chapter keeps teaching the tools-first external capability path +- this note supplies the broader platform map +- `s02a` explains how MCP capability eventually reconnects to the unified tool control plane + +Together, they teach the actual idea: + +**MCP is an external capability platform, and tools are only the first face of it that enters the mainline** + +## Common Beginner Mistakes + +### 1. Treating MCP as only an external tool catalog + +That makes resources, prompts, auth, and elicitation feel surprising later. + +### 2. Diving into transport or OAuth details too early + +That breaks the teaching mainline. + +### 3. Letting MCP tools bypass permission checks + +That opens a dangerous side door in the system boundary. + +### 4. Mixing server config, connection state, and exposed capabilities into one blob + +Those layers should stay conceptually separate. + +## Key Takeaway + +**MCP is a six-layer capability platform. 
Tools are the first layer you build, but resources, prompts, elicitation, auth, and router integration are all part of the full picture.** diff --git a/docs/en/teaching-scope.md b/docs/en/teaching-scope.md new file mode 100644 index 000000000..f86abd8d2 --- /dev/null +++ b/docs/en/teaching-scope.md @@ -0,0 +1,155 @@ +# Teaching Scope + +This document explains what you will learn in this repo, what is deliberately left out, and how each chapter stays aligned with your mental model as it grows. + +## The Goal Of This Repo + +This is not a line-by-line commentary on some upstream production codebase. + +The real goal is: + +**teach you how to build a high-completion coding-agent harness from scratch.** + +That implies three obligations: + +1. you can actually rebuild it +2. you keep the mainline clear instead of drowning in side detail +3. you do not absorb mechanisms that do not really exist + +## What Every Chapter Should Cover + +Every mainline chapter should make these things explicit: + +- what problem the mechanism solves +- which module or layer it belongs to +- what state it owns +- what data structures it introduces +- how it plugs back into the loop +- what changes in the runtime flow after it appears + +If you finish a chapter and still cannot say where the mechanism lives or what state it owns, the chapter is not done yet. + +## What We Deliberately Keep Simple + +These topics are not forbidden, but they should not dominate your learning path: + +- packaging, build, and release flow +- cross-platform compatibility glue +- telemetry and enterprise policy wiring +- historical compatibility branches +- product-specific naming accidents +- line-by-line upstream code matching + +Those belong in appendices, maintainer notes, or later productization notes, not at the center of the beginner path. + +## What "High Fidelity" Really Means Here + +High fidelity in a teaching repo does not mean reproducing every edge detail 1:1. 
+ +It means staying close to the true system backbone: + +- core runtime model +- module boundaries +- key records +- state transitions +- cooperation between major subsystems + +In short: + +**be highly faithful to the trunk, and deliberate about teaching simplifications at the edges.** + +## Who This Is For + +You do not need to be an expert in agent platforms. + +A better assumption about you: + +- basic Python is familiar +- functions, classes, lists, and dictionaries are familiar +- agent systems may be completely new + +That means the chapters should: + +- explain new concepts before using them +- keep one concept complete in one main place +- move from "what it is" to "why it exists" to "how to build it" + +## Recommended Chapter Structure + +Mainline chapters should roughly follow this order: + +1. what problem appears without this mechanism +2. first explain the new terms +3. give the smallest useful mental model +4. show the core records / data structures +5. show the smallest correct implementation +6. show how it plugs into the main loop +7. show common beginner mistakes +8. show what a higher-completion version would add later + +## Terminology Guideline + +If a chapter introduces a term from these categories, it should explain it: + +- design pattern +- data structure +- concurrency term +- protocol / networking term +- uncommon engineering vocabulary + +Examples: + +- state machine +- scheduler +- queue +- worktree +- DAG +- protocol envelope + +Do not drop the name without the explanation. + +## Minimal Correct Version Principle + +Real mechanisms are often complex, but teaching works best when it does not start with every branch at once. + +Prefer this sequence: + +1. show the smallest correct version +2. explain what core problem it already solves +3. 
show what later iterations would add + +Examples: + +- permission system: first `deny -> mode -> allow -> ask` +- error recovery: first three major recovery branches +- task system: first task records, dependencies, and unlocks +- team protocols: first request / response plus `request_id` + +## Checklist For Rewriting A Chapter + +- Does the first screen explain why the mechanism exists? +- Are new terms explained before they are used? +- Is there a small mental model or flow picture? +- Are key records listed explicitly? +- Is the plug-in point back into the loop explained? +- Are core mechanisms separated from peripheral product detail? +- Are the easiest confusion points called out? +- Does the chapter avoid inventing mechanisms not supported by the repo? + +## How To Use Reverse-Engineered Source Material + +Reverse-engineered source should be used as: + +**maintainer calibration material** + +Use it to: + +- verify the mainline mechanism is described correctly +- verify important boundaries and records are not missing +- verify the teaching implementation did not drift into fiction + +It should never become a prerequisite for understanding the teaching docs. + +## Key Takeaway + +**The quality of a teaching repo is decided less by how many details it mentions and more by whether the important details are fully explained and the unimportant details are safely omitted.** diff --git a/docs/en/team-task-lane-model.md b/docs/en/team-task-lane-model.md new file mode 100644 index 000000000..6f49b65fc --- /dev/null +++ b/docs/en/team-task-lane-model.md @@ -0,0 +1,316 @@ +# Team Task Lane Model + +> **Deep Dive** -- Best read at the start of Stage 4 (s15-s18). It separates five concepts that look similar but live on different layers. + +### When to Read This + +Before you start the team chapters. Keep it open as a reference during s15-s18. + +--- + +> By the time you reach `s15-s18`, the easiest thing to blur is not a function name. 
+> +> It is this: +> +> **Who is working, who is coordinating, what records the goal, and what provides the execution lane.** + +## What This Bridge Doc Fixes + +Across `s15-s18`, you will encounter these words that can easily blur into one vague idea: + +- teammate +- protocol request +- task +- runtime task +- worktree + +They all relate to work getting done, but they do **not** live on the same layer. + +If you do not separate them, the later chapters start to feel tangled: + +- Is a teammate the same thing as a task? +- What is the difference between `request_id` and `task_id`? +- Is a worktree just another runtime task? +- Why can a task be complete while a worktree is still kept? + +This document exists to separate those layers cleanly. + +## Recommended Reading Order + +1. Read [`s15-agent-teams.md`](./s15-agent-teams.md) for long-lived teammates. +2. Read [`s16-team-protocols.md`](./s16-team-protocols.md) for tracked request-response coordination. +3. Read [`s17-autonomous-agents.md`](./s17-autonomous-agents.md) for self-claiming teammates. +4. Read [`s18-worktree-task-isolation.md`](./s18-worktree-task-isolation.md) for isolated execution lanes. + +If the vocabulary starts to blur, you might find it helpful to revisit: + +- [`entity-map.md`](./entity-map.md) +- [`data-structures.md`](./data-structures.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +## The Core Separation + +```text +teammate + = who participates over time + +protocol request + = one tracked coordination request inside the team + +task + = what should be done + +runtime task / execution slot + = what is actively running right now + +worktree + = where the work executes without colliding with other lanes +``` + +The most common confusion is between the last three: + +- `task` +- `runtime task` +- `worktree` + +Ask three separate questions every time: + +- Is this the goal? +- Is this the running execution unit? +- Is this the isolated execution directory? 
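
As a quick self-check, those three questions can be expressed as a tiny classifier over the three record shapes (the field names here are illustrative assumptions, not the repo's exact schema):

```python
# Hypothetical records for one piece of work (field names are assumptions).
task = {"task_id": 12, "subject": "Implement login page"}                      # the goal
runtime = {"runtime_id": "rt_01", "task_id": 12, "status": "running"}          # the running unit
worktree = {"name": "login-page", "path": ".worktrees/login-page", "task_id": 12}  # the lane

def layer_of(record: dict) -> str:
    """Answer the three questions: goal? running unit? isolated directory?"""
    if "subject" in record:
        return "task (the goal)"
    if "runtime_id" in record:
        return "runtime task (the running execution unit)"
    if "path" in record:
        return "worktree (the isolated execution directory)"
    return "unknown layer"

for record in (task, runtime, worktree):
    print(layer_of(record))
```

Keeping the answers distinct is the whole point: the same `task_id` can appear in all three records without making them the same object.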
+ +## The Smallest Clean Diagram + +```text +Team Layer + teammate: alice (frontend) + +Protocol Layer + request_id=req_01 + kind=plan_approval + status=pending + +Work Graph Layer + task_id=12 + subject="Implement login page" + owner="alice" + status="in_progress" + +Runtime Layer + runtime_id=rt_01 + type=in_process_teammate + status=running + +Execution Lane Layer + worktree=login-page + path=.worktrees/login-page + status=active +``` + +Only one of those records the work goal itself: + +> `task_id=12` + +The others support coordination, execution, or isolation around that goal. + +## 1. Teammate: Who Is Collaborating + +Introduced in `s15`. + +This layer answers: + +- what the long-lived worker is called +- what role it has +- whether it is `working`, `idle`, or `shutdown` +- whether it has its own inbox + +Example: + +```python +member = { + "name": "alice", + "role": "frontend", + "status": "idle", +} +``` + +The point is not "another agent instance." + +The point is: + +> a persistent identity that can repeatedly receive work. + +## 2. Protocol Request: What Is Being Coordinated + +Introduced in `s16`. + +This layer answers: + +- who asked whom +- what kind of request this is +- whether it is still pending or already resolved + +Example: + +```python +request = { + "request_id": "a1b2c3d4", + "kind": "plan_approval", + "from": "alice", + "to": "lead", + "status": "pending", +} +``` + +This is not ordinary chat. + +It is: + +> a coordination record whose state can continue to evolve. + +## 3. Task: What Should Be Done + +This is the durable work-graph task from `s12`, and it is what `s17` teammates claim. + +It answers: + +- what the goal is +- who owns it +- what blocks it +- what progress state it is in + +Example: + +```python +task = { + "id": 12, + "subject": "Implement login page", + "status": "in_progress", + "owner": "alice", + "blockedBy": [], +} +``` + +Keyword: + +**goal** + +Not directory. Not protocol. Not process. + +## 4. 
Runtime Task / Execution Slot: What Is Running + +This layer was already clarified in the `s13a` bridge doc, but it matters even more in `s15-s18`. + +Examples: + +- a background shell command +- a long-lived teammate currently working +- a monitor process watching an external state + +These are best understood as: + +> active execution slots + +Example: + +```python +runtime = { + "id": "rt_01", + "type": "in_process_teammate", + "status": "running", + "work_graph_task_id": 12, +} +``` + +Important boundary: + +- one work-graph task may spawn multiple runtime tasks +- a runtime task is an execution instance, not the durable goal itself + +## 5. Worktree: Where the Work Happens + +Introduced in `s18`. + +This layer answers: + +- which isolated directory is used +- which task it is bound to +- whether that lane is `active`, `kept`, or `removed` + +Example: + +```python +worktree = { + "name": "login-page", + "path": ".worktrees/login-page", + "task_id": 12, + "status": "active", +} +``` + +Keyword: + +**execution boundary** + +It is not the task goal itself. It is the isolated lane where that goal is executed. + +## How The Layers Connect + +```text +teammate + coordinates through protocol requests + claims a task + runs as an execution slot + works inside a worktree lane +``` + +In a more concrete sentence: + +> `alice` claims `task #12` and progresses it inside the `login-page` worktree lane. + +That sentence is much cleaner than saying: + +> "alice is doing the login-page worktree task" + +because the shorter sentence incorrectly merges: + +- the teammate +- the task +- the worktree + +## Common Mistakes + +### 1. Treating teammate and task as the same object + +The teammate executes. The task expresses the goal. + +### 2. Treating `request_id` and `task_id` as interchangeable + +One tracks coordination. The other tracks work goals. + +### 3. Treating the runtime slot as the durable task + +The running execution may end while the durable task still exists. 
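
The difference in lifetime is easy to demonstrate in a few lines (the record shapes are assumptions, in the spirit of the examples above):

```python
# A durable work-graph task and one execution instance working toward it.
task = {"id": 12, "subject": "Implement login page", "status": "in_progress"}
runtime = {"id": "rt_01", "task_id": 12, "status": "running"}

# The execution instance finishes (or is killed)...
runtime["status"] = "completed"

# ...but the durable goal is untouched until the work graph is updated explicitly.
assert task["status"] == "in_progress"
```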
+ +### 4. Treating the worktree as the task itself + +The worktree is only the execution lane. + +### 5. Saying "the system works in parallel" without naming the layers + +Good teaching does not stop at "there are many agents." + +It can say clearly: + +> teammates provide long-lived collaboration, requests track coordination, tasks record goals, runtime slots carry execution, and worktrees isolate the execution directory. + +## What You Should Be Able to Say After Reading This + +1. `s17` autonomy claims `s12` work-graph tasks, not `s13` runtime slots. +2. `s18` worktrees bind execution lanes to tasks; they do not turn tasks into directories. +3. A teammate can be idle while the task still exists and while the worktree is still kept. +4. A protocol request tracks a coordination exchange, not a work goal. + +## Key Takeaway + +**Five things that sound alike -- teammate, protocol request, task, runtime slot, worktree -- live on five separate layers. Naming which layer you mean is how you keep the team chapters from collapsing into confusion.** diff --git a/docs/ja/data-structures.md b/docs/ja/data-structures.md new file mode 100644 index 000000000..65f993cbd --- /dev/null +++ b/docs/ja/data-structures.md @@ -0,0 +1,1191 @@ +# Core Data Structures (主要データ構造マップ) + +> agent 学習でいちばん迷いやすいのは、機能の多さそのものではなく、 +> **「今の状態がどの record に入っているのか」が見えなくなること**です。 +> この文書は、主線章と bridge doc に繰り返し出てくる record をひとつの地図として並べ直し、 +> 読者が system 全体を「機能一覧」ではなく「状態の配置図」として理解できるようにするための資料です。 + +## どう使うか + +この資料は辞書というより、`state map` として使ってください。 + +- 単語の意味が怪しくなったら [`glossary.md`](./glossary.md) へ戻る +- object 同士の境界が混ざったら [`entity-map.md`](./entity-map.md) を開く +- `TaskRecord` と `RuntimeTaskState` が混ざったら [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) を読む +- MCP で tools 以外の layer が混ざったら [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) を併読する + +## 最初にこの 2 本だけは覚える + +### 原則 1: 内容状態と制御状態を分ける + +内容状態とは、system が「何を扱っているか」を表す状態です。 + +例: + +- `messages` +- `tool_result` +- memory の本文 
+- task の title や description + +制御状態とは、system が「次にどう進むか」を表す状態です。 + +例: + +- `turn_count` +- `transition` +- `has_attempted_compact` +- `max_output_tokens_override` +- `pending_classifier_check` + +この 2 つを混ぜると、読者はすぐに次の疑問で詰まります。 + +- なぜ `messages` だけでは足りないのか +- なぜ control plane が必要なのか +- なぜ recovery や compact が別 state を持つのか + +### 原則 2: durable state と runtime state を分ける + +`durable state` は、session をまたいでも残す価値がある状態です。 + +例: + +- task +- memory +- schedule +- team roster + +`runtime state` は、system が動いている間だけ意味を持つ状態です。 + +例: + +- 現在の permission decision +- 今走っている runtime task +- active MCP connection +- 今回の query の continuation reason + +この区別が曖昧だと、task・runtime slot・notification・schedule・worktree が全部同じ層に見えてしまいます。 + +## 1. Query と会話制御の状態 + +この層の核心は: + +> 会話内容を持つ record と、query の進行理由を持つ record は別物である + +です。 + +### `Message` + +役割: + +- user と assistant の会話履歴を持つ +- tool 呼び出し前後の往復も保存する + +最小形: + +```python +message = { + "role": "user" | "assistant", + "content": "...", +} +``` + +agent が tool を使い始めると、`content` は単なる文字列では足りなくなり、次のような block list になることがあります。 + +- text block +- `tool_use` +- `tool_result` + +この record の本質は、**会話内容の記録**です。 +「なぜ次ターンへ進んだか」は `Message` の責務ではありません。 + +関連章: + +- `s01` +- `s02` +- `s06` +- `s10` + +### `NormalizedMessage` + +役割: + +- さまざまな内部 message を、model API に渡せる統一形式へ揃える + +最小形: + +```python +message = { + "role": "user" | "assistant", + "content": [ + {"type": "text", "text": "..."}, + ], +} +``` + +`Message` と `NormalizedMessage` の違い: + +- `Message`: system 内部の履歴 record に近い +- `NormalizedMessage`: model 呼び出し直前の入力形式に近い + +つまり、前者は「何を覚えているか」、後者は「何を送るか」です。 + +関連章: + +- `s10` +- [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) + +### `CompactSummary` + +役割: + +- context が長くなり過ぎたとき、古い会話を要約へ置き換える + +最小形: + +```python +summary = { + "task_overview": "...", + "current_state": "...", + "key_decisions": ["..."], + "next_steps": ["..."], +} +``` + +重要なのは、compact が「ログ削除」ではないことです。 +compact summary は次の query 継続に必要な最小構造を残す record です。 + 
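
この置き換えを最小の sketch にすると次の形になります（関数名や要約 message の形式は、この説明のための仮定です）。

```python
def compact(messages: list, summary: dict) -> list:
    """古い履歴を 1 件の要約 message に畳む最小 sketch（形式は仮定）。"""
    summary_text = (
        f"Task: {summary['task_overview']}\n"
        f"State: {summary['current_state']}\n"
        f"Decisions: {'; '.join(summary['key_decisions'])}\n"
        f"Next: {'; '.join(summary['next_steps'])}"
    )
    # 直近 2 件だけ原文のまま残し、それ以前は要約へ置き換える
    recent = messages[-2:]
    return [{"role": "user", "content": summary_text}] + recent
```

これで `messages` は短くなりますが、query 継続に必要な構造（task の大枠・現状・判断・次の一手）は落ちません。
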
+最低でも次の 4 つは落とさないようにします。 + +- task の大枠 +- ここまで終わったこと +- 重要な判断 +- 次にやるべきこと + +関連章: + +- `s06` +- `s11` + +### `SystemPromptBlock` + +役割: + +- system prompt を section 単位で管理する + +最小形: + +```python +block = { + "text": "...", + "cache_scope": None, +} +``` + +この record を持つ意味: + +- prompt を一枚岩の巨大文字列にしない +- どの section が何の役割か説明できる +- 後から block 単位で差し替えや検査ができる + +`cache_scope` は最初は不要でも構いません。 +ただ、「この block は比較的安定」「この block は毎ターン変わる」という発想は早めに持っておくと、system prompt の理解が崩れにくくなります。 + +関連章: + +- `s10` +- [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) + +### `PromptParts` + +役割: + +- system prompt を最終連結する前に、構成 source ごとに分けて持つ + +最小形: + +```python +parts = { + "core": "...", + "tools": "...", + "skills": "...", + "memory": "...", + "dynamic": "...", +} +``` + +この record は、読者に次のことを教えます。 + +- prompt は「書かれている」のではなく「組み立てられている」 +- stable policy と volatile runtime data は同じ section ではない +- input source ごとに責務を分けた方が debug しやすい + +関連章: + +- `s10` + +### `QueryParams` + +役割: + +- query 開始時点で外部から受け取る入口入力 + +最小形: + +```python +params = { + "messages": [...], + "system_prompt": "...", + "user_context": {...}, + "system_context": {...}, + "tool_use_context": {...}, + "fallback_model": None, + "max_output_tokens_override": None, + "max_turns": None, +} +``` + +ここで大切なのは: + +- これは query の**入口入力**である +- query の途中でどんどん変わる内部状態とは別である + +つまり `QueryParams` は「入る前に決まっているもの」、`QueryState` は「入ってから変わるもの」です。 + +関連章: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) + +### `QueryState` + +役割: + +- 1 本の query が複数ターンにわたって進む間の制御状態を持つ + +最小形: + +```python +state = { + "messages": [...], + "tool_use_context": {...}, + "turn_count": 1, + "max_output_tokens_recovery_count": 0, + "has_attempted_reactive_compact": False, + "max_output_tokens_override": None, + "pending_tool_use_summary": None, + "stop_hook_active": False, + "transition": None, +} +``` + +この record に入るものの共通点: + +- 対話内容そのものではない +- 「次をどう続けるか」を決める情報である + +初心者がよく詰まる点: + +- `messages` が入っているので「全部 conversation state に見える」 
+- しかし `turn_count` や `transition` は会話ではなく control state + +この record を理解できると、 + +- recovery +- compact +- hook continuation +- token budget continuation + +がすべて「同じ query を継続する理由の差分」として読めるようになります。 + +関連章: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) +- `s11` + +### `TransitionReason` + +役割: + +- 前ターンが終わらず、次ターンへ続いた理由を明示する + +最小形: + +```python +transition = { + "reason": "next_turn", +} +``` + +より実用的には次のような値が入ります。 + +- `next_turn` +- `tool_result_continuation` +- `reactive_compact_retry` +- `max_output_tokens_recovery` +- `stop_hook_continuation` + +これを別 record として持つ利点: + +- log が読みやすい +- test が書きやすい +- recovery の分岐理由を説明しやすい + +つまりこれは「高度な最適化」ではなく、 +**継続理由を見える状態へ変えるための最小構造**です。 + +関連章: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) +- `s11` + +## 2. Tool 実行・権限・hook の状態 + +この層の核心は: + +> tool は `name -> handler` だけで完結せず、その前後に permission / runtime / hook の状態が存在する + +です。 + +### `ToolSpec` + +役割: + +- model に「どんな tool があり、どんな入力を受け取るか」を見せる + +最小形: + +```python +tool = { + "name": "read_file", + "description": "Read file contents.", + "input_schema": {...}, +} +``` + +これは execution 実装そのものではありません。 +あくまで **model に見せる contract** です。 + +関連章: + +- `s02` +- `s19` + +### `ToolDispatchMap` + +役割: + +- tool 名を実際の handler 関数へ引く + +最小形: + +```python +dispatch = { + "read_file": run_read_file, + "write_file": run_write_file, +} +``` + +この record の仕事は単純です。 + +- 正しい handler を見つける + +ただし実システムではこれだけで足りません。 +本当に難しいのは: + +- いつ実行するか +- 並列にしてよいか +- permission を通すか +- 結果をどう loop へ戻すか + +です。 + +関連章: + +- `s02` +- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) + +### `ToolUseContext` + +役割: + +- tool が共有状態へ触るための窓口を持つ + +最小形: + +```python +context = { + "workspace": "...", + "permission_system": perms, + "notifications": queue, + "memory_store": memory, +} +``` + +この record がないと、各 tool が勝手に global state を触り始め、system 全体の境界が崩れます。 + +つまり `ToolUseContext` は、 + +> tool が system とどこで接続するか + +を見える形にするための record です。 + +関連章: + +- `s02` +- `s07` +- 
`s09`
+- `s13`
+
+### `ToolResultEnvelope`
+
+役割:
+
+- tool 実行結果を loop が扱える統一形式で包む
+
+最小形:
+
+```python
+result = {
+    "tool_use_id": "toolu_123",
+    "content": "...",
+}
+```
+
+大切なのは、tool 結果が「ただの文字列」ではないことです。
+最低でも:
+
+- どの tool call に対する結果か
+- loop にどう書き戻すか
+
+を持たせる必要があります。
+
+関連章:
+
+- `s02`
+
+### `PermissionRule`
+
+役割:
+
+- 特定 tool / path / content に対する allow / deny / ask 条件を表す
+
+最小形:
+
+```python
+rule = {
+    "tool": "bash",
+    "behavior": "deny",
+    "path": None,
+    "content": "sudo *",
+}
+```
+
+この record があることで、permission system は次を言えるようになります。
+
+- どの tool に対する rule か
+- 何にマッチしたら発火するか
+- 発火後に何を返すか
+
+関連章:
+
+- `s07`
+
+### `PermissionDecision`
+
+役割:
+
+- 今回の tool 実行に対する permission 結果を表す
+
+最小形:
+
+```python
+decision = {
+    "behavior": "allow" | "deny" | "ask",
+    "reason": "...",
+}
+```
+
+これを独立 record にする意味:
+
+- deny 理由を model に見せられる
+- ask を loop に戻して次アクションを組み立てられる
+- log や UI にも同じ object を流せる
+
+関連章:
+
+- `s07`
+
+### `HookEvent`
+
+役割:
+
+- pre_tool / post_tool / on_error などの lifecycle event を統一形で渡す
+
+最小形:
+
+```python
+event = {
+    "kind": "post_tool",
+    "tool_name": "edit_file",
+    "input": {...},
+    "result": "...",
+    "error": None,
+    "duration_ms": 42,
+}
+```
+
+hook が安定して増やせるかどうかは、この record の形が揃っているかに大きく依存します。
+
+もし毎回適当な文字列だけを hook に渡すと:
+
+- audit hook
+- metrics hook
+- policy hook
+
+のたびに payload 形式がばらけます。
+
+関連章:
+
+- `s08`
+
+### `ToolExecutionBatch`
+
+役割:
+
+- 同じ execution lane でまとめてスケジュールしてよい tool block の束を表す
+
+最小形:
+
+```python
+batch = {
+    "is_concurrency_safe": True,
+    "blocks": [tool_use_1, tool_use_2],
+}
+```
+
+この record を導入すると、読者は:
+
+- tool を常に 1 個ずつ実行する必要はない
+- ただし何でも並列にしてよいわけでもない
+
+という 2 本の境界を同時に理解しやすくなります。
+
+関連章:
+
+- [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md)
+
+### `TrackedTool`
+
+役割:
+
+- 各 tool の lifecycle を個別に追う
+
+最小形:
+
+```python
+tracked = {
+    "id": "toolu_01",
+    "name": "read_file",
+    "status": "queued",
+    "is_concurrency_safe": True,
+    "pending_progress": [],
+    "results": [],
+    
"context_modifiers": [], +} +``` + +これがあると runtime は次のことを説明できます。 + +- 何が待機中か +- 何が実行中か +- 何が progress を出したか +- 何が完了したか + +関連章: + +- [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md) + +### `queued_context_modifiers` + +役割: + +- 並列 tool が生んだ共有 state 変更を、先に queue し、後で安定順に merge する + +最小形: + +```python +queued = { + "toolu_01": [modifier_a], + "toolu_02": [modifier_b], +} +``` + +ここで守りたい境界: + +- 並列実行してよい +- しかし共有 state を完了順でそのまま書き換えてよいとは限らない + +この record は、parallel execution と stable merge を切り分けるための最小構造です。 + +関連章: + +- [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md) + +## 3. Skill・memory・prompt source の状態 + +この層の核心は: + +> model input の材料は、その場でひとつの文字列に溶けているのではなく、複数の source record として存在する + +です。 + +### `SkillRegistry` + +役割: + +- 利用可能な skill の索引を持つ + +最小形: + +```python +registry = [ + {"name": "agent-browser", "path": "...", "description": "..."}, +] +``` + +これは「何があるか」を示す record であり、skill 本文そのものではありません。 + +関連章: + +- `s05` + +### `SkillContent` + +役割: + +- 実際に読み込んだ skill の本文や補助資料を持つ + +最小形: + +```python +skill = { + "name": "agent-browser", + "body": "...markdown...", +} +``` + +`SkillRegistry` と `SkillContent` を分ける理由: + +- registry は discovery 用 +- content は injection 用 + +つまり「見つける record」と「使う record」を分けるためです。 + +関連章: + +- `s05` + +### `MemoryEntry` + +役割: + +- 長期に残すべき事実を 1 件ずつ持つ + +最小形: + +```python +entry = { + "key": "package_manager_preference", + "value": "pnpm", + "scope": "user", + "reason": "user explicit preference", +} +``` + +memory の重要境界: + +- 会話全文を残す record ではない +- durable fact を残す record である + +関連章: + +- `s09` + +### `MemoryWriteCandidate` + +役割: + +- 今回のターンから「long-term memory に昇格させる候補」を一時的に保持する + +最小形: + +```python +candidate = { + "fact": "Use pnpm by default", + "scope": "user", + "confidence": "high", +} +``` + +教学 repo では必須ではありません。 +ただし reader が「memory はいつ書くのか」で混乱しやすい場合、この record を挟むと + +- その場の conversation detail +- durable fact candidate +- 実際に保存された memory + +の 3 層を分けやすくなります。 + +関連章: + +- `s09` + +## 4. 
Todo・task・runtime・team の状態
+
+この層が一番混ざりやすいです。
+理由は、全部が「仕事っぽい object」に見えるからです。
+
+### `TodoItem`
+
+役割:
+
+- 今の session 内での短期的な進行メモ
+
+最小形:
+
+```python
+todo = {
+    "content": "Inspect auth tests",
+    "status": "pending",
+}
+```
+
+これは durable work graph ではありません。
+今ターンの認知負荷を軽くするための session-local 補助構造です。
+
+関連章:
+
+- `s03`
+
+### `PlanState`
+
+役割:
+
+- 複数の `TodoItem` と current focus をまとめる
+
+最小形:
+
+```python
+plan = {
+    "todos": [...],
+    "current_focus": "Inspect auth tests",
+}
+```
+
+これも基本は session-local です。
+`TaskRecord` と違って、再起動しても必ず復元したい durable board とは限りません。
+
+関連章:
+
+- `s03`
+
+### `TaskRecord`
+
+役割:
+
+- durable work goal を表す
+
+最小形:
+
+```python
+task = {
+    "id": "task-auth-migrate",
+    "title": "Migrate auth layer",
+    "status": "pending",
+    "dependencies": [],
+}
+```
+
+この record が持つべきメンタルモデル:
+
+- 何を達成したいか
+- 依存関係は何か
+- 今どの状態か
+
+ここで大切なのは、**task は goal node であって、今まさに走っている process ではない**ことです。
+
+関連章:
+
+- `s12`
+
+### `RuntimeTaskState`
+
+役割:
+
+- いま動いている 1 回の execution slot を表す
+
+最小形:
+
+```python
+runtime_task = {
+    "id": "rt_42",
+    "task_id": "task-auth-migrate",
+    "status": "running",
+    "preview": "...",
+    "output_file": ".runtime-tasks/rt_42.log",
+}
+```
+
+`TaskRecord` との違い:
+
+- `TaskRecord`: 何を達成するか
+- `RuntimeTaskState`: その goal に向かう今回の実行は今どうなっているか
+
+関連章:
+
+- `s13`
+- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md)
+
+### `NotificationRecord`
+
+役割:
+
+- background 実行や外部 capability から main loop へ戻る preview を持つ
+
+最小形:
+
+```python
+note = {
+    "source": "runtime_task",
+    "task_id": "rt_42",
+    "preview": "3 tests failing...",
+}
+```
+
+この record は全文ログの保存先ではありません。
+役割は:
+
+- main loop に「戻ってきた事実」を知らせる
+- prompt space を全文ログで埋めない
+
+ことです。
+
+関連章:
+
+- `s13`
+
+### `ScheduleRecord`
+
+役割:
+
+- いつ何を trigger するかを表す
+
+最小形:
+
+```python
+schedule = {
+    "name": "nightly-health-check",
+    "cron": "0 2 * * *",
+    "task_template": "repo_health_check",
+}
+```
+
+重要な境界:
+
+- `ScheduleRecord` は時間規則
+- `TaskRecord` は work goal
+- 
`RuntimeTaskState` は live execution + +この 3 つを一緒にしないことが `s14` の核心です。 + +関連章: + +- `s14` + +### `TeamMember` + +役割: + +- 長期に存在する teammate の身元を表す + +最小形: + +```python +member = { + "name": "alice", + "role": "test-specialist", + "status": "working", +} +``` + +`TeamMember` は task ではありません。 +「誰が長く system 内に存在しているか」を表す actor record です。 + +関連章: + +- `s15` + +### `TeamConfig` + +役割: + +- team roster 全体をまとめる + +最小形: + +```python +config = { + "team_name": "default", + "members": [member1, member2], +} +``` + +この record を durable に持つことで、 + +- team に誰がいるか +- 役割が何か +- 次回起動時に何を復元するか + +が見えるようになります。 + +関連章: + +- `s15` + +### `MessageEnvelope` + +役割: + +- teammate 間の message を、本文とメタ情報込みで包む + +最小形: + +```python +envelope = { + "type": "message", + "from": "lead", + "to": "alice", + "content": "Review retry tests", + "timestamp": 1710000000.0, +} +``` + +`envelope` を使う理由: + +- 誰から誰へ送ったか分かる +- 普通の会話と protocol request を区別しやすい +- mailbox を durable channel として扱える + +関連章: + +- `s15` +- `s16` + +### `RequestRecord` + +役割: + +- approval や shutdown のような構造化 protocol state を持つ + +最小形: + +```python +request = { + "request_id": "req_91", + "kind": "plan_approval", + "status": "pending", + "payload": {...}, +} +``` + +これを別 record にすることで、 + +- ただの chat message +- 追跡可能な coordination request + +を明確に分けられます。 + +関連章: + +- `s16` + +### `ClaimPolicy` + +役割: + +- autonomous worker が何を self-claim してよいかを表す + +最小形: + +```python +policy = { + "role": "test-specialist", + "may_claim": ["retry-related"], +} +``` + +この record がないと autonomy は「空いている worker が勝手に全部取りに行く」設計になりやすく、 +race condition と重複実行を呼び込みます。 + +関連章: + +- `s17` + +### `WorktreeRecord` + +役割: + +- isolated execution lane を表す + +最小形: + +```python +worktree = { + "path": ".worktrees/wt-auth-migrate", + "task_id": "task-auth-migrate", + "status": "active", +} +``` + +この record の核心: + +- task は goal +- runtime slot は live execution +- worktree は「どこで走るか」の lane + +関連章: + +- `s18` + +## 5. 
MCP・plugin・外部 capability の状態 + +この層の核心は: + +> 外部 capability も「ただの tool list」ではなく、接続状態と routing を持つ platform object である + +です。 + +### `MCPServerConfig` + +役割: + +- 外部 server の設定を表す + +最小形: + +```python +config = { + "name": "figma", + "transport": "stdio", + "command": "...", +} +``` + +これは capability そのものではなく、接続の入口設定です。 + +関連章: + +- `s19` + +### `ConnectionState` + +役割: + +- remote capability の現在状態を表す + +最小形: + +```python +state = { + "status": "connected", + "needs_auth": False, + "last_error": None, +} +``` + +この record が必要な理由: + +- 外部 capability は常に使えるとは限らない +- 問題が tool schema なのか connection なのか区別する必要がある + +関連章: + +- `s19` +- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) + +### `CapabilityRoute` + +役割: + +- native tool / plugin / MCP server のどこへ解決されたかを表す + +最小形: + +```python +route = { + "source": "mcp", + "target": "figma.inspect", +} +``` + +この record があると、 + +- 発見 +- routing +- permission +- 実行 +- result normalization + +が同じ capability bus 上で説明できます。 + +関連章: + +- `s19` + +## 最後に、特に混同しやすい組み合わせ + +### `TodoItem` vs `TaskRecord` + +- `TodoItem`: 今 session で何を見るか +- `TaskRecord`: durable work goal と dependency をどう持つか + +### `TaskRecord` vs `RuntimeTaskState` + +- `TaskRecord`: 何を達成したいか +- `RuntimeTaskState`: 今回の実行は今どう進んでいるか + +### `RuntimeTaskState` vs `ScheduleRecord` + +- `RuntimeTaskState`: live execution +- `ScheduleRecord`: いつ trigger するか + +### `SubagentContext` vs `TeamMember` + +- `SubagentContext`: 一回きりの delegation branch +- `TeamMember`: 長期に残る actor identity + +### `TeamMember` vs `RequestRecord` + +- `TeamMember`: 誰が存在するか +- `RequestRecord`: どんな coordination request が進行中か + +### `TaskRecord` vs `WorktreeRecord` + +- `TaskRecord`: 何をやるか +- `WorktreeRecord`: どこでやるか + +### `ToolSpec` vs `CapabilityRoute` + +- `ToolSpec`: model に見せる contract +- `CapabilityRoute`: 実際にどこへ routing するか + +## 読み終えたら言えるべきこと + +少なくとも次の 3 文を、自分の言葉で説明できる状態を目指してください。 + +1. `messages` は内容状態であり、`transition` は制御状態である。 +2. 
`TaskRecord` は goal node であり、`RuntimeTaskState` は live execution slot である。 +3. `TeamMember`、`RequestRecord`、`WorktreeRecord` は全部「仕事っぽい」が、それぞれ actor、protocol、lane という別層の object である。 + +## 一文で覚える + +**どの record が内容を持ち、どの record が流れを持ち、どれが durable でどれが runtime かを分けられれば、agent system の複雑さは急に読める形になります。** diff --git a/docs/ja/entity-map.md b/docs/ja/entity-map.md new file mode 100644 index 000000000..b21a0471c --- /dev/null +++ b/docs/ja/entity-map.md @@ -0,0 +1,117 @@ +# エンティティ地図 + +> この文書は「単語が似て見えるが、同じものではない」という混乱をほどくための地図です。 + +## 何を分けるための文書か + +- [`glossary.md`](./glossary.md) は「この言葉は何か」を説明します +- [`data-structures.md`](./data-structures.md) は「コードではどんな形か」を説明します +- この文書は「どの層に属するか」を分けます + +## まず層を見る + +```text +conversation layer + - message + - prompt block + - reminder + +action layer + - tool call + - tool result + - hook event + +work layer + - work-graph task + - runtime task + - protocol request + +execution layer + - subagent + - teammate + - worktree lane + +platform layer + - MCP server + - memory record + - capability router +``` + +## 混同しやすい組 + +### `Message` vs `PromptBlock` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| `Message` | 会話履歴の内容 | 安定した system rule ではない | +| `PromptBlock` | system instruction の断片 | 直近の会話イベントではない | + +### `Todo / Plan` vs `Task` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| `todo / plan` | セッション内の進行ガイド | durable work graph ではない | +| `task` | durable な work node | その場の思いつきではない | + +### `Work-Graph Task` vs `RuntimeTaskState` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| work-graph task | 仕事目標と依存関係の node | 今動いている executor ではない | +| runtime task | live execution slot | durable dependency node ではない | + +### `Subagent` vs `Teammate` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| subagent | 一回きりの委譲 worker | 長期に存在する team member ではない | +| teammate | identity を持つ persistent collaborator | 使い捨て summary worker ではない | + +### `ProtocolRequest` vs normal message + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| normal message | 自由文のやり取り | 追跡可能な 
approval workflow ではない | +| protocol request | `request_id` を持つ構造化要求 | 雑談テキストではない | + +### `Task` vs `Worktree` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| task | 何をするか | ディレクトリではない | +| worktree | どこで分離実行するか | 仕事目標そのものではない | + +### `Memory` vs `CLAUDE.md` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| memory | 後の session でも価値がある事実 | project rule file ではない | +| `CLAUDE.md` | 安定した local rule / instruction surface | user 固有の long-term fact store ではない | + +### `MCPServer` vs `MCPTool` + +| エンティティ | 何か | 何ではないか | +|---|---|---| +| MCP server | 外部 capability provider | 1 個の tool 定義ではない | +| MCP tool | server が公開する 1 つの capability | 接続面全体ではない | + +## 速見表 + +| エンティティ | 主な役割 | 典型的な置き場 | +|---|---|---| +| `Message` | 会話履歴 | `messages[]` | +| `PromptParts` | 入力 assembly の断片 | prompt builder | +| `PermissionRule` | 実行可否の判断 | settings / session state | +| `HookEvent` | lifecycle extension point | hook layer | +| `MemoryEntry` | durable fact | memory store | +| `TaskRecord` | durable work goal | task board | +| `RuntimeTaskState` | live execution slot | runtime manager | +| `TeamMember` | persistent actor | team config | +| `MessageEnvelope` | teammate 間の構造化 message | inbox | +| `RequestRecord` | protocol workflow state | request tracker | +| `WorktreeRecord` | isolated execution lane | worktree index | +| `MCPServerConfig` | 外部 capability provider 設定 | plugin / settings | + +## 一文で覚える + +**システムが複雑になるほど、単語を増やすことよりも、境界を混ぜないことの方が重要です。** diff --git a/docs/ja/glossary.md b/docs/ja/glossary.md new file mode 100644 index 000000000..9aa621b24 --- /dev/null +++ b/docs/ja/glossary.md @@ -0,0 +1,516 @@ +# 用語集 + +> この用語集は、教材主線で特に重要で、初学者が混ぜやすい言葉だけを集めたものです。 +> 何となく見覚えはあるのに、「結局これは何を指すのか」が言えなくなったら、まずここへ戻ってください。 + +## いっしょに見ると整理しやすい文書 + +- [`entity-map.md`](./entity-map.md): それぞれの言葉がどの層に属するかを見る +- [`data-structures.md`](./data-structures.md): 実際にどんな record 形へ落ちるかを見る +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md): `task` という語が 2 種類に分かれ始めたときに戻る +- 
[`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md): MCP が tool list だけに見えなくなったときに戻る + +## Agent + +この教材での `agent` は、 + +> 入力を読み、判断し、必要なら tool を呼び出して仕事を進める model + +を指します。 + +簡単に言えば、 + +- model が考える +- harness が作業環境を与える + +という分担の、考える側です。 + +## Harness + +`harness` は agent の周囲に置く作業環境です。 + +たとえば次を含みます。 + +- tools +- filesystem +- permission system +- prompt assembly +- memory +- task runtime + +model そのものは harness ではありません。 +harness そのものも model ではありません。 + +## Agent Loop + +`agent loop` は agent system の主循環です。 + +最小形は次の 5 手順です。 + +1. 現在の context を model に渡す +2. response が普通の返答か tool_use かを見る +3. tool を実行する +4. result を context に戻す +5. 次の turn へ続くか止まるかを決める + +この loop がなければ、system は単発の chat で終わります。 + +## Message / `messages[]` + +`message` は 1 件の message、`messages[]` はその一覧です。 + +多くの章では次を含みます。 + +- user message +- assistant message +- tool_result + +これは agent の main working memory にあたります。 +ただし permanent memory ではありません。 + +## Tool + +`tool` は model が要求できる動作です。 + +たとえば、 + +- file を読む +- file を書く +- shell command を走らせる +- text を検索する + +などです。 + +重要なのは、 + +> model が直接 OS command を叩くのではなく、tool 名と引数を宣言し、実際の実行は harness 側の code が行う + +という点です。 + +## Tool Schema + +`tool schema` は tool の使い方を model に説明する構造です。 + +普通は次を含みます。 + +- tool 名 +- 何をするか +- 必要な parameter +- parameter の型 + +初心者向けに言えば、tool の説明書です。 + +## Dispatch Map + +`dispatch map` は、 + +> tool 名から実際の handler 関数へつなぐ対応表 + +です。 + +たとえば次のような形です。 + +```python +{ + "read_file": read_file_handler, + "write_file": write_file_handler, + "bash": bash_handler, +} +``` + +## Stop Reason + +`stop_reason` は、model のこの turn がなぜ止まったかを示す理由です。 + +代表例: + +- `end_turn`: 返答を終えた +- `tool_use`: tool を要求した +- `max_tokens`: 出力が token 上限で切れた + +main loop はこの値を見て次の動きを決めます。 + +## Context + +`context` は model が今見えている情報全体です。 + +ふつうは次を含みます。 + +- `messages` +- system prompt +- dynamic reminder +- tool_result + +context は permanent storage ではなく、 + +> 今この turn の机の上に出ている情報 + +と考えると分かりやすいです。 + +## Compact / Compaction + +`compact` は active context 
を縮めることです。 + +狙いは、 + +- 本当に必要な流れを残す +- 重複や雑音を削る +- 後続 turn のための space を作る + +ことです。 + +大事なのは「削ること」そのものではなく、 + +**次の turn に必要な構造を保ったまま薄くすること** + +です。 + +## Subagent + +`subagent` は親 agent から切り出された、一回限りの delegated worker です。 + +価値は次です。 + +- 親 context を汚さずに subtask を処理できる +- 結果だけを summary として返せる + +`teammate` とは違い、長く system に残る actor ではありません。 + +## Fork + +この教材での `fork` は、 + +> 子 agent を空白から始めるのではなく、親の context を引き継いで始める方式 + +を指します。 + +subtask が親の議論背景を理解している必要があるときに使います。 + +## Permission + +`permission` は、 + +> model が要求した操作を実行してよいか判定する層 + +です。 + +良い permission system は少なくとも次を分けます。 + +- すぐ拒否すべきもの +- 自動許可してよいもの +- user に確認すべきもの + +## Permission Mode + +`permission mode` は permission system の動作方針です。 + +例: + +- `default` +- `plan` +- `auto` + +つまり個々の request の判定規則ではなく、 + +> 判定の全体方針 + +です。 + +## Hook + +`hook` は主 loop を書き換えずに、特定の timing で追加動作を差し込む拡張点です。 + +たとえば、 + +- tool 実行前に検査する +- tool 実行後に監査 log を書く + +のようなことを行えます。 + +## Memory + +`memory` は session をまたいで残す価値のある情報です。 + +向いているもの: + +- user の長期的 preference +- 何度も再登場する重要事実 +- 将来の session でも役に立つ feedback + +向いていないもの: + +- その場限りの冗長な chat 履歴 +- すぐ再導出できる一時情報 + +## System Prompt + +`system prompt` は system-level の instruction surface です。 + +ここでは model に対して、 + +- あなたは何者か +- 何を守るべきか +- どのように協力すべきか + +を与えます。 + +普通の user message より安定して効く層です。 + +## System Reminder + +`system reminder` は毎 turn 動的に差し込まれる短い補助情報です。 + +たとえば、 + +- current working directory +- 現在日付 +- この turn だけ必要な補足 + +などです。 + +stable な system prompt とは役割が違います。 + +## Query + +この教材での `query` は、 + +> 1 つの user request を完了させるまで続く多 turn の処理全体 + +を指します。 + +単発の 1 回応答ではなく、 + +- model 呼び出し +- tool 実行 +- continuation +- recovery + +を含んだまとまりです。 + +## Transition Reason + +`transition reason` は、 + +> なぜこの system が次の turn へ続いたのか + +を説明する理由です。 + +これが見えるようになると、 + +- 普通の tool continuation +- retry +- compact 後の再開 +- recovery path + +を混ぜずに見られるようになります。 + +## Task + +`task` は durable work graph の中にある仕事目標です。 + +ふつう次を持ちます。 + +- subject +- status +- owner +- dependency + +ここでの task は「いま実行中の 
command」ではなく、 + +> system が長く持ち続ける work goal + +です。 + +## Dependency Graph + +`dependency graph` は task 間の依存関係です。 + +たとえば、 + +- A が終わってから B +- C と D は並行可 +- E は C と D の両方待ち + +のような関係を表します。 + +これにより system は、 + +- 今できる task +- まだ blocked な task +- 並行可能な task + +を判断できます。 + +## Runtime Task / Runtime Slot + +`runtime task` または `runtime slot` は、 + +> いま実行中、待機中、または直前まで動いていた live execution unit + +を指します。 + +例: + +- background の `pytest` +- 走っている teammate +- monitor process + +`task` との違いはここです。 + +- `task`: goal +- `runtime slot`: live execution + +## Teammate + +`teammate` は multi-agent system 内で長く存在する collaborator です。 + +`subagent` との違い: + +- `subagent`: 一回限りの委譲 worker +- `teammate`: 長く残り、繰り返し仕事を受ける actor + +## Protocol + +`protocol` は、事前に決めた協調ルールです。 + +答える内容は次です。 + +- message はどんな shape か +- response はどう返すか +- approve / reject / expire をどう記録するか + +team 章では多くの場合、 + +```text +request -> response -> status update +``` + +という骨格で現れます。 + +## Envelope + +`envelope` は、 + +> 本文に加えてメタデータも一緒に包んだ構造化 record + +です。 + +たとえば message 本文に加えて、 + +- `from` +- `to` +- `request_id` +- `timestamp` + +を一緒に持つものです。 + +## State Machine + +`state machine` は難しい理論名に見えますが、ここでは + +> 状態がどう変化してよいかを書いた規則表 + +です。 + +たとえば、 + +```text +pending -> approved +pending -> rejected +pending -> expired +``` + +だけでも最小の state machine です。 + +## Router + +`router` は分配器です。 + +役割は、 + +- request がどの種類かを見る +- 正しい処理経路へ送る + +ことです。 + +tool system では、 + +- local handler +- MCP client +- plugin bridge + +のどこへ送るかを決める層として現れます。 + +## Control Plane + +`control plane` は、 + +> 自分で本仕事をするというより、誰がどう実行するかを調整する層 + +です。 + +たとえば、 + +- permission 判定 +- prompt assembly +- continuation 理由 +- lane 選択 + +などがここに寄ります。 + +初見では怖く見えるかもしれませんが、この教材ではまず + +> 実作業そのものではなく、作業の進め方を調整する層 + +と覚えれば十分です。 + +## Capability + +`capability` は能力項目です。 + +MCP の文脈では、capability は tool だけではありません。 + +たとえば、 + +- tools +- resources +- prompts +- elicitation + +のように複数層があります。 + +## Worktree + +`worktree` は同じ repository の別 working copy です。 + +この教材では、 + +> task ごとに割り当てる 
isolated execution directory + +として使います。 + +価値は次です。 + +- 並行作業が互いの未コミット変更を汚染しない +- task と execution lane の対応が見える +- review や closeout がしやすい + +## MCP + +`MCP` は Model Context Protocol です。 + +この教材では単なる remote tool list より広く、 + +> 外部 capability を統一的に接続する surface + +として扱います。 + +つまり「外部 tool を呼べる」だけではなく、 + +- connection +- auth +- resources +- prompts +- capability routing + +まで含む層です。 diff --git a/docs/ja/s00-architecture-overview.md b/docs/ja/s00-architecture-overview.md new file mode 100644 index 000000000..b3f740d05 --- /dev/null +++ b/docs/ja/s00-architecture-overview.md @@ -0,0 +1,341 @@ +# s00: アーキテクチャ全体図 + +> この章は教材全体の地図です。 +> 「結局この repository は何を教えようとしていて、なぜこの順番で章が並んでいるのか」を先に掴みたいなら、まずここから読むのがいちばん安全です。 + +## 先に結論 + +この教材の章順は妥当です。 + +大事なのは章数の多さではありません。 +大事なのは、初学者が無理なく積み上げられる順番で system を育てていることです。 + +全体は次の 4 段階に分かれています。 + +1. まず本当に動く単一 agent を作る +2. その上に安全性、拡張点、memory、prompt、recovery を足す +3. 会話中の一時的 progress を durable work system へ押し上げる +4. 最後に teams、protocols、autonomy、worktree、MCP / plugin へ広げる + +この順番が自然なのは、学習者が最初に固めるべき主線がたった 1 本だからです。 + +```text +user input + -> +model reasoning + -> +tool execution + -> +result write-back + -> +next turn or finish +``` + +この主線がまだ曖昧なまま後段の mechanism を積むと、 + +- permission +- hook +- memory +- MCP +- worktree + +のような言葉が全部ばらばらの trivia に見えてしまいます。 + +## この教材が再構成したいもの + +この教材の目標は、どこかの production code を逐行でなぞることではありません。 + +本当に再構成したいのは次の部分です。 + +- 主要 module は何か +- module 同士がどう協調するか +- 各 module の責務は何か +- 重要 state がどこに属するか +- 1 つの request が system の中をどう流れるか + +つまり狙っているのは、 + +**設計の主線への高い忠実度であって、周辺実装の 1:1 再現ではありません。** + +これはとても重要です。 + +もしあなたが本当に知りたいのが、 + +> 0 から自分で高完成度の coding agent harness を作れるようになること + +なら、優先して掴むべきなのは次です。 + +- agent loop +- tools +- planning +- context management +- permissions +- hooks +- memory +- prompt assembly +- tasks +- teams +- isolated execution lanes +- external capability routing + +逆に、最初の主線に持ち込まなくてよいものもあります。 + +- packaging / release +- cross-platform compatibility の細かな枝 +- enterprise wiring +- telemetry +- 歴史的 
compatibility layer +- product 固有の naming accident + +これらが存在しうること自体は否定しません。 +ただし 0 から 1 の教材の中心に置くべきではありません。 + +## 読むときの 3 つの原則 + +### 1. まず最小で正しい版を学ぶ + +たとえば subagent なら、最初に必要なのはこれだけです。 + +- 親 agent が subtask を切る +- 子 agent が自分の `messages` を持つ +- 子 agent が summary を返す + +これだけで、 + +**親 context を汚さずに探索作業を切り出せる** + +という核心は学べます。 + +そのあとでようやく、 + +- 親 context を引き継ぐ fork +- 独立 permission +- background 実行 +- worktree 隔離 + +を足せばよいです。 + +### 2. 新しい語は使う前に意味を固める + +この教材では次のような語が頻繁に出ます。 + +- state machine +- dispatch map +- dependency graph +- worktree +- protocol envelope +- capability +- control plane + +意味が曖昧なまま先へ進むと、後ろの章で一気に詰まります。 + +そのときは無理に本文を読み切ろうとせず、次の文書へ戻ってください。 + +- [`glossary.md`](./glossary.md) +- [`entity-map.md`](./entity-map.md) +- [`data-structures.md`](./data-structures.md) + +### 3. 周辺の複雑さを主線へ持ち込みすぎない + +良い教材は「全部話す教材」ではありません。 + +良い教材は、 + +- 核心は完全に話す +- 周辺で重く複雑なものは後ろへ回す + +という構造を持っています。 + +だからこの repository では、あえて主線の外に置いている内容があります。 + +- packaging / release +- enterprise policy glue +- telemetry +- client integration の細部 +- 逐行の逆向き比較 trivia + +## 先に開いておくと楽な補助文書 + +主線 chapter と一緒に、次の文書を補助地図として持っておくと理解が安定します。 + +| 文書 | 用途 | +|---|---| +| [`teaching-scope.md`](./teaching-scope.md) | 何を教え、何を意図的に省くかを見る | +| [`data-structures.md`](./data-structures.md) | system 全体の重要 record を一か所で見る | +| [`s00f-code-reading-order.md`](./s00f-code-reading-order.md) | chapter order と local code reading order をそろえる | + +さらに、後半で mechanism 間のつながりが曖昧になったら、次の bridge docs が効きます。 + +| 文書 | 補うもの | +|---|---| +| [`s00d-chapter-order-rationale.md`](./s00d-chapter-order-rationale.md) | なぜ今の順番で学ぶのか | +| [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) | 参照 repository の高信号 module 群と教材章の対応 | +| [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) | 高完成度 system に loop 以外の control plane が必要になる理由 | +| [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) | 1 request が system 全体をどう流れるか | +| [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) | tool layer が単なる 
`tool_name -> handler` で終わらない理由 | +| [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) | message / prompt / memory がどこで合流するか | +| [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) | durable task と live runtime slot の違い | +| [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) | MCP を capability bus として見るための地図 | +| [`entity-map.md`](./entity-map.md) | entity の境界を徹底的に分ける | + +## 4 段階の学習パス + +### Stage 1: Core Single-Agent (`s01-s06`) + +ここでの目標は、 + +**まず本当に役に立つ単一 agent を作ること** + +です。 + +| 章 | 学ぶもの | 解く問題 | +|---|---|---| +| `s01` | Agent Loop | loop がなければ agent にならない | +| `s02` | Tool Use | model を「話すだけ」から「実際に動く」へ変える | +| `s03` | Todo / Planning | multi-step work が漂わないようにする | +| `s04` | Subagent | 探索作業で親 context を汚さない | +| `s05` | Skills | 必要な知識だけ後から載せる | +| `s06` | Context Compact | 会話が長くなっても主線を保つ | + +### Stage 2: Hardening (`s07-s11`) + +ここでの目標は、 + +**動くだけの agent を、安全で拡張可能な agent へ押し上げること** + +です。 + +| 章 | 学ぶもの | 解く問題 | +|---|---|---| +| `s07` | Permission System | 危険な操作を gate の後ろへ置く | +| `s08` | Hook System | loop 本体を書き換えず周辺拡張する | +| `s09` | Memory System | 本当に価値ある情報だけを session をまたいで残す | +| `s10` | System Prompt | stable rule と runtime input を組み立てる | +| `s11` | Error Recovery | 失敗後も stop 一択にしない | + +### Stage 3: Runtime Work (`s12-s14`) + +ここでの目標は、 + +**session 中の計画を durable work graph と runtime execution に分けること** + +です。 + +| 章 | 学ぶもの | 解く問題 | +|---|---|---| +| `s12` | Task System | work goal を disk 上に持つ | +| `s13` | Background Tasks | 遅い command が foreground の思考を止めないようにする | +| `s14` | Cron Scheduler | 時間そのものを trigger にする | + +### Stage 4: Platform (`s15-s19`) + +ここでの目標は、 + +**single-agent harness を協調 platform へ広げること** + +です。 + +| 章 | 学ぶもの | 解く問題 | +|---|---|---| +| `s15` | Agent Teams | persistent teammate を持つ | +| `s16` | Team Protocols | 協調を自由文から structured flow へ上げる | +| `s17` | Autonomous Agents | idle teammate が自分で次の work を取れるようにする | +| `s18` | Worktree Isolation | 並行 task が同じ directory を踏み荒らさないようにする | +| `s19` | MCP & Plugin 
| 外部 capability を統一 surface で扱う | + +## 各章が system に足す中核構造 + +読者が中盤で混乱しやすいのは、 + +- 今の章は何を増やしているのか +- その state は system のどこに属するのか + +が曖昧になるからです。 + +そこで各章を「新しく足す構造」で見直すとこうなります。 + +| 章 | 中核構造 | 学習後に言えるべきこと | +|---|---|---| +| `s01` | `LoopState` | 最小の agent loop を自分で書ける | +| `s02` | `ToolSpec` / dispatch map | model の意図を安定して実行へ落とせる | +| `s03` | `TodoItem` / `PlanState` | 現在の progress を外部 state として持てる | +| `s04` | `SubagentContext` | 親 context を汚さず委譲できる | +| `s05` | `SkillRegistry` | 必要な knowledge を必要な時だけ注入できる | +| `s06` | compaction records | 長い対話でも主線を保てる | +| `s07` | `PermissionDecision` | 実行を gate の後ろへ置ける | +| `s08` | hook events | loop を壊さず extension を追加できる | +| `s09` | memory records | 跨 session で残すべき情報を選別できる | +| `s10` | prompt parts | 入力を section 単位で組み立てられる | +| `s11` | recovery state / transition reason | なぜ続行するのかを state として説明できる | +| `s12` | `TaskRecord` | durable work graph を作れる | +| `s13` | `RuntimeTaskState` | live execution と work goal を分けて見られる | +| `s14` | `ScheduleRecord` | time-based trigger を足せる | +| `s15` | `TeamMember` | persistent actor を持てる | +| `s16` | `ProtocolEnvelope` / `RequestRecord` | structured coordination を作れる | +| `s17` | `ClaimPolicy` / autonomy state | 自治的な claim / resume を説明できる | +| `s18` | `WorktreeRecord` / `TaskBinding` | 並行 execution lane を分離できる | +| `s19` | `MCPServerConfig` / capability route | native / plugin / MCP を同じ外側境界で見られる | + +## system 全体を 3 層で見る + +全体を最も簡単に捉えるなら、次の 3 層に分けてください。 + +```text +1. Main Loop + user input を受け、model を呼び、結果に応じて続く + +2. Control / Context Layer + permission、hook、memory、prompt、recovery が loop を支える + +3. Work / Platform Layer + tasks、teams、runtime slots、worktrees、MCP が大きな作業面を作る +``` + +図で見るとこうです。 + +```text +User + | + v +messages[] + | + v ++-------------------------+ +| Agent Loop (s01) | +| 1. 入力を組み立てる | +| 2. model を呼ぶ | +| 3. stop_reason を見る | +| 4. tool を実行する | +| 5. result を write-back | +| 6. 
次 turn を決める | ++-------------------------+ + | + +------------------------------+ + | | + v v +Tool / Control Plane Context / State Layer +(s02, s07, s08, s19) (s03, s06, s09, s10, s11) + | | + v v +Tasks / Teams / Worktree / Runtime (s12-s18) +``` + +ここで大切なのは、system 全体を 1 本の巨大な file や 1 つの class として捉えないことです。 + +**chapter order とは、system をどの層の順で理解すると最も心智負荷が低いかを表したもの** + +です。 + +## この章を読み終えたら何が言えるべきか + +この章のゴールは、個々の API を覚えることではありません。 + +読み終えた時点で、少なくとも次の 3 文を自分の言葉で言える状態を目指してください。 + +1. この教材は production implementation の周辺 detail ではなく、agent harness の主設計を教えている +2. chapter order は `single agent -> hardening -> runtime work -> platform` の 4 段階で意味がある +3. 後ろの章の mechanism は前の章の上に自然に積み上がるので、順番を大きく崩すと学習心智が乱れる + +## 一文で覚える + +**良い章順とは、機能一覧ではなく、前の層から次の層が自然に育つ学習経路です。** diff --git a/docs/ja/s00a-query-control-plane.md b/docs/ja/s00a-query-control-plane.md new file mode 100644 index 000000000..f39966f2b --- /dev/null +++ b/docs/ja/s00a-query-control-plane.md @@ -0,0 +1,243 @@ +# s00a: Query Control Plane + +> これは主線章ではなく橋渡し文書です。 +> ここで答えたいのは次の問いです。 +> +> **なぜ高完成度の agent は `messages[]` と `while True` だけでは足りないのか。** + +## なぜこの文書が必要か + +`s01` では最小の loop を学びます。 + +```text +ユーザー入力 + -> +モデル応答 + -> +tool_use があれば実行 + -> +tool_result を戻す + -> +次ターン +``` + +これは正しい出発点です。 + +ただし実システムが成長すると、支えるのは loop 本体だけではなく: + +- 今どの turn か +- なぜ続行したのか +- compact を試したか +- token recovery 中か +- hook が終了条件に影響しているか + +といった **query 制御状態** です。 + +この層を明示しないと、動く demo は作れても、高完成度 harness へ育てにくくなります。 + +## まず用語を分ける + +### Query + +ここでの `query` は database query ではありません。 + +意味は: + +> 1つのユーザー要求を完了するまで続く、多ターンの処理全体 + +です。 + +### Control Plane + +`control plane` は: + +> 実際の業務動作をする層ではなく、流れをどう進めるかを管理する層 + +です。 + +ここでは: + +- model 応答や tool result は内容 +- 「次に続けるか」「なぜ続けるか」は control plane + +と考えると分かりやすいです。 + +### Transition Reason + +`transition reason` は: + +> 前のターンが終わらず、次ターンへ進んだ理由 + +です。 + +たとえば: + +- tool が終わった +- 出力が切れて続きを書く必要がある +- compact 後に再実行する +- hook が続行を要求した + +などがあります。 + +## 最小の心智モデル + +```text +1. 
入力層 + - messages + - system prompt + - runtime context + +2. 制御層 + - query state + - turn count + - transition reason + - compact / recovery flags + +3. 実行層 + - model call + - tool execution + - write-back +``` + +この層は loop を置き換えるためではありません。 + +**小さな loop を、分岐と状態を扱える system に育てるため**にあります。 + +## なぜ `messages[]` だけでは足りないか + +最小 demo では、多くのことを `messages[]` に押し込めても動きます。 + +しかし次の情報は会話内容ではなく制御状態です。 + +- reactive compact を既に試したか +- 出力続行を何回したか +- 今回の続行が tool によるものか recovery によるものか +- 今だけ output budget を変えているか + +これらを全部 `messages[]` に混ぜると、状態の境界が崩れます。 + +## 主要なデータ構造 + +### `QueryParams` + +query に入るときの外部入力です。 + +```python +params = { + "messages": [...], + "system_prompt": "...", + "user_context": {...}, + "system_context": {...}, + "tool_use_context": {...}, + "max_output_tokens_override": None, + "max_turns": None, +} +``` + +これは「入口で既に分かっているもの」です。 + +### `QueryState` + +query の途中で変わり続ける制御状態です。 + +```python +state = { + "messages": [...], + "tool_use_context": {...}, + "turn_count": 1, + "continuation_count": 0, + "has_attempted_compact": False, + "max_output_tokens_override": None, + "stop_hook_active": False, + "transition": None, +} +``` + +重要なのは: + +- 内容状態と制御状態を分ける +- どの continue site も同じ state を更新する + +ことです。 + +### `TransitionReason` + +続行理由は文字列でも enum でもよいですが、明示する方がよいです。 + +```python +TRANSITIONS = ( + "tool_result_continuation", + "max_tokens_recovery", + "compact_retry", + "stop_hook_continuation", +) +``` + +これで: + +- log +- test +- debug +- 教材説明 + +がずっと分かりやすくなります。 + +## 最小実装の流れ + +### 1. 外部入力と内部状態を分ける + +```python +def query(params): + state = { + "messages": params["messages"], + "tool_use_context": params["tool_use_context"], + "turn_count": 1, + "continuation_count": 0, + "has_attempted_compact": False, + "transition": None, + } +``` + +### 2. 各ターンで state を読んで実行する + +```python +while True: + response = call_model(...) +``` + +### 3. 続行時は必ず state に理由を書き戻す + +```python +if response.stop_reason == "tool_use": + state["messages"] = append_tool_results(...) 
+ state["transition"] = "tool_result_continuation" + state["turn_count"] += 1 + continue +``` + +大事なのは: + +**ただ `continue` するのではなく、なぜ `continue` したかを状態に残すこと** + +です。 + +## 初学者が混ぜやすいもの + +### 1. 会話内容と制御状態 + +- `messages` は内容 +- `turn_count` や `transition` は制御 + +### 2. Loop と Control Plane + +- loop は反復の骨格 +- control plane はその反復を管理する層 + +### 3. Prompt assembly と query state + +- prompt assembly は「このターンに model へ何を渡すか」 +- query state は「この query が今どういう状態か」 + +## 一文で覚える + +**高完成度の agent では、会話内容を持つ層と、続行理由を持つ層を分けた瞬間に system の見通しが良くなります。** diff --git a/docs/ja/s00b-one-request-lifecycle.md b/docs/ja/s00b-one-request-lifecycle.md new file mode 100644 index 000000000..aab6b4a57 --- /dev/null +++ b/docs/ja/s00b-one-request-lifecycle.md @@ -0,0 +1,263 @@ +# s00b: 1 リクエストのライフサイクル + +> これは橋渡し文書です。 +> 章ごとの説明を、1本の実行の流れとしてつなぎ直します。 +> +> 問いたいのは次です。 +> +> **ユーザーの一言が system に入ってから、どう流れ、どこで状態が変わり、どう loop に戻るのか。** + +## なぜ必要か + +章を順に読むと、個別の仕組みは理解できます。 + +- `s01`: loop +- `s02`: tools +- `s07`: permissions +- `s09`: memory +- `s12-s19`: tasks / teams / worktree / MCP + +しかし実装段階では、次の疑問で詰まりやすいです。 + +- 先に走るのは prompt か memory か +- tool 実行前に permissions と hooks はどこへ入るのか +- task、runtime task、teammate、worktree はどの段で関わるのか + +この文書はその縦の流れをまとめます。 + +## まず全体図 + +```text +ユーザー要求 + | + v +Query State 初期化 + | + v +system prompt / messages / reminders を組み立てる + | + v +モデル呼び出し + | + +-- 普通の応答 --------------------------> 今回の request は終了 + | + +-- tool_use + | + v + Tool Router + | + +-- permission gate + +-- hook interception + +-- native tool / task / teammate / MCP + | + v + 実行結果 + | + +-- task / runtime / memory / worktree 状態を書き換える場合がある + | + v + tool_result を messages へ write-back + | + v + Query State 更新 + | + v + 次ターン +``` + +## 第 1 段: Query State を作る + +ユーザーが: + +```text +tests/test_auth.py の失敗を直して、原因も説明して +``` + +と言ったとき、最初に起きるのは shell 実行ではありません。 + +まず「今回の request の状態」が作られます。 + +```python +query_state = { + "messages": [{"role": "user", "content": user_text}], + "turn_count": 1, + "transition": 
None, + "tool_use_context": {...}, +} +``` + +ポイントは: + +**1 リクエスト = 1 API call ではなく、複数ターンにまたがる処理** + +ということです。 + +## 第 2 段: モデル入力を組み立てる + +実システムは、生の `messages` だけをそのまま送らないことが多いです。 + +組み立てる対象はたとえば: + +- system prompt blocks +- normalized messages +- memory section +- reminders +- tool list + +つまりモデルが実際に見るのは: + +```text +system prompt ++ normalized messages ++ optional memory / reminders / attachments ++ tools +``` + +ここで大事なのは: + +**system prompt は入力全体ではなく、その一部** + +だということです。 + +## 第 3 段: モデルは 2 種類の出力を返す + +### 1. 普通の回答 + +結論や説明だけを返し、今回の request が終わる場合です。 + +### 2. 動作意図 + +tool call です。 + +例: + +```text +read_file(...) +bash(...) +todo_write(...) +agent(...) +mcp__server__tool(...) +``` + +ここで system が受け取るのは単なる文章ではなく: + +> モデルが「現実の動作を起こしたい」という意図 + +です。 + +## 第 4 段: Tool Router が受け取る + +`tool_use` が出たら、次は tool control plane の責任です。 + +最低でも次を決めます。 + +1. これはどの tool か +2. どの handler / capability へ送るか +3. 実行前に permission が必要か +4. hook が割り込むか +5. どの共有状態へアクセスするか + +## 第 5 段: Permission が gate をかける + +危険な動作は、そのまま実行されるべきではありません。 + +たとえば: + +- file write +- bash +- 外部 service 呼び出し +- worktree の削除 + +ここで system は: + +```text +deny + -> mode + -> allow + -> ask +``` + +のような判断経路を持ちます。 + +permission が扱うのは: + +> この動作を起こしてよいか + +です。 + +## 第 6 段: Hook が周辺ロジックを足す + +hook は permission とは別です。 + +hook は: + +- 実行前の補助チェック +- 実行後の記録 +- 補助メッセージの注入 + +など、loop の周辺で side effect を足します。 + +つまり: + +- permission は gate +- hook は extension + +です。 + +## 第 7 段: 実行結果が状態を変える + +tool は text だけを返すとは限りません。 + +実行によって: + +- task board が更新される +- runtime task が生成される +- memory 候補が増える +- worktree lane が作られる +- teammate へ request が飛ぶ +- MCP resource / tool result が返る + +といった状態変化が起きます。 + +ここでの大原則は: + +**tool result は内容を返すだけでなく、system state を進める** + +ということです。 + +## 第 8 段: tool_result を loop へ戻す + +最後に system は結果を `messages` へ戻します。 + +```python +messages.append({ + "role": "user", + "content": [ + {"type": "tool_result", ...} + ], +}) +``` + +そして query state を更新し: + +- `turn_count` +- `transition` +- compact / 
recovery flags + +などを整えて、次ターンへ進みます。 + +## 後半章はどこで関わるか + +| 仕組み | 1 request の中での役割 | +|---|---| +| `s09` memory | 入力 assembly の一部になる | +| `s10` prompt pipeline | 各 source を 1 つの model input へ組む | +| `s12` task | durable work goal を持つ | +| `s13` runtime task | 今動いている execution slot を持つ | +| `s15-s17` teammate / protocol / autonomy | request を actor 間で回す | +| `s18` worktree | 実行ディレクトリを分離する | +| `s19` MCP | 外部 capability provider と接続する | + +## 一文で覚える + +**1 request の本体は「モデルを 1 回呼ぶこと」ではなく、「入力を組み、動作を実行し、結果を state に戻し、必要なら次ターンへ続けること」です。** diff --git a/docs/ja/s00c-query-transition-model.md b/docs/ja/s00c-query-transition-model.md new file mode 100644 index 000000000..71a4c7dd2 --- /dev/null +++ b/docs/ja/s00c-query-transition-model.md @@ -0,0 +1,264 @@ +# s00c: Query Transition Model + +> この bridge doc は次の一点を解くためのものです。 +> +> **高完成度の agent では、なぜ query が次の turn へ続くのかを明示しなければならないのか。** + +## なぜこの資料が必要か + +主線では次を順に学びます。 + +- `s01`: 最小 loop +- `s06`: context compact +- `s11`: error recovery + +流れ自体は正しいです。 + +ただし、章ごとに別々に読むと多くの読者は次のように理解しがちです。 + +> 「とにかく `continue` したから次へ進む」 + +これは toy demo なら動きます。 + +しかし高完成度システムではすぐに破綻します。 + +なぜなら query が継続する理由は複数あり、それぞれ本質が違うからです。 + +- tool が終わり、その結果を model に戻す +- 出力が token 上限で切れて続きが必要 +- compact 後に再試行する +- transport error の後で backoff して再試行する +- stop hook がまだ終わるなと指示する +- budget policy がまだ継続を許している + +これら全部を曖昧な `continue` に潰すと、すぐに次が悪化します。 + +- log が読みにくくなる +- test が書きにくくなる +- 学習者の心智モデルが濁る + +## まず用語 + +### transition とは + +ここでの `transition` は: + +> 前の turn が次の turn へ移った理由 + +を指します。 + +message 内容そのものではなく、制御上の原因です。 + +### continuation とは + +continuation は: + +> この query がまだ終わっておらず、先へ進むべき状態 + +のことです。 + +ただし continuation は一種類ではありません。 + +### query boundary とは + +query boundary は turn と次の turn の境目です。 + +この境界を越えるたびに、システムは次を知っているべきです。 + +- なぜ続くのか +- 続く前にどの state を変えたのか +- 次の turn がその変更をどう解釈するのか + +## 最小の心智モデル + +query を一本の直線だと思わないでください。 + +より実像に近い理解は次です。 + +```text +1 本の query + = 明示された continuation reason を持つ + state transition の連鎖 +``` + +例えば: + 
+```text +user input + -> +model emits tool_use + -> +tool finishes + -> +tool_result_continuation + -> +model output is truncated + -> +max_tokens_recovery + -> +compact_retry + -> +final completion +``` + +重要なのは: + +> システムは while loop を漫然と回しているのではなく、 +> 明示された transition reason の列で進んでいる + +ということです。 + +## 主要 record + +### 1. query state の `transition` + +教材版でも次のような field は明示しておくべきです。 + +```python +state = { + "messages": [...], + "turn_count": 3, + "continuation_count": 1, + "has_attempted_compact": False, + "transition": None, +} +``` + +この field は飾りではありません。 + +これによって: + +- この turn がなぜ存在するか +- log がどう説明すべきか +- test がどの path を assert すべきか + +が明確になります。 + +### 2. `TransitionReason` + +教材版の最小集合は次の程度で十分です。 + +```python +TRANSITIONS = ( + "tool_result_continuation", + "max_tokens_recovery", + "compact_retry", + "transport_retry", + "stop_hook_continuation", + "budget_continuation", +) +``` + +これらは同じではありません。 + +- `tool_result_continuation` + は通常の主線継続 +- `max_tokens_recovery` + は切れた出力の回復継続 +- `compact_retry` + は context 再構成後の継続 +- `transport_retry` + は基盤失敗後の再試行継続 +- `stop_hook_continuation` + は外部制御による継続 +- `budget_continuation` + は budget policy による継続 + +### 3. continuation budget + +高完成度システムは単に続行するだけではなく、続行回数を制御します。 + +```python +state = { + "max_output_tokens_recovery_count": 2, + "has_attempted_reactive_compact": True, +} +``` + +本質は: + +> continuation は無限の抜け道ではなく、制御された資源 + +という点です。 + +## 最小実装の進め方 + +### Step 1: continue site を明示する + +初心者の loop はよくこうなります。 + +```python +continue +``` + +教材版は一歩進めます。 + +```python +state["transition"] = "tool_result_continuation" +continue +``` + +### Step 2: continuation と state patch を対にする + +```python +if response.stop_reason == "tool_use": + state["messages"] = append_tool_results(...) 
+ state["turn_count"] += 1 + state["transition"] = "tool_result_continuation" + continue + +if response.stop_reason == "max_tokens": + state["messages"].append({ + "role": "user", + "content": CONTINUE_MESSAGE, + }) + state["max_output_tokens_recovery_count"] += 1 + state["transition"] = "max_tokens_recovery" + continue +``` + +大事なのは「1 行増えた」ことではありません。 + +大事なのは: + +> 続行する前に、理由と state mutation を必ず知っている + +ことです。 + +### Step 3: 通常継続と recovery 継続を分ける + +```python +if should_retry_transport(error): + time.sleep(backoff(...)) + state["transition"] = "transport_retry" + continue + +if should_recompact(error): + state["messages"] = compact_messages(state["messages"]) + state["transition"] = "compact_retry" + continue +``` + +ここまで来ると `continue` は曖昧な動作ではなく、型付きの control transition になります。 + +## 何を test すべきか + +教材 repo では少なくとも次を test しやすくしておくべきです。 + +- tool result が `tool_result_continuation` を書く +- truncated output が `max_tokens_recovery` を書く +- compact retry が古い reason を黙って使い回さない +- transport retry が通常 turn に見えない + +これが test しづらいなら、まだ model が暗黙的すぎます。 + +## 何を教えすぎないか + +vendor 固有の transport detail や細かすぎる enum を全部教える必要はありません。 + +教材 repo で本当に必要なのは次です。 + +> 1 本の query は明示された transition の連鎖であり、 +> 各 transition は reason・state patch・budget rule を持つ + +ここが分かれば、開発者は高完成度 agent を 0 から組み直せます。 diff --git a/docs/ja/s00d-chapter-order-rationale.md b/docs/ja/s00d-chapter-order-rationale.md new file mode 100644 index 000000000..51c727156 --- /dev/null +++ b/docs/ja/s00d-chapter-order-rationale.md @@ -0,0 +1,325 @@ +# s00d: Chapter Order Rationale + +> この資料は 1 つの仕組みを説明するためのものではありません。 +> もっと基礎的な問いに答えるための資料です: +> +> **なぜこの教材は今の順序で教えるのか。なぜ source file の並びや機能の派手さ、実装難度の順ではないのか。** + +## 先に結論 + +現在の `s01 -> s19` の順序は妥当です。 + +この順序の価値は、単に章数が多いことではなく、学習者が理解すべき依存順でシステムを育てていることです。 + +1. 最小の agent loop を作る +2. その loop の周囲に control plane と hardening を足す +3. session 内 planning を durable work と runtime state へ広げる +4. 
その後で teammate、isolation lane、external capability へ広げる + +つまりこの教材は: + +**mechanism の依存順** + +で構成されています。 + +## 4 本の依存線 + +この教材は大きく 4 本の依存線で並んでいます。 + +1. `core loop dependency` +2. `control-plane dependency` +3. `work-state dependency` +4. `platform-boundary dependency` + +雑に言うと: + +```text +まず agent を動かす + -> 次に安全に動かす + -> 次に長く動かす + -> 最後に platform として動かす +``` + +これが今の順序の核心です。 + +## 全体の並び + +```text +s01-s06 + 単一 agent の最小主線を作る + +s07-s11 + control plane と hardening を足す + +s12-s14 + durable work と runtime を作る + +s15-s19 + teammate・protocol・autonomy・worktree・external capability を足す +``` + +各段の終わりで、学習者は次のように言えるべきです。 + +- `s06` の後: 「動く単一 agent harness を自力で作れる」 +- `s11` の後: 「それをより安全に、安定して、拡張しやすくできる」 +- `s14` の後: 「durable task、background runtime、time trigger を整理して説明できる」 +- `s19` の後: 「高完成度 agent platform の外周境界が見えている」 + +## なぜ前半は今の順序で固定すべきか + +### `s01` は必ず最初 + +ここで定義されるのは: + +- 最小の入口 +- turn ごとの進み方 +- tool result がなぜ次の model call に戻るのか + +これがないと、後ろの章はすべて空中に浮いた feature 説明になります。 + +### `s02` は `s01` の直後でよい + +tool がない agent は、まだ「話しているだけ」で「作業している」状態ではありません。 + +`s02` で初めて: + +- model が `tool_use` を出す +- system が handler を選ぶ +- tool が実行される +- `tool_result` が loop に戻る + +という、harness の実在感が出ます。 + +### `s03` は `s04` より前であるべき + +教育上ここは重要です。 + +先に教えるべきなのは: + +- 現在の agent が自分の仕事をどう整理するか + +その後に教えるべきなのが: + +- どの仕事を subagent へ切り出すべきか + +`s04` を早くしすぎると、subagent が isolation mechanism ではなく逃げ道に見えてしまいます。 + +### `s05` は `s06` の前で正しい + +この 2 章は同じ問題の前半と後半です。 + +- `s05`: そもそも不要な知識を context へ入れすぎない +- `s06`: それでも残る context をどう compact するか + +先に膨張を減らし、その後で必要なものだけ compact する。 +この順序はとても自然です。 + +## なぜ `s07-s11` は 1 つの hardening block なのか + +この 5 章は別々に見えて、実は同じ問いに答えています: + +**loop はもう動く。では、それをどう安定した本当の system にするか。** + +### `s07` は `s08` より前で正しい + +先に必要なのは: + +- その action を実行してよいか +- deny するか +- user に ask するか + +という gate の考え方です。 + +その後で: + +- loop の周囲に何を hook するか + +を教える方が自然です。 + +つまり: + +**gate が先、extend が後** + +です。 + +### `s09` は `s10` より前で正しい + +`s09` は: + +- durable information が何か +- 何を long-term 
に残すべきか + +を教えます。 + +`s10` は: + +- 複数の入力源をどう model input に組み立てるか + +を教えます。 + +つまり: + +- memory は content source を定義する +- prompt assembly は source たちの組み立て順を定義する + +逆にすると、prompt pipeline が不自然で謎の文字列操作に見えやすくなります。 + +### `s11` はこの block の締めとして適切 + +error recovery は独立した機能ではありません。 + +ここで system は初めて: + +- なぜ continue するのか +- なぜ retry するのか +- なぜ stop するのか + +を明示する必要があります。 + +そのためには、input path、tool path、state path、control path が先に見えている必要があります。 + +## なぜ `s12-s14` は goal -> runtime -> schedule の順なのか + +ここは順番を崩すと一気に混乱します。 + +### `s12` は `s13` より先 + +`s12` は: + +- 仕事そのものが何か +- dependency がどう張られるか +- downstream work がいつ unlock されるか + +を教えます。 + +`s13` は: + +- 今まさに何が live execution として動いているか +- background result がどこへ戻るか +- runtime state がどう write-back されるか + +を教えます。 + +つまり: + +- `task` は durable goal +- `runtime task` は live execution slot + +です。 + +ここを逆にすると、この 2 つが一語の task に潰れてしまいます。 + +### `s14` は `s13` の後であるべき + +cron は別種の task を増やす章ではありません。 + +追加するのは: + +**time という start condition** + +です。 + +だから自然な順序は: + +`durable task graph -> runtime slot -> schedule trigger` + +になります。 + +## なぜ `s15-s19` は team -> protocol -> autonomy -> worktree -> capability bus なのか + +### `s15` で system 内に誰が持続するかを定義する + +protocol や autonomy より前に必要なのは durable actor です。 + +- teammate は誰か +- どんな identity を持つか +- どう持続するか + +### `s16` で actor 間の coordination rule を定義する + +protocol は actor より先には来ません。 + +protocol は次を構造化するために存在します。 + +- 誰が request するか +- 誰が approve するか +- 誰が respond するか +- どう trace するか + +### `s17` はその後で初めて明確になる + +autonomy は曖昧に説明しやすい概念です。 + +しかし本当に必要なのは: + +- persistent teammate がすでに存在する +- structured coordination がすでに存在する + +という前提です。 + +そうでないと autonomous claim は魔法っぽく見えてしまいます。 + +### `s18` は `s19` より前がよい + +worktree isolation は local execution boundary の問題です。 + +- 並列作業がどこで走るか +- lane 同士をどう隔離するか + +これを先に見せてから: + +- plugin +- MCP server +- external capability route + +へ進む方が、自作実装の足場が崩れません。 + +### `s19` は最後で正しい + +ここは platform の最外周です。 + +local の: + +- actor +- lane +- durable task +- runtime 
execution + +が見えた後で、ようやく: + +- external capability provider + +がきれいに入ってきます。 + +## コースを悪くする 5 つの誤った並べ替え + +1. `s04` を `s03` より前に動かす + local planning より先に delegation を教えてしまう。 + +2. `s10` を `s09` より前に動かす + input source の理解なしに prompt assembly を教えることになる。 + +3. `s13` を `s12` より前に動かす + durable goal と live runtime slot が混ざる。 + +4. `s17` を `s15` や `s16` より前に動かす + autonomy が曖昧な polling magic に見える。 + +5. `s19` を `s18` より前に動かす + local platform boundary より external capability が目立ってしまう。 + +## Maintainer が順序変更前に確認すべきこと + +章を動かす前に次を確認するとよいです。 + +1. 前提概念はすでに前で説明されているか +2. この変更で別の層の概念同士が混ざらないか +3. この章が主に追加するのは goal か、runtime state か、actor か、capability boundary か +4. これを早めても、学習者は最小正解版をまだ自力で作れるか +5. これは開発者理解のための変更か、それとも source file の順を真似ているだけか + +5 番目が後者なら、たいてい変更しない方がよいです。 + +## 一文で残すなら + +**良い章順とは、mechanism の一覧ではなく、各章が前章から自然に伸びた次の層として見える並びです。** diff --git a/docs/ja/s00e-reference-module-map.md b/docs/ja/s00e-reference-module-map.md new file mode 100644 index 000000000..1da5d6f70 --- /dev/null +++ b/docs/ja/s00e-reference-module-map.md @@ -0,0 +1,213 @@ +# s00e: 参照リポジトリのモジュール対応表 + +> これは保守者と本気で学ぶ読者向けの校正文書です。 +> 逆向きソースを逐行で読ませるための資料ではありません。 +> +> ここで答えたいのは、次の一点です。 +> +> **参照リポジトリの高信号なモジュール群と現在の教材の章順を突き合わせると、今のカリキュラム順は本当に妥当なのか。** + +## 結論 + +妥当です。 + +現在の `s01 -> s19` の順序は大筋で正しく、単純に「ソースツリーの並び順」に合わせるより、実際の設計の主線に近いです。 + +理由は単純です。 + +- 参照リポジトリには表層のディレクトリがたくさんある +- しかし本当に設計の重みを持つのは、制御・状態・タスク・チーム・worktree・外部 capability に関する一部のクラスタ +- それらは現在の 4 段階の教材構成ときれいに対応している + +したがって、すべきことは「教材をソースツリーの並び順へ潰す」ことではありません。 + +すべきことは: + +- 今の依存関係ベースの順序を維持する +- 参照リポジトリとの対応を明文化する +- 主線に不要な製品周辺の細部を入れ過ぎない + +## この比較で見た高信号クラスタ + +主に次のようなモジュール群を見ています。 + +- `Tool.ts` +- `state/AppStateStore.ts` +- `coordinator/coordinatorMode.ts` +- `memdir/*` +- `services/SessionMemory/*` +- `services/toolUseSummary/*` +- `constants/prompts.ts` +- `tasks/*` +- `tools/TodoWriteTool/*` +- `tools/AgentTool/*` +- `tools/ScheduleCronTool/*` +- `tools/EnterWorktreeTool/*` +- `tools/ExitWorktreeTool/*` +- `tools/MCPTool/*` +- `services/mcp/*` +- 
`plugins/*` +- `hooks/toolPermission/*` + +これだけで、設計の主線との整合性は十分に判断できます。 + +## 対応関係 + +| 参照リポジトリのクラスタ | 典型例 | 対応する教材章 | この配置が妥当な理由 | +|---|---|---|---| +| Query ループと制御状態 | `Tool.ts`、`AppStateStore.ts`、query / coordinator 状態 | `s00`、`s00a`、`s00b`、`s01`、`s11` | 実システムは `messages[] + while True` だけではない。教材が最小ループから始め、後で control plane を補う流れは正しい。 | +| Tool routing と実行面 | `Tool.ts`、native tools、tool context、実行 helper | `s02`、`s02a`、`s02b` | 参照実装は tools を共有 execution plane として扱っている。教材の分け方は妥当。 | +| セッション計画 | `TodoWriteTool` | `s03` | セッション内の進行整理は小さいが重要な層で、持続タスクより先に学ぶべき。 | +| 一回きりの委譲 | `AgentTool` の最小部分 | `s04` | 参照実装の agent machinery は大きいが、教材がまず「新しい文脈 + サブタスク + 要約返却」を教えるのは正しい。 | +| Skill の発見と読み込み | `DiscoverSkillsTool`、`skills/*`、関連 prompt | `s05` | skills は飾りではなく知識注入層なので、prompt の複雑化より前に置くのが自然。 | +| Context 圧縮と collapse | `services/toolUseSummary/*`、`services/contextCollapse/*` | `s06` | 参照実装に明示的な compact 層がある以上、これを早めに学ぶ構成は正しい。 | +| Permission gate | `types/permissions.ts`、`hooks/toolPermission/*` | `s07` | 実行可否は独立した gate であり、単なる hook ではない。 | +| Hooks と周辺拡張 | `types/hooks.ts`、hook runner | `s08` | 参照実装でも gate と extend は分かれている。順序は現状のままでよい。 | +| Durable memory | `memdir/*`、`services/SessionMemory/*` | `s09` | memory は「何でも残すノート」ではなく、session をまたぐ選択的な層として扱われている。 | +| Prompt 組み立て | `constants/prompts.ts`、prompt sections | `s10`、`s10a` | 入力は複数 source の合成物であり、教材が pipeline として説明するのは正しい。 | +| Recovery / continuation | query transition、retry、compact retry、token recovery | `s11`、`s00c` | 続行理由は実システムで明示的に存在するため、前段の層を理解した後に学ぶのが自然。 | +| Durable work graph | task record、dependency unlock | `s12` | 会話内の plan と durable work graph を分けている点が妥当。 | +| Live runtime task | `tasks/types.ts`、`LocalShellTask`、`LocalAgentTask`、`RemoteAgentTask` | `s13`、`s13a` | 参照実装の runtime task union は、`TaskRecord` と `RuntimeTaskState` を分けるべき強い根拠になる。 | +| Scheduled trigger | `ScheduleCronTool/*`、`useScheduledTasks` | `s14` | scheduling は runtime work の上に乗る開始条件なので、この順序でよい。 | +| Persistent teammate | `InProcessTeammateTask`、team 
tools、agent registry | `s15` | 一回限りの subagent から durable actor へ広がる流れが参照実装にある。 | +| Structured protocol | send-message、request tracking、coordinator mode | `s16` | protocol は actor が先に存在して初めて意味を持つ。 | +| Autonomous claim / resume | task claiming、async worker lifecycle、resume logic | `s17` | autonomy は actor と task と protocol の上に成り立つ。 | +| Worktree lane | `EnterWorktreeTool`、`ExitWorktreeTool`、worktree helper | `s18` | worktree は単なる git 小技ではなく、実行レーンと closeout 状態の仕組み。 | +| External capability bus | `MCPTool`、`services/mcp/*`、`plugins/*` | `s19`、`s19a` | 参照実装でも MCP / plugin は外側の platform boundary にある。最後に置くのが正しい。 | + +## 特に強く裏付けられた 5 点 + +### 1. `s03` は `s12` より前でよい + +参照実装には: + +- セッション内の小さな計画 +- 持続する task / runtime machinery + +の両方があります。 + +これは同じものではありません。 + +### 2. `s09` は `s10` より前でよい + +prompt assembly は memory を含む複数 source を組み立てます。 + +したがって: + +- 先に memory という source を理解する +- その後で prompt pipeline を理解する + +の順が自然です。 + +### 3. `s12` は `s13` より前でなければならない + +`tasks/types.ts` に見える runtime task union は非常に重要です。 + +これは: + +- durable な仕事目標 +- 今まさに動いている実行スロット + +が別物であることをはっきり示しています。 + +### 4. `s15 -> s16 -> s17` の順は妥当 + +参照実装でも: + +- actor +- protocol +- autonomy + +の順で積み上がっています。 + +### 5. `s18` は `s19` より前でよい + +worktree はまずローカルな実行境界として理解されるべきです。 + +そのあとで: + +- plugin +- MCP server +- 外部 capability provider + +へ広げる方が、心智がねじれません。 + +## 教材主線に入れ過ぎない方がよいもの + +参照リポジトリに実在していても、主線へ入れ過ぎるべきではないものがあります。 + +- CLI command 面の広がり +- UI rendering の細部 +- telemetry / analytics 分岐 +- remote / enterprise の配線 +- compatibility layer +- ファイル名や行番号レベルの trivia + +これらは本番では意味があります。 + +ただし 0 から 1 の教材主線の中心ではありません。 + +## 教材側が特に注意すべき点 + +### 1. Subagent と Teammate を混ぜない + +参照実装の `AgentTool` は: + +- 一回きりの委譲 +- background worker +- persistent teammate +- worktree-isolated worker + +をまたいでいます。 + +だからこそ教材では: + +- `s04` +- `s15` +- `s17` +- `s18` + +に分けて段階的に教える方がよいです。 + +### 2. 
Worktree を「git の小技」へ縮めない + +参照実装には keep / remove、resume、cleanup、dirty check があります。 + +`s18` は今後も: + +- lane identity +- task binding +- closeout +- cleanup + +を教える章として保つべきです。 + +### 3. MCP を「外部 tool 一覧」へ縮めない + +参照実装には tools 以外にも: + +- resources +- prompts +- elicitation / connection state +- plugin mediation + +があります。 + +したがって `s19` は tools-first で入ってよいですが、capability bus という外側の境界も説明すべきです。 + +## 最終判断 + +参照リポジトリの高信号クラスタと照らす限り、現在の章順は妥当です。 + +今後の大きな加点ポイントは、さらに大規模な並べ替えではなく: + +- bridge docs の充実 +- エンティティ境界の明確化 +- 多言語の整合 +- web 側での学習導線の明快さ + +にあります。 + +## 一文で覚える + +**よい教材順は、ファイルが並んでいる順ではなく、学習者が依存関係に沿って実装を再構成できる順です。** diff --git a/docs/ja/s00f-code-reading-order.md b/docs/ja/s00f-code-reading-order.md new file mode 100644 index 000000000..f7b2b92fc --- /dev/null +++ b/docs/ja/s00f-code-reading-order.md @@ -0,0 +1,134 @@ +# s00f: このリポジトリのコード読解順 + +> このページは「もっと多くコードを読め」という話ではありません。 +> もっと狭い問題を解決します。 +> +> **章順が安定したあと、このリポジトリのコードをどんな順で読めば心智モデルを崩さずに理解できるのか。** + +## 先に結論 + +次の読み方は避けます。 + +- いちばん長いファイルから読む +- いちばん高度そうな章へ飛ぶ +- 先に `web/` を開いて主線を逆算する +- `agents/*.py` 全体を 1 つの平坦なソース群として眺める + +安定したルールは 1 つです。 + +**コードもカリキュラムと同じ順番で読む。** + +各章ファイルの中では、毎回同じ順で読みます。 + +1. 状態構造 +2. tool 定義や registry +3. 1 ターンを進める関数 +4. CLI 入口は最後 + +## なぜこのページが必要か + +読者が詰まるのは文章だけではありません。実際にコードを開いた瞬間に、間違った場所から読み始めてまた混ざることが多いからです。 + +## どの agent ファイルでも同じテンプレートで読む + +### 1. まずファイル先頭 + +最初に答えること: + +- この章は何を教えているか +- まだ何を故意に教えていないか + +### 2. 状態構造や manager class + +優先して探すもの: + +- `LoopState` +- `PlanningState` +- `CompactState` +- `TaskManager` +- `BackgroundManager` +- `TeammateManager` +- `WorktreeManager` + +### 3. tool 一覧や registry + +優先して見る入口: + +- `TOOLS` +- `TOOL_HANDLERS` +- `build_tool_pool()` +- 主要な `run_*` + +### 4. ターンを進める関数 + +たとえば: + +- `run_one_turn(...)` +- `agent_loop(...)` +- 章固有の `handle_*` + +### 5. 
CLI 入口は最後 + +`if __name__ == "__main__"` は大事ですが、最初に見る場所ではありません。 + +## Stage 1: `s01-s06` + +この段階は single-agent の背骨です。 + +| 章 | ファイル | 先に見るもの | 次に見るもの | 次へ進む前に確認すること | +|---|---|---|---|---| +| `s01` | `agents/s01_agent_loop.py` | `LoopState` | `TOOLS` -> `run_one_turn()` -> `agent_loop()` | `messages -> model -> tool_result -> next turn` を追える | +| `s02` | `agents/s02_tool_use.py` | `safe_path()` | handler 群 -> `TOOL_HANDLERS` -> `agent_loop()` | ループを変えずに tool が増える形が分かる | +| `s03` | `agents/s03_todo_write.py` | planning state | todo 更新経路 -> `agent_loop()` | 会話内 plan の外化が分かる | +| `s04` | `agents/s04_subagent.py` | `AgentTemplate` | `run_subagent()` -> 親 `agent_loop()` | 文脈隔離としての subagent が分かる | +| `s05` | `agents/s05_skill_loading.py` | skill registry | registry 周り -> `agent_loop()` | discover light / load deep が分かる | +| `s06` | `agents/s06_context_compact.py` | `CompactState` | compact 周辺 -> `agent_loop()` | compact の本質が分かる | + +## Stage 2: `s07-s11` + +ここは control plane を固める段階です。 + +| 章 | ファイル | 先に見るもの | 次に見るもの | 次へ進む前に確認すること | +|---|---|---|---|---| +| `s07` | `agents/s07_permission_system.py` | validator / manager | permission path -> `agent_loop()` | gate before execute | +| `s08` | `agents/s08_hook_system.py` | `HookManager` | hook dispatch -> `agent_loop()` | 固定拡張点としての hook | +| `s09` | `agents/s09_memory_system.py` | memory manager | save / prompt build -> `agent_loop()` | 長期情報層としての memory | +| `s10` | `agents/s10_system_prompt.py` | `SystemPromptBuilder` | input build -> `agent_loop()` | pipeline としての prompt | +| `s11` | `agents/s11_error_recovery.py` | compact / backoff helper | recovery 分岐 -> `agent_loop()` | 失敗後の続行 | + +## Stage 3: `s12-s14` + +ここから harness は work runtime へ広がります。 + +| 章 | ファイル | 先に見るもの | 次に見るもの | 次へ進む前に確認すること | +|---|---|---|---|---| +| `s12` | `agents/s12_task_system.py` | `TaskManager` | task create / unlock -> `agent_loop()` | durable goal | +| `s13` | `agents/s13_background_tasks.py` | `NotificationQueue` / `BackgroundManager` | 
background registration -> `agent_loop()` | runtime slot | +| `s14` | `agents/s14_cron_scheduler.py` | `CronLock` / `CronScheduler` | trigger path -> `agent_loop()` | 未来の開始条件 | + +## Stage 4: `s15-s19` + +ここは platform 境界を作る段階です。 + +| 章 | ファイル | 先に見るもの | 次に見るもの | 次へ進む前に確認すること | +|---|---|---|---|---| +| `s15` | `agents/s15_agent_teams.py` | `MessageBus` / `TeammateManager` | roster / inbox / loop -> `agent_loop()` | persistent teammate | +| `s16` | `agents/s16_team_protocols.py` | `RequestStore` | request handler -> `agent_loop()` | request-response + `request_id` | +| `s17` | `agents/s17_autonomous_agents.py` | claim helper / identity helper | claim -> resume -> `agent_loop()` | idle check -> safe claim -> resume | +| `s18` | `agents/s18_worktree_task_isolation.py` | manager 群 | worktree lifecycle -> `agent_loop()` | goal と execution lane の分離 | +| `s19` | `agents/s19_mcp_plugin.py` | capability 周辺 class | route / normalize -> `agent_loop()` | external capability が同じ control plane に戻ること | + +## 最良の「文書 + コード」学習ループ + +各章で次を繰り返します。 + +1. 章本文を読む +2. bridge doc を読む +3. 対応する `agents/sXX_*.py` を開く +4. 状態 -> tools -> turn driver -> CLI 入口 の順で読む +5. demo を 1 回動かす +6. 
最小版を自分で書き直す + +## 一言で言うと + +**コード読解順も教学順に従うべきです。まず境界、その次に状態、最後に主ループをどう進めるかを見ます。** diff --git a/docs/ja/s01-the-agent-loop.md b/docs/ja/s01-the-agent-loop.md index ddb54b973..ef7a3fe93 100644 --- a/docs/ja/s01-the-agent-loop.md +++ b/docs/ja/s01-the-agent-loop.md @@ -1,56 +1,229 @@ # s01: The Agent Loop -`[ s01 ] s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > [ s01 ] > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"One loop & Bash is all you need"* -- 1つのツール + 1つのループ = エージェント。 -> -> **Harness 層**: ループ -- モデルと現実世界を繋ぐ最初の接点。 +> *loop がなければ agent は生まれません。* +> この章では、最小だけれど正しい loop を先に作り、そのあとで「なぜ後ろの章で control plane が必要になるのか」を理解できる土台を作ります。 -## 問題 +## この章が解く問題 -言語モデルはコードについて推論できるが、現実世界に触れられない。ファイルを読めず、テストを実行できず、エラーを確認できない。ループがなければ、ツール呼び出しのたびにユーザーが手動で結果をコピーペーストする必要がある。つまりユーザー自身がループになる。 +言語 model 自体は「次にどんな文字列を出すか」を予測する存在です。 -## 解決策 +それだけでは自分で次のことはできません。 +- file を開く +- command を実行する +- error を観察する +- その観察結果を次の判断へつなぐ + +もし system 側に次の流れを繰り返す code がなければ、 + +```text +model に聞く + -> +tool を使いたいと言う + -> +本当に実行する + -> +結果を model へ戻す + -> +次の一手を考えさせる +``` + +model は「会話できる program」に留まり、「仕事を進める agent」にはなりません。 + +だからこの章の目標は 1 つです。 + +**model と tool を閉ループに接続し、仕事を継続的に前へ進める最小 agent を作ること** + +です。 + +## 先に言葉をそろえる + +### loop とは何か + +ここでの `loop` は「無意味な無限ループ」ではありません。 + +意味は、 + +> 仕事がまだ終わっていない限り、同じ処理手順を繰り返す主循環 + +です。 + +### turn とは何か + +`turn` は 1 ラウンドです。 + +最小版では 1 turn にだいたい次が入ります。 + +1. 現在の messages を model に送る +2. model response を受け取る +3. tool_use があれば tool を実行する +4. 
tool_result を messages に戻す + +そのあとで次の turn へ進むか、終了するかが決まります。 + +### tool_result とは何か + +`tool_result` は terminal 上の一時ログではありません。 + +正しくは、 + +> model が次の turn で読めるよう、message history へ書き戻される結果 block + +です。 + +### state とは何か + +`state` は、その loop が前へ進むために持ち続ける情報です。 + +この章の最小 state は次です。 + +- `messages` +- `turn_count` +- 次 turn に続く理由 + +## 最小心智モデル + +まず agent 全体を次の回路として見てください。 + +```text +user message + | + v +LLM + | + +-- 普通の返答 ----------> 終了 + | + +-- tool_use ----------> tool 実行 + | + v + tool_result + | + v + messages へ write-back + | + v + 次の turn +``` + +この図の中で一番重要なのは `while True` という文法ではありません。 + +最も重要なのは次の 1 文です。 + +**tool の結果は message history に戻され、次の推論入力になる** + +ここが欠けると、model は現実の観察を踏まえて次の一手を考えられません。 + +## この章の核になるデータ構造 + +### 1. Message + +最小教材版では、message はまず次の形で十分です。 + +```python +{"role": "user", "content": "..."} +{"role": "assistant", "content": [...]} +``` + +ここで忘れてはいけないのは、 + +**message history は UI 表示用の chat transcript ではなく、次 turn の作業 context** + +だということです。 + +### 2. Tool Result Block + +tool 実行後は、その出力を対応する block として messages へ戻します。 + +```python +{ + "type": "tool_result", + "tool_use_id": "...", + "content": "...", +} +``` + +`tool_use_id` は単純に、 + +> どの tool 呼び出しに対応する結果か + +を model に示すための ID です。 + +### 3. LoopState + +この章では散らばった local variable だけで済ませるより、 + +> loop が持つ state を 1 か所へ寄せて見る + +癖を作る方が後で効きます。 + +最小形は次で十分です。 + +```python +state = { + "messages": [...], + "turn_count": 1, + "transition_reason": None, +} ``` -+--------+ +-------+ +---------+ -| User | ---> | LLM | ---> | Tool | -| prompt | | | | execute | -+--------+ +---+---+ +----+----+ - ^ | - | tool_result | - +----------------+ - (loop until stop_reason != "tool_use") + +ここでの `transition_reason` はまず、 + +> なぜこの turn のあとにさらに続くのか + +を示す field とだけ理解してください。 + +この章の最小版では、理由は 1 種類でも十分です。 + +```python +"tool_result" ``` -1つの終了条件がフロー全体を制御する。モデルがツール呼び出しを止めるまでループが回り続ける。 +つまり、 + +> tool を実行したので、その結果を踏まえてもう一度 model を呼ぶ + +という continuation です。 + +## 最小実装を段階で追う -## 仕組み +### 第 1 段階: 初期 message を作る -1. 
ユーザーのプロンプトが最初のメッセージになる。
+まず user request を history に入れます。

 ```python
-messages.append({"role": "user", "content": query})
+messages = [{"role": "user", "content": query}]
 ```

-2. メッセージとツール定義をLLMに送信する。
+### 第 2 段階: model を呼ぶ
+
+messages、system prompt、tools をまとめて model に送ります。

 ```python
 response = client.messages.create(
-    model=MODEL, system=SYSTEM, messages=messages,
-    tools=TOOLS, max_tokens=8000,
+    model=MODEL,
+    system=SYSTEM,
+    messages=messages,
+    tools=TOOLS,
+    max_tokens=8000,
 )
 ```

-3. アシスタントのレスポンスを追加し、`stop_reason`を確認する。ツールが呼ばれなければ終了。
+### 第 3 段階: assistant response 自体も history へ戻す

 ```python
-messages.append({"role": "assistant", "content": response.content})
-if response.stop_reason != "tool_use":
-    return
+messages.append({
+    "role": "assistant",
+    "content": response.content,
+})
 ```

-4. 各ツール呼び出しを実行し、結果を収集してuserメッセージとして追加。ステップ2に戻る。
+ここは初心者がとても落としやすい点です。
+
+「最終答えだけ取れればいい」と思って assistant response を保存しないと、次 turn の context が切れます。
+
+### 第 4 段階: tool_use があれば実行する

 ```python
 results = []
@@ -62,55 +235,125 @@ for block in response.content:
             "tool_use_id": block.id,
             "content": output,
         })
-messages.append({"role": "user", "content": results})
 ```

-1つの関数にまとめると:
+この段階で初めて、model の意図が real execution へ落ちます。
+
+### 第 5 段階: tool_result を user-side message として write-back する

 ```python
-def agent_loop(query):
-    messages = [{"role": "user", "content": query}]
+messages.append({
+    "role": "user",
+    "content": results,
+})
+```
+
+これで次 turn の model は、
+
+- さっき自分が何を要求したか
+- その結果が何だったか
+
+を両方読めます。
+
+### 全体を 1 つの loop にまとめる
+
+```python
+def agent_loop(state):
     while True:
         response = client.messages.create(
-            model=MODEL, system=SYSTEM, messages=messages,
-            tools=TOOLS, max_tokens=8000,
+            model=MODEL,
+            system=SYSTEM,
+            messages=state["messages"],
+            tools=TOOLS,
+            max_tokens=8000,
         )
-        messages.append({"role": "assistant", "content": response.content})
+
+        state["messages"].append({
+            "role": "assistant",
+            "content": response.content,
+        })

         if response.stop_reason != "tool_use":
+ state["transition_reason"] = None return results = [] for block in response.content: if block.type == "tool_use": - output = run_bash(block.input["command"]) + output = run_tool(block) results.append({ "type": "tool_result", "tool_use_id": block.id, "content": output, }) - messages.append({"role": "user", "content": results}) + + state["messages"].append({ + "role": "user", + "content": results, + }) + state["turn_count"] += 1 + state["transition_reason"] = "tool_result" ``` -これでエージェント全体が30行未満に収まる。本コースの残りはすべてこのループの上に積み重なる -- ループ自体は変わらない。 +これがこの course 全体の核です。 -## 変更点 +後ろの章で何が増えても、 -| Component | Before | After | -|---------------|------------|--------------------------------| -| Agent loop | (none) | `while True` + stop_reason | -| Tools | (none) | `bash` (one tool) | -| Messages | (none) | Accumulating list | -| Control flow | (none) | `stop_reason != "tool_use"` | +**model を呼び、tool を実行し、result を戻して、必要なら続く** -## 試してみる +という骨格自体は残ります。 -```sh -cd learn-claude-code -python agents/s01_agent_loop.py -``` +## この章でわざと単純化していること + +この章では最初から複雑な control plane を教えません。 + +まだ出していないもの: + +- permission gate +- hook +- memory +- prompt assembly pipeline +- recovery branch +- compact 後の continuation + +なぜなら初学者が最初に理解すべきなのは、 + +**agent の最小閉ループ** + +だからです。 + +もし最初から複数の continuation reason や recovery branch を混ぜると、 +読者は「loop そのもの」が見えなくなります。 + +## 高完成度 system ではどう広がるか + +教材版は最も重要な骨格だけを教えます。 + +高完成度 system では、その同じ loop の外側に次の層が足されます。 + +| 観点 | この章の最小版 | 高完成度 system | +|---|---|---| +| loop 形状 | 単純な `while True` | event-driven / streaming continuation | +| 継続理由 | `tool_result` が中心 | retry、compact resume、recovery など複数 | +| tool execution | response 全体を見てから実行 | 並列実行や先行起動を含む runtime | +| state | `messages` 中心 | turn、budget、transition、recovery を explicit に持つ | +| error handling | ほぼなし | truncation、transport error、retry branch | +| observability | 最小 | progress event、structured logs、UI stream | + +ここで覚えるべき本質は細かな branch 名ではありません。 + +本質は次の 1 文です。 + +**agent は最後まで「結果を model に戻し続ける loop」であり、周囲に 
state 管理と continuation の理由が増えていく** + +ということです。 + +## この章を読み終えたら何が言えるべきか + +1. model だけでは agent にならず、tool result を戻す loop が必要 +2. assistant response 自体も history に残さないと次 turn が切れる +3. tool_result は terminal log ではなく、次 turn の input block である + +## 一文で覚える -1. `Create a file called hello.py that prints "Hello, World!"` -2. `List all Python files in this directory` -3. `What is the current git branch?` -4. `Create a directory called test_output and write 3 files in it` +**agent loop とは、model の要求を現実の観察へ変え、その観察をまた model に返し続ける主循環です。** diff --git a/docs/ja/s02-tool-use.md b/docs/ja/s02-tool-use.md index 3c41c1d5c..98bbc277a 100644 --- a/docs/ja/s02-tool-use.md +++ b/docs/ja/s02-tool-use.md @@ -1,6 +1,6 @@ # s02: Tool Use -`s01 > [ s02 ] s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > [ s02 ] > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` > *"ツールを足すなら、ハンドラーを1つ足すだけ"* -- ループは変わらない。新ツールは dispatch map に登録するだけ。 > @@ -97,3 +97,30 @@ python agents/s02_tool_use.py 2. `Create a file called greet.py with a greet(name) function` 3. `Edit greet.py to add a docstring to the function` 4. `Read greet.py to verify the edit worked` + +## 教学上の簡略化 + +この章で本当に学ぶべきなのは、細かな production 差分ではありません。 + +学ぶべき中心は次の 4 点です。 + +1. モデルに見せる tool schema がある +2. 実装側には handler がある +3. 両者は dispatch map で結ばれる +4. 
実行結果は `tool_result` として主ループへ戻る + +より完成度の高い system では、この周りに権限、hook、並列実行、結果永続化、外部 capability routing などが増えていきます。 + +しかし、それらをここで全部追い始めると、初学者は + +- schema と handler の違い +- dispatch map の役割 +- `tool_result` がなぜ主ループへ戻るのか + +という本章の主眼を見失いやすくなります。 + +この段階では、まず + +**新しい tool を足しても主ループ自体は作り替えなくてよい** + +という設計の強さを、自分で実装して理解できれば十分です。 diff --git a/docs/ja/s02a-tool-control-plane.md b/docs/ja/s02a-tool-control-plane.md new file mode 100644 index 000000000..e4fe4fe3e --- /dev/null +++ b/docs/ja/s02a-tool-control-plane.md @@ -0,0 +1,177 @@ +# s02a: Tool Control Plane + +> これは `s02` を深く理解するための橋渡し文書です。 +> 問いたいのは: +> +> **なぜ tool system は単なる `tool_name -> handler` 表では足りないのか。** + +## 先に結論 + +最小 demo では dispatch map だけでも動きます。 + +しかし高完成度の system では tool layer は次の責任をまとめて持ちます。 + +- tool schema をモデルへ見せる +- tool 名から実行先を解決する +- 実行前に permission を通す +- hook / classifier / side check を差し込む +- 実行中 progress を扱う +- 結果を整形して loop へ戻す +- 実行で変わる共有 state へアクセスする + +つまり tool layer は: + +**関数表ではなく、共有 execution plane** + +です。 + +## 最小の心智モデル + +```text +model emits tool_use + | + v +tool spec lookup + | + v +permission / hook / validation + | + v +actual execution + | + v +tool result shaping + | + v +write-back to loop +``` + +## `dispatch map` だけでは足りない理由 + +単なる map だと、せいぜい: + +- この名前ならこの関数 + +しか表せません。 + +でも実システムで必要なのは: + +- モデルへ何を見せるか +- 実行前に何を確認するか +- 実行中に何を表示するか +- 実行後にどんな result block を返すか +- どの shared context を触れるか + +です。 + +## 主要なデータ構造 + +### `ToolSpec` + +モデルに見せる tool の定義です。 + +```python +tool = { + "name": "read_file", + "description": "...", + "input_schema": {...}, +} +``` + +### `ToolDispatchMap` + +名前から handler を引く表です。 + +```python +dispatch = { + "read_file": run_read, + "bash": run_bash, +} +``` + +これは必要ですが、これだけでは足りません。 + +### `ToolUseContext` + +tool が共有状態へ触るための文脈です。 + +たとえば: + +- app state getter / setter +- permission context +- notifications +- file-state cache +- current agent identity + +などが入ります。 + +### `ToolResultEnvelope` + +loop へ返すときの整形済み result です。 + +```python +{ + "type": 
"tool_result", + "tool_use_id": "...", + "content": "...", +} +``` + +高完成度版では content だけでなく: + +- progress +- warnings +- structured result + +なども関わります。 + +## 実行面として見ると何が変わるか + +### 1. Tool は「名前」ではなく「実行契約」になる + +1つの tool には: + +- 入力 schema +- 実行権限 +- 実行時 context +- 出力の形 + +がひとまとまりで存在します。 + +### 2. Permission と Hook の差が見えやすくなる + +- permission: 実行してよいか +- hook: 実行の周辺で何を足すか + +### 3. Native / Task / Agent / MCP を同じ平面で見やすくなる + +参照実装でも重要なのは: + +**能力の出どころが違っても、loop から見れば 1 つの tool execution plane に入る** + +という点です。 + +## 初学者がやりがちな誤り + +### 1. tool spec と handler を混同する + +- spec はモデル向け説明 +- handler は実行コード + +### 2. permission を handler の中へ埋め込む + +これをやると gate が共有層にならず、system が読みにくくなります。 + +### 3. result shaping を軽く見る + +tool 実行結果は「文字列が返ればよい」ではありません。 + +loop が読み戻しやすい形に整える必要があります。 + +### 4. 実行状態を `messages[]` だけで持とうとする + +tool 実行は app state や runtime state を触ることがあります。 + +## 一文で覚える + +**tool system が本物らしくなるのは、名前から関数を呼べた瞬間ではなく、schema・gate・context・result を含む共有 execution plane として見えた瞬間です。** diff --git a/docs/ja/s02b-tool-execution-runtime.md b/docs/ja/s02b-tool-execution-runtime.md new file mode 100644 index 000000000..b03320dbd --- /dev/null +++ b/docs/ja/s02b-tool-execution-runtime.md @@ -0,0 +1,281 @@ +# s02b: Tool Execution Runtime + +> この bridge doc は tool の登録方法ではなく、次の問いを扱います。 +> +> **model が複数の tool call を出したとき、何を基準に並列化し、進捗を出し、結果順を安定させ、context をマージするのか。** + +## なぜこの資料が必要か + +`s02` では正しく次を教えています。 + +- tool schema +- dispatch map +- `tool_result` の main loop への回流 + +出発点としては十分です。 + +ただしシステムが大きくなると、本当に難しくなるのはもっと深い層です。 + +- どの tool は並列実行できるか +- どの tool は直列でなければならないか +- 遅い tool は途中 progress を出すべきか +- 並列結果を完了順で返すのか、元の順序で返すのか +- tool 実行が共有 context を変更するのか +- 並列変更をどう安全にマージするのか + +これらはもはや「登録」の話ではありません。 + +それは: + +**tool execution runtime** + +の話です。 + +## まず用語 + +### tool execution runtime とは + +ここでの runtime は言語 runtime の意味ではありません。 + +ここでは: + +> tool call が実際に動き始めた後、システムがそれらをどう調度し、追跡し、回写するか + +という実行規則のことです。 + +### concurrency safe とは + +concurrency safe とは: + +> 同種の仕事と同時に走っても共有 state 
を壊しにくい + +という意味です。 + +よくある read-only tool は安全なことが多いです。 + +- `read_file` +- いくつかの search tool +- 読み取り専用の MCP tool + +一方で write 系は安全でないことが多いです。 + +- `write_file` +- `edit_file` +- 共有 app state を変える tool + +### progress message とは + +progress message とは: + +> tool はまだ終わっていないが、「今何をしているか」を先に上流へ見せる更新 + +のことです。 + +### context modifier とは + +ある tool は text result だけでなく共有 runtime context も変更します。 + +例えば: + +- notification queue を更新する +- 実行中 tool の状態を更新する +- app state を変更する + +この共有 state 変更を context modifier と考えられます。 + +## 最小の心智モデル + +tool 実行を次のように平坦化しないでください。 + +```text +tool_use -> handler -> result +``` + +より実像に近い理解は次です。 + +```text +tool_use blocks + -> +concurrency safety で partition + -> +並列 lane か直列 lane を選ぶ + -> +必要なら progress を吐く + -> +安定順で結果を回写する + -> +queued context modifiers をマージする +``` + +ここで大事なのは二つです。 + +- 並列化は「全部まとめて走らせる」ではない +- 共有 context は完了順で勝手に書き換えない + +## 主要 record + +### 1. `ToolExecutionBatch` + +教材版なら次の程度の batch 概念で十分です。 + +```python +batch = { + "is_concurrency_safe": True, + "blocks": [tool_use_1, tool_use_2, tool_use_3], +} +``` + +意味は単純です。 + +- tool を常に 1 個ずつ扱うわけではない +- runtime はまず execution batch に分ける + +### 2. `TrackedTool` + +完成度を上げたいなら各 tool を明示的に追跡します。 + +```python +tracked_tool = { + "id": "toolu_01", + "name": "read_file", + "status": "queued", # queued / executing / completed / yielded + "is_concurrency_safe": True, + "pending_progress": [], + "results": [], + "context_modifiers": [], +} +``` + +これにより runtime は次に答えられます。 + +- 何が待機中か +- 何が実行中か +- 何が完了したか +- 何がすでに progress を出したか + +### 3. `MessageUpdate` + +tool 実行は最終結果 1 個だけを返すとは限りません。 + +最小理解は次で十分です。 + +```python +update = { + "message": maybe_message, + "new_context": current_context, +} +``` + +高完成度 runtime では、更新は通常二つに分かれます。 + +- すぐ上流へ見せる message update +- 後で merge すべき内部 context update + +### 4. 
queued context modifiers
+
+これは見落とされやすいですが、とても重要です。
+
+並列 batch で安全なのは:
+
+> 先に終わった tool がその順で共有 context を先に変える
+
+ことではありません。
+
+より安全なのは:
+
+> context modifier を一旦 queue し、最後に元の tool 順序で merge する
+
+ことです。
+
+```python
+queued_context_modifiers = {
+    "toolu_01": [modify_ctx_a],
+    "toolu_02": [modify_ctx_b],
+}
+```
+
+## 最小実装の進め方
+
+### Step 1: concurrency safety を判定する
+
+```python
+def is_concurrency_safe(tool_name: str, tool_input: dict) -> bool:
+    return tool_name in {"read_file", "search_files"}
+```
+
+### Step 2: 実行前に partition する
+
+```python
+batches = partition_tool_calls(tool_uses)
+
+for batch in batches:
+    if batch["is_concurrency_safe"]:
+        run_concurrently(batch["blocks"])
+    else:
+        run_serially(batch["blocks"])
+```
+
+### Step 3: 並列 lane では progress を先に出せるようにする
+
+```python
+for update in run_concurrently(...):
+    if update.get("message"):
+        yield update["message"]
+```
+
+### Step 4: context merge は安定順で行う
+
+```python
+queued_modifiers = {}
+
+for update in concurrent_updates:
+    if update.get("context_modifier"):
+        # tool_id ごとの list を setdefault で初期化してから積む
+        queued_modifiers.setdefault(update["tool_id"], []).append(update["context_modifier"])
+
+for tool in original_batch_order:
+    for modifier in queued_modifiers.get(tool["id"], []):
+        context = modifier(context)
+```
+
+ここは教材 repo でも簡略化しすぎず、しかし主線を崩さずに教えられる重要点です。
+
+## 開発者が持つべき図
+
+```text
+tool_use blocks
+      |
+      v
+partition by concurrency safety
+      |
+      +-- safe batch ----------> concurrent execution
+      |                              |
+      |                              +-- progress updates
+      |                              +-- final results
+      |                              +-- queued context modifiers
+      |
+      +-- exclusive batch -----> serial execution
+                                     |
+                                     +-- direct result
+                                     +-- direct context update
+```
+
+## なぜ後半では dispatch map より重要になるのか
+
+小さい demo では:
+
+```python
+handlers[tool_name](tool_input)
+```
+
+で十分です。
+
+しかし高完成度 agent で本当に難しいのは、正しい handler を呼ぶことそのものではありません。
+
+難しいのは:
+
+- 複数 tool を安全に調度する
+- progress を見えるようにする
+- 結果順を安定させる
+- 共有 context を非決定的にしない
+
+だからこそ tool execution runtime は独立した bridge doc として教える価値があります。
diff --git a/docs/ja/s03-todo-write.md 
b/docs/ja/s03-todo-write.md index 541d33c39..12350d127 100644 --- a/docs/ja/s03-todo-write.md +++ b/docs/ja/s03-todo-write.md @@ -1,96 +1,388 @@ # s03: TodoWrite -`s01 > s02 > [ s03 ] s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > s02 > [ s03 ] > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"計画のないエージェントは行き当たりばったり"* -- まずステップを書き出し、それから実行。 -> -> **Harness 層**: 計画 -- 航路を描かずにモデルを軌道に乗せる。 +> *planning は model の代わりに考えるためのものではありません。いま何をやっているかを、外から見える state にするためのものです。* -## 問題 +## この章が解く問題 -マルチステップのタスクで、モデルは途中で迷子になる。作業を繰り返したり、ステップを飛ばしたり、脱線したりする。長い会話になるほど悪化する -- ツール結果がコンテキストを埋めるにつれ、システムプロンプトの影響力が薄れる。10ステップのリファクタリングでステップ1-3を完了した後、残りを忘れて即興を始めてしまう。 +`s02` まで来ると agent はすでに、 -## 解決策 +- file を読む +- file を書く +- command を実行する +ことができます。 + +するとすぐに別の問題が出ます。 + +- multi-step task で一歩前の確認を忘れる +- もう終えた確認をまた繰り返す +- 最初は計画しても、数 turn 後には即興に戻る + +これは model が「考えられない」からではありません。 + +問題は、 + +**現在の plan を explicit に置いておく stable state がないこと** + +です。 + +この章で足すのはより強い tool ではなく、 + +**今の session で何をどの順で進めているかを外部状態として見えるようにする仕組み** + +です。 + +## 先に言葉をそろえる + +### session 内 planning とは何か + +ここで扱う planning は long-term project management ではありません。 + +意味は、 + +> 今回の user request を終えるために、直近の数手を外へ書き出し、途中で更新し続けること + +です。 + +### todo とは何か + +`todo` は特定 product の固有名詞として覚える必要はありません。 + +この章では単に、 + +> model が current plan を更新するための入口 + +として使います。 + +### active step とは何か + +`active step` は、 + +> いま本当に進めている 1 手 + +です。 + +教材版では `in_progress` で表します。 + +ここで狙っているのは形式美ではなく、 + +**同時にあれもこれも進めて plan をぼかさないこと** + +です。 + +### reminder とは何か + +reminder は model の代わりに plan を作るものではありません。 + +意味は、 + +> 数 turn 連続で plan 更新を忘れたときに、軽く plan へ意識を戻すナッジ + +です。 + +## 最初に強調したい境界 + +この章は task system ではありません。 + +`s03` で扱うのは、 + +- session 内の軽量な current plan +- 進行中の focus を保つための外部状態 +- turn ごとに書き換わりうる planning panel + +です。 + +ここでまだ扱わないもの: + +- durable task board +- dependency graph +- multi-agent 共有 task graph +- background runtime task manager + +それらは `s12-s14` であらためて教えます。 + 
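この二つの層の違いは、一度だけ code で対比しておくと後で混ざりにくくなります。以下は参照実装の API ではなく、`session_plan` や `persist_task` といった仮の名前を使った最小スケッチです。示したいのは「session plan は process 内の一時 state、durable task は外部 storage へ書き出されて初めて残る」という一点だけです。

```python
import json
import os
import tempfile

# session 内の plan: この process が生きている間だけ保持する軽量 state
session_plan = {
    "items": [
        {"content": "Read the failing test", "status": "in_progress"},
        {"content": "Fix the bug", "status": "pending"},
    ],
    "rounds_since_update": 0,
}

# durable task: session を跨いで残したい work goal(s12 以降の領域)
def persist_task(task: dict, path: str) -> None:
    # durable task は外部 storage へ書き出されて初めて「残る」
    with open(path, "w", encoding="utf-8") as f:
        json.dump(task, f)

def load_task(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "task.json")
persist_task({"goal": "Refactor hello.py", "state": "open"}, path)

# session_plan は memory 上にしかないが、durable task は process を跨いで読み戻せる
restored = load_task(path)
print(restored["goal"])  # -> Refactor hello.py
```

session plan が process と一緒に消えるのに対し、durable task は storage から復元できる、という差だけ押さえておけば `s12` 以降が読みやすくなります。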
+この境界を守らないと、初心者はすぐに次を混同します。 + +- 今この session で次にやる一手 +- system 全体に長く残る work goal + +## 最小心智モデル + +この章を最も簡単に捉えるなら、plan はこういう panel です。 + +```text +user が大きな仕事を頼む + | + v +model が今の plan を書き出す + | + v +plan state + - [ ] まだ着手していない + - [>] いま進めている + - [x] 完了した + | + v +1 手進むたびに更新する ``` -+--------+ +-------+ +---------+ -| User | ---> | LLM | ---> | Tools | -| prompt | | | | + todo | -+--------+ +---+---+ +----+----+ - ^ | - | tool_result | - +----------------+ - | - +-----------+-----------+ - | TodoManager state | - | [ ] task A | - | [>] task B <- doing | - | [x] task C | - +-----------------------+ - | - if rounds_since_todo >= 3: - inject into tool_result + +つまり流れはこうです。 + +1. まず current work を数手に割る +2. 1 つを `in_progress` にする +3. 終わったら `completed` にする +4. 次の 1 つを `in_progress` にする +5. しばらく更新がなければ reminder する + +この 5 手が見えていれば、この章の幹はつかめています。 + +## この章の核になるデータ構造 + +### 1. PlanItem + +最小の item は次のように考えられます。 + +```python +{ + "content": "Read the failing test", + "status": "pending" | "in_progress" | "completed", + "activeForm": "Reading the failing test", +} ``` -## 仕組み +意味は単純です。 + +- `content`: 何をするか +- `status`: いまどの段階か +- `activeForm`: 実行中に自然文でどう見せるか + +教材コードによっては `id` や `text` を使っていても本質は同じです。 + +### 2. PlanningState -1. TodoManagerはアイテムのリストをステータス付きで保持する。`in_progress`にできるのは同時に1つだけ。 +item だけでは足りません。 + +plan 全体には最低限、次の running state も要ります。 + +```python +{ + "items": [...], + "rounds_since_update": 0, +} +``` + +`rounds_since_update` の意味は、 + +> 何 turn 連続で plan が更新されていないか + +です。 + +この値があるから reminder を出せます。 + +### 3. 
状態制約 + +教材版では次の制約を置くのが有効です。 + +```text +同時に in_progress は最大 1 つ +``` + +これは宇宙の真理ではありません。 +でも初学者にとっては非常に良い制約です。 + +理由は単純で、 + +**current focus を system 側から明示できる** + +からです。 + +## 最小実装を段階で追う + +### 第 1 段階: plan manager を用意する ```python class TodoManager: - def update(self, items: list) -> str: - validated, in_progress_count = [], 0 - for item in items: - status = item.get("status", "pending") - if status == "in_progress": - in_progress_count += 1 - validated.append({"id": item["id"], "text": item["text"], - "status": status}) - if in_progress_count > 1: - raise ValueError("Only one task can be in_progress") - self.items = validated - return self.render() + def __init__(self): + self.items = [] ``` -2. `todo`ツールは他のツールと同様にディスパッチマップに追加される。 +最初はこれで十分です。 + +ここで導入したいのは UI ではなく、 + +> plan を model の頭の中ではなく harness 側の state として持つ + +という発想です。 + +### 第 2 段階: plan 全体を更新できるようにする + +教材版では item をちまちま差分更新するより、 + +**現在の plan を丸ごと更新する** + +方が理解しやすいです。 + +```python +def update(self, items: list) -> str: + validated = [] + in_progress_count = 0 + + for item in items: + status = item.get("status", "pending") + if status == "in_progress": + in_progress_count += 1 + + validated.append({ + "content": item["content"], + "status": status, + "activeForm": item.get("activeForm", ""), + }) + + if in_progress_count > 1: + raise ValueError("Only one item can be in_progress") + + self.items = validated + return self.render() +``` + +ここでやっていることは 2 つです。 + +- current plan を受け取る +- 状態制約をチェックする + +### 第 3 段階: render して可読にする + +```python +def render(self) -> str: + lines = [] + for item in self.items: + marker = { + "pending": "[ ]", + "in_progress": "[>]", + "completed": "[x]", + }[item["status"]] + lines.append(f"{marker} {item['content']}") + return "\n".join(lines) +``` + +render の価値は見た目だけではありません。 + +plan が text として安定して見えることで、 + +- user が current progress を理解しやすい +- model も自分が何をどこまで進めたか確認しやすい + +状態になります。 + +### 第 4 段階: `todo` を 1 つの tool として loop へ接ぐ ```python TOOL_HANDLERS = { - # ...base tools... 
+ "read_file": run_read, + "write_file": run_write, + "edit_file": run_edit, + "bash": run_bash, "todo": lambda **kw: TODO.update(kw["items"]), } ``` -3. nagリマインダーが、モデルが3ラウンド以上`todo`を呼ばなかった場合にナッジを注入する。 +ここで重要なのは、plan 更新を特別扱いの hidden logic にせず、 + +**tool call として explicit に loop へ入れる** + +ことです。 + +### 第 5 段階: 数 turn 更新がなければ reminder を挿入する ```python -if rounds_since_todo >= 3 and messages: - last = messages[-1] - if last["role"] == "user" and isinstance(last.get("content"), list): - last["content"].insert(0, { - "type": "text", - "text": "Update your todos.", - }) +if rounds_since_update >= 3: + results.insert(0, { + "type": "text", + "text": "Refresh your plan before continuing.", + }) ``` -「一度にin_progressは1つだけ」の制約が逐次的な集中を強制し、nagリマインダーが説明責任を生む。 +この reminder の意味は「system が代わりに plan を立てる」ではありません。 + +正しくは、 + +> plan state がしばらく stale なので、model に current plan を更新させる + +です。 -## s02からの変更点 +## main loop に何が増えるのか -| Component | Before (s02) | After (s03) | -|----------------|------------------|----------------------------| -| Tools | 4 | 5 (+todo) | -| Planning | None | TodoManager with statuses | -| Nag injection | None | `` after 3 rounds| -| Agent loop | Simple dispatch | + rounds_since_todo counter| +この章以後、main loop は `messages` だけを持つわけではなくなります。 -## 試してみる +持つ state が少なくとも 2 本になります。 -```sh -cd learn-claude-code -python agents/s03_todo_write.py +```text +messages + -> model が読む会話と観察の history + +planning state + -> 今回の session で current work をどう進めるか ``` -1. `Refactor the file hello.py: add type hints, docstrings, and a main guard` -2. `Create a Python package with __init__.py, utils.py, and tests/test_utils.py` -3. 
`Review all Python files and fix any style issues` +これがこの章の本当の upgrade です。 + +agent はもはや単に chat history を伸ばしているだけではなく、 + +**「いま何をしているか」を外から見える panel として維持する** + +ようになります。 + +## なぜここで task graph まで教えないのか + +初心者は planning の話が出るとすぐ、 + +> だったら durable task board も同時に作った方がよいのでは + +と考えがちです。 + +でも教学順序としては早すぎます。 + +理由は、ここで理解してほしいのが + +**session 内の軽い plan と、長く残る durable work graph は別物** + +という境界だからです。 + +`s03` は current focus の外部化です。 +`s12` 以降は durable task system です。 + +順番を守ると、後で混ざりにくくなります。 + +## 初学者が混ぜやすいポイント + +### 1. plan を model の頭の中だけに置く + +これでは multi-step work がすぐ漂います。 + +### 2. `in_progress` を複数許してしまう + +current focus がぼやけ、plan が checklist ではなく wish list になります。 + +### 3. plan を一度書いたら更新しない + +それでは plan は living state ではなく dead note です。 + +### 4. reminder を system の強制 planning と誤解する + +reminder は軽いナッジであって、plan の中身を system が代行するものではありません。 + +### 5. session plan と durable task graph を同一視する + +この章で扱うのは current request を進めるための軽量 state です。 + +## この章を読み終えたら何が言えるべきか + +1. planning は model の代わりに考えることではなく、current progress を外部 state にすること +2. session plan は durable task system とは別層であること +3. 
`in_progress` を 1 つに絞ると初心者の心智が安定すること + +## 一文で覚える + +**TodoWrite とは、「次に何をするか」を model の頭の中ではなく、system が見える外部 state に書き出すことです。** diff --git a/docs/ja/s04-subagent.md b/docs/ja/s04-subagent.md index bfffc3165..2462ce45b 100644 --- a/docs/ja/s04-subagent.md +++ b/docs/ja/s04-subagent.md @@ -1,94 +1,320 @@ # s04: Subagents -`s01 > s02 > s03 > [ s04 ] s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > s02 > s03 > [ s04 ] > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"大きなタスクを分割し、各サブタスクにクリーンなコンテキストを"* -- サブエージェントは独立した messages[] を使い、メイン会話を汚さない。 -> -> **Harness 層**: コンテキスト隔離 -- モデルの思考の明晰さを守る。 +> *大きな仕事を全部 1 つの context に詰め込む必要はありません。* +> subagent の価値は「model を 1 個増やすこと」ではなく、「clean な別 context を 1 つ持てること」にあります。 -## 問題 +## この章が解く問題 -エージェントが作業するにつれ、messages配列は膨張し続ける。すべてのファイル読み取り、すべてのbash出力がコンテキストに永久に残る。「このプロジェクトはどのテストフレームワークを使っているか」という質問は5つのファイルを読む必要があるかもしれないが、親に必要なのは「pytest」という答えだけだ。 +agent がいろいろな調査や実装を進めると、親の `messages` はどんどん長くなります。 -## 解決策 +たとえば user の質問が単に +> 「この project は何の test framework を使っているの?」 + +だけでも、親 agent は答えるために、 + +- `pyproject.toml` を読む +- `requirements.txt` を読む +- `pytest` を検索する +- 実際に test command を走らせる + +かもしれません。 + +でも本当に親に必要な最終答えは、 + +> 「主に `pytest` を使っています」 + +の一文だけかもしれません。 + +もしこの途中作業を全部親 context に積み続けると、あとで別の質問に答えるときに、 + +- さっきの局所調査の noise +- 大量の file read +- 一時的な bash 出力 + +が main context を汚染します。 + +subagent が解くのはこの問題です。 + +**局所 task を別 context に閉じ込め、親には必要な summary だけを持ち帰る** + +のがこの章の主線です。 + +## 先に言葉をそろえる + +### 親 agent とは何か + +いま user と直接やり取りし、main `messages` を持っている actor が親 agent です。 + +### 子 agent とは何か + +親が一時的に派生させ、特定の subtask だけを処理させる actor が子 agent、つまり subagent です。 + +### context isolation とは何か + +これは単に、 + +- 親は親の `messages` +- 子は子の `messages` + +を持ち、 + +> 子の途中経過が自動で親 history に混ざらないこと + +を指します。 + +## 最小心智モデル + +この章は次の図でほぼ言い切れます。 + +```text +Parent agent + | + | 1. 局所 task を外へ出すと決める + v +Subagent + | + | 2. 自分の context で file read / search / tool execution + v +Summary + | + | 3. 
必要な結果だけを親へ返す + v +Parent agent continues ``` -Parent agent Subagent -+------------------+ +------------------+ -| messages=[...] | | messages=[] | <-- fresh -| | dispatch | | -| tool: task | ----------> | while tool_use: | -| prompt="..." | | call tools | -| | summary | append results | -| result = "..." | <---------- | return last text | -+------------------+ +------------------+ - -Parent context stays clean. Subagent context is discarded. -``` -## 仕組み +ここで一番大事なのは次の 1 文です。 + +**subagent の価値は別 model instance ではなく、別 state boundary にある** + +ということです。 -1. 親に`task`ツールを追加する。子は`task`を除くすべての基本ツールを取得する(再帰的な生成は不可)。 +## 最小実装を段階で追う + +### 第 1 段階: 親に `task` tool を持たせる + +親 agent は model が明示的に言える入口を持つ必要があります。 + +> この局所仕事は clean context に外注したい + +その最小 schema は非常に簡単で構いません。 ```python -PARENT_TOOLS = CHILD_TOOLS + [ - {"name": "task", - "description": "Spawn a subagent with fresh context.", - "input_schema": { - "type": "object", - "properties": {"prompt": {"type": "string"}}, - "required": ["prompt"], - }}, -] +{ + "name": "task", + "description": "Run a subtask in a clean context and return a summary.", + "input_schema": { + "type": "object", + "properties": { + "prompt": {"type": "string"} + }, + "required": ["prompt"] + } +} ``` -2. 
サブエージェントは`messages=[]`で開始し、自身のループを実行する。最終テキストだけが親に返る。 +### 第 2 段階: subagent は自分専用の `messages` で始める + +subagent の本体はここです。 ```python def run_subagent(prompt: str) -> str: sub_messages = [{"role": "user", "content": prompt}] - for _ in range(30): # safety limit - response = client.messages.create( - model=MODEL, system=SUBAGENT_SYSTEM, - messages=sub_messages, - tools=CHILD_TOOLS, max_tokens=8000, - ) - sub_messages.append({"role": "assistant", - "content": response.content}) - if response.stop_reason != "tool_use": - break - results = [] - for block in response.content: - if block.type == "tool_use": - handler = TOOL_HANDLERS.get(block.name) - output = handler(**block.input) - results.append({"type": "tool_result", - "tool_use_id": block.id, - "content": str(output)[:50000]}) - sub_messages.append({"role": "user", "content": results}) - return "".join( - b.text for b in response.content if hasattr(b, "text") - ) or "(no summary)" + ... +``` + +親の `messages` をそのまま共有しないことが、最小の isolation です。 + +### 第 3 段階: 子に渡す tool は絞る + +subagent は親と完全に同じ tool set を持つ必要はありません。 + +むしろ最初は絞った方がよいです。 + +たとえば、 + +- `read_file` +- 検索系 tool +- read-only 寄りの `bash` + +だけを持たせ、 + +- さらに `task` 自体は子に渡さない + +ようにすれば、無限再帰を避けやすくなります。 + +### 第 4 段階: 子は最後に summary だけ返す + +一番大事なのはここです。 + +subagent は内部 history を親に全部戻しません。 + +戻すのは必要な summary だけです。 + +```python +return { + "type": "tool_result", + "tool_use_id": block.id, + "content": summary_text, +} +``` + +これにより親 context は、 + +- 必要な答え +- もしくは短い結論 + +だけを保持し、局所ノイズから守られます。 + +## この章の核になるデータ構造 + +この章で 1 つだけ覚えるなら、次の骨格です。 + +```python +class SubagentContext: + messages: list + tools: list + handlers: dict + max_turns: int ``` -子のメッセージ履歴全体(30回以上のツール呼び出し)は破棄される。親は1段落の要約を通常の`tool_result`として受け取る。 +意味は次の通りです。 + +- `messages`: 子自身の context +- `tools`: 子が使える道具 +- `handlers`: その tool が実際にどの code を呼ぶか +- `max_turns`: 子が無限に走り続けないための上限 + +つまり subagent は「関数呼び出し」ではなく、 + +**自分の state と tool boundary を持つ小さな agent** + +です。 + +## なぜ本当に useful なのか + +### 1. 
親 context を軽く保てる + +局所 task の途中経過が main conversation に積み上がりません。 + +### 2. subtask の prompt を鋭くできる + +子に渡す prompt は次のように非常に集中できます。 + +- 「この directory の test framework を 1 文で答えて」 +- 「この file の bug を探して原因だけ返して」 +- 「3 file を読んで module 関係を summary して」 + +### 3. 後の multi-agent chapter の準備になる + +subagent は long-lived teammate より前に学ぶべき最小の delegation model です。 + +まず「1 回限りの clean delegation」を理解してから、 + +- persistent teammate +- structured protocol +- autonomous claim + +へ進むと心智がずっと滑らかになります。 + +## 0-to-1 の実装順序 + +### Version 1: blank-context subagent + +最初はこれで十分です。 + +- `task` tool +- `run_subagent(prompt)` +- 子専用 `messages` +- 最後に summary を返す + +### Version 2: tool set を制限する + +親より小さく安全な tool set を渡します。 + +### Version 3: safety bound を足す + +最低限、 + +- 最大 turn 数 +- tool failure 時の終了条件 + +は入れてください。 + +### Version 4: fork を検討する + +この順番を守ることが大事です。 + +最初から fork を入れる必要はありません。 -## s03からの変更点 +## fork とは何か、なぜ「次の段階」なのか -| Component | Before (s03) | After (s04) | -|----------------|------------------|---------------------------| -| Tools | 5 | 5 (base) + task (parent) | -| Context | Single shared | Parent + child isolation | -| Subagent | None | `run_subagent()` function | -| Return value | N/A | Summary text only | +最小 subagent は blank context から始めます。 -## 試してみる +でも subtask によっては、親が直前まで話していた内容を知らないと困ることがあります。 -```sh -cd learn-claude-code -python agents/s04_subagent.py +たとえば、 + +> 「さっき決めた方針に沿って、この module へ test を追加して」 + +のような場面です。 + +そのとき使うのが `fork` です。 + +```python +sub_messages = list(parent_messages) +sub_messages.append({"role": "user", "content": prompt}) ``` -1. `Use a subtask to find what testing framework this project uses` -2. `Delegate: read all .py files and summarize what each one does` -3. 
`Use a task to create a new module, then verify it from here` +fork の本質は、 + +**空白から始めるのではなく、親の既存 context を引き継いで子を始めること** + +です。 + +ただし teaching order としては、blank-context subagent を理解してからの方が安全です。 + +先に fork を入れると、初心者は + +- 何が isolation で +- 何が inherited context なのか + +を混ぜやすくなります。 + +## 初学者が混ぜやすいポイント + +### 1. subagent を「並列アピール機能」だと思う + +subagent の第一目的は concurrency 自慢ではなく、context hygiene です。 + +### 2. 子の history を全部親へ戻してしまう + +それでは isolation の価値がほとんど消えます。 + +### 3. 最初から役割を増やしすぎる + +explorer、reviewer、planner、tester などを一気に作る前に、 + +**clean context の一回限り worker** + +を正しく作る方が先です。 + +### 4. 子に `task` を持たせて無限に spawn させる + +境界がないと recursion で system が荒れます。 + +### 5. `max_turns` のような safety bound を持たない + +局所 task だからこそ、終わらない子を放置しない設計が必要です。 + +## この章を読み終えたら何が言えるべきか + +1. subagent の価値は clean context を作ることにある +2. 子は親と別の `messages` を持つべきである +3. 親へ戻すのは内部 history 全量ではなく summary でよい + +## 一文で覚える + +**Subagent とは、局所 task を clean context へ切り出し、親には必要な結論だけを持ち帰るための最小 delegation mechanism です。** diff --git a/docs/ja/s05-skill-loading.md b/docs/ja/s05-skill-loading.md index 14774bec9..b219f96dc 100644 --- a/docs/ja/s05-skill-loading.md +++ b/docs/ja/s05-skill-loading.md @@ -1,6 +1,6 @@ # s05: Skills -`s01 > s02 > s03 > s04 > [ s05 ] s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > s02 > s03 > s04 > [ s05 ] > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` > *"必要な知識を、必要な時に読み込む"* -- system prompt ではなく tool_result で注入。 > @@ -106,3 +106,26 @@ python agents/s05_skill_loading.py 2. `Load the agent-builder skill and follow its instructions` 3. `I need to do a code review -- load the relevant skill first` 4. 
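s05 の 2 層モデル(軽い一覧で「何があるか」を知らせ、必要時だけ本文を読み込む)を、仮定込みの最小 sketch にすると次の形になります。`build_skill_inventory` と `load_skill` という関数名、`skills/` 内の `.md` の 1 行目を説明文として使う規約は、いずれもこの例のための仮定です。

```python
import tempfile
from pathlib import Path

def build_skill_inventory(skills_dir):
    """第 1 層: 名前と一行説明だけの軽い一覧(prompt へ入れる想定の sketch)。"""
    lines = []
    for path in sorted(skills_dir.glob("*.md")):
        first_line = path.read_text().splitlines()[0]
        lines.append(f"- {path.stem}: {first_line}")
    return "\n".join(lines)

def load_skill(skills_dir, name):
    """第 2 層: 必要になったときだけ本文を tool_result として返す。"""
    path = skills_dir / f"{name}.md"
    if not path.exists():
        return f"unknown skill: {name}"
    return path.read_text()

# 一時ディレクトリで動作イメージを確認する
skills_dir = Path(tempfile.mkdtemp())
(skills_dir / "code-review.md").write_text(
    "Checklist-driven code review skill\n\nRead the diff first...\n")
inventory = build_skill_inventory(skills_dir)
body = load_skill(skills_dir, "code-review")
```

一覧は常に軽く、本文はモデルが `load_skill` を呼んだときだけ context に入る、という非対称性がこの章の本質です。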
`Build an MCP server using the mcp-builder skill` + +## 高完成度システムではどう広がるか + +この章の核心は 2 層モデルです。 +まず軽い一覧で「何があるか」を知らせ、必要になったときだけ本文を深く読み込む。これはそのまま有効です。 + +より完成度の高いシステムでは、その周りに次のような広がりが出ます。 + +| 観点 | 教材版 | 高完成度システム | +|------|--------|------------------| +| 発見レイヤー | プロンプト内に名前一覧 | 予算付きの専用インベントリやリマインダ面 | +| 読み込み | `load_skill` が本文を返す | 同じ文脈へ注入、別ワーカーで実行、補助コンテキストとして添付など | +| ソース | `skills/` ディレクトリのみ | user、project、bundled、plugin、外部ソースなど | +| 適用範囲 | 常に見える | タスク種別、触ったファイル、明示指示に応じて有効化 | +| 引数 | なし | スキルへパラメータやテンプレート値を渡せる | +| ライフサイクル | 一度読むだけ | compact や再開後に復元されることがある | +| ガードレール | なし | スキルごとの許可範囲や行動制約を持てる | + +教材としては、2 層モデルだけで十分です。 +ここで学ぶべき本質は: + +**専門知識は最初から全部抱え込まず、必要な時だけ深く読み込む** +という設計です。 diff --git a/docs/ja/s06-context-compact.md b/docs/ja/s06-context-compact.md index 6927e7d1c..ceddf9fd0 100644 --- a/docs/ja/s06-context-compact.md +++ b/docs/ja/s06-context-compact.md @@ -1,10 +1,8 @@ # s06: Context Compact -`s01 > s02 > s03 > s04 > s05 > [ s06 ] | s07 > s08 > s09 > s10 > s11 > s12` +`s01 > s02 > s03 > s04 > s05 > [ s06 ] > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"コンテキストはいつか溢れる、空ける手段が要る"* -- 3層圧縮で無限セッションを実現。 -> -> **Harness 層**: 圧縮 -- クリーンな記憶、無限のセッション。 +> *"コンテキストはいつか溢れる、空ける手段が要る"* -- 4レバー圧縮で無限セッションを実現。 ## 問題 @@ -12,18 +10,24 @@ ## 解決策 -積極性を段階的に上げる3層構成: +ツール出力時から手動トリガーまで、4つの圧縮レバー: ``` -Every turn: +Every tool call: +------------------+ | Tool call result | +------------------+ | v -[Layer 1: micro_compact] (silent, every turn) +[Lever 0: persisted-output] (at tool execution time) + Large outputs (>50KB, bash >30KB) are written to disk + and replaced with a preview marker. + | + v +[Lever 1: micro_compact] (silent, every turn) Replace tool_result > 3 turns old with "[Previous: used {tool_name}]" + (preserves read_file results as reference material) | v [Check: tokens > 50000?] 
@@ -31,47 +35,63 @@ Every turn: no yes | | v v -continue [Layer 2: auto_compact] +continue [Lever 2: auto_compact] Save transcript to .transcripts/ LLM summarizes conversation. Replace all messages with [summary]. | v - [Layer 3: compact tool] + [Lever 3: compact tool] Model calls compact explicitly. Same summarization as auto_compact. ``` ## 仕組み -1. **第1層 -- micro_compact**: 各LLM呼び出しの前に、古いツール結果をプレースホルダーに置換する。 +0. **レバー 0 -- persisted-output**: ツール出力がサイズ閾値を超えた場合、ディスクに書き込みプレビューマーカーに置換する。巨大な出力がコンテキストウィンドウに入るのを防ぐ。 + +```python +PERSIST_OUTPUT_TRIGGER_CHARS_DEFAULT = 50000 +PERSIST_OUTPUT_TRIGGER_CHARS_BASH = 30000 # bashはより低い閾値を使用 + +def maybe_persist_output(tool_use_id, output, trigger_chars=None): + if len(output) <= trigger: + return output + stored_path = _persist_tool_result(tool_use_id, output) + return _build_persisted_marker(stored_path, output) + # Returns: + # Output too large (48.8KB). Full output saved to: .task_outputs/tool-results/abc123.txt + # Preview (first 2.0KB): + # ... first 2000 chars ... + # +``` + +モデルは後から`read_file`で保存パスにアクセスし、完全な内容を取得できる。 + +1. **レバー 1 -- micro_compact**: 各LLM呼び出しの前に、古いツール結果をプレースホルダーに置換する。`read_file`の結果は参照資料として保持する。 ```python +PRESERVE_RESULT_TOOLS = {"read_file"} + def micro_compact(messages: list) -> list: - tool_results = [] - for i, msg in enumerate(messages): - if msg["role"] == "user" and isinstance(msg.get("content"), list): - for j, part in enumerate(msg["content"]): - if isinstance(part, dict) and part.get("type") == "tool_result": - tool_results.append((i, j, part)) + tool_results = [...] # collect all tool_result entries if len(tool_results) <= KEEP_RECENT: return messages - for _, _, part in tool_results[:-KEEP_RECENT]: - if len(part.get("content", "")) > 100: - part["content"] = f"[Previous: used {tool_name}]" + for part in tool_results[:-KEEP_RECENT]: + if tool_name in PRESERVE_RESULT_TOOLS: + continue # keep reference material + part["content"] = f"[Previous: used {tool_name}]" return messages ``` -2. 
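本文の `micro_compact` は `tool_results = [...]` と省略されています。動く形まで埋めた一例を示します。`tool_use_id` から tool 名を引くための `tool_names` 対応表(実装側で tool 実行時に記録しておく想定)は、この sketch のための仮定です。

```python
KEEP_RECENT = 3
PRESERVE_RESULT_TOOLS = {"read_file"}

def micro_compact(messages, tool_names):
    """古い tool_result を placeholder に置換する最小 sketch。

    仮定: tool_names は tool_use_id -> tool 名 の対応表。
    """
    tool_results = []
    for msg in messages:
        if msg["role"] == "user" and isinstance(msg.get("content"), list):
            for part in msg["content"]:
                if isinstance(part, dict) and part.get("type") == "tool_result":
                    tool_results.append(part)
    if len(tool_results) <= KEEP_RECENT:
        return messages
    for part in tool_results[:-KEEP_RECENT]:  # 直近 KEEP_RECENT 件は残す
        name = tool_names.get(part.get("tool_use_id"), "unknown")
        if name in PRESERVE_RESULT_TOOLS:
            continue  # read_file の結果は参照資料として残す
        if len(str(part.get("content", ""))) > 100:
            part["content"] = f"[Previous: used {name}]"
    return messages

# 長い bash 結果を 4 件積んで動作イメージを確認する
msgs, names = [], {}
for i in range(4):
    tid = f"t{i}"
    names[tid] = "bash"
    msgs.append({"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": tid, "content": "x" * 200}]})
micro_compact(msgs, names)
# 最も古い 1 件だけが placeholder になり、直近 3 件は残る
```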
**第2層 -- auto_compact**: トークンが閾値を超えたら、完全なトランスクリプトをディスクに保存し、LLMに要約を依頼する。 +2. **レバー 2 -- auto_compact**: トークンが閾値を超えたら、完全なトランスクリプトをディスクに保存し、LLMに要約を依頼する。 ```python def auto_compact(messages: list) -> list: - # Save transcript for recovery transcript_path = TRANSCRIPT_DIR / f"transcript_{int(time.time())}.jsonl" with open(transcript_path, "w") as f: for msg in messages: f.write(json.dumps(msg, default=str) + "\n") - # LLM summarizes response = client.messages.create( model=MODEL, messages=[{"role": "user", "content": @@ -84,33 +104,34 @@ def auto_compact(messages: list) -> list: ] ``` -3. **第3層 -- manual compact**: `compact`ツールが同じ要約処理をオンデマンドでトリガーする。 +3. **レバー 3 -- manual compact**: `compact`ツールが同じ要約処理をオンデマンドでトリガーする。 -4. ループが3層すべてを統合する: +4. ループが4つのレバーすべてを統合する: ```python def agent_loop(messages: list): while True: - micro_compact(messages) # Layer 1 + micro_compact(messages) # Lever 1 if estimate_tokens(messages) > THRESHOLD: - messages[:] = auto_compact(messages) # Layer 2 + messages[:] = auto_compact(messages) # Lever 2 response = client.messages.create(...) - # ... tool execution ... + # ... tool execution with persisted-output ... 
# Lever 0 if manual_compact: - messages[:] = auto_compact(messages) # Layer 3 + messages[:] = auto_compact(messages) # Lever 3 ``` -トランスクリプトがディスク上に完全な履歴を保持する。何も真に失われず、アクティブなコンテキストの外に移動されるだけ。 +トランスクリプトがディスク上に完全な履歴を保持する。大きな出力は`.task_outputs/tool-results/`に保存される。何も真に失われず、アクティブなコンテキストの外に移動されるだけ。 ## s05からの変更点 -| Component | Before (s05) | After (s06) | -|----------------|------------------|----------------------------| -| Tools | 5 | 5 (base + compact) | -| Context mgmt | None | Three-layer compression | -| Micro-compact | None | Old results -> placeholders| -| Auto-compact | None | Token threshold trigger | -| Transcripts | None | Saved to .transcripts/ | +| Component | Before (s05) | After (s06) | +|-------------------|------------------|----------------------------| +| Tools | 5 | 5 (base + compact) | +| Context mgmt | None | Four-lever compression | +| Persisted-output | None | Large outputs -> disk + preview | +| Micro-compact | None | Old results -> placeholders| +| Auto-compact | None | Token threshold trigger | +| Transcripts | None | Saved to .transcripts/ | ## 試してみる @@ -122,3 +143,21 @@ python agents/s06_context_compact.py 1. `Read every Python file in the agents/ directory one by one` (micro-compactが古い結果を置換するのを観察する) 2. `Keep reading files until compression triggers automatically` 3. 
`Use the compact tool to manually compress the conversation` + +## 高完成度システムではどう広がるか + +教材版は compact を理解しやすくするために、仕組みを大きく 4 本に絞っています。 +より完成度の高いシステムでは、その周りに追加の段階が増えます。 + +| レイヤー | 教材版 | 高完成度システム | +|---------|--------|------------------| +| 大きな出力 | 大きすぎる結果をディスクへ逃がす | 複数ツールの合計量も見ながら、文脈に入る前に予算調整する | +| 軽い整理 | 単純な micro-compact | フル要約の前に複数の軽量整理パスを入れる | +| フル compact | 閾値を超えたら要約 | 事前 compact、回復用 compact、エラー後 compact など役割分担が増える | +| 回復 | 要約 1 本に置き換える | compact 後に最近のファイル、計画、スキル、非同期状態などを戻す | +| 起動条件 | 自動または手動ツール | ユーザー操作、内部閾値、回復処理など複数の入口 | + +ここで覚えるべき核心は変わりません。 + +**compact は「履歴を捨てること」ではなく、「細部をアクティブ文脈の外へ移し、連続性を保つこと」** +です。 diff --git a/docs/ja/s07-permission-system.md b/docs/ja/s07-permission-system.md new file mode 100644 index 000000000..22fda7fb6 --- /dev/null +++ b/docs/ja/s07-permission-system.md @@ -0,0 +1,371 @@ +# s07: Permission System + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > [ s07 ] > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *model は「こうしたい」と提案できます。けれど本当に実行する前には、必ず安全 gate を通さなければなりません。* + +## この章の核心目標 + +`s06` まで来ると agent はすでに、 + +- file を読む +- file を書く +- command を実行する +- plan を持つ +- context を compact する + +ことができます。 + +能力が増えるほど、当然危険も増えます。 + +- 間違った file を書き換える +- 危険な shell command を実行する +- user がまだ許可していない操作に踏み込む + +だからここから先は、 + +**「model の意図」がそのまま「実行」へ落ちる** + +構造をやめなければなりません。 + +この章で入れるのは、 + +**tool request を実行前に判定する permission pipeline** + +です。 + +## 併読すると楽になる資料 + +- model の提案と system の実実行が混ざるなら [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) +- なぜ tool request を直接 handler に落としてはいけないか不安なら [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) +- `PermissionRule`、`PermissionDecision`、`tool_result` が混ざるなら [`data-structures.md`](./data-structures.md) + +## 先に言葉をそろえる + +### permission system とは何か + +permission system は真偽値 1 個ではありません。 + +むしろ次の 3 問に順番に答える pipeline です。 + +1. これは即拒否すべきか +2. 自動で許可してよいか +3. 
残りは user に確認すべきか + +### permission mode とは何か + +mode は、その session 全体の安全姿勢です。 + +たとえば、 + +- 慎重に進める +- 読み取りだけ許す +- 安全そうなものは自動通過させる + +といった大きな方針です。 + +### rule とは何か + +rule は、 + +> ある tool request に当たったらどう振る舞うか + +を表す小さな条項です。 + +最小形なら次のような record で表せます。 + +```python +{ + "tool": "bash", + "content": "sudo *", + "behavior": "deny", +} +``` + +意味は、 + +- `bash` に対して +- command 内容が `sudo *` に当たれば +- 拒否する + +です。 + +## 最小 permission system の形 + +0 から手で作るなら、最小で正しい pipeline は 4 段で十分です。 + +```text +tool_call + | + v +1. deny rules + -> 危険なら即拒否 + | + v +2. mode check + -> 現在 mode に照らして判定 + | + v +3. allow rules + -> 安全で明確なら自動許可 + | + v +4. ask user + -> 残りは確認に回す +``` + +この 4 段で teaching repo の主線としては十分に強いです。 + +## なぜ順番がこの形なのか + +### 1. deny を先に見る理由 + +ある種の request は mode に関係なく危険です。 + +たとえば、 + +- 明白に危険な shell command +- workspace の外へ逃げる path + +などです。 + +こうしたものは「いま auto mode だから」などの理由で通すべきではありません。 + +### 2. mode を次に見る理由 + +mode はその session の大きな姿勢だからです。 + +たとえば `plan` mode なら、 + +> まだ review / analysis 段階なので write 系をまとめて抑える + +という全体方針を早い段で効かせたいわけです。 + +### 3. allow を後に見る理由 + +deny と mode を抜けたあとで、 + +> これは何度も出てくる安全な操作だから自動で通してよい + +というものを allow します。 + +たとえば、 + +- `read_file` +- code search +- `git status` + +などです。 + +### 4. ask を最後に置く理由 + +前段で明確に決められなかった灰色領域だけを user に回すためです。 + +これで、 + +- 危険なものは system が先に止める +- 明らかに安全なものは system が先に通す +- 本当に曖昧なものだけ user が判断する + +という自然な構図になります。 + +## 最初に実装すると良い 3 つの mode + +最初から mode を増やしすぎる必要はありません。 + +まずは次の 3 つで十分です。 + +| mode | 意味 | 向いている場面 | +|---|---|---| +| `default` | rule に当たらないものは user に確認 | 普通の対話 | +| `plan` | write を止め、read 中心で進める | planning / review / analysis | +| `auto` | 明らかに安全な read は自動許可 | 高速探索 | + +この 3 つだけでも、 + +- 慎重さ +- 計画モード +- 流暢さ + +のバランスを十分教えられます。 + +## この章の核になるデータ構造 + +### 1. 
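rule の当たり判定(本文のコードでは `matches(rule, tool_name, tool_input)` として登場しますが、定義は省略されています)は、glob で書くと次のような最小 sketch になります。bash は `command` を、file 系 tool は `path` を input に持つ、という前提はこの例のための仮定です。

```python
from fnmatch import fnmatch

def matches(rule, tool_name, tool_input):
    """rule がこの tool request に当たるかを判定する最小 sketch。"""
    if rule["tool"] != tool_name:
        return False  # tool が違えば当たらない
    if rule.get("content") and not fnmatch(
            tool_input.get("command", ""), rule["content"]):
        return False  # bash の command 内容を glob で照合
    if rule.get("path") and not fnmatch(
            tool_input.get("path", ""), rule["path"]):
        return False  # file 系 tool の path を glob で照合
    return True

deny_rule = {"tool": "bash", "content": "sudo *", "behavior": "deny"}
hit = matches(deny_rule, "bash", {"command": "sudo rm -rf /"})
miss = matches(deny_rule, "bash", {"command": "git status"})
other = matches(deny_rule, "read_file", {"path": "notes.txt"})
```

`sudo *` のような rule が「どの tool の、どの入力に当たるか」を 1 関数に閉じておくと、deny / allow のどちらの段からも同じ判定を再利用できます。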
PermissionRule + +```python +PermissionRule = { + "tool": str, + "behavior": "allow" | "deny" | "ask", + "path": str | None, + "content": str | None, +} +``` + +必ずしも最初から `path` と `content` の両方を使う必要はありません。 + +でも少なくとも rule は次を表現できる必要があります。 + +- どの tool に対する rule か +- 当たったらどう振る舞うか + +### 2. Permission Mode + +```python +mode = "default" | "plan" | "auto" +``` + +これは個々の rule ではなく session 全体の posture です。 + +### 3. PermissionDecision + +```python +{ + "behavior": "allow" | "deny" | "ask", + "reason": "why this decision was made", +} +``` + +ここで `reason` を持つのが大切です。 + +なぜなら permission system は「通した / 止めた」だけではなく、 + +**なぜそうなったかを説明できるべき** + +だからです。 + +## 最小実装を段階で追う + +### 第 1 段階: 判定関数を書く + +```python +def check_permission(tool_name: str, tool_input: dict) -> dict: + # 1. deny rules + for rule in deny_rules: + if matches(rule, tool_name, tool_input): + return {"behavior": "deny", "reason": "matched deny rule"} + + # 2. mode check + if mode == "plan" and tool_name in WRITE_TOOLS: + return {"behavior": "deny", "reason": "plan mode blocks writes"} + if mode == "auto" and tool_name in READ_ONLY_TOOLS: + return {"behavior": "allow", "reason": "auto mode allows reads"} + + # 3. allow rules + for rule in allow_rules: + if matches(rule, tool_name, tool_input): + return {"behavior": "allow", "reason": "matched allow rule"} + + # 4. fallback + return {"behavior": "ask", "reason": "needs confirmation"} +``` + +重要なのは code の華やかさではなく、 + +**先に分類し、その後で分岐する** + +という構造です。 + +### 第 2 段階: tool 実行直前に接ぐ + +permission は tool request が来たあと、handler を呼ぶ前に入ります。 + +```python +decision = perms.check(tool_name, tool_input) + +if decision["behavior"] == "deny": + return f"Permission denied: {decision['reason']}" + +if decision["behavior"] == "ask": + ok = ask_user(...) 
+ if not ok: + return "Permission denied by user" + +return handler(**tool_input) +``` + +これで初めて、 + +**tool request と real execution の間に control gate** + +が立ちます。 + +## `bash` を特別に気にする理由 + +すべての tool の中で `bash` は特別に危険です。 + +なぜなら、 + +- `read_file` は読むだけ +- `write_file` は書くだけ +- でも `bash` は理論上ほとんど何でもできる + +からです。 + +したがって `bash` をただの文字列入力として見るのは危険です。 + +成熟した system では、`bash` を小さな executable language として扱います。 + +教材版でも最低限、次のような危険要素は先に弾く方がよいです。 + +- `sudo` +- `rm -rf` +- 危険な redirection +- suspicious command substitution +- 明白な shell metacharacter chaining + +核心は 1 文です。 + +**bash は普通の text ではなく、可実行 action の記述** + +です。 + +## 初学者が混ぜやすいポイント + +### 1. permission を yes/no の 2 値で考える + +実際には `deny / allow / ask` の 3 分岐以上が必要です。 + +### 2. mode を rule の代わりにしようとする + +mode は全体 posture、rule は個別条項です。役割が違います。 + +### 3. `bash` を普通の string と同じ感覚で通す + +execution power が桁違いです。 + +### 4. deny / allow より先に user へ全部投げる + +それでは system 側の safety design を学べません。 + +### 5. decision に reason を残さない + +あとで「なぜ止まったか」が説明できなくなります。 + +## 拒否トラッキングの意味 + +教材コードでは、連続拒否を数える簡単な circuit breaker を持たせるのも有効です。 + +なぜなら agent が同じ危険 request を何度も繰り返すとき、 + +- mode が合っていない +- plan を作り直すべき +- 別 route を選ぶべき + +という合図になるからです。 + +これは高度な observability ではなく、 + +**permission failure も agent の progress 状態の一部である** + +と教えるための最小観測です。 + +## この章を読み終えたら何が言えるべきか + +1. model の意図は handler へ直結させず、permission pipeline を通すべき +2. `default / plan / auto` の 3 mode だけでも十分に teaching mainline が作れる +3. 
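「拒否トラッキングの意味」で触れた連続拒否の circuit breaker は、次の程度の小ささで始められます。閾値 3 や `DenialTracker` という class 名は、この sketch のための仮の値です。

```python
class DenialTracker:
    """連続 deny を数える最小の circuit breaker sketch。"""
    def __init__(self, limit=3):
        self.limit = limit
        self.streak = 0

    def record(self, behavior):
        if behavior == "deny":
            self.streak += 1
        else:
            self.streak = 0  # allow / ask が通れば streak は切れる
        return self.streak >= self.limit  # True なら mode や plan を見直す合図

tracker = DenialTracker(limit=3)
signals = [tracker.record(b)
           for b in ["deny", "deny", "allow", "deny", "deny", "deny"]]
# 3 連続 deny で初めて True になる
```

戻り値の `True` をどう使うか(mode を提案し直す、plan を作り直す、user へ知らせる)は harness 側の設計判断です。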
`bash` は普通の text 入力ではなく、高い実行力を持つ tool なので特別に警戒すべき + +## 一文で覚える + +**Permission System とは、model の意図をそのまま実行に落とさず、deny / mode / allow / ask の pipeline で安全に変換する層です。** diff --git a/docs/ja/s08-background-tasks.md b/docs/ja/s08-background-tasks.md deleted file mode 100644 index b3fe0773e..000000000 --- a/docs/ja/s08-background-tasks.md +++ /dev/null @@ -1,107 +0,0 @@ -# s08: Background Tasks - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > [ s08 ] s09 > s10 > s11 > s12` - -> *"遅い操作はバックグラウンドへ、エージェントは次を考え続ける"* -- デーモンスレッドがコマンド実行、完了後に通知を注入。 -> -> **Harness 層**: バックグラウンド実行 -- モデルが考え続ける間、Harness が待つ。 - -## 問題 - -一部のコマンドは数分かかる: `npm install`、`pytest`、`docker build`。ブロッキングループでは、モデルはサブプロセスの完了を待って座っている。ユーザーが「依存関係をインストールして、その間にconfigファイルを作って」と言っても、エージェントは並列ではなく逐次的に処理する。 - -## 解決策 - -``` -Main thread Background thread -+-----------------+ +-----------------+ -| agent loop | | subprocess runs | -| ... | | ... | -| [LLM call] <---+------- | enqueue(result) | -| ^drain queue | +-----------------+ -+-----------------+ - -Timeline: -Agent --[spawn A]--[spawn B]--[other work]---- - | | - v v - [A runs] [B runs] (parallel) - | | - +-- results injected before next LLM call --+ -``` - -## 仕組み - -1. BackgroundManagerがスレッドセーフな通知キューでタスクを追跡する。 - -```python -class BackgroundManager: - def __init__(self): - self.tasks = {} - self._notification_queue = [] - self._lock = threading.Lock() -``` - -2. `run()`がデーモンスレッドを開始し、即座にリターンする。 - -```python -def run(self, command: str) -> str: - task_id = str(uuid.uuid4())[:8] - self.tasks[task_id] = {"status": "running", "command": command} - thread = threading.Thread( - target=self._execute, args=(task_id, command), daemon=True) - thread.start() - return f"Background task {task_id} started" -``` - -3. 
サブプロセス完了時に、結果を通知キューへ。 - -```python -def _execute(self, task_id, command): - try: - r = subprocess.run(command, shell=True, cwd=WORKDIR, - capture_output=True, text=True, timeout=300) - output = (r.stdout + r.stderr).strip()[:50000] - except subprocess.TimeoutExpired: - output = "Error: Timeout (300s)" - with self._lock: - self._notification_queue.append({ - "task_id": task_id, "result": output[:500]}) -``` - -4. エージェントループが各LLM呼び出しの前に通知をドレインする。 - -```python -def agent_loop(messages: list): - while True: - notifs = BG.drain_notifications() - if notifs: - notif_text = "\n".join( - f"[bg:{n['task_id']}] {n['result']}" for n in notifs) - messages.append({"role": "user", - "content": f"\n{notif_text}\n" - f""}) - response = client.messages.create(...) -``` - -ループはシングルスレッドのまま。サブプロセスI/Oだけが並列化される。 - -## s07からの変更点 - -| Component | Before (s07) | After (s08) | -|----------------|------------------|----------------------------| -| Tools | 8 | 6 (base + background_run + check)| -| Execution | Blocking only | Blocking + background threads| -| Notification | None | Queue drained per loop | -| Concurrency | None | Daemon threads | - -## 試してみる - -```sh -cd learn-claude-code -python agents/s08_background_tasks.py -``` - -1. `Run "sleep 5 && echo done" in the background, then create a file while it runs` -2. `Start 3 background tasks: "sleep 2", "sleep 4", "sleep 6". Check their status.` -3. 
`Run pytest in the background and keep working on other things` diff --git a/docs/ja/s08-hook-system.md b/docs/ja/s08-hook-system.md new file mode 100644 index 000000000..7df109931 --- /dev/null +++ b/docs/ja/s08-hook-system.md @@ -0,0 +1,151 @@ +# s08: Hook System + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > [ s08 ] > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *ループそのものを書き換えなくても、ライフサイクルの周囲に拡張点を置ける。* + +## この章が解決する問題 + +`s07` までで、agent はかなり実用的になりました。 + +しかし実際には、ループの外側で足したい振る舞いが増えていきます。 + +- 監査ログ +- 実行追跡 +- 通知 +- 追加の安全チェック +- 実行前後の補助メッセージ + +こうした周辺機能を毎回メインループに直接書き込むと、すぐに主線が読みにくくなります。 + +そこで必要なのが Hook です。 + +## 主線とどう併読するか + +- Hook を「主ループの中へ if/else を足すこと」だと思い始めたら、まず [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) に戻ります。 +- 主ループ、tool handler、hook の副作用が同じ層に見えてきたら、[`entity-map.md`](./entity-map.md) で「主状態を進めるもの」と「横から観測するもの」を分けます。 +- この先で prompt、recovery、teams まで読むつもりなら、[`s00e-reference-module-map.md`](./s00e-reference-module-map.md) を近くに置いておくと、「control plane + sidecar 拡張」が何度も出てきても崩れにくくなります。 + +## Hook を最も簡単に言うと + +Hook は: + +**主ループの決まった節目で、追加動作を差し込む拡張点** + +です。 + +ここで大切なのは、Hook が主ループの代わりになるわけではないことです。 +主ループは引き続き: + +- モデル呼び出し +- ツール実行 +- 結果の追記 + +を担当します。 + +## 最小の心智モデル + +```text +tool_call from model + | + v +[PreToolUse hooks] + | + v +[execute tool] + | + v +[PostToolUse hooks] + | + v +append result and continue +``` + +この形なら、ループの主線を壊さずに拡張できます。 + +## まず教えるべき 3 つのイベント + +| イベント | いつ発火するか | 主な用途 | +|---|---|---| +| `SessionStart` | セッション開始時 | 初期通知、ウォームアップ | +| `PreToolUse` | ツール実行前 | 監査、ブロック、補助判断 | +| `PostToolUse` | ツール実行後 | 結果記録、通知、追跡 | + +これだけで教学版としては十分です。 + +## 重要な境界 + +### Hook は主状態遷移を置き換えない + +Hook がやるのは「観察して補助すること」です。 + +メッセージ履歴、停止条件、ツール呼び出しの主責任は、あくまでメインループに残します。 + +### Hook には整ったイベント情報を渡す + +理想的には、各 Hook は同じ形の情報を受け取ります。 + +たとえば: + +- `event` +- `tool_name` +- `tool_input` +- `tool_output` +- `error` + +この形が揃っていると、Hook を増やしても心智が崩れません。 + +## 最小実装 + +### 1. 
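「Hook には整ったイベント情報を渡す」という点を、先に動く形で見ておくための自己完結の最小 sketch です。`make_ctx` という helper 名や、hook を lambda で書いている形は、この例のための仮定です。

```python
def make_ctx(event, tool_name=None, tool_input=None,
             tool_output=None, error=None):
    """どの hook にも同じ形で渡す event 情報の最小 sketch。"""
    return {"event": event, "tool_name": tool_name,
            "tool_input": tool_input, "tool_output": tool_output,
            "error": error}

audit_log = []
hooks = {
    "PreToolUse":  [lambda ctx: audit_log.append(("pre", ctx["tool_name"]))],
    "PostToolUse": [lambda ctx: audit_log.append(("post", ctx["tool_name"]))],
}

def run_hooks(event, ctx):
    for hook in hooks.get(event, []):
        hook(ctx)  # hook は観測と補助だけを行い、主状態は進めない

# ツール実行の前後に接続するイメージ
run_hooks("PreToolUse", make_ctx("PreToolUse", tool_name="bash",
                                 tool_input={"command": "ls"}))
tool_output = "file_a\nfile_b"  # 実際には handler(**tool_input) の戻り値
run_hooks("PostToolUse", make_ctx("PostToolUse", tool_name="bash",
                                  tool_output=tool_output))
```

`ctx` の形が全イベントで揃っているので、hook を増やしても読み手の負担が増えにくい、という本文の主張をそのまま反映しています。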
設定を読む + +```python +hooks = { + "PreToolUse": [...], + "PostToolUse": [...], + "SessionStart": [...], +} +``` + +### 2. 実行関数を作る + +```python +def run_hooks(event_name: str, ctx: dict): + for hook in hooks.get(event_name, []): + run_one_hook(hook, ctx) +``` + +### 3. ループに接続する + +```python +run_hooks("PreToolUse", ctx) +output = handler(**tool_input) +run_hooks("PostToolUse", ctx) +``` + +## 初学者が混乱しやすい点 + +### 1. Hook を第二の主ループのように考える + +そうすると制御が分裂して、一気に分かりにくくなります。 + +### 2. Hook ごとに別のデータ形を渡す + +新しい Hook を足すたびに、読む側の心智コストが増えてしまいます。 + +### 3. 何でも Hook に入れようとする + +Hook は便利ですが、メインの状態遷移まで押し込む場所ではありません。 + +## Try It + +```sh +cd learn-claude-code +python agents/s08_hook_system.py +``` + +見るポイント: + +1. どのイベントで Hook が走るか +2. Hook が主ループを壊さずに追加動作だけを行っているか +3. イベント情報の形が揃っているか diff --git a/docs/ja/s09-agent-teams.md b/docs/ja/s09-agent-teams.md deleted file mode 100644 index 671b6e660..000000000 --- a/docs/ja/s09-agent-teams.md +++ /dev/null @@ -1,125 +0,0 @@ -# s09: Agent Teams - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > [ s09 ] s10 > s11 > s12` - -> *"一人で終わらないなら、チームメイトに任せる"* -- 永続チームメイト + 非同期メールボックス。 -> -> **Harness 層**: チームメールボックス -- 複数モデルをファイルで協調。 - -## 問題 - -サブエージェント(s04)は使い捨てだ: 生成し、作業し、要約を返し、消滅する。アイデンティティもなく、呼び出し間の記憶もない。バックグラウンドタスク(s08)はシェルコマンドを実行するが、LLM誘導の意思決定はできない。 - -本物のチームワークには: (1)単一プロンプトを超えて存続する永続エージェント、(2)アイデンティティとライフサイクル管理、(3)エージェント間の通信チャネルが必要だ。 - -## 解決策 - -``` -Teammate lifecycle: - spawn -> WORKING -> IDLE -> WORKING -> ... -> SHUTDOWN - -Communication: - .team/ - config.json <- team roster + statuses - inbox/ - alice.jsonl <- append-only, drain-on-read - bob.jsonl - lead.jsonl - - +--------+ send("alice","bob","...") +--------+ - | alice | -----------------------------> | bob | - | loop | bob.jsonl << {json_line} | loop | - +--------+ +--------+ - ^ | - | BUS.read_inbox("alice") | - +---- alice.jsonl -> read + drain ---------+ -``` - -## 仕組み - -1. 
TeammateManagerがconfig.jsonでチーム名簿を管理する。 - -```python -class TeammateManager: - def __init__(self, team_dir: Path): - self.dir = team_dir - self.dir.mkdir(exist_ok=True) - self.config_path = self.dir / "config.json" - self.config = self._load_config() - self.threads = {} -``` - -2. `spawn()`がチームメイトを作成し、そのエージェントループをスレッドで開始する。 - -```python -def spawn(self, name: str, role: str, prompt: str) -> str: - member = {"name": name, "role": role, "status": "working"} - self.config["members"].append(member) - self._save_config() - thread = threading.Thread( - target=self._teammate_loop, - args=(name, role, prompt), daemon=True) - thread.start() - return f"Spawned teammate '{name}' (role: {role})" -``` - -3. MessageBus: 追記専用のJSONLインボックス。`send()`がJSON行を追記し、`read_inbox()`がすべて読み取ってドレインする。 - -```python -class MessageBus: - def send(self, sender, to, content, msg_type="message", extra=None): - msg = {"type": msg_type, "from": sender, - "content": content, "timestamp": time.time()} - if extra: - msg.update(extra) - with open(self.dir / f"{to}.jsonl", "a") as f: - f.write(json.dumps(msg) + "\n") - - def read_inbox(self, name): - path = self.dir / f"{name}.jsonl" - if not path.exists(): return "[]" - msgs = [json.loads(l) for l in path.read_text().strip().splitlines() if l] - path.write_text("") # drain - return json.dumps(msgs, indent=2) -``` - -4. 各チームメイトは各LLM呼び出しの前にインボックスを確認し、受信メッセージをコンテキストに注入する。 - -```python -def _teammate_loop(self, name, role, prompt): - messages = [{"role": "user", "content": prompt}] - for _ in range(50): - inbox = BUS.read_inbox(name) - if inbox != "[]": - messages.append({"role": "user", - "content": f"{inbox}"}) - response = client.messages.create(...) - if response.stop_reason != "tool_use": - break - # execute tools, append results... 
- self._find_member(name)["status"] = "idle" -``` - -## s08からの変更点 - -| Component | Before (s08) | After (s09) | -|----------------|------------------|----------------------------| -| Tools | 6 | 9 (+spawn/send/read_inbox) | -| Agents | Single | Lead + N teammates | -| Persistence | None | config.json + JSONL inboxes| -| Threads | Background cmds | Full agent loops per thread| -| Lifecycle | Fire-and-forget | idle -> working -> idle | -| Communication | None | message + broadcast | - -## 試してみる - -```sh -cd learn-claude-code -python agents/s09_agent_teams.py -``` - -1. `Spawn alice (coder) and bob (tester). Have alice send bob a message.` -2. `Broadcast "status update: phase 1 complete" to all teammates` -3. `Check the lead inbox for any messages` -4. `/team`と入力してステータス付きのチーム名簿を確認する -5. `/inbox`と入力してリーダーのインボックスを手動確認する diff --git a/docs/ja/s09-memory-system.md b/docs/ja/s09-memory-system.md new file mode 100644 index 000000000..9e1b94a6f --- /dev/null +++ b/docs/ja/s09-memory-system.md @@ -0,0 +1,184 @@ +# s09: Memory System + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > [ s09 ] > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *memory は会話の全部を保存する場所ではない。次のセッションでも残すべき事実だけを小さく持つ場所である。* + +## この章が解決する問題 + +memory がなければ、新しいセッションは毎回ゼロから始まります。 + +その結果、agent は何度も同じことを忘れます。 + +- ユーザーの好み +- すでに何度も訂正された注意点 +- コードだけでは分かりにくいプロジェクト事情 +- 外部参照の場所 + +そこで必要になるのが memory です。 + +## 最初に立てるべき境界 + +この章で最も大事なのは: + +**何でも memory に入れない** + +ことです。 + +memory に入れるべきなのは: + +- セッションをまたいでも価値がある +- 現在のリポジトリを読み直すだけでは分かりにくい + +こうした情報だけです。 + +## 主線とどう併読するか + +- memory を「長い context の置き場」だと思ってしまうなら、[`s06-context-compact.md`](./s06-context-compact.md) に戻って compact と durable memory を分けます。 +- `messages[]`、summary block、memory store が頭の中で混ざってきたら、[`data-structures.md`](./data-structures.md) を見ながら読みます。 +- このあと `s10` へ進むなら、[`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) を横に置くと、memory が次の入力へどう戻るかをつかみやすくなります。 + +## 初学者向けの 4 分類 + +### 1. 
`user` + +安定したユーザーの好み。 + +例: + +- `pnpm` を好む +- 回答は短めがよい + +### 2. `feedback` + +ユーザーが明示的に直した点。 + +例: + +- 生成ファイルは勝手に触らない +- テストの更新前に確認する + +### 3. `project` + +コードを見ただけでは分かりにくい持続的事情。 + +### 4. `reference` + +外部資料や外部ボードへの参照先。 + +## 入れてはいけないもの + +| 入れないもの | 理由 | +|---|---| +| ディレクトリ構造 | コードを読めば分かる | +| 関数名やシグネチャ | ソースが真実だから | +| 現在タスクの進捗 | task / plan の責務 | +| 一時的なブランチ名 | すぐ古くなる | +| 秘密情報 | 危険 | + +## 最小の心智モデル + +```text +conversation + | + | 長期的に残すべき事実が出る + v +save_memory + | + v +.memory/ + ├── MEMORY.md + ├── prefer_pnpm.md + └── ask_before_codegen.md + | + v +次回セッション開始時に再読込 +``` + +## 重要なデータ構造 + +### 1. 1 メモリ = 1 ファイル + +```md +--- +name: prefer_pnpm +description: User prefers pnpm over npm +type: user +--- +The user explicitly prefers pnpm for package management commands. +``` + +### 2. 小さな索引 + +```md +# Memory Index + +- prefer_pnpm [user] +- ask_before_codegen [feedback] +``` + +索引は内容そのものではなく、「何があるか」を素早く知るための地図です。 + +## 最小実装 + +```python +MEMORY_TYPES = ("user", "feedback", "project", "reference") +``` + +```python +def save_memory(name, description, mem_type, content): + path = memory_dir / f"{slugify(name)}.md" + path.write_text(render_frontmatter(name, description, mem_type) + content) + rebuild_index() +``` + +次に、セッション開始時に読み込みます。 + +```python +memories = memory_store.load_all() +``` + +そして `s10` で prompt 組み立てに入れます。 + +## 近い概念との違い + +### memory + +次回以降も役立つ事実。 + +### task + +いま何を完了したいか。 + +### plan + +このターンでどう進めるか。 + +### `CLAUDE.md` + +より安定した指示文書や standing rules。 + +## 初学者がよくやる間違い + +### 1. コードを読めば分かることまで保存する + +それは memory ではなく、重複です。 + +### 2. 現在の作業状況を memory に入れる + +それは task / plan の責務です。 + +### 3. 
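本文の `save_memory` は `slugify` や `rebuild_index` を省略しています。一時ディレクトリで動く自己完結の一例を示します。helper の実装と、索引で `[type]` 表示を省いた点は、この sketch の簡略化(仮定)です。

```python
import re
import tempfile
from pathlib import Path

MEMORY_TYPES = ("user", "feedback", "project", "reference")

def slugify(name):
    return re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")

def save_memory(memory_dir, name, description, mem_type, content):
    assert mem_type in MEMORY_TYPES
    path = memory_dir / f"{slugify(name)}.md"
    frontmatter = (f"---\nname: {name}\ndescription: {description}\n"
                   f"type: {mem_type}\n---\n")
    path.write_text(frontmatter + content)
    # 索引は内容ではなく「何があるか」だけを持つ小さな地図
    entries = [f"- {p.stem}" for p in sorted(memory_dir.glob("*.md"))
               if p.name != "MEMORY.md"]
    (memory_dir / "MEMORY.md").write_text(
        "# Memory Index\n\n" + "\n".join(entries) + "\n")
    return path

memory_dir = Path(tempfile.mkdtemp())
save_memory(memory_dir, "prefer pnpm", "User prefers pnpm over npm",
            "user", "The user explicitly prefers pnpm.\n")
index = (memory_dir / "MEMORY.md").read_text()
```

1 メモリ = 1 ファイル、索引は毎回作り直す、という 2 点だけ守れば、最小版としては十分です。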
memory を絶対真実のように扱う + +memory は古くなり得ます。 + +安全な原則は: + +**memory は方向を与え、現在観測は真実を与える。** + +## Try It + +```sh +cd learn-claude-code +python agents/s09_memory_system.py +``` diff --git a/docs/ja/s10-system-prompt.md b/docs/ja/s10-system-prompt.md new file mode 100644 index 000000000..3c1868b83 --- /dev/null +++ b/docs/ja/s10-system-prompt.md @@ -0,0 +1,156 @@ +# s10: System Prompt + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > [ s10 ] > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *system prompt は巨大な固定文字列ではなく、複数ソースから組み立てるパイプラインである。* + +## なぜこの章が必要か + +最初は 1 本の system prompt 文字列でも動きます。 + +しかし機能が増えると、入力の材料が増えます。 + +- 安定した役割説明 +- ツール一覧 +- skills +- memory +- `CLAUDE.md` +- 現在ディレクトリや日時のような動的状態 + +こうなると、1 本の固定文字列では心智が崩れます。 + +## 主線とどう併読するか + +- prompt をまだ「大きな謎の文字列」として見てしまうなら、[`s00a-query-control-plane.md`](./s00a-query-control-plane.md) に戻って、モデル入力がどの control 層を通るかを見直します。 +- どの順で何を組み立てるかを安定させたいなら、[`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) をこの章の橋渡し資料として併読します。 +- system rules、tool docs、memory、runtime state が 1 つの入力塊に見えてきたら、[`data-structures.md`](./data-structures.md) で入力片の出所を分け直します。 + +## 最小の心智モデル + +```text +1. core identity +2. tools +3. skills +4. memory +5. CLAUDE.md chain +6. dynamic runtime context +``` + +最後に順に連結します。 + +```text +core ++ tools ++ skills ++ memory ++ claude_md ++ dynamic_context += final model input +``` + +## 最も重要な境界 + +分けるべきなのは: + +- 安定したルール +- 毎ターン変わる補足情報 + +安定したもの: + +- 役割 +- 安全ルール +- ツール契約 +- 長期指示 + +動的なもの: + +- 現在日時 +- cwd +- 現在モード +- このターンだけの注意 + +## 最小 builder + +```python +class SystemPromptBuilder: + def build(self) -> str: + parts = [] + parts.append(self._build_core()) + parts.append(self._build_tools()) + parts.append(self._build_skills()) + parts.append(self._build_memory()) + parts.append(self._build_claude_md()) + parts.append(self._build_dynamic()) + return "\n\n".join(p for p in parts if p) +``` + +ここで重要なのは、各メソッドが 1 つの責務だけを持つことです。 + +## 1 本の大文字列より良い理由 + +### 1. 
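本文の `SystemPromptBuilder` の骨格は `_build_*` の中身を省略しています。動く形まで埋めた最小 sketch を示します。各段の文面はすべて仮の例で、重要なのは「空の段は最終結合で落ちる」「毎ターン変わる情報は `_build_dynamic` に隔離する」という構造だけです。

```python
import datetime

class SystemPromptBuilder:
    """各段が 1 つの責務だけを持つ最小 sketch。文面は仮の例。"""
    def __init__(self, tools, skills, memories, claude_md, cwd):
        self.tools, self.skills = tools, skills
        self.memories, self.claude_md, self.cwd = memories, claude_md, cwd

    def _build_core(self):
        return "You are a coding agent. Follow the safety rules."

    def _build_tools(self):
        return "## Tools\n" + "\n".join(f"- {t}" for t in self.tools)

    def _build_skills(self):
        return ("## Skills\n" + "\n".join(f"- {s}" for s in self.skills)
                if self.skills else "")  # 空の段は最終結合で落ちる

    def _build_memory(self):
        return ("## Memory\n" + "\n".join(f"- {m}" for m in self.memories)
                if self.memories else "")

    def _build_claude_md(self):
        return self.claude_md or ""

    def _build_dynamic(self):
        # 毎ターン変わる情報はこの段に隔離する
        return f"cwd: {self.cwd}\ndate: {datetime.date.today()}"

    def build(self):
        parts = [self._build_core(), self._build_tools(),
                 self._build_skills(), self._build_memory(),
                 self._build_claude_md(), self._build_dynamic()]
        return "\n\n".join(p for p in parts if p)

prompt = SystemPromptBuilder(
    tools=["read_file", "bash"], skills=[],
    memories=["prefer_pnpm [user]"],
    claude_md="# CLAUDE.md\nAsk before codegen.", cwd="/repo").build()
```

skills が空なら `## Skills` 段ごと消える、という性質が「どこから来た情報か分かる」構造の実益です。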
どこから来た情報か分かる + +### 2. 部分ごとにテストしやすい + +### 3. 安定部分と動的部分を分けて育てられる + +## `system prompt` と `system reminder` + +より分かりやすい考え方は: + +- `system prompt`: 安定した土台 +- `system reminder`: このターンだけの追加注意 + +こうすると、長期ルールと一時的ノイズが混ざりにくくなります。 + +## `CLAUDE.md` が独立した段なのはなぜか + +`CLAUDE.md` は memory でも skill でもありません。 + +より安定した指示文書の層です。 + +教学版では、次のように積み上げると理解しやすいです。 + +1. ユーザー級 +2. プロジェクト根 +3. サブディレクトリ級 + +重要なのは: + +**指示源は上書き一発ではなく、層として積める** + +ということです。 + +## memory とこの章の関係 + +memory は保存するだけでは意味がありません。 + +モデル入力に再び入って初めて、agent の行動に効いてきます。 + +だから: + +- `s09` で記憶する +- `s10` で入力に組み込む + +という流れになります。 + +## 初学者が混乱しやすい点 + +### 1. system prompt を固定文字列だと思う + +### 2. 毎回変わる情報も全部同じ塊に入れる + +### 3. skills、memory、`CLAUDE.md` を同じものとして扱う + +似て見えても責務は違います。 + +- `skills`: 任意の能力パッケージ +- `memory`: セッションをまたぐ事実 +- `CLAUDE.md`: 立ち続ける指示文書 + +## Try It + +```sh +cd learn-claude-code +python agents/s10_system_prompt.py +``` diff --git a/docs/ja/s10-team-protocols.md b/docs/ja/s10-team-protocols.md deleted file mode 100644 index fd19562d9..000000000 --- a/docs/ja/s10-team-protocols.md +++ /dev/null @@ -1,106 +0,0 @@ -# s10: Team Protocols - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > [ s10 ] s11 > s12` - -> *"チームメイト間には統一の通信ルールが必要"* -- 1つの request-response パターンが全交渉を駆動。 -> -> **Harness 層**: プロトコル -- モデル間の構造化されたハンドシェイク。 - -## 問題 - -s09ではチームメイトが作業し通信するが、構造化された協調がない: - -**シャットダウン**: スレッドを強制終了するとファイルが中途半端に書かれ、config.jsonが不正な状態になる。ハンドシェイクが必要 -- リーダーが要求し、チームメイトが承認(完了して退出)か拒否(作業継続)する。 - -**プラン承認**: リーダーが「認証モジュールをリファクタリングして」と言うと、チームメイトは即座に開始する。リスクの高い変更では、実行前にリーダーが計画をレビューすべきだ。 - -両方とも同じ構造: 一方がユニークIDを持つリクエストを送り、他方がそのIDで応答する。 - -## 解決策 - -``` -Shutdown Protocol Plan Approval Protocol -================== ====================== - -Lead Teammate Teammate Lead - | | | | - |--shutdown_req-->| |--plan_req------>| - | {req_id:"abc"} | | {req_id:"xyz"} | - | | | | - |<--shutdown_resp-| |<--plan_resp-----| - | {req_id:"abc", | | {req_id:"xyz", | - | approve:true} | | approve:true} | - -Shared FSM: - [pending] --approve--> 
[approved] - [pending] --reject---> [rejected] - -Trackers: - shutdown_requests = {req_id: {target, status}} - plan_requests = {req_id: {from, plan, status}} -``` - -## 仕組み - -1. リーダーがrequest_idを生成し、インボックス経由でシャットダウンを開始する。 - -```python -shutdown_requests = {} - -def handle_shutdown_request(teammate: str) -> str: - req_id = str(uuid.uuid4())[:8] - shutdown_requests[req_id] = {"target": teammate, "status": "pending"} - BUS.send("lead", teammate, "Please shut down gracefully.", - "shutdown_request", {"request_id": req_id}) - return f"Shutdown request {req_id} sent (status: pending)" -``` - -2. チームメイトがリクエストを受信し、承認または拒否で応答する。 - -```python -if tool_name == "shutdown_response": - req_id = args["request_id"] - approve = args["approve"] - shutdown_requests[req_id]["status"] = "approved" if approve else "rejected" - BUS.send(sender, "lead", args.get("reason", ""), - "shutdown_response", - {"request_id": req_id, "approve": approve}) -``` - -3. プラン承認も同一パターン。チームメイトがプランを提出(request_idを生成)、リーダーがレビュー(同じrequest_idを参照)。 - -```python -plan_requests = {} - -def handle_plan_review(request_id, approve, feedback=""): - req = plan_requests[request_id] - req["status"] = "approved" if approve else "rejected" - BUS.send("lead", req["from"], feedback, - "plan_approval_response", - {"request_id": request_id, "approve": approve}) -``` - -1つのFSM、2つの応用。同じ`pending -> approved | rejected`状態機械が、あらゆるリクエスト-レスポンスプロトコルに適用できる。 - -## s09からの変更点 - -| Component | Before (s09) | After (s10) | -|----------------|------------------|------------------------------| -| Tools | 9 | 12 (+shutdown_req/resp +plan)| -| Shutdown | Natural exit only| Request-response handshake | -| Plan gating | None | Submit/review with approval | -| Correlation | None | request_id per request | -| FSM | None | pending -> approved/rejected | - -## 試してみる - -```sh -cd learn-claude-code -python agents/s10_team_protocols.py -``` - -1. `Spawn alice as a coder. Then request her shutdown.` -2. 
`List teammates to see alice's status after shutdown approval` -3. `Spawn bob with a risky refactoring task. Review and reject his plan.` -4. `Spawn charlie, have him submit a plan, then approve it.` -5. `/team`と入力してステータスを監視する diff --git a/docs/ja/s10a-message-prompt-pipeline.md b/docs/ja/s10a-message-prompt-pipeline.md new file mode 100644 index 000000000..3866b81d6 --- /dev/null +++ b/docs/ja/s10a-message-prompt-pipeline.md @@ -0,0 +1,127 @@ +# s10a: Message / Prompt 組み立てパイプライン + +> これは `s10` を補う橋渡し文書です。 +> ここでの問いは: +> +> **モデルが実際に見る入力は、system prompt 1 本だけなのか。** + +## 結論 + +違います。 + +高完成度の system では、モデル入力は複数 source の合成物です。 + +たとえば: + +- stable system prompt blocks +- normalized messages +- memory section +- dynamic reminders +- tool instructions + +つまり system prompt は大事ですが、**入力全体の一部**です。 + +## 最小の心智モデル + +```text +stable rules + + +tool surface + + +memory / CLAUDE.md / skills + + +normalized messages + + +dynamic reminders + = +final model input +``` + +## 主要な構造 + +### `PromptParts` + +入力 source を組み立て前に分けて持つ構造です。 + +```python +parts = { + "core": "...", + "tools": "...", + "memory": "...", + "skills": "...", + "dynamic": "...", +} +``` + +### `SystemPromptBlock` + +1 本の巨大文字列ではなく、section 単位で扱うための単位です。 + +```python +block = { + "text": "...", + "cache_scope": None, +} +``` + +### `NormalizedMessage` + +API に渡す前に整えられた messages です。 + +```python +{ + "role": "user", + "content": [ + {"type": "text", "text": "..."} + ], +} +``` + +## なぜ分ける必要があるか + +### 1. 何が stable で何が dynamic かを分けるため + +- system rules は比較的 stable +- current messages は dynamic +- reminders はより短命 + +### 2. どの source が何を足しているか追えるようにするため + +source を混ぜて 1 本にすると: + +- memory がどこから来たか +- skill がいつ入ったか +- reminder がなぜ入ったか + +が見えにくくなります。 + +### 3. 
compact / recovery / retry の説明がしやすくなるため + +入力 source が分かれていると: + +- 何を再利用するか +- 何を要約するか +- 何を次ターンで作り直すか + +が明確になります。 + +## 初学者が混ぜやすい境界 + +### `Message` と `PromptBlock` + +- `Message`: 会話履歴 +- `PromptBlock`: system 側の説明断片 + +### `Memory` と `Prompt` + +- memory は内容 source +- prompt pipeline は source を組む仕組み + +### `Tool instructions` と `Messages` + +- tool instructions は model が使える surface の説明 +- messages は今まで起きた対話 / 結果 + +## 一文で覚える + +**system prompt は入力の全部ではなく、複数 source を束ねた pipeline の 1 つの section です。** diff --git a/docs/ja/s11-autonomous-agents.md b/docs/ja/s11-autonomous-agents.md deleted file mode 100644 index 4bc690e61..000000000 --- a/docs/ja/s11-autonomous-agents.md +++ /dev/null @@ -1,142 +0,0 @@ -# s11: Autonomous Agents - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > [ s11 ] s12` - -> *"チームメイトが自らボードを見て、仕事を取る"* -- リーダーが逐一割り振る必要はない。 -> -> **Harness 層**: 自律 -- 指示なしで仕事を見つけるモデル。 - -## 問題 - -s09-s10では、チームメイトは明示的に指示された時のみ作業する。リーダーは各チームメイトを特定のプロンプトでspawnしなければならない。タスクボードに未割り当てのタスクが10個あっても、リーダーが手動で各タスクを割り当てる。これはスケールしない。 - -真の自律性とは、チームメイトが自分で作業を見つけること: タスクボードをスキャンし、未確保のタスクを確保し、作業し、完了したら次を探す。 - -もう1つの問題: コンテキスト圧縮(s06)後にエージェントが自分の正体を忘れる可能性がある。アイデンティティ再注入がこれを解決する。 - -## 解決策 - -``` -Teammate lifecycle with idle cycle: - -+-------+ -| spawn | -+---+---+ - | - v -+-------+ tool_use +-------+ -| WORK | <------------- | LLM | -+---+---+ +-------+ - | - | stop_reason != tool_use (or idle tool called) - v -+--------+ -| IDLE | poll every 5s for up to 60s -+---+----+ - | - +---> check inbox --> message? ----------> WORK - | - +---> scan .tasks/ --> unclaimed? -------> claim -> WORK - | - +---> 60s timeout ----------------------> SHUTDOWN - -Identity re-injection after compression: - if len(messages) <= 3: - messages.insert(0, identity_block) -``` - -## 仕組み - -1. 
チームメイトのループはWORKとIDLEの2フェーズ。LLMがツール呼び出しを止めた時(または`idle`ツールを呼んだ時)、IDLEフェーズに入る。 - -```python -def _loop(self, name, role, prompt): - while True: - # -- WORK PHASE -- - messages = [{"role": "user", "content": prompt}] - for _ in range(50): - response = client.messages.create(...) - if response.stop_reason != "tool_use": - break - # execute tools... - if idle_requested: - break - - # -- IDLE PHASE -- - self._set_status(name, "idle") - resume = self._idle_poll(name, messages) - if not resume: - self._set_status(name, "shutdown") - return - self._set_status(name, "working") -``` - -2. IDLEフェーズがインボックスとタスクボードをポーリングする。 - -```python -def _idle_poll(self, name, messages): - for _ in range(IDLE_TIMEOUT // POLL_INTERVAL): # 60s / 5s = 12 - time.sleep(POLL_INTERVAL) - inbox = BUS.read_inbox(name) - if inbox: - messages.append({"role": "user", - "content": f"{inbox}"}) - return True - unclaimed = scan_unclaimed_tasks() - if unclaimed: - claim_task(unclaimed[0]["id"], name) - messages.append({"role": "user", - "content": f"Task #{unclaimed[0]['id']}: " - f"{unclaimed[0]['subject']}"}) - return True - return False # timeout -> shutdown -``` - -3. タスクボードスキャン: pendingかつ未割り当てかつブロックされていないタスクを探す。 - -```python -def scan_unclaimed_tasks() -> list: - unclaimed = [] - for f in sorted(TASKS_DIR.glob("task_*.json")): - task = json.loads(f.read_text()) - if (task.get("status") == "pending" - and not task.get("owner") - and not task.get("blockedBy")): - unclaimed.append(task) - return unclaimed -``` - -4. アイデンティティ再注入: コンテキストが短すぎる(圧縮が起きた)場合にアイデンティティブロックを挿入する。 - -```python -if len(messages) <= 3: - messages.insert(0, {"role": "user", - "content": f"You are '{name}', role: {role}, " - f"team: {team_name}. Continue your work."}) - messages.insert(1, {"role": "assistant", - "content": f"I am {name}. 
Continuing."}) -``` - -## s10からの変更点 - -| Component | Before (s10) | After (s11) | -|----------------|------------------|----------------------------| -| Tools | 12 | 14 (+idle, +claim_task) | -| Autonomy | Lead-directed | Self-organizing | -| Idle phase | None | Poll inbox + task board | -| Task claiming | Manual only | Auto-claim unclaimed tasks | -| Identity | System prompt | + re-injection after compress| -| Timeout | None | 60s idle -> auto shutdown | - -## 試してみる - -```sh -cd learn-claude-code -python agents/s11_autonomous_agents.py -``` - -1. `Create 3 tasks on the board, then spawn alice and bob. Watch them auto-claim.` -2. `Spawn a coder teammate and let it find work from the task board itself` -3. `Create tasks with dependencies. Watch teammates respect the blocked order.` -4. `/tasks`と入力してオーナー付きのタスクボードを確認する -5. `/team`と入力して誰が作業中でアイドルかを監視する diff --git a/docs/ja/s11-error-recovery.md b/docs/ja/s11-error-recovery.md new file mode 100644 index 000000000..ee9e62345 --- /dev/null +++ b/docs/ja/s11-error-recovery.md @@ -0,0 +1,396 @@ +# s11: Error Recovery + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > [ s11 ] > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *error は例外イベントではなく、main loop が最初から用意しておくべき通常分岐です。* + +## この章が解く問題 + +`s10` まで来ると agent はもう demo ではありません。 + +すでに system には、 + +- main loop +- tool use +- planning +- compaction +- permission +- hook +- memory +- prompt assembly + +があります。 + +こうなると failure も自然に増えます。 + +- model output が途中で切れる +- context が大きすぎて request が入らない +- API timeout や rate limit で一時的に失敗する + +もし recovery がなければ、main loop は最初の失敗で止まります。 + +そして初心者はよく、 + +> agent が不安定なのは model が弱いからだ + +と誤解します。 + +しかし実際には多くの failure は、 + +**task そのものが失敗したのではなく、この turn の続け方を変える必要があるだけ** + +です。 + +この章の目標は 1 つです。 + +**「error が出たら停止」から、「error の種類を見て recovery path を選ぶ」へ進むこと** + +です。 + +## 併読すると楽になる資料 + +- 今の query がなぜまだ続いているのか見失ったら [`s00c-query-transition-model.md`](./s00c-query-transition-model.md) +- compact と recovery が同じ mechanism に見えたら 
[`s06-context-compact.md`](./s06-context-compact.md) +- このあと `s12` へ進む前に、recovery state と durable task state を混ぜたくなったら [`data-structures.md`](./data-structures.md) + +## 先に言葉をそろえる + +### recovery とは何か + +recovery は「error をなかったことにする」ことではありません。 + +意味は次です。 + +- これは一時的 failure かを判定する +- 一時的なら有限回の補救動作を試す +- だめなら明示的に fail として返す + +### retry budget とは何か + +retry budget は、 + +> 最大で何回までこの recovery path を試すか + +です。 + +例: + +- continuation は最大 3 回 +- transport retry は最大 3 回 + +これがないと loop が無限に回る危険があります。 + +### state machine とは何か + +この章での state machine は難しい theory ではありません。 + +単に、 + +> normal execution と各 recovery branch を、明確な状態遷移として見ること + +です。 + +この章から query の進行は次のように見えるようになります。 + +- normal +- continue after truncation +- compact then retry +- backoff then retry +- final fail + +## 最小心智モデル + +最初は 3 種類の failure だけ区別できれば十分です。 + +```text +1. output truncated + model はまだ言い終わっていないが token が尽きた + +2. context too large + request 全体が model window に入らない + +3. transient transport failure + timeout / rate limit / temporary connection issue +``` + +それぞれに対応する recovery path はこうです。 + +```text +LLM call + | + +-- stop_reason == "max_tokens" + | -> continuation message を入れる + | -> retry + | + +-- prompt too long + | -> compact する + | -> retry + | + +-- timeout / rate limit / connection error + -> 少し待つ + -> retry +``` + +これが最小ですが、十分に正しい recovery model です。 + +## この章の核になるデータ構造 + +### 1. Recovery State + +```python +recovery_state = { + "continuation_attempts": 0, + "compact_attempts": 0, + "transport_attempts": 0, +} +``` + +役割は 2 つあります。 + +- 各 recovery path ごとの retry 回数を分けて数える +- 無限 recovery を防ぐ + +### 2. Recovery Decision + +```python +{ + "kind": "continue" | "compact" | "backoff" | "fail", + "reason": "why this branch was chosen", +} +``` + +ここで大事なのは、 + +**error の見た目と、次に選ぶ動作を分ける** + +ことです。 + +この分離があると loop が読みやすくなります。 + +### 3. Continuation Message + +```python +CONTINUE_MESSAGE = ( + "Output limit hit. Continue directly from where you stopped. " + "Do not restart or repeat." 
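+    # 注(教材上の補足): 上の 2 行は隣接する文字列リテラルなので、
+    # Python では自動的に 1 本の文字列へ連結される。
+    # wording 自体は一例で、「続きから」「繰り返さない」を明示していれば調整してよい。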
+)
+```
+
+この message は地味ですが非常に重要です。
+
+なぜなら model は「続けて」とだけ言うと、
+
+- 最初から言い直す
+- もう一度要約し直す
+- 直前の内容を繰り返す
+
+ことがあるからです。
+
+## 最小実装を段階で追う
+
+### 第 1 段階: recovery chooser を作る
+
+```python
+def choose_recovery(stop_reason: str | None, error_text: str | None) -> dict:
+    if stop_reason == "max_tokens":
+        return {"kind": "continue", "reason": "output truncated"}
+
+    if error_text and "prompt" in error_text and "long" in error_text:
+        return {"kind": "compact", "reason": "context too large"}
+
+    if error_text and any(word in error_text for word in [
+        "timeout", "rate", "unavailable", "connection"
+    ]):
+        return {"kind": "backoff", "reason": "transient transport failure"}
+
+    return {"kind": "fail", "reason": "unknown or non-recoverable error"}
+```
+
+この関数がやっている本質は、
+
+**まず分類し、そのあと branch を返す**
+
+という 1 点です。
+
+### 第 2 段階: main loop に差し込む
+
+```python
+while True:
+    try:
+        response = client.messages.create(...)
+        if response.stop_reason in ("end_turn", "tool_use"):
+            decision = {"kind": "ok", "reason": "normal stop"}
+        else:
+            decision = choose_recovery(response.stop_reason, None)
+    except Exception as e:
+        response = None
+        decision = choose_recovery(None, str(e).lower())
+
+    if decision["kind"] == "continue":
+        messages.append({"role": "user", "content": CONTINUE_MESSAGE})
+        continue
+
+    if decision["kind"] == "compact":
+        messages = auto_compact(messages)
+        continue
+
+    if decision["kind"] == "backoff":
+        time.sleep(backoff_delay(...))
+        continue
+
+    if decision["kind"] == "fail":
+        break
+
+    # normal tool handling
+```
+
+ここで一番大事なのは、
+
+- catch したら即 stop
+
+ではなく、
+
+- 何の失敗かを見る
+- どの recovery path を試すか決める
+
+という構造です。
+
+なお、`end_turn` や `tool_use` のような normal な stop_reason は recovery の対象外です。この guard を入れずに全部 `choose_recovery` へ流すと、通常の turn まで `fail` 扱いになり、tool handling に到達しなくなります。
+
+## 3 つの主 recovery path が埋めている穴
+
+### 1. continuation
+
+これは「model が言い終わる前に output budget が切れた」問題を埋めます。
+
+本質は、
+
+> task が失敗したのではなく、1 turn の出力空間が足りなかった
+
+ということです。
+
+最小形はこうです。
+
+```python
+if response.stop_reason == "max_tokens":
+    if state["continuation_attempts"] >= 3:
+        return "Error: output recovery exhausted"
+    state["continuation_attempts"] += 1
+    messages.append({"role": "user", "content": CONTINUE_MESSAGE})
+    continue
+```
+
+### 2. 
compact + +これは「task が無理」ではなく、 + +> active context が大きすぎて request が入らない + +ときに使います。 + +ここで大事なのは、compact を delete と考えないことです。 + +compact は、 + +**過去を、そのままの原文ではなく、まだ続行可能な summary へ変換する** + +操作です。 + +最小例: + +```python +def auto_compact(messages: list) -> list: + summary = summarize_messages(messages) + return [{ + "role": "user", + "content": "This session was compacted. Continue from this summary:\n" + summary, + }] +``` + +最低限 summary に残したいのは次です。 + +- 今の task は何か +- 何をすでに終えたか +- 重要 decision は何か +- 次に何をするつもりか + +### 3. backoff + +これは timeout、rate limit、temporary connection issue のような + +**時間を置けば通るかもしれない failure** + +に対して使います。 + +考え方は単純です。 + +```python +if decision["kind"] == "backoff": + if state["transport_attempts"] >= 3: + break + state["transport_attempts"] += 1 + time.sleep(backoff_delay(state["transport_attempts"])) + continue +``` + +ここで大切なのは「retry すること」よりも、 + +**retry にも budget があり、同じ速度で無限に叩かないこと** + +です。 + +## compact と recovery を混ぜない + +これは初学者が特に混ぜやすい点です。 + +- `s06` の compact は context hygiene のために行うことがある +- `s11` の compact recovery は request failure から戻るために行う + +同じ compact という操作でも、 + +**目的が違います。** + +目的が違えば、それを呼ぶ branch も別に見るべきです。 + +## recovery は query の continuation 理由でもある + +`s11` の重要な学びは、error handling を `except` の奥へ隠さないことです。 + +むしろ次を explicit に持つ方が良いです。 + +- なぜまだ続いているのか +- 何回その branch を試したのか +- 次にどの branch を試すのか + +すると recovery は hidden plumbing ではなく、 + +**query transition を説明する状態** + +になります。 + +## 初学者が混ぜやすいポイント + +### 1. すべての failure に同じ retry をかける + +truncation と transport error は同じ問題ではありません。 + +### 2. retry budget を持たない + +無限 loop の原因になります。 + +### 3. compact と recovery を 1 つの話にしてしまう + +context hygiene と failure recovery は目的が違います。 + +### 4. continuation message を曖昧にする + +「続けて」だけでは model が restart / repeat しやすいです。 + +### 5. なぜ続行しているのかを state に残さない + +debug も teaching も急に難しくなります。 + +## この章を読み終えたら何が言えるべきか + +1. 多くの error は task failure ではなく、「この turn の続け方を変えるべき」信号である +2. recovery は `continue / compact / backoff / fail` の branch として考えられる +3. 
recovery path ごとに budget を持たないと loop が壊れやすい + +## 一文で覚える + +**Error Recovery とは、failure を見た瞬間に止まるのではなく、failure の種類に応じて continuation path を選び直す control layer です。** diff --git a/docs/ja/s07-task-system.md b/docs/ja/s12-task-system.md similarity index 76% rename from docs/ja/s07-task-system.md rename to docs/ja/s12-task-system.md index 0a500a87c..62c0a4fbd 100644 --- a/docs/ja/s07-task-system.md +++ b/docs/ja/s12-task-system.md @@ -1,6 +1,6 @@ -# s07: Task System +# s12: Task System -`s01 > s02 > s03 > s04 > s05 > s06 | [ s07 ] s08 > s09 > s10 > s11 > s12` +`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > [ s12 ]` > *"大きな目標を小タスクに分解し、順序付けし、ディスクに記録する"* -- ファイルベースのタスクグラフ、マルチエージェント協調の基盤。 > @@ -12,6 +12,12 @@ s03のTodoManagerはメモリ上のフラットなチェックリストに過ぎ 明示的な関係がなければ、エージェントは何が実行可能で、何がブロックされ、何が同時に走れるかを判断できない。しかもリストはメモリ上にしかないため、コンテキスト圧縮(s06)で消える。 +## 主線とどう併読するか + +- `s03` からそのまま来たなら、[`data-structures.md`](./data-structures.md) へ戻って `TodoItem` / `PlanState` と `TaskRecord` を分けます。 +- object 境界が混ざり始めたら、[`entity-map.md`](./entity-map.md) で message、task、runtime task、teammate を分離してから戻ります。 +- 次に `s13` を読むなら、[`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) を横に置いて、durable task と runtime task を同じ言葉で潰さないようにします。 + ## 解決策 フラットなチェックリストをディスクに永続化する**タスクグラフ**に昇格させる。各タスクは1つのJSONファイルで、ステータス・前方依存(`blockedBy`)を持つ。タスクグラフは常に3つの問いに答える: @@ -44,7 +50,7 @@ s03のTodoManagerはメモリ上のフラットなチェックリストに過ぎ ステータス: pending -> in_progress -> completed ``` -このタスクグラフは s07 以降の全メカニズムの協調バックボーンとなる: バックグラウンド実行(s08)、マルチエージェントチーム(s09+)、worktree分離(s12)はすべてこの同じ構造を読み書きする。 +このタスクグラフは後続の runtime / platform 章の協調バックボーンになる: バックグラウンド実行(`s13`)、マルチエージェントチーム(`s15+`)、worktree 分離(`s18`)はすべてこの durable な構造の恩恵を受ける。 ## 仕組み @@ -106,11 +112,11 @@ TOOL_HANDLERS = { } ``` -s07以降、タスクグラフがマルチステップ作業のデフォルト。s03のTodoは軽量な単一セッション用チェックリストとして残る。 +`s12` 以降、タスクグラフが durable なマルチステップ作業のデフォルトになる。`s03` の Todo は軽量な単一セッション用チェックリストとして残る。 ## s06からの変更点 -| コンポーネント | Before (s06) | After (s07) | +| コンポーネント | Before (s06) | After (s12) | |---|---|---| | 
Tools | 5 | 8 (`task_create/update/list/get`) | | 計画モデル | フラットチェックリスト (メモリ) | 依存関係付きタスクグラフ (ディスク) | @@ -122,10 +128,23 @@ s07以降、タスクグラフがマルチステップ作業のデフォルト ```sh cd learn-claude-code -python agents/s07_task_system.py +python agents/s12_task_system.py ``` 1. `Create 3 tasks: "Setup project", "Write code", "Write tests". Make them depend on each other in order.` 2. `List all tasks and show the dependency graph` 3. `Complete task 1 and then list tasks to see task 2 unblocked` 4. `Create a task board for refactoring: parse -> transform -> emit -> test, where transform and emit can run in parallel after parse` + +## 教学上の境界 + +このリポジトリで本当に重要なのは、完全な製品向け保存層の再現ではありません。 + +重要なのは: + +- durable なタスク記録 +- 明示的な依存エッジ +- 分かりやすい状態遷移 +- 後続章が再利用できる構造 + +この 4 点を自分で実装できれば、タスクシステムの核心はつかめています。 diff --git a/docs/ja/s12-worktree-task-isolation.md b/docs/ja/s12-worktree-task-isolation.md deleted file mode 100644 index 380422c52..000000000 --- a/docs/ja/s12-worktree-task-isolation.md +++ /dev/null @@ -1,121 +0,0 @@ -# s12: Worktree + Task Isolation - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > [ s12 ]` - -> *"各自のディレクトリで作業し、互いに干渉しない"* -- タスクは目標を管理、worktree はディレクトリを管理、IDで紐付け。 -> -> **Harness 層**: ディレクトリ隔離 -- 決して衝突しない並列実行レーン。 - -## 問題 - -s11までにエージェントはタスクを自律的に確保して完了できるようになった。しかし全タスクが1つの共有ディレクトリで走る。2つのエージェントが同時に異なるモジュールをリファクタリングすると衝突する: 片方が`config.py`を編集し、もう片方も`config.py`を編集し、未コミットの変更が混ざり合い、どちらもクリーンにロールバックできない。 - -タスクボードは*何をやるか*を追跡するが、*どこでやるか*には関知しない。解決策: 各タスクに専用のgit worktreeディレクトリを与える。タスクが目標を管理し、worktreeが実行コンテキストを管理する。タスクIDで紐付ける。 - -## 解決策 - -``` -Control plane (.tasks/) Execution plane (.worktrees/) -+------------------+ +------------------------+ -| task_1.json | | auth-refactor/ | -| status: in_progress <------> branch: wt/auth-refactor -| worktree: "auth-refactor" | task_id: 1 | -+------------------+ +------------------------+ -| task_2.json | | ui-login/ | -| status: pending <------> branch: wt/ui-login -| worktree: "ui-login" | task_id: 2 | -+------------------+ 
+------------------------+ - | - index.json (worktree registry) - events.jsonl (lifecycle log) - -State machines: - Task: pending -> in_progress -> completed - Worktree: absent -> active -> removed | kept -``` - -## 仕組み - -1. **タスクを作成する。** まず目標を永続化する。 - -```python -TASKS.create("Implement auth refactor") -# -> .tasks/task_1.json status=pending worktree="" -``` - -2. **worktreeを作成してタスクに紐付ける。** `task_id`を渡すと、タスクが自動的に`in_progress`に遷移する。 - -```python -WORKTREES.create("auth-refactor", task_id=1) -# -> git worktree add -b wt/auth-refactor .worktrees/auth-refactor HEAD -# -> index.json gets new entry, task_1.json gets worktree="auth-refactor" -``` - -紐付けは両側に状態を書き込む: - -```python -def bind_worktree(self, task_id, worktree): - task = self._load(task_id) - task["worktree"] = worktree - if task["status"] == "pending": - task["status"] = "in_progress" - self._save(task) -``` - -3. **worktree内でコマンドを実行する。** `cwd`が分離ディレクトリを指す。 - -```python -subprocess.run(command, shell=True, cwd=worktree_path, - capture_output=True, text=True, timeout=300) -``` - -4. **終了処理。** 2つの選択肢: - - `worktree_keep(name)` -- ディレクトリを保持する。 - - `worktree_remove(name, complete_task=True)` -- ディレクトリを削除し、紐付けられたタスクを完了し、イベントを発行する。1回の呼び出しで後片付けと完了を処理する。 - -```python -def remove(self, name, force=False, complete_task=False): - self._run_git(["worktree", "remove", wt["path"]]) - if complete_task and wt.get("task_id") is not None: - self.tasks.update(wt["task_id"], status="completed") - self.tasks.unbind_worktree(wt["task_id"]) - self.events.emit("task.completed", ...) -``` - -5. 
**イベントストリーム。** ライフサイクルの各ステップが`.worktrees/events.jsonl`に記録される: - -```json -{ - "event": "worktree.remove.after", - "task": {"id": 1, "status": "completed"}, - "worktree": {"name": "auth-refactor", "status": "removed"}, - "ts": 1730000000 -} -``` - -発行されるイベント: `worktree.create.before/after/failed`, `worktree.remove.before/after/failed`, `worktree.keep`, `task.completed`。 - -クラッシュ後も`.tasks/` + `.worktrees/index.json`から状態を再構築できる。会話メモリは揮発性だが、ファイル状態は永続的だ。 - -## s11からの変更点 - -| Component | Before (s11) | After (s12) | -|--------------------|----------------------------|----------------------------------------------| -| Coordination | Task board (owner/status) | Task board + explicit worktree binding | -| Execution scope | Shared directory | Task-scoped isolated directory | -| Recoverability | Task status only | Task status + worktree index | -| Teardown | Task completion | Task completion + explicit keep/remove | -| Lifecycle visibility | Implicit in logs | Explicit events in `.worktrees/events.jsonl` | - -## 試してみる - -```sh -cd learn-claude-code -python agents/s12_worktree_task_isolation.py -``` - -1. `Create tasks for backend auth and frontend login page, then list tasks.` -2. `Create worktree "auth-refactor" for task 1, then bind task 2 to a new worktree "ui-login".` -3. `Run "git status --short" in worktree "auth-refactor".` -4. `Keep worktree "ui-login", then list worktrees and inspect events.` -5. 
`Remove worktree "auth-refactor" with complete_task=true, then list tasks/worktrees/events.` diff --git a/docs/ja/s13-background-tasks.md b/docs/ja/s13-background-tasks.md new file mode 100644 index 000000000..9d60025cd --- /dev/null +++ b/docs/ja/s13-background-tasks.md @@ -0,0 +1,390 @@ +# s13: バックグラウンドタスク + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > [ s13 ] > s14 > s15 > s16 > s17 > s18 > s19` + +> *遅い command は横で待たせればよく、main loop まで一緒に止まる必要はありません。* + +## この章が解く問題 + +前の章までの tool call は、基本的に次の形でした。 + +```text +model が tool を要求する + -> +すぐ実行する + -> +すぐ結果を返す +``` + +短い command ならこれで問題ありません。 + +でも次のような処理はすぐに詰まります。 + +- `npm install` +- `pytest` +- `docker build` +- 重い code generation +- 長時間の lint / typecheck + +もし main loop がその完了を同期的に待ち続けると、2 つの問題が起きます。 + +- model は待ち時間のあいだ次の判断へ進めない +- user は別の軽い作業を進めたいのに、agent 全体が足止めされる + +この章で入れるのは、 + +**遅い実行を background へ逃がし、main loop は次の仕事へ進めるようにすること** + +です。 + +## 併読すると楽になる資料 + +- `task goal` と `live execution slot` がまだ混ざるなら [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) +- `RuntimeTaskRecord` と task board の境界を見直したいなら [`data-structures.md`](./data-structures.md) +- background execution が「別の main loop」に見えてきたら [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md) + +## 先に言葉をそろえる + +### foreground とは何か + +ここで言う foreground は、 + +> この turn の中で今すぐ結果が必要なので、main loop がその場で待つ実行 + +です。 + +### background とは何か + +background は謎の裏世界ではありません。 + +意味は単純で、 + +> command を別の execution line に任せ、main loop は先に別のことを進める + +ことです。 + +### 通知キューとは何か + +background task が終わっても、その完全な出力をいきなり model へ丸ごと押し込む必要はありません。 + +いったん queue に要約通知として積み、 + +> 次の model call の直前にまとめて main loop へ戻す + +のが分かりやすい設計です。 + +## 最小心智モデル + +この章で最も大切な 1 文は次です。 + +**並行になるのは実行と待機であって、main loop 自体が増えるわけではありません。** + +図にするとこうです。 + +```text +Main loop + | + +-- background_run("pytest") + | -> すぐ task_id を返す + | + +-- そのまま別の仕事を続ける + | + +-- 次の model call の前 + -> drain_notifications() + -> 結果要約を messages へ注入 + +Background lane + | 
+ +-- 実際に subprocess を実行 + +-- 終了後に result preview を queue へ積む +``` + +この図を保ったまま理解すれば、後でもっと複雑な runtime へ進んでも心智が崩れにくくなります。 + +## この章の核になるデータ構造 + +### 1. RuntimeTaskRecord + +この章で扱う background task は durable task board の task とは別物です。 + +教材コードでは、background 実行はおおむね次の record を持ちます。 + +```python +task = { + "id": "a1b2c3d4", + "command": "pytest", + "status": "running", + "started_at": 1710000000.0, + "finished_at": None, + "result_preview": "", + "output_file": ".runtime-tasks/a1b2c3d4.log", +} +``` + +各 field の意味は次の通りです。 + +- `id`: runtime slot の識別子 +- `command`: 今走っている command +- `status`: `running` / `completed` / `timeout` / `error` +- `started_at`: いつ始まったか +- `finished_at`: いつ終わったか +- `result_preview`: model に戻す短い要約 +- `output_file`: 完全出力の保存先 + +教材版ではこれを disk 上にも分けて残します。 + +```text +.runtime-tasks/ + a1b2c3d4.json + a1b2c3d4.log +``` + +これで読者は、 + +- `json` は状態 record +- `log` は完全出力 +- model へ戻すのはまず preview + +という 3 層を自然に見分けられます。 + +### 2. Notification + +background result はまず notification queue に入ります。 + +```python +notification = { + "task_id": "a1b2c3d4", + "status": "completed", + "command": "pytest", + "preview": "42 tests passed", + "output_file": ".runtime-tasks/a1b2c3d4.log", +} +``` + +notification の役割は 1 つだけです。 + +> main loop に「結果が戻ってきた」と知らせること + +ここに完全出力の全量を埋め込む必要はありません。 + +## 最小実装を段階で追う + +### 第 1 段階: background manager を持つ + +最低限必要なのは次の 2 つの状態です。 + +- `tasks`: いま存在する runtime task +- `_notification_queue`: main loop にまだ回収されていない結果 + +```python +class BackgroundManager: + def __init__(self): + self.tasks = {} + self._notification_queue = [] + self._lock = threading.Lock() +``` + +ここで lock を置いているのは、background thread と main loop が同じ queue / dict を触るからです。 + +### 第 2 段階: `run()` はすぐ返す + +background 化の一番大きな変化はここです。 + +```python +def run(self, command: str) -> str: + task_id = str(uuid.uuid4())[:8] + self.tasks[task_id] = { + "id": task_id, + "status": "running", + "command": command, + "started_at": time.time(), + } + + thread = threading.Thread( + 
target=self._execute, + args=(task_id, command), + daemon=True, + ) + thread.start() + return task_id +``` + +重要なのは thread 自体より、 + +**main loop が結果ではなく `task_id` を受け取り、先に進める** + +ことです。 + +### 第 3 段階: subprocess が終わったら notification を積む + +```python +def _execute(self, task_id: str, command: str): + try: + result = subprocess.run(..., timeout=300) + status = "completed" + preview = (result.stdout + result.stderr)[:500] + except subprocess.TimeoutExpired: + status = "timeout" + preview = "command timed out" + + with self._lock: + self.tasks[task_id]["status"] = status + self._notification_queue.append({ + "task_id": task_id, + "status": status, + "preview": preview, + }) +``` + +ここでの設計意図ははっきりしています。 + +- execution lane は command を実際に走らせる +- notification queue は main loop へ戻すための要約を持つ + +役割を分けることで、result transport が見やすくなります。 + +### 第 4 段階: 次の model call 前に queue を drain する + +```python +def agent_loop(messages: list): + while True: + notifications = BG.drain_notifications() + if notifications: + notif_text = "\n".join( + f"[bg:{n['task_id']}] {n['preview']}" for n in notifications + ) + messages.append({ + "role": "user", + "content": f"\n{notif_text}\n", + }) + messages.append({ + "role": "assistant", + "content": "Noted background results.", + }) +``` + +この構造が大切です。 + +結果は「いつでも割り込んで model へ押し込まれる」のではなく、 + +**次の model call の入口でまとめて注入される** + +からです。 + +### 第 5 段階: preview と full output を分ける + +教材コードでは `result_preview` と `output_file` を分けています。 + +これは初心者にも非常に大事な設計です。 + +なぜなら background result にはしばしば次の問題があるからです。 + +- 出力が長い +- model に全量を見せる必要がない +- user だけ詳細 log を見れば十分なことが多い + +そこでまず model には短い preview を返し、必要なら後で `read_file` 等で full log を読む形にします。 + +### 第 6 段階: stalled task も見られるようにする + +教材コードは `STALL_THRESHOLD_S` を持ち、長く走りすぎている task を拾えます。 + +```python +def detect_stalled(self) -> list[str]: + now = time.time() + stalled = [] + for task_id, info in self.tasks.items(): + if info["status"] != "running": + continue + elapsed = now - info.get("started_at", now) + if elapsed > 
STALL_THRESHOLD_S: + stalled.append(task_id) + return stalled +``` + +ここで学ぶべき本質は sophisticated monitoring ではありません。 + +**background 化したら「開始したまま返ってこないもの」を見張る観点が必要になる** + +ということです。 + +## これは task board の task とは違う + +ここは混ざりやすいので強調します。 + +`s12` の `task` は durable goal node です。 + +一方この章の background task は、 + +> いま実行中の live runtime slot + +です。 + +同じ `task` という言葉を使っても指している層が違います。 + +だから分からなくなったら、本文だけを往復せずに次へ戻るべきです。 + +- [`entity-map.md`](./entity-map.md) +- [`data-structures.md`](./data-structures.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +## 前の章とどうつながるか + +この章は `s12` の durable task graph を否定する章ではありません。 + +むしろ、 + +- `s12` が「何の仕事が存在するか」を管理し +- `s13` が「いまどの command が走っているか」を管理する + +という役割分担を教える章です。 + +後の `s14`、`s17`、`s18` へ行く前に、 + +**goal と runtime slot を分けて見る癖** + +をここで作っておくことが重要です。 + +## 初学者が混ぜやすいポイント + +### 1. background execution を「もう 1 本の main loop」と考える + +実際に増えているのは subprocess waiting lane であって、main conversational loop ではありません。 + +### 2. result を queue ではなく即座に messages へ乱暴に書き込む + +これでは model input の入口が分散し、system の流れが追いにくくなります。 + +### 3. full output と preview を分けない + +長い log で context がすぐあふれます。 + +### 4. runtime task と durable task を同一視する + +「いま走っている command」と「長く残る work goal」は別物です。 + +### 5. queue 操作に lock を使わない + +background thread と main loop の競合で状態が壊れやすくなります。 + +### 6. 
timeout / error を `completed` と同じように扱う + +戻すべき情報は同じではありません。終了理由は explicit に残すべきです。 + +## 教学上の境界 + +この章でまず理解すべき中心は、製品用の完全な async runtime ではありません。 + +中心は次の 3 行です。 + +- 遅い仕事を foreground から切り離す +- 結果は notification として main loop に戻す +- runtime slot は durable task board とは別層で管理する + +ここが腹落ちしてから、 + +- より複雑な scheduler +- 複数種類の background lane +- 分散 worker + +へ進めば十分です。 diff --git a/docs/ja/s13a-runtime-task-model.md b/docs/ja/s13a-runtime-task-model.md new file mode 100644 index 000000000..a82df1c45 --- /dev/null +++ b/docs/ja/s13a-runtime-task-model.md @@ -0,0 +1,262 @@ +# s13a: Runtime Task Model + +> この bridge doc はすぐに混ざる次の点をほどくためのものです。 +> +> **work graph 上の task と、いま実行中の task は同じものではありません。** + +## 主線とどう併読するか + +次の順で読むのが最も分かりやすいです。 + +- まず [`s12-task-system.md`](./s12-task-system.md) を読み、durable な work graph を固める +- 次に [`s13-background-tasks.md`](./s13-background-tasks.md) を読み、background execution を見る +- 用語が混ざり始めたら [`glossary.md`](./glossary.md) を見直す +- field を正確に合わせたいなら [`data-structures.md`](./data-structures.md) と [`entity-map.md`](./entity-map.md) を見直す + +## なぜこの橋渡しが必要か + +主線自体は正しいです。 + +- `s12` は task system +- `s13` は background tasks + +ただし bridge layer を一枚挟まないと、読者は二種類の「task」をすぐに同じ箱へ入れてしまいます。 + +例えば: + +- 「auth module を実装する」という work-graph task +- 「pytest を走らせる」という background execution +- 「alice がコード修正をしている」という teammate execution + +どれも日常語では task と呼べますが、同じ層にはありません。 + +## 二つの全く違う task + +### 1. work-graph task + +これは `s12` の durable node です。 + +答えるものは: + +- 何をやるか +- どの仕事がどの仕事に依存するか +- 誰が owner か +- 進捗はどうか + +つまり: + +> 目標として管理される durable work unit + +です。 + +### 2. 
runtime task + +こちらが答えるものは: + +- 今どの execution unit が生きているか +- それが何の type か +- running / completed / failed / killed のどれか +- 出力がどこにあるか + +つまり: + +> runtime の中で生きている execution slot + +です。 + +## 最小の心智モデル + +まず二つの表として分けて考えてください。 + +```text +work-graph task + - durable + - goal / dependency oriented + - 寿命が長い + +runtime task + - execution oriented + - output / status oriented + - 寿命が短い +``` + +両者の関係は「どちらか一方」ではありません。 + +```text +1 つの work-graph task + から +1 個以上の runtime task が派生しうる +``` + +例えば: + +```text +work-graph task: + "Implement auth module" + +runtime tasks: + 1. background で test を走らせる + 2. coder teammate を起動する + 3. 外部 service を monitor する +``` + +## なぜこの区別が重要か + +この境界が崩れると、後続章がすぐに絡み始めます。 + +- `s13` の background execution が `s12` の task board と混ざる +- `s15-s17` の teammate work がどこにぶら下がるか不明になる +- `s18` の worktree が何に紐づくのか曖昧になる + +最短の正しい要約はこれです。 + +**work-graph task は目標を管理し、runtime task は実行を管理する** + +## 主要 record + +### 1. `WorkGraphTaskRecord` + +これは `s12` の durable task です。 + +```python +task = { + "id": 12, + "subject": "Implement auth module", + "status": "in_progress", + "blockedBy": [], + "blocks": [13], + "owner": "alice", + "worktree": "auth-refactor", +} +``` + +### 2. `RuntimeTaskState` + +教材版の最小形は次の程度で十分です。 + +```python +runtime_task = { + "id": "b8k2m1qz", + "type": "local_bash", + "status": "running", + "description": "Run pytest", + "start_time": 1710000000.0, + "end_time": None, + "output_file": ".task_outputs/b8k2m1qz.txt", + "notified": False, +} +``` + +重要 field は: + +- `type`: どの execution unit か +- `status`: active か terminal か +- `output_file`: 結果がどこにあるか +- `notified`: 結果を system がもう表に出したか + +### 3. 
`RuntimeTaskType` + +教材 repo ですべての type を即実装する必要はありません。 + +ただし runtime task は単なる shell 1 種ではなく、型族だと読者に見せるべきです。 + +最小表は: + +```text +local_bash +local_agent +remote_agent +in_process_teammate +monitor +workflow +``` + +## 最小実装の進め方 + +### Step 1: `s12` の task board はそのまま保つ + +ここへ runtime state を混ぜないでください。 + +### Step 2: 別の runtime task manager を足す + +```python +class RuntimeTaskManager: + def __init__(self): + self.tasks = {} +``` + +### Step 3: background work 開始時に runtime task を作る + +```python +def spawn_bash_task(command: str): + task_id = new_runtime_id() + runtime_tasks[task_id] = { + "id": task_id, + "type": "local_bash", + "status": "running", + "description": command, + } +``` + +### Step 4: 必要なら work graph へ結び戻す + +```python +runtime_tasks[task_id]["work_graph_task_id"] = 12 +``` + +初日から必須ではありませんが、teams や worktrees へ進むほど重要になります。 + +## 開発者が持つべき図 + +```text +Work Graph + task #12: Implement auth module + | + +-- runtime task A: local_bash (pytest) + +-- runtime task B: local_agent (coder worker) + +-- runtime task C: monitor (watch service status) + +Runtime Task Layer + A/B/C each have: + - own runtime ID + - own status + - own output + - own lifecycle +``` + +## 後続章とのつながり + +この層が明確になると、後続章がかなり読みやすくなります。 + +- `s13` の background command は runtime task +- `s15-s17` の teammate も runtime task の一種として見られる +- `s18` の worktree は主に durable work に紐づくが runtime execution にも影響する +- `s19` の monitor や async external work も runtime layer に落ちうる + +「裏で生きていて仕事を進めているもの」を見たら、まず二つ問います。 + +- これは work graph 上の durable goal か +- それとも runtime 上の live execution slot か + +## 初学者がやりがちな間違い + +### 1. background shell の state を task board に直接入れる + +durable task state と runtime execution state が混ざります。 + +### 2. 1 つの work-graph task は 1 つの runtime task しか持てないと思う + +現実の system では、1 つの goal から複数 execution unit が派生することは普通です。 + +### 3. 
両層で同じ status 語彙を使い回す + +例えば: + +- durable tasks: `pending / in_progress / completed` +- runtime tasks: `running / completed / failed / killed` + +可能な限り分けた方が安全です。 + +### 4. `output_file` や `notified` のような runtime 専用 field を軽視する + +durable task board はそこまで気にしませんが、runtime layer は強く依存します。 diff --git a/docs/ja/s14-cron-scheduler.md b/docs/ja/s14-cron-scheduler.md new file mode 100644 index 000000000..ecdc344a2 --- /dev/null +++ b/docs/ja/s14-cron-scheduler.md @@ -0,0 +1,182 @@ +# s14: Cron Scheduler + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > [ s14 ] > s15 > s16 > s17 > s18 > s19` + +> *バックグラウンドタスクが「遅い仕事をどう続けるか」を扱うなら、スケジューラは「未来のいつ仕事を始めるか」を扱う。* + +## この章が解決する問題 + +`s13` で、遅い処理をバックグラウンドへ逃がせるようになりました。 + +でもそれは「今すぐ始める仕事」です。 + +現実には: + +- 毎晩実行したい +- 毎週決まった時刻にレポートを作りたい +- 30 分後に再確認したい + +といった未来トリガーが必要になります。 + +この章の核心は: + +**未来の意図を今記録して、時刻が来たら新しい仕事として戻す** + +ことです。 + +## 教学上の境界 + +この章の中心は cron 構文の暗記ではありません。 + +本当に理解すべきなのは: + +**schedule record が通知になり、通知が主ループへ戻る流れ** + +です。 + +## 主線とどう併読するか + +- `schedule`、`task`、`runtime task` がまだ同じ object に見えるなら、[`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) に戻ります。 +- 1 つの trigger が最終的にどう主線へ戻るかを見たいなら、[`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) と一緒に読みます。 +- 未来トリガーが別の実行系に見えてきたら、[`data-structures.md`](./data-structures.md) で schedule record と runtime record を分け直します。 + +## 最小の心智モデル + +```text +1. schedule records +2. time checker +3. notification queue +``` + +流れ: + +```text +schedule_create(...) + -> +記録を保存 + -> +time checker が定期的に一致判定 + -> +一致したら scheduled notification を積む + -> +主ループがそれを新しい仕事として受け取る +``` + +重要なのは: + +**scheduler 自体は第二の agent ではない** + +ということです。 + +## 重要なデータ構造 + +### 1. schedule record + +```python +schedule = { + "id": "job_001", + "cron": "0 9 * * 1", + "prompt": "Run the weekly status report.", + "recurring": True, + "durable": True, + "created_at": 1710000000.0, + "last_fired_at": None, +} +``` + +### 2. 
scheduled notification
+
+```python
+{
+    "type": "scheduled_prompt",
+    "schedule_id": "job_001",
+    "prompt": "Run the weekly status report.",
+}
+```
+
+### 3. check interval
+
+教学版なら分単位で十分です。
+
+## 最小実装
+
+```python
+def create(self, cron_expr: str, prompt: str, recurring: bool = True):
+    job = {
+        "id": new_id(),
+        "cron": cron_expr,
+        "prompt": prompt,
+        "recurring": recurring,
+        "created_at": time.time(),
+        "last_fired_at": None,
+    }
+    self.jobs.append(job)
+    return job
+```
+
+```python
+def check_loop(self):
+    while True:
+        now = datetime.now()
+        self.check_jobs(now)
+        time.sleep(60)
+```
+
+```python
+def check_jobs(self, now):
+    for job in self.jobs:
+        if not cron_matches(job["cron"], now):
+            continue
+        # 同じ分に checker が複数回走っても重複発火しないよう、
+        # last_fired_at で guard する
+        if job["last_fired_at"] and now.timestamp() - job["last_fired_at"] < 60:
+            continue
+        self.queue.put({
+            "type": "scheduled_prompt",
+            "schedule_id": job["id"],
+            "prompt": job["prompt"],
+        })
+        job["last_fired_at"] = now.timestamp()
+```
+
+最後に主ループへ戻します。
+
+```python
+notifications = scheduler.drain()
+for item in notifications:
+    messages.append({
+        "role": "user",
+        "content": f"[scheduled:{item['schedule_id']}] {item['prompt']}",
+    })
+```
+
+## なぜ `s13` の後なのか
+
+この 2 章は近い問いを扱います。
+
+| 仕組み | 中心の問い |
+|---|---|
+| background tasks | 遅い仕事を止めずにどう続けるか |
+| scheduling | 未来の仕事をいつ始めるか |
+
+この順序の方が、初学者には自然です。
+
+## 初学者がやりがちな間違い
+
+### 1. cron 構文だけに意識を取られる
+
+本章の主線は、schedule record が notification になって主ループへ戻る流れです。cron parser の細部は後からで構いません。
+
+### 2. `last_fired_at` を持たない
+
+checker が同じ分に複数回走った瞬間、同一 job が重複発火します。
+
+### 3. スケジュールをメモリにしか置かない
+
+再起動した瞬間に、記録したはずの未来の意図がすべて消えます。
+
+### 4. 
未来トリガーの仕事を裏で黙って全部実行する + +より分かりやすい主線は: + +- trigger +- notify +- main loop が処理を決める + +です。 + +## Try It + +```sh +cd learn-claude-code +python agents/s14_cron_scheduler.py +``` diff --git a/docs/ja/s15-agent-teams.md b/docs/ja/s15-agent-teams.md new file mode 100644 index 000000000..a01a17a66 --- /dev/null +++ b/docs/ja/s15-agent-teams.md @@ -0,0 +1,426 @@ +# s15: Agent Teams + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > [ s15 ] > s16 > s17 > s18 > s19` + +> *subagent は一回きりの委譲に向く。team system が解くのは、「誰かが長く online で残り、繰り返し仕事を受け取り、互いに協調できる」状態です。* + +## この章が本当に解きたい問題 + +`s04` の subagent は、main agent が作業を小さく切り出すのに十分役立ちます。 + +ただし subagent には明確な境界があります。 + +```text +生成される + -> +少し作業する + -> +要約を返す + -> +消える +``` + +これは一回きりの調査や短い委譲にはとても向いています。 +しかし、次のような system を作りたいときには足りません。 + +- テスト担当の agent を長く待機させる +- リファクタ担当とテスト担当を並行して持ち続ける +- ある teammate が後のターンでも同じ責任を持ち続ける +- lead が後で同じ teammate へ再び仕事を振る + +つまり今不足しているのは「model call を 1 回増やすこと」ではありません。 + +不足しているのは: + +**名前・役割・inbox・状態を持った、長期的に存在する実行者の集まり** + +です。 + +## 併読のすすめ + +- teammate と `s04` の subagent をまだ同じものに見てしまうなら、[`entity-map.md`](./entity-map.md) に戻ります。 +- `s16-s18` まで続けて読むなら、[`team-task-lane-model.md`](./team-task-lane-model.md) を手元に置き、teammate、protocol request、task、runtime slot、worktree lane を混ぜないようにします。 +- 長く生きる teammate と background 実行の runtime slot が混ざり始めたら、[`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) で goal / execution の境界を先に固めます。 + +## まず用語をはっきり分ける + +### teammate とは何か + +ここでの `teammate` は: + +> 名前、役割、inbox、lifecycle を持ち、複数ターンにまたがって system 内へ残る agent + +のことです。 + +重要なのは「賢い helper」ではなく、**持続する actor** だという点です。 + +### roster とは何か + +`roster` は team member の名簿です。 + +少なくとも次を答えられる必要があります。 + +- 今 team に誰がいるか +- その人の role は何か +- その人は idle か、working か、shutdown 済みか + +### mailbox とは何か + +`mailbox` は各 teammate が持つ受信箱です。 + +他の member はそこへ message を送ります。 +受信側は、自分の次の work loop に入る前に mailbox を drain します。 + +この設計の利点は、協調が次のように見えることです。 + +- 誰が誰に送ったか +- どの member がまだ未読か +- どの 
message が actor 間通信なのか + +## 最小心智モデル + +この章をいちばん壊れにくく理解する方法は、各 teammate を次のように見ることです。 + +> 自分の `messages`、自分の mailbox、自分の agent loop を持った長期 actor + +```text +lead + | + +-- spawn alice (tester) + +-- spawn bob (refactorer) + | + +-- send message -> alice inbox + +-- send message -> bob inbox + +alice + | + +-- 自分の messages + +-- 自分の inbox + +-- 自分の agent loop + +bob + | + +-- 自分の messages + +-- 自分の inbox + +-- 自分の agent loop +``` + +この章の一番大事な対比は次です。 + +- subagent: 一回きりの探索 helper +- teammate: 長く存在し続ける協調 member + +## それまでの章にどう接続するか + +`s15` は単に「人数を増やす章」ではありません。 +`s12-s14` でできた task / runtime / schedule の上に、**長く残る実行者層**を足す章です。 + +接続の主線は次です。 + +```text +lead が「長く担当させたい仕事」を見つける + -> +teammate を spawn する + -> +team roster に登録する + -> +mailbox に仕事の手がかりや依頼を送る + -> +teammate が自分の inbox を drain する + -> +自分の agent loop と tools を回す + -> +結果を message / task update として返す +``` + +ここで見失ってはいけない境界は 4 つです。 + +1. `s12-s14` が作ったのは work layer であり、ここでは actor layer を足している +2. `s15` の default はまだ lead 主導である +3. structured protocol は次章 `s16` +4. 
autonomous claim は `s17` + +つまりこの章は、team system の中でもまだ: + +- 名付ける +- 残す +- 送る +- 受け取る + +という基礎層を作っている段階です。 + +## 主要データ構造 + +### `TeamMember` + +```python +member = { + "name": "alice", + "role": "tester", + "status": "working", +} +``` + +教学版では、まずこの 3 つが揃っていれば十分です。 + +- `name`: 誰か +- `role`: 何を主に担当するか +- `status`: 今どういう状態か + +最初から大量の field を足す必要はありません。 +この章で大事なのは「長く存在する actor が立ち上がること」です。 + +### `TeamConfig` + +```python +config = { + "team_name": "default", + "members": [member1, member2], +} +``` + +通常は次のような場所に置きます。 + +```text +.team/config.json +``` + +この record があると system は再起動後も、 + +- 以前誰がいたか +- 誰がどの role を持っていたか + +を失わずに済みます。 + +### `MessageEnvelope` + +```python +message = { + "type": "message", + "from": "lead", + "to": "alice", + "content": "Please review auth module.", + "timestamp": 1710000000.0, +} +``` + +`envelope` は「本文だけでなくメタ情報も含めて包んだ 1 件の message record」です。 + +これを使う理由: + +- sender が分かる +- receiver が分かる +- message type を分けられる +- mailbox を durable channel として扱える + +## 最小実装の進め方 + +### Step 1: まず roster を持つ + +```python +class TeammateManager: + def __init__(self, team_dir: Path): + self.team_dir = team_dir + self.config_path = team_dir / "config.json" + self.config = self._load_config() +``` + +この章の起点は roster です。 +roster がないまま team を語ると、結局「今この場で数回呼び出した model たち」にしか見えません。 + +### Step 2: teammate を spawn する + +```python +def spawn(self, name: str, role: str, prompt: str): + member = {"name": name, "role": role, "status": "working"} + self.config["members"].append(member) + self._save_config() + + thread = threading.Thread( + target=self._teammate_loop, + args=(name, role, prompt), + daemon=True, + ) + thread.start() +``` + +ここで大切なのは thread という実装選択そのものではありません。 +大切なのは次のことです。 + +**一度 spawn された teammate は、一回限りの tool call ではなく、継続する lifecycle を持つ** + +### Step 3: 各 teammate に mailbox を持たせる + +教学版で一番分かりやすいのは JSONL inbox です。 + +```text +.team/inbox/alice.jsonl +.team/inbox/bob.jsonl +``` + +送信側: + +```python +def send(self, sender: str, to: str, content: str): + 
with open(f".team/inbox/{to}.jsonl", "a") as f:
+        f.write(json.dumps({
+            "type": "message",
+            "from": sender,
+            "to": to,
+            "content": content,
+            "timestamp": time.time(),
+        }) + "\n")
+```
+
+受信側:
+
+1. すべて読む
+2. JSON として parse する
+3. 読み終わったら inbox を drain する
+
+ここで教えたいのは storage trick ではありません。
+
+教えたいのは:
+
+**協調は shared `messages[]` ではなく、mailbox boundary を通して起こる**
+
+という構造です。
+
+### Step 4: teammate は毎ラウンド mailbox を先に確認する
+
+```python
+def teammate_loop(name: str, role: str, prompt: str):
+    messages = [{"role": "user", "content": prompt}]
+
+    while True:
+        inbox = bus.read_inbox(name)
+        for item in inbox:
+            messages.append({"role": "user", "content": json.dumps(item)})
+
+        response = client.messages.create(...)
+        ...
+```
+
+この step をあいまいにすると、読者はすぐこう誤解します。
+
+- 新しい仕事を与えるたびに teammate を再生成するのか
+- 元の context はどこに残るのか
+
+正しくは:
+
+- teammate は残る
+- messages も残る
+- 新しい仕事は inbox 経由で入る
+- 次ラウンドに入る前に mailbox を見る
+
+です。
+
+## Teammate / Subagent / Runtime Slot をどう分けるか
+
+この段階で最も混ざりやすいのはこの 3 つです。
+次の表をそのまま覚えて構いません。
+
+| 仕組み | 何に近いか | lifecycle | 核心境界 |
+|---|---|---|---|
+| subagent | 一回きりの外部委託 helper | 作って、少し働いて、終わる | 小さな探索文脈の隔離 |
+| runtime slot | 実行中の background slot | その実行が終われば消える | 長い execution を追跡する |
+| teammate | 長期に残る team member | idle と working を行き来する | 名前、role、mailbox、独立 loop |
+
+口語的に言い換えると:
+
+- subagent: 「ちょっと調べて戻ってきて」
+- runtime slot: 「これは裏で走らせて、あとで知らせて」
+- teammate: 「あなたは今後しばらくテスト担当ね」
+
+## ここで教えるべき境界
+
+この章でまず固めるべきは 3 つだけです。
+
+- roster
+- mailbox
+- 独立 loop
+
+これだけで「長く残る teammate」という実体は十分立ち上がります。
+
+ただし、まだここでは教え過ぎない方がよいものがあります。
+
+### 1. protocol request layer
+
+つまり:
+
+- どの message が普通の会話か
+- どの message が `request_id` を持つ構造化 request か
+
+これは `s16` の範囲です。
+
+### 2. autonomous claim layer
+
+つまり:
+
+- teammate が自分で仕事を探すか
+- どの policy で self-claim するか
+- resume は何を根拠に行うか
+
+これは `s17` の範囲です。
+
+`s15` の default はあくまで:
+
+- lead が作る
+- lead が送る
+- teammate が受ける
+
+です。
+
+## 初学者が特によくやる間違い
+
+### 1. 
teammate を「名前付き subagent」にする + +名前が付いていても、実装が + +```text +spawn -> work -> summary -> destroy +``` + +なら本質的にはまだ subagent です。 + +### 2. team 全員で 1 本の `messages` を共有する + +これは一見簡単ですが、文脈汚染がすぐ起きます。 + +各 teammate は少なくとも: + +- 自分の messages +- 自分の inbox +- 自分の status + +を持つべきです。 + +### 3. roster を durable にしない + +system を止めた瞬間に「team に誰がいたか」を完全に失うなら、長期 actor layer としてはかなり弱いです。 + +### 4. mailbox なしで shared variable だけで会話させる + +実装は短くできますが、teammate 間協調の境界が見えなくなります。 +教学 repo では durable mailbox を置いた方が、読者の心智がずっと安定します。 + +## 学び終わったら言えるべきこと + +少なくとも次の 4 つを自分の言葉で説明できれば、この章の主線は掴めています。 + +1. teammate の本質は「多 model」ではなく「長期に残る actor identity」である +2. team system の最小構成は「roster + mailbox + 独立 loop」である +3. subagent と teammate の違いは lifecycle の長さにある +4. teammate と runtime slot の違いは、「actor identity」か「live execution」かにある + +## 次章で何を足すか + +この章が解いているのは: + +> team member が長く存在し、互いに message を送り合えるようにすること + +次章 `s16` が解くのは: + +> message が単なる自由文ではなく、追跡・承認・拒否・期限切れを持つ protocol object になるとき、どう設計するか + +つまり `s15` が「team の存在」を作り、`s16` が「team の構造化協調」を作ります。 diff --git a/docs/ja/s16-team-protocols.md b/docs/ja/s16-team-protocols.md new file mode 100644 index 000000000..27552fc0e --- /dev/null +++ b/docs/ja/s16-team-protocols.md @@ -0,0 +1,382 @@ +# s16: Team Protocols + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > [ s16 ] > s17 > s18 > s19` + +> *mailbox があるだけでは「話せる team」に過ぎません。protocol が入って初めて、「規則に従って協調できる team」になります。* + +## この章が解く問題 + +`s15` までで teammate 同士は message を送り合えます。 + +しかし自由文だけに頼ると、すぐに 2 つの問題が出ます。 + +- 明確な承認 / 拒否が必要な場面で、曖昧な返事しか残らない +- request が複数同時に走ると、どの返答がどの件に対応するのか分からなくなる + +特に分かりやすいのは次の 2 場面です。 + +1. graceful shutdown を依頼したい +2. 
高リスク plan を実行前に approval したい + +一見別の話に見えても、骨格は同じです。 + +```text +requester が request を送る + -> +receiver が明確に response する + -> +両者が同じ request_id で対応関係を追える +``` + +この章で追加するのは message の量ではなく、 + +**追跡可能な request-response protocol** + +です。 + +## 併読すると楽になる資料 + +- 普通の message と protocol request が混ざったら [`glossary.md`](./glossary.md) と [`entity-map.md`](./entity-map.md) +- `s17` や `s18` に進む前に境界を固めたいなら [`team-task-lane-model.md`](./team-task-lane-model.md) +- request が主システムへどう戻るか見直したいなら [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) + +## 先に言葉をそろえる + +### protocol とは何か + +ここでの `protocol` は難しい通信理論ではありません。 + +意味は、 + +> message の形、処理手順、状態遷移を事前に決めた協調ルール + +です。 + +### request_id とは何か + +`request_id` は request の一意な番号です。 + +役割は 1 つで、 + +> 後から届く response や status update を、元の request と正確に結びつけること + +です。 + +### request-response pattern とは何か + +これも難しく考える必要はありません。 + +```text +requester: この操作をしたい +receiver: 承認する / 拒否する +``` + +この往復を、自然文の雰囲気で済ませず、**構造化 record として残す**のがこの章です。 + +## 最小心智モデル + +教学上は、protocol を 2 層で見ると分かりやすくなります。 + +```text +1. protocol envelope +2. durable request record +``` + +### protocol envelope + +これは inbox を流れる 1 通の構造化 message です。 + +```python +{ + "type": "shutdown_request", + "from": "lead", + "to": "alice", + "request_id": "req_001", + "payload": {}, +} +``` + +### durable request record + +これは request の lifecycle を disk に追う record です。 + +```python +{ + "request_id": "req_001", + "kind": "shutdown", + "from": "lead", + "to": "alice", + "status": "pending", +} +``` + +この 2 層がそろうと system は、 + +- いま何を送ったのか +- その request は今どの状態か + +を両方説明できるようになります。 + +## この章の核になるデータ構造 + +### 1. ProtocolEnvelope + +protocol message は普通の message より多くのメタデータを持ちます。 + +```python +message = { + "type": "shutdown_request", + "from": "lead", + "to": "alice", + "request_id": "req_001", + "payload": {}, + "timestamp": 1710000000.0, +} +``` + +特に重要なのは次の 3 つです。 + +- `type`: これは何の protocol message か +- `request_id`: どの request thread に属するか +- `payload`: 本文以外の構造化内容 + +### 2. 
RequestRecord + +request record は `.team/requests/` に durable に保存されます。 + +```python +request = { + "request_id": "req_001", + "kind": "shutdown", + "from": "lead", + "to": "alice", + "status": "pending", + "created_at": 1710000000.0, + "updated_at": 1710000000.0, +} +``` + +この record があることで、system は message を送ったあとでも request の状態を追い続けられます。 + +教材コードでは実際に次のような path を使います。 + +```text +.team/requests/ + req_001.json + req_002.json +``` + +これにより、 + +- request の状態を再読込できる +- protocol の途中経過をあとから確認できる +- main loop が先へ進んでも request thread が消えない + +という利点が生まれます。 + +### 3. 状態機械 + +この章の state machine は難しくありません。 + +```text +pending -> approved +pending -> rejected +pending -> expired +``` + +ここで大事なのは theory ではなく、 + +**承認系の協調には「いまどの状態か」を explicit に持つ必要がある** + +ということです。 + +## 最小実装を段階で追う + +### 第 1 段階: team mailbox の上に protocol line を通す + +この章の本質は新しい message type を 2 個足すことではありません。 + +本質は、 + +```text +requester が protocol action を開始する + -> +request record を保存する + -> +protocol envelope を inbox に送る + -> +receiver が request_id 付きで response する + -> +record の status を更新する +``` + +という一本の durable flow を通すことです。 + +### 第 2 段階: shutdown protocol を作る + +graceful shutdown は「thread を即 kill する」ことではありません。 + +正しい流れは次です。 + +1. shutdown request を作る +2. teammate が approve / reject を返す +3. 
approve なら後始末して終了する + +request 側の最小形はこうです。 + +```python +def request_shutdown(target: str): + request_id = new_id() + REQUEST_STORE.create({ + "request_id": request_id, + "kind": "shutdown", + "from": "lead", + "to": target, + "status": "pending", + }) + BUS.send( + "lead", + target, + "Please shut down gracefully.", + "shutdown_request", + {"request_id": request_id}, + ) +``` + +response 側は request_id を使って同じ record を更新します。 + +```python +def handle_shutdown_response(request_id: str, approve: bool): + record = REQUEST_STORE.update( + request_id, + status="approved" if approve else "rejected", + ) +``` + +### 第 3 段階: plan approval も同じ骨格で扱う + +高リスクな変更を teammate が即時実行してしまうと危険なことがあります。 + +そこで plan approval protocol を入れます。 + +```python +def submit_plan(name: str, plan_text: str): + request_id = new_id() + REQUEST_STORE.create({ + "request_id": request_id, + "kind": "plan_approval", + "from": name, + "to": "lead", + "status": "pending", + "plan": plan_text, + }) +``` + +lead はその `request_id` を見て承認または却下します。 + +```python +def review_plan(request_id: str, approve: bool, feedback: str = ""): + REQUEST_STORE.update( + request_id, + status="approved" if approve else "rejected", + feedback=feedback, + ) +``` + +ここで伝えたい中心は、 + +**shutdown と plan approval は中身は違っても、request-response correlation の骨格は同じ** + +という点です。 + +## Message / Protocol / Request / Task の境界 + +この章で最も混ざりやすい 4 つを表で分けます。 + +| オブジェクト | 何を答えるか | 典型 field | +|---|---|---| +| `MessageEnvelope` | 誰が誰に何を送ったか | `from`, `to`, `content` | +| `ProtocolEnvelope` | それが構造化 request / response か | `type`, `request_id`, `payload` | +| `RequestRecord` | その協調フローはいまどこまで進んだか | `kind`, `status`, `from`, `to` | +| `TaskRecord` | 実際の work goal は何か | `subject`, `status`, `owner`, `blockedBy` | + +ここで絶対に混ぜないでほしい点は次です。 + +- protocol request は task そのものではない +- request store は task board ではない +- protocol は協調フローを追う +- task は仕事の進行を追う + +## `s15` から何が増えたか + +`s15` の team system は「話せる team」でした。 + +`s16` ではそこへ、 + +- request_id +- durable request 
store
+
+- approved / rejected の explicit status
+- protocol-specific message type
+
+が入ります。
+
+すると team は単なる chat 集合ではなく、
+
+**追跡可能な coordination system**
+
+に進みます。
+
+## 初学者が混ぜやすいポイント
+
+### 1. request を普通の text message と同じように扱う
+
+これでは承認状態を追えません。
+
+### 2. request_id を持たせない
+
+同時に複数 request が走った瞬間に対応関係が壊れます。
+
+### 3. request の状態を memory 内 dict にしか置かない
+
+プロセスをまたいで追えず、観測性も悪くなります。
+
+### 4. approved / rejected を曖昧な文章だけで表す
+
+state machine が読めなくなります。
+
+### 5. protocol と task を混同する
+
+plan approval request は「plan を実行してよいか」の協調であって、work item 本体ではありません。
+
+## 前の章とどうつながるか
+
+この章は `s15` の mailbox-based team を次の段階へ押し上げます。
+
+- `s15`: teammate が message を送れる
+- `s16`: teammate が structured protocol で協調できる
+
+そしてこの先、
+
+- `s17`: idle teammate が自分で task を claim する
+- `s18`: task ごとに isolation lane を持つ
+
+へ進む準備になります。
+
+もしここで protocol の境界が曖昧なままだと、後の autonomy や worktree を読むときに
+
+- 誰が誰に依頼したのか
+- どの state が協調の state で、どれが work の state か
+
+がすぐ混ざります。
+
+## 教学上の境界
+
+この章でまず教えるべきなのは、製品に存在しうる全 protocol の一覧ではありません。
+
+中心は次の 3 点です。
+
+- request と response を同じ `request_id` で結び付けること
+- 承認状態を explicit state として残すこと
+- team coordination を自由文から durable workflow へ進めること
+
+ここが見えていれば、後から protocol の種類が増えても骨格は崩れません。
diff --git a/docs/ja/s17-autonomous-agents.md b/docs/ja/s17-autonomous-agents.md
new file mode 100644
index 000000000..a98e6c315
--- /dev/null
+++ b/docs/ja/s17-autonomous-agents.md
@@ -0,0 +1,546 @@
+# s17: Autonomous Agents
+
+`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > [ s17 ] > s18 > s19`
+
+> *本当にチームらしくなる瞬間は、人数が増えたときではなく、空いている teammate が次の仕事を自分で拾えるようになったときです。*
+
+## この章が解く問題
+
+`s16` まで来ると、チームにはすでに次のものがあります。
+
+- 長く生きる teammate
+- inbox
+- protocol request / response
+- task board
+
+それでも、まだ 1 つ大きな詰まりが残っています。
+
+**仕事の割り振りが lead に集中しすぎることです。**
+
+たとえば task board に ready な task が 10 個あっても、
+
+- Alice はこれ
+- Bob はこれ
+- Charlie はこれ
+
+と lead が 1 件ずつ指名し続けるなら、team は増えても coordination の中心は 1 人のままです。
+
+この章で入れるのは、
+
+**空いている 
teammate が、自分で board を見て、取ってよい task を安全に claim する仕組み** + +です。 + +## 併読すると楽になる資料 + +- teammate / task / runtime slot の境界が怪しくなったら [`team-task-lane-model.md`](./team-task-lane-model.md) +- `auto-claim` を読んで runtime record の置き場所が曖昧なら [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) +- 長期 teammate と一回限りの subagent の違いが薄れたら [`entity-map.md`](./entity-map.md) + +## 先に言葉をそろえる + +### 自治とは何か + +ここで言う `autonomous` は、 + +> 何の制御もなく勝手に暴走すること + +ではありません。 + +正しくは、 + +> 事前に与えたルールに従って、空いている teammate が次の仕事を自分で選べること + +です。 + +つまり自治は自由放任ではなく、**規則付きの自律再開**です。 + +### claim とは何か + +`claim` は、 + +> まだ owner が付いていない task を「今から自分が担当する」と確定させること + +です。 + +「見つける」だけでは不十分で、**owner を書き込み、他の teammate が同じ task を取らないようにする**ところまでが claim です。 + +### idle とは何か + +`idle` は終了でも停止でもありません。 + +意味は次の通りです。 + +> 今この teammate には active work がないが、まだ system の中で生きていて、新しい input を待てる状態 + +です。 + +## 最小心智モデル + +この章を最も簡単に捉えるなら、teammate の lifecycle を 2 フェーズで見ます。 + +```text +WORK + | + | 今の作業を終える / idle を選ぶ + v +IDLE + | + +-- inbox に新着がある -> WORK + | + +-- task board に claimable task がある -> claim -> WORK + | + +-- 一定時間なにもない -> shutdown +``` + +ここで大事なのは、 + +**main loop を無限に回し続けることではなく、idle 中に何を見て、どの順番で resume するか** + +です。 + +## この章の核になるデータ構造 + +### 1. 
Claimable Predicate + +最初に理解すべきなのは、 + +> どんな task なら「この teammate が今 claim してよい」と判定できるのか + +です。 + +教材コードでは、判定は単に `status == "pending"` では終わりません。 + +```python +def is_claimable_task(task: dict, role: str | None = None) -> bool: + return ( + task.get("status") == "pending" + and not task.get("owner") + and not task.get("blockedBy") + and _task_allows_role(task, role) + ) +``` + +この 4 条件はそれぞれ別の意味を持ちます。 + +- `status == "pending"`: まだ開始していない +- `not owner`: まだ誰も担当していない +- `not blockedBy`: 前提 task が残っていない +- `_task_allows_role(...)`: この teammate の role が claim policy に合っている + +最後の条件が特に重要です。 + +task は今の教材コードでは次のような role 制約を持てます。 + +- `claim_role` +- `required_role` + +たとえば、 + +```python +{ + "id": 7, + "subject": "Implement login page", + "status": "pending", + "owner": "", + "blockedBy": [], + "claim_role": "frontend", +} +``` + +なら、空いている teammate 全員が取れるわけではありません。 + +**frontend role の teammate だけが claim 候補になります。** + +### 2. Claim 後の TaskRecord + +claim が成功すると、task record は少なくとも次のように更新されます。 + +```python +{ + "id": 7, + "owner": "alice", + "status": "in_progress", + "claimed_at": 1710000000.0, + "claim_source": "auto", +} +``` + +この中で初心者が見落としやすいのは `claimed_at` と `claim_source` です。 + +- `claimed_at`: いつ取られたか +- `claim_source`: 手動か自動か + +これがあることで system は、 + +- 今だれが担当しているか +- その担当は lead の指名か +- それとも idle scan による auto-claim か + +をあとから説明できます。 + +### 3. Claim Event Log + +task file の更新だけでは、今の最終状態しか見えません。 + +そこでこの章では claim 操作を別の append-only log にも書きます。 + +```text +.tasks/claim_events.jsonl +``` + +中身のイメージはこうです。 + +```python +{ + "event": "task.claimed", + "task_id": 7, + "owner": "alice", + "role": "frontend", + "source": "auto", + "ts": 1710000000.0, +} +``` + +この log があると、 + +- task がいつ取られたか +- 誰が取ったか +- 手動か自動か + +が current state とは別に追えます。 + +### 4. 
Durable Request Record + +`s17` は autonomy を追加する章ですが、`s16` の protocol line を捨てる章ではありません。 + +そのため shutdown や plan approval の request は引き続き disk に保存されます。 + +```text +.team/requests/{request_id}.json +``` + +これは重要です。 + +なぜなら autonomous teammate は、 + +> protocol を無視して好きに動く worker + +ではなく、 + +> 既存の protocol system の上で、idle 時に自分で次の仕事を探せる teammate + +だからです。 + +### 5. Identity Block + +compact の後や idle からの復帰直後は、teammate が自分の identity を見失いやすくなります。 + +そのため教材コードには identity block の再注入があります。 + +```python +{ + "role": "user", + "content": "You are 'alice', role: frontend, team: default. Continue your work.", +} +``` + +さらに短い assistant acknowledgement も添えています。 + +```python +{"role": "assistant", "content": "I am alice. Continuing."} +``` + +この 2 行は装飾ではありません。 + +ここで守っているのは次の 3 点です。 + +- 私は誰か +- どの role か +- どの team に属しているか + +## 最小実装を段階で追う + +### 第 1 段階: WORK と IDLE を分ける + +まず teammate loop を 2 フェーズに分けます。 + +```python +while True: + run_work_phase(...) + should_resume = run_idle_phase(...) + if not should_resume: + break +``` + +これで初めて、 + +- いま作業中なのか +- いま待機中なのか +- 次に resume する理由は何か + +を分けて考えられます。 + +### 第 2 段階: idle では先に inbox を見る + +`idle` に入ったら最初に見るべきは task board ではなく inbox です。 + +```python +def idle_phase(name: str, messages: list) -> bool: + inbox = bus.read_inbox(name) + if inbox: + messages.append({ + "role": "user", + "content": json.dumps(inbox), + }) + return True +``` + +理由は単純で、 + +**明示的に自分宛てに来た仕事の方が、board 上の一般 task より優先度が高い** + +からです。 + +### 第 3 段階: inbox が空なら role 付きで task board を走査する + +```python +unclaimed = scan_unclaimed_tasks(role) +if unclaimed: + task = unclaimed[0] + claim_result = claim_task( + task["id"], + name, + role=role, + source="auto", + ) +``` + +ここでの要点は 2 つです。 + +- `scan_unclaimed_tasks(role)` は role を無視して全件取るわけではない +- `source="auto"` を書いて claim の由来を残している + +つまり自治とは、 + +> 何でも空いていれば奪うこと + +ではなく、 + +> role、block 状態、owner 状態を見たうえで、今この teammate に許された仕事だけを取ること + +です。 + +### 第 4 段階: claim 後は identity と task hint を両方戻す + +claim 成功後は、そのまま resume してはいけません。 + 
+```python +ensure_identity_context(messages, name, role, team_name) +messages.append({ + "role": "user", + "content": f"Task #{task['id']}: {task['subject']}", +}) +messages.append({ + "role": "assistant", + "content": f"{claim_result}. Working on it.", +}) +return True +``` + +この段で context に戻しているのは 2 種類の情報です。 + +- identity: この teammate は誰か +- fresh work item: いま何を始めたのか + +この 2 つがそろって初めて、次の WORK phase が迷わず進みます。 + +### 第 5 段階: 長時間なにもなければ shutdown する + +idle teammate を永久に残す必要はありません。 + +教材版では、 + +> 一定時間 inbox も task board も空なら shutdown + +という単純な出口で十分です。 + +ここでの主眼は resource policy の最適化ではなく、 + +**idle からの再開条件と終了条件を明示すること** + +です。 + +## なぜ claim は原子的でなければならないか + +`atomic` という言葉は難しく見えますが、ここでは次の意味です。 + +> claim 処理は「全部成功する」か「起きない」かのどちらかでなければならない + +理由は race condition です。 + +Alice と Bob が同時に同じ task を見たら、 + +- Alice も `owner == ""` を見る +- Bob も `owner == ""` を見る +- 両方が自分を owner として保存する + +という事故が起こりえます。 + +そのため教材コードでも lock を使っています。 + +```python +with claim_lock: + task = load(task_id) + if task["owner"]: + return "already claimed" + task["owner"] = name + task["status"] = "in_progress" + save(task) +``` + +初心者向けに言い換えるなら、 + +**claim は「見てから書く」までを他の teammate に割り込まれずに一気に行う** + +必要があります。 + +## identity 再注入が重要な理由 + +これは地味ですが、自治の品質を大きく左右します。 + +compact の後や long-lived teammate の再開時には、context 冒頭から次の情報が薄れがちです。 + +- 私は誰か +- 何 role か +- どの team か + +この状態で work を再開すると、 + +- role に合わない判断をしやすくなる +- protocol 上の責務を忘れやすくなる +- それまでの persona がぶれやすくなる + +だから教材版では、 + +> idle から戻る前、または compact 後に identity が薄いなら再注入する + +という復帰ルールを置いています。 + +## `s17` は `s16` を上書きしない + +ここは誤解しやすいので強調します。 + +`s17` で増えるのは autonomy ですが、だからといって `s16` の protocol layer が消えるわけではありません。 + +両者はこういう関係です。 + +```text +s16: + request_id を持つ durable protocol + +s17: + idle teammate が board を見て次の仕事を探せる +``` + +つまり `s17` は、 + +**protocol がある team に autonomy を足す章** + +であって、 + +**自由に動く worker 群へ退化させる章** + +ではありません。 + +## 前の章とどうつながるか + +この章は前の複数章が初めて強く結びつく場所です。 + +- `s12`: task board を作る +- `s15`: persistent teammate を作る +- `s16`: request 
/ response protocol を作る +- `s17`: 指名がなくても次の work を自分で取れるようにする + +したがって `s17` は、 + +**受け身の team から、自分で回り始める team への橋渡し** + +と考えると分かりやすいです。 + +## 自治するのは long-lived teammate であって subagent ではない + +ここで `s04` と混ざる人が多いです。 + +この章の actor は one-shot subagent ではありません。 + +この章の teammate は次の特徴を持ちます。 + +- 名前がある +- role がある +- inbox がある +- idle state がある +- 複数回 task を受け取れる + +一方、subagent は通常、 + +- 一度 delegated work を受ける +- 独立 context で処理する +- summary を返して終わる + +という使い方です。 + +また、この章で claim する対象は `s12` の task であり、`s13` の runtime slot ではありません。 + +## 初学者が混ぜやすいポイント + +### 1. `pending` だけ見て `blockedBy` を見ない + +task が `pending` でも dependency が残っていればまだ取れません。 + +### 2. role 条件を無視する + +`claim_role` や `required_role` を見ないと、間違った teammate が task を取ります。 + +### 3. claim lock を置かない + +同一 task の二重 claim が起こります。 + +### 4. idle 中に board しか見ない + +これでは明示的な inbox message を取りこぼします。 + +### 5. event log を書かない + +「いま誰が持っているか」は分かっても、 + +- いつ取ったか +- 自動か手動か + +が追えません。 + +### 6. idle teammate を永遠に残す + +教材版では shutdown 条件を持たせた方が lifecycle を理解しやすくなります。 + +### 7. 
compact 後に identity を戻さない + +長く動く teammate ほど、identity drift が起きやすくなります。 + +## 教学上の境界 + +この章でまず掴むべき主線は 1 本です。 + +**idle で待つ -> 安全に claim する -> identity を整えて work に戻る** + +ここで学ぶ中心は自治の骨格であって、 + +- 高度な scheduler 最適化 +- 分散環境での claim +- 複雑な fairness policy + +ではありません。 + +その先へ進む前に、読者が自分の言葉で次の 1 文を言えることが大切です。 + +> autonomous teammate とは、空いたときに勝手に暴走する worker ではなく、inbox と task board を規則通りに見て、取ってよい仕事だけを自分で取りにいける長期 actor である。 diff --git a/docs/ja/s18-worktree-task-isolation.md b/docs/ja/s18-worktree-task-isolation.md new file mode 100644 index 000000000..34bac72af --- /dev/null +++ b/docs/ja/s18-worktree-task-isolation.md @@ -0,0 +1,534 @@ +# s18: Worktree + Task Isolation + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > [ s18 ] > s19` + +> *task board が答えるのは「何をやるか」、worktree が答えるのは「どこでやるか、しかも互いに踏み荒らさずに」です。* + +## この章が解く問題 + +`s17` までで system はすでに次のことができます。 + +- task を作る +- teammate が task を claim する +- 複数の teammate が並行に作業する + +それでも、全員が同じ working directory で作業するなら、すぐに限界が来ます。 + +典型的な壊れ方は次の通りです。 + +- 2 つの task が同じ file を同時に編集する +- 片方の未完了変更がもう片方の task を汚染する +- 「この task の変更だけ見たい」が非常に難しくなる + +つまり `s12-s17` までで答えられていたのは、 + +**誰が何をやるか** + +までであって、 + +**その仕事をどの execution lane で進めるか** + +はまだ答えられていません。 + +それを担当するのが `worktree` です。 + +## 併読すると楽になる資料 + +- task / runtime slot / worktree lane が同じものに見えたら [`team-task-lane-model.md`](./team-task-lane-model.md) +- task record と worktree record に何を保存すべきか確認したいなら [`data-structures.md`](./data-structures.md) +- なぜ worktree の章が tasks / teams より後ろに来るか再確認したいなら [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) + +## 先に言葉をそろえる + +### worktree とは何か + +Git に慣れている人なら、 + +> 同じ repository を別ディレクトリへ独立 checkout した作業コピー + +と見て構いません。 + +まだ Git の言葉に慣れていないなら、まずは次の理解で十分です。 + +> 1 つの task に割り当てる専用の作業レーン + +### isolation とは何か + +`isolation` は、 + +> task A は task A の directory で実行し、task B は task B の directory で実行して、未コミット変更を最初から共有しないこと + +です。 + +### binding とは何か + +`binding` は、 + +> task ID と 
worktree record を明示的に結びつけること + +です。 + +これがないと、system は「この directory が何のために存在しているのか」を説明できません。 + +## 最小心智モデル + +この章は 2 枚の表を別物として見ると一気に分かりやすくなります。 + +```text +Task Board + - 何をやるか + - 誰が持っているか + - 今どの状態か + +Worktree Registry + - どこでやるか + - どの branch / path か + - どの task に結び付いているか +``` + +両者は `task_id` でつながります。 + +```text +.tasks/task_12.json + { + "id": 12, + "subject": "Refactor auth flow", + "status": "in_progress", + "worktree": "auth-refactor" + } + +.worktrees/index.json + { + "worktrees": [ + { + "name": "auth-refactor", + "path": ".worktrees/auth-refactor", + "branch": "wt/auth-refactor", + "task_id": 12, + "status": "active" + } + ] + } +``` + +この 2 つを見て、 + +- task は goal を記録する +- worktree は execution lane を記録する + +と分けて理解できれば、この章の幹はつかめています。 + +## この章の核になるデータ構造 + +### 1. TaskRecord 側の lane 情報 + +この段階の教材コードでは、task 側に単に `worktree` という名前だけがあるわけではありません。 + +```python +task = { + "id": 12, + "subject": "Refactor auth flow", + "status": "in_progress", + "owner": "alice", + "worktree": "auth-refactor", + "worktree_state": "active", + "last_worktree": "auth-refactor", + "closeout": None, +} +``` + +それぞれの意味は次の通りです。 + +- `worktree`: 今この task がどの lane に結び付いているか +- `worktree_state`: その lane が `active` / `kept` / `removed` / `unbound` のどれか +- `last_worktree`: 直近で使っていた lane 名 +- `closeout`: 最後にどういう終わらせ方をしたか + +ここが重要です。 + +task 側はもはや単に「現在の directory 名」を持っているだけではありません。 + +**いま結び付いている lane と、最後にどう閉じたかまで記録し始めています。** + +### 2. WorktreeRecord + +worktree registry 側の record は path の写しではありません。 + +```python +worktree = { + "name": "auth-refactor", + "path": ".worktrees/auth-refactor", + "branch": "wt/auth-refactor", + "task_id": 12, + "status": "active", + "last_entered_at": 1710000000.0, + "last_command_at": 1710000012.0, + "last_command_preview": "pytest tests/auth -q", + "closeout": None, +} +``` + +ここで答えているのは path だけではありません。 + +- いつ lane に入ったか +- 最近何を実行したか +- どんな closeout が最後に行われたか + +つまり worktree record は、 + +**directory mapping ではなく、観測可能な execution lane record** + +です。 + +### 3. 
CloseoutRecord + +closeout は「最後に削除したかどうか」だけではありません。 + +教材コードでは次のような record を残します。 + +```python +closeout = { + "action": "keep", + "reason": "Need follow-up review", + "at": 1710000100.0, +} +``` + +これにより system は、 + +- keep したのか +- remove したのか +- なぜそうしたのか + +を state として残せます。 + +初心者にとって大事なのはここです。 + +**closeout は単なる cleanup コマンドではなく、execution lane の終わり方を明示する操作** + +です。 + +### 4. Event Record + +worktree は lifecycle が長いので event log も必要です。 + +```python +{ + "event": "worktree.closeout.keep", + "task_id": 12, + "worktree": "auth-refactor", + "reason": "Need follow-up review", + "ts": 1710000100.0, +} +``` + +なぜ state file だけでは足りないかというと、lane の lifecycle には複数段階があるからです。 + +- create +- enter +- run +- keep +- remove +- remove failed + +append-only の event があれば、いまの最終状態だけでなく、 + +**そこへ至る途中の挙動** + +も追えます。 + +## 最小実装を段階で追う + +### 第 1 段階: 先に task を作り、そのあと lane を作る + +順番は非常に大切です。 + +```python +task = tasks.create("Refactor auth flow") +worktrees.create("auth-refactor", task_id=task["id"]) +``` + +この順番にする理由は、 + +**worktree は task の代替ではなく、task にぶら下がる execution lane** + +だからです。 + +最初に goal があり、そのあと goal に lane を割り当てます。 + +### 第 2 段階: worktree を作り、registry に書く + +```python +def create(self, name: str, task_id: int): + path = self.root / ".worktrees" / name + branch = f"wt/{name}" + + run_git(["worktree", "add", "-b", branch, str(path), "HEAD"]) + + record = { + "name": name, + "path": str(path), + "branch": branch, + "task_id": task_id, + "status": "active", + } + self.index["worktrees"].append(record) + self._save_index() +``` + +ここで registry は次を答えられるようになります。 + +- lane 名 +- 実 directory +- branch +- 対応 task +- active かどうか + +### 第 3 段階: task record 側も同時に更新する + +lane registry を書くだけでは不十分です。 + +```python +def bind_worktree(task_id: int, name: str): + task = tasks.load(task_id) + task["worktree"] = name + task["last_worktree"] = name + task["worktree_state"] = "active" + if task["status"] == "pending": + task["status"] = "in_progress" + tasks.save(task) +``` + +なぜ両側へ書く必要があるか。 + +もし 
registry だけ更新して task board 側を更新しなければ、 + +- task 一覧から lane が見えない +- closeout 時にどの task を終わらせるか分かりにくい +- crash 後の再構成が不自然になる + +からです。 + +### 第 4 段階: lane に入ることと、lane で command を実行することを分ける + +教材コードでは `enter` と `run` を分けています。 + +```python +worktree_enter("auth-refactor") +worktree_run("auth-refactor", "pytest tests/auth -q") +``` + +底では本質的に次のことをしています。 + +```python +def enter(self, name: str): + self._update_entry(name, last_entered_at=time.time()) + self.events.emit("worktree.enter", ...) + +def run(self, name: str, command: str): + subprocess.run(command, cwd=worktree_path, ...) +``` + +特に大事なのは `cwd=worktree_path` です。 + +同じ `pytest` でも、どの `cwd` で走るかによって影響範囲が変わります。 + +`enter` を別操作として教える理由は、読者に次の境界を見せるためです。 + +- lane を割り当てた +- 実際にその lane へ入った +- その lane で command を実行した + +この 3 段階が分かれているからこそ、 + +- `last_entered_at` +- `last_command_at` +- `last_command_preview` + +のような観測項目が自然に見えてきます。 + +### 第 5 段階: 終わるときは closeout を明示する + +教材上は、`keep` と `remove` をバラバラの小技として見せるより、 + +> closeout という 1 つの判断に 2 分岐ある + +と見せた方が心智が安定します。 + +```python +worktree_closeout( + name="auth-refactor", + action="keep", # or "remove" + reason="Need follow-up review", + complete_task=False, +) +``` + +これで読者は次のことを一度に理解できます。 + +- lane の終わらせ方には選択肢がある +- その選択には理由を持たせられる +- closeout は task record / lane record / event log に反映される + +もちろん実装下層では、 + +- `worktree_keep(name)` +- `worktree_remove(name, reason=..., complete_task=True)` + +のような分離 API を持っていても構いません。 + +ただし教学の主線では、 + +**closeout decision -> keep / remove** + +という形にまとめた方が初心者には伝わります。 + +## なぜ `status` と `worktree_state` を分けるのか + +これは非常に大事な区別です。 + +初学者はよく、 + +> task に `status` があるなら十分ではないか + +と考えます。 + +しかし実際は答えている質問が違います。 + +- `task.status`: その仕事が `pending` / `in_progress` / `completed` のどれか +- `worktree_state`: その execution lane が `active` / `kept` / `removed` / `unbound` のどれか + +たとえば、 + +```text +task は completed +でも worktree は kept +``` + +という状態は自然に起こります。 + +review 用に directory を残しておきたいからです。 + +したがって、 + +**goal state と lane state は同じ field に潰してはいけません。** + 
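この区別は、手を動かすと一気に腑に落ちます。以下は本文の record 形をなぞった in-memory の最小スケッチです(実際の教材コードそのものではなく、`worktree_closeout` の分岐を動かして確かめるための仮実装です。関数名・field 名は本文の例に合わせています)。task を `completed` にしつつ lane を `kept` のまま残せることを示します。

```python
import time

# 教材用の最小 in-memory state。実システムでは .tasks/ と .worktrees/ に永続化される
tasks = {
    12: {
        "id": 12,
        "subject": "Refactor auth flow",
        "status": "in_progress",
        "worktree": "auth-refactor",
        "worktree_state": "active",
        "closeout": None,
    }
}

worktrees = {
    "auth-refactor": {
        "name": "auth-refactor",
        "path": ".worktrees/auth-refactor",
        "task_id": 12,
        "status": "active",
        "closeout": None,
    }
}

events = []  # append-only event log


def worktree_closeout(name, action, reason, complete_task=False):
    """lane の終わらせ方 (keep / remove) を明示的に両側の record へ反映する。"""
    assert action in ("keep", "remove")
    wt = worktrees[name]
    task = tasks[wt["task_id"]]

    closeout = {"action": action, "reason": reason, "at": time.time()}

    # lane record 側: keep なら kept、remove なら removed
    wt["status"] = "kept" if action == "keep" else "removed"
    wt["closeout"] = closeout

    # task record 側: goal state (status) と lane state (worktree_state) は別 field
    task["worktree_state"] = wt["status"]
    task["closeout"] = closeout
    if complete_task:
        task["status"] = "completed"

    # 途中経過を追えるように event も残す
    events.append({
        "event": f"worktree.closeout.{action}",
        "task_id": wt["task_id"],
        "worktree": name,
        "reason": reason,
    })
    return closeout


# task は完了させるが、review 用に lane は残す
worktree_closeout(
    "auth-refactor",
    action="keep",
    reason="Need follow-up review",
    complete_task=True,
)

print(tasks[12]["status"])          # completed
print(tasks[12]["worktree_state"])  # kept
```

`task.status` は `completed` に進み、`worktree_state` は `kept` のまま残る、という 2 軸の独立がそのまま確認できます。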
+## なぜ worktree は「Git の小技」で終わらないのか + +初見では「別 directory を増やしただけ」に見えるかもしれません。 + +でも教学上の本質はそこではありません。 + +本当に重要なのは、 + +**task と execution directory の対応関係を明示 record として持つこと** + +です。 + +それがあるから system は、 + +- どの lane がどの task に属するか +- 完了時に何を closeout すべきか +- crash 後に何を復元すべきか + +を説明できます。 + +## 前の章とどうつながるか + +この章は前段を次のように結びます。 + +- `s12`: task ID を与える +- `s15-s17`: teammate と claim を与える +- `s18`: 各 task に独立 execution lane を与える + +流れで書くとこうです。 + +```text +task を作る + -> +teammate が claim する + -> +system が worktree lane を割り当てる + -> +commands がその lane の directory で走る + -> +終了時に keep / remove を選ぶ +``` + +ここまで来ると multi-agent の並行作業が「同じ場所に集まる chaos」ではなく、 + +**goal と lane を分けた協調システム** + +として見えてきます。 + +## worktree は task そのものではない + +ここは何度でも繰り返す価値があります。 + +- task は「何をやるか」 +- worktree は「どこでやるか」 + +です。 + +同様に、 + +- runtime slot は「今動いている execution」 +- worktree lane は「どの directory / branch で動くか」 + +という別軸です。 + +もしこの辺りが混ざり始めたら、次を開いて整理し直してください。 + +- [`team-task-lane-model.md`](./team-task-lane-model.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) +- [`entity-map.md`](./entity-map.md) + +## 初学者が混ぜやすいポイント + +### 1. registry だけあって task record に `worktree` がない + +task board から lane の情報が見えなくなります。 + +### 2. task ID はあるのに command が repo root で走っている + +`cwd` が切り替わっていなければ isolation は成立していません。 + +### 3. `remove` だけを覚えて closeout の意味を教えない + +読者は「directory を消す小技」としか理解できなくなります。 + +### 4. remove 前に dirty state を気にしない + +教材版でも最低限、 + +**消す前に未コミット変更を確認する** + +という原則は持たせるべきです。 + +### 5. `worktree_state` や `closeout` を持たない + +lane の終わり方が state として残らなくなります。 + +### 6. lane を増やすだけで掃除しない + +長く使うと registry も directory もすぐ乱れます。 + +### 7. 
event log を持たない + +create / remove failure や binding ミスの調査が極端にやりづらくなります。 + +## 教学上の境界 + +この章でまず教えるべき中心は、製品レベルの Git 運用細目ではありません。 + +中心は次の 3 行です。 + +- task が「何をやるか」を記録する +- worktree が「どこでやるか」を記録する +- enter / run / closeout が execution lane の lifecycle を構成する + +merge 自動化、複雑な回収 policy、cross-machine execution などは、その幹が見えてからで十分です。 + +この章を読み終えた読者が次の 1 文を言えれば成功です。 + +> task system は仕事の目標を管理し、worktree system はその仕事を安全に進めるための独立レーンを管理する。 diff --git a/docs/ja/s19-mcp-plugin.md b/docs/ja/s19-mcp-plugin.md new file mode 100644 index 000000000..27740520d --- /dev/null +++ b/docs/ja/s19-mcp-plugin.md @@ -0,0 +1,255 @@ +# s19: MCP & Plugin + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > [ s19 ]` + +> *すべての能力を主プログラムへ直書きする必要はない。外部能力も同じ routing 面へ接続できる。* + +## この章が本当に教えるもの + +前の章までは、ツールの多くが自分の Python コード内にありました。 + +これは教学として正しい出発点です。 + +しかしシステムが大きくなると、自然に次の要望が出ます。 + +> "外部プログラムの能力を、毎回主プログラムを書き換えずに使えないか?" + +それに答えるのが MCP です。 + +## MCP を一番簡単に言うと + +MCP は: + +**agent が外部 capability server と会話するための標準的な方法** + +と考えれば十分です。 + +主線は次の 4 ステップです。 + +1. 外部 server を起動する +2. どんなツールがあるか聞く +3. 必要な呼び出しをその server へ転送する +4. 
結果を標準化して主ループへ戻す + +## なぜ最後の章なのか + +MCP は出発点ではありません。 + +先に理解しておくべきものがあります。 + +- agent loop +- tool routing +- permissions +- tasks +- worktree isolation + +それらが見えてからだと、MCP は: + +**新しい capability source** + +として自然に理解できます。 + +## 主線とどう併読するか + +- MCP を「遠隔 tool」だけで理解しているなら、[`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) を読んで tools、resources、prompts、plugin discovery を 1 つの platform boundary へ戻します。 +- 外部 capability がなぜ同じ execution surface へ戻るのかを確かめたいなら、[`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md) を併読します。 +- query control と外部 capability routing が頭の中で分離し始めたら、[`s00a-query-control-plane.md`](./s00a-query-control-plane.md) に戻ります。 + +## 最小の心智モデル + +```text +LLM + | + | tool を呼びたい + v +Agent tool router + | + +-- native tool -> local Python handler + | + +-- MCP tool -> external MCP server + | + v + return result +``` + +## 重要な 3 要素 + +### 1. `MCPClient` + +役割: + +- server へ接続 +- tool 一覧取得 +- tool 呼び出し + +### 2. 命名規則 + +外部ツールとローカルツールが衝突しないように prefix を付けます。 + +```text +mcp__{server}__{tool} +``` + +例: + +```text +mcp__postgres__query +mcp__browser__open_tab +``` + +### 3. 
1 本の unified router + +```python +if tool_name.startswith("mcp__"): + return mcp_router.call(tool_name, arguments) +else: + return native_handler(arguments) +``` + +## Plugin は何をするか + +MCP が: + +> 外部 server とどう会話するか + +を扱うなら、plugin は: + +> その server をどう発見し、どう設定するか + +を扱います。 + +最小 plugin は: + +```text +.claude-plugin/ + plugin.json +``` + +だけでも十分です。 + +## 最小設定 + +```json +{ + "name": "my-db-tools", + "version": "1.0.0", + "mcpServers": { + "postgres": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-postgres"] + } + } +} +``` + +これは要するに: + +> "この server が必要なら、このコマンドで起動する" + +と主プログラムへ教えているだけです。 + +## システム全体へどう接続するか + +MCP が急に難しく見えるのは、別世界の仕組みとして見てしまうときです。 +より安定した心智モデルは次です。 + +```text +startup + -> +plugin loader が manifest を見つける + -> +server config を取り出す + -> +MCP client が connect / list_tools する + -> +external tools を同じ tool pool に正規化して入れる + +runtime + -> +LLM が tool_use を出す + -> +共有 permission gate + -> +native route または MCP route + -> +result normalization + -> +同じ loop へ tool_result を返す +``` + +入口は違っても、control plane と execution plane は同じです。 + +## 重要なデータ構造 + +### 1. server config + +```python +{ + "command": "npx", + "args": ["-y", "..."], + "env": {} +} +``` + +### 2. 標準化された外部ツール定義 + +```python +{ + "name": "mcp__postgres__query", + "description": "Run a SQL query", + "input_schema": {...} +} +``` + +### 3. client registry + +```python +clients = { + "postgres": mcp_client_instance +} +``` + +## 絶対に崩してはいけない境界 + +この章で最も重要なのは: + +**外部ツールも同じ permission 面を通る** + +ということです。 + +MCP が permission を素通りしたら、外側に安全穴を開けるだけです。 + +## Plugin / Server / Tool を同じ層にしない + +| 層 | 何か | 何を担当するか | +|---|---|---| +| plugin manifest | 設定宣言 | どの server を見つけて起動するかを教える | +| MCP server | 外部 process / connection | 能力の集合を expose する | +| MCP tool | server が出す 1 つの callable capability | モデルが実際に呼ぶ対象 | + +最短で覚えるなら: + +- plugin = discovery +- server = connection +- tool = invocation + +## 初学者が迷いやすい点 + +### 1. いきなりプロトコル細部へ入る + +先に見るべきは capability routing です。 + +### 2. 
MCP を別世界だと思う + +実際には、同じ routing、同じ permission、同じ result append に戻します。 + +### 3. 正規化を省く + +外部ツールをローカルツールと同じ形へ揃えないと、後の心智が急に重くなります。 + +## Try It + +```sh +cd learn-claude-code +python agents/s19_mcp_plugin.py +``` diff --git a/docs/ja/s19a-mcp-capability-layers.md b/docs/ja/s19a-mcp-capability-layers.md new file mode 100644 index 000000000..40b056394 --- /dev/null +++ b/docs/ja/s19a-mcp-capability-layers.md @@ -0,0 +1,257 @@ +# s19a: MCP Capability Layers + +> `s19` の主線は引き続き tools-first で進めるべきです。 +> その上で、この bridge doc は次の心智を足します。 +> +> **MCP は単なる外部 tool 接続ではなく、複数の capability layer を持つ platform です。** + +## 主線とどう併読するか + +MCP を主線から外れずに学ぶなら次の順がよいです。 + +- まず [`s19-mcp-plugin.md`](./s19-mcp-plugin.md) を読み、tools-first の入口を固める +- 次に [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) を見直し、外部 capability がどう unified tool bus に戻るかを見る +- state record が混ざり始めたら [`data-structures.md`](./data-structures.md) を見直す +- concept boundary が混ざり始めたら [`glossary.md`](./glossary.md) と [`entity-map.md`](./entity-map.md) を見直す + +## なぜ別立てで必要か + +教材 repo として、正文を external tools から始めるのは正しいです。 + +最も入りやすい入口は: + +- 外部 server に接続する +- tool 定義を受け取る +- tool を呼ぶ +- 結果を agent へ戻す + +しかし完成度を上げようとすると、すぐ次の問いに出会います。 + +- server は stdio / HTTP / SSE / WebSocket のどれでつながるのか +- なぜ `connected` の server もあれば `pending` や `needs-auth` の server もあるのか +- resources や prompts は tools とどう並ぶのか +- elicitation はなぜ特別な対話になるのか +- OAuth のような auth flow はどの層で理解すべきか + +capability-layer map がないと、MCP は急に散らばって見えます。 + +## まず用語 + +### capability layer とは + +capability layer は: + +> 大きな system の中の 1 つの責務面 + +です。 + +MCP のすべてを 1 つの袋に入れないための考え方です。 + +### transport とは + +transport は接続通路です。 + +- stdio +- HTTP +- SSE +- WebSocket + +### elicitation とは + +これは見慣れない用語ですが、教材版では次の理解で十分です。 + +> MCP server 側が追加情報を要求し、user からさらに入力を引き出す対話 + +つまり常に: + +> agent calls tool -> tool returns result + +だけとは限らず、server 側から: + +> 続けるためにもっと入力が必要 + +と言ってくる場合があります。 + +## 最小の心智モデル + +MCP を 6 層で見ると整理しやすいです。 + +```text +1. Config Layer + server 設定がどう表現されるか + +2. 
Transport Layer + 何の通路で接続するか + +3. Connection State Layer + connected / pending / failed / needs-auth + +4. Capability Layer + tools / resources / prompts / elicitation + +5. Auth Layer + 認証が必要か、認証状態は何か + +6. Router Integration Layer + tool routing / permission / notifications にどう戻るか +``` + +ここで最重要なのは: + +**tools は一層であって、MCP の全体ではない** + +という点です。 + +## なぜ正文は tools-first のままでよいか + +教材として大事なポイントです。 + +MCP に複数 layer があっても、正文主線はまず次で十分です。 + +### Step 1: 外部 tools から入る + +これは読者がすでに学んだものと最も自然につながります。 + +- local tools +- external tools +- 1 本の shared router + +### Step 2: その上で他の layer があると知らせる + +例えば: + +- resources +- prompts +- elicitation +- auth + +### Step 3: どこまで実装するかを決める + +これが教材 repo の目的に合っています。 + +**まず似た system を作り、その後で platform layer を厚くする** + +## 主要 record + +### 1. `ScopedMcpServerConfig` + +教材版でも最低限この概念は見せるべきです。 + +```python +config = { + "name": "postgres", + "type": "stdio", + "command": "npx", + "args": ["-y", "..."], + "scope": "project", +} +``` + +`scope` が重要なのは、server config が 1 つの場所からだけ来るとは限らないからです。 + +### 2. MCP connection state + +```python +server_state = { + "name": "postgres", + "status": "connected", # pending / failed / needs-auth / disabled + "config": {...}, +} +``` + +### 3. `MCPToolSpec` + +```python +tool = { + "name": "mcp__postgres__query", + "description": "...", + "input_schema": {...}, +} +``` + +### 4. 
`ElicitationRequest` + +```python +request = { + "server_name": "some-server", + "message": "Please provide additional input", + "requested_schema": {...}, +} +``` + +ここでの教材上の要点は、elicitation を今すぐ全部実装することではありません。 + +要点は: + +**MCP は常に一方向の tool invocation だけとは限らない** + +という点です。 + +## より整理された図 + +```text +MCP Config + | + v +Transport + | + v +Connection State + | + +-- connected + +-- pending + +-- needs-auth + +-- failed + | + v +Capabilities + +-- tools + +-- resources + +-- prompts + +-- elicitation + | + v +Router / Permission / Notification Integration +``` + +## なぜ auth を主線の中心にしない方がよいか + +auth は platform 全体では本物の layer です。 + +しかし正文が早い段階で OAuth や vendor 固有 detail へ落ちると、初学者は system shape を失います。 + +教材としては次の順がよいです。 + +- まず auth layer が存在すると知らせる +- 次に `connected` と `needs-auth` が違う connection state だと教える +- さらに進んだ platform work の段階で auth state machine を詳しく扱う + +これなら正確さを保ちつつ、主線を壊しません。 + +## `s19` と `s02a` との関係 + +- `s19` 本文は tools-first の external capability path を教える +- この note は broader platform map を補う +- `s02a` は MCP capability が unified tool control plane にどう戻るかを補う + +三つを合わせて初めて、読者は本当の構図を持てます。 + +**MCP は外部 capability platform であり、tools はその最初の切り口にすぎない** + +## 初学者がやりがちな間違い + +### 1. MCP を外部 tool catalog だけだと思う + +その理解だと resources / prompts / auth / elicitation が後で急に見えて混乱します。 + +### 2. transport や OAuth detail に最初から沈み込む + +これでは主線が壊れます。 + +### 3. MCP tool を permission の外に置く + +system boundary に危険な横穴を開けます。 + +### 4. server config・connection state・exposed capabilities を一つに混ぜる + +この三層は概念的に分けておくべきです。 diff --git a/docs/ja/teaching-scope.md b/docs/ja/teaching-scope.md new file mode 100644 index 000000000..e0ab36b29 --- /dev/null +++ b/docs/ja/teaching-scope.md @@ -0,0 +1,142 @@ +# 教材の守備範囲 + +> この文書は、この教材が何を教え、何を意図的に主線から外すかを明示するためのものです。 + +## この教材の目標 + +これは、ある実運用コードベースを逐行で注釈するためのリポジトリではありません。 + +本当の目標は: + +**高完成度の coding-agent harness を 0 から自力で作れるようにすること** + +です。 + +そのために守るべき条件は 3 つあります。 + +1. 学習者が本当に自分で作り直せること +2. 主線が side detail に埋もれないこと +3. 
実在しない mechanism を学ばせないこと + +## 主線章で必ず明示すべきこと + +各章は次をはっきりさせるべきです。 + +- その mechanism が何の問題を解くか +- どの module / layer に属するか +- どんな state を持つか +- どんな data structure を導入するか +- loop にどうつながるか +- runtime flow がどう変わるか + +## 主線を支配させない方がよいもの + +次の話題は存在してよいですが、初心者向け主線の中心に置くべきではありません。 + +- packaging / build / release flow +- cross-platform compatibility glue +- telemetry / enterprise policy wiring +- historical compatibility branches +- product 固有の naming accident +- 上流コードとの逐行一致 + +## ここでいう高忠実度とは何か + +高忠実度とは、すべての周辺 detail を 1:1 で再現することではありません。 + +ここで寄せるべき対象は: + +- core runtime model +- module boundaries +- key records +- state transitions +- major subsystem cooperation + +つまり: + +**幹には忠実に、枝葉は教材として意識的に簡略化する** + +ということです。 + +## 想定読者 + +標準的な想定読者は: + +- 基本的な Python は読める +- 関数、クラス、list、dict は分かる +- ただし agent platform は初学者でもよい + +したがって文章は: + +- 先に概念を説明する +- 1つの概念を1か所で完結させる +- `what -> why -> how` の順で進める + +のが望ましいです。 + +## 各章の推奨構成 + +1. これが無いと何が困るか +2. 先に新しい言葉を説明する +3. 最小の心智モデルを示す +4. 主要 record / data structure を示す +5. 最小で正しい実装を示す +6. loop への接続点を示す +7. 初学者がやりがちな誤りを示す +8. 高完成度版で後から足すものを示す + +## 用語の扱い + +次の種類の語が出るときは、名前だけ投げず意味を説明した方がよいです。 + +- design pattern +- data structure +- concurrency term +- protocol / networking term +- 一般的ではない engineering vocabulary + +例: + +- state machine +- scheduler +- queue +- worktree +- DAG +- protocol envelope + +## 最小正解版の原則 + +現実の mechanism は複雑でも、教材は最初から全分岐を見せる必要はありません。 + +よい順序は: + +1. 最小で正しい版を示す +2. それで既に解ける core problem を示す +3. 
後で何を足すかを示す + +例: + +- permission: `deny -> mode -> allow -> ask` +- error recovery: 主要な回復枝から始める +- task system: records / dependencies / unlocks から始める +- team protocol: request / response + `request_id` から始める + +## 逆向きソースの使い方 + +逆向きで得たソースは: + +**保守者の校正材料** + +として使うのが正しいです。 + +役割は: + +- 主線 mechanism の説明がズレていないか確かめる +- 重要な境界や record が抜けていないか確かめる +- 教材実装が fiction に流れていないか確かめる + +読者がそれを見ないと本文を理解できない構成にしてはいけません。 + +## 一文で覚える + +**よい教材は、細部をたくさん言うことより、重要な細部を完全に説明し、重要でない細部を安全に省くことによって質が決まります。** diff --git a/docs/ja/team-task-lane-model.md b/docs/ja/team-task-lane-model.md new file mode 100644 index 000000000..58109c93c --- /dev/null +++ b/docs/ja/team-task-lane-model.md @@ -0,0 +1,308 @@ +# Team Task Lane Model + +> `s15-s18` に入ると、関数名よりも先に混ざりやすいものがあります。 +> +> それは、 +> +> **誰が働き、誰が調整し、何が目標を記録し、何が実行レーンを提供しているのか** +> +> という層の違いです。 + +## この橋渡し資料が解決すること + +`s15-s18` を通して読むと、次の言葉が一つの曖昧な塊になりやすくなります。 + +- teammate +- protocol request +- task +- runtime task +- worktree + +全部「仕事が進む」ことに関係していますが、同じ層ではありません。 + +ここを分けないと、後半が急に分かりにくくなります。 + +- teammate は task と同じなのか +- `request_id` と `task_id` は何が違うのか +- worktree は runtime task の一種なのか +- task が終わっているのに、なぜ worktree が kept のままなのか + +この資料は、その層をきれいに分けるためのものです。 + +## 読む順番 + +1. [`s15-agent-teams.md`](./s15-agent-teams.md) で長寿命 teammate を確認する +2. [`s16-team-protocols.md`](./s16-team-protocols.md) で追跡可能な request-response を確認する +3. [`s17-autonomous-agents.md`](./s17-autonomous-agents.md) で自律 claim を確認する +4. 
[`s18-worktree-task-isolation.md`](./s18-worktree-task-isolation.md) で隔離 execution lane を確認する + +用語が混ざってきたら、次も見直してください。 + +- [`entity-map.md`](./entity-map.md) +- [`data-structures.md`](./data-structures.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +## まずはこの区別を固定する + +```text +teammate + = 長期に協力する主体 + +protocol request + = チーム内で追跡される調整要求 + +task + = 何をやるべきか + +runtime task / execution slot + = 今まさに動いている実行単位 + +worktree + = 他の変更とぶつからずに仕事を進める実行ディレクトリ +``` + +特に混ざりやすいのは最後の3つです。 + +- `task` +- `runtime task` +- `worktree` + +毎回、次の3つを別々に問い直してください。 + +- これは目標か +- これは実行中の単位か +- これは隔離された実行ディレクトリか + +## 一番小さい図 + +```text +Team Layer + teammate: alice (frontend) + +Protocol Layer + request_id=req_01 + kind=plan_approval + status=pending + +Work Graph Layer + task_id=12 + subject="Implement login page" + owner="alice" + status="in_progress" + +Runtime Layer + runtime_id=rt_01 + type=in_process_teammate + status=running + +Execution Lane Layer + worktree=login-page + path=.worktrees/login-page + status=active +``` + +この中で、仕事そのものの目標を表しているのは一つだけです。 + +> `task_id=12` + +他は、その目標のまわりで協調・実行・分離を支える層です。 + +## 1. Teammate: 誰が協力しているか + +`s15` で導入される層です。 + +ここが答えること: + +- 長寿命 worker の名前 +- 役割 +- `working` / `idle` / `shutdown` +- 独立した inbox を持つか + +例: + +```python +member = { + "name": "alice", + "role": "frontend", + "status": "idle", +} +``` + +大事なのは「agent をもう1個増やす」ことではありません。 + +> 繰り返し仕事を受け取れる長寿命の身元 + +これが本質です。 + +## 2. Protocol Request: 何を調整しているか + +`s16` の層です。 + +ここが答えること: + +- 誰が誰に依頼したか +- どんな種類の request か +- pending なのか、もう解決済みなのか + +例: + +```python +request = { + "request_id": "a1b2c3d4", + "kind": "plan_approval", + "from": "alice", + "to": "lead", + "status": "pending", +} +``` + +これは普通の会話ではありません。 + +> 状態更新を続けられる調整記録 + +です。 + +## 3. 
Task: 何をやるのか + +これは `s12` の durable work-graph task であり、`s17` で teammate が claim する対象です。 + +ここが答えること: + +- 目標は何か +- 誰が担当しているか +- 何にブロックされているか +- 進捗状態はどうか + +例: + +```python +task = { + "id": 12, + "subject": "Implement login page", + "status": "in_progress", + "owner": "alice", + "blockedBy": [], +} +``` + +キーワードは: + +**目標** + +ディレクトリでも、protocol でも、process でもありません。 + +## 4. Runtime Task / Execution Slot: 今なにが走っているか + +この層は `s13` の橋渡し資料ですでに説明されていますが、`s15-s18` ではさらに重要になります。 + +例: + +- background shell が走っている +- 長寿命 teammate が今作業している +- monitor が外部状態を見ている + +これらは、 + +> 実行中の slot + +として理解するのが一番きれいです。 + +例: + +```python +runtime = { + "id": "rt_01", + "type": "in_process_teammate", + "status": "running", + "work_graph_task_id": 12, +} +``` + +大事な境界: + +- 1つの task から複数の runtime task が派生しうる +- runtime task は durable な目標そのものではなく、実行インスタンスである + +## 5. Worktree: どこでやるのか + +`s18` で導入される execution lane 層です。 + +ここが答えること: + +- どの隔離ディレクトリを使うか +- どの task と結び付いているか +- その lane は `active` / `kept` / `removed` のどれか + +例: + +```python +worktree = { + "name": "login-page", + "path": ".worktrees/login-page", + "task_id": 12, + "status": "active", +} +``` + +キーワードは: + +**実行境界** + +task そのものではなく、その task を進めるための隔離レーンです。 + +## 層はどうつながるか + +```text +teammate + protocol request で協調し + task を claim し + execution slot として走り + worktree lane の中で作業する +``` + +もっと具体的に言うなら: + +> `alice` が `task #12` を claim し、`login-page` worktree lane の中でそれを進める + +この言い方は、 + +> "alice is doing the login-page worktree task" + +のような曖昧な言い方よりずっと正確です。 + +後者は次の3層を一つに潰してしまいます。 + +- teammate +- task +- worktree + +## よくある間違い + +### 1. teammate と task を同じものとして扱う + +teammate は実行者、task は目標です。 + +### 2. `request_id` と `task_id` を同じ種類の ID だと思う + +片方は調整、片方は目標です。 + +### 3. runtime slot を durable task だと思う + +実行は終わっても、durable task は残ることがあります。 + +### 4. worktree を task そのものだと思う + +worktree は execution lane でしかありません。 + +### 5. 
「並列で動く」とだけ言って層の名前を出さない + +良い教材は「agent がたくさんいる」で止まりません。 + +次のように言える必要があります。 + +> teammate は長期協力を担い、request は調整を追跡し、task は目標を記録し、runtime slot は実行を担い、worktree は実行ディレクトリを隔離する。 + +## 読み終えたら言えるようになってほしいこと + +1. `s17` の自律 claim は `s12` の work-graph task を取るのであって、`s13` の runtime slot を取るのではない。 +2. `s18` の worktree は task に execution lane を結び付けるのであって、task をディレクトリへ変えるのではない。 diff --git a/docs/zh/data-structures.md b/docs/zh/data-structures.md new file mode 100644 index 000000000..8b7ff979c --- /dev/null +++ b/docs/zh/data-structures.md @@ -0,0 +1,800 @@ +# Core Data Structures (核心数据结构总表) + +> 学习 agent,最容易迷路的地方不是功能太多,而是不知道“状态到底放在哪”。这份文档把主线章节和桥接章节里反复出现的关键数据结构集中列出来,方便你把整套系统看成一张图。 + +## 推荐联读 + +建议把这份总表当成“状态地图”来用: + +- 先不懂词,就回 [`glossary.md`](./glossary.md)。 +- 先不懂边界,就回 [`entity-map.md`](./entity-map.md)。 +- 如果卡在 `TaskRecord` 和 `RuntimeTaskState`,继续看 [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md)。 +- 如果卡在 MCP 为什么还有 resource / prompt / elicitation,继续看 [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md)。 + +## 先记住两个总原则 + +### 原则 1:区分“内容状态”和“流程状态” + +- `messages`、`tool_result`、memory 正文,属于内容状态。 +- `turn_count`、`transition`、`pending_classifier_check`,属于流程状态。 + +很多初学者会把这两类状态混在一起。 +一混,后面就很难看懂为什么一个结构完整的系统会需要控制平面。 + +### 原则 2:区分“持久状态”和“运行时状态” + +- task、memory、schedule 这类状态,通常会落盘,跨会话存在。 +- runtime task、当前 permission decision、当前 MCP connection 这类状态,通常只在系统运行时活着。 + +## 1. 
查询与对话控制状态 + +### Message + +作用:保存当前对话和工具往返历史。 + +最小形状: + +```python +message = { + "role": "user" | "assistant", + "content": "...", +} +``` + +支持工具调用后,`content` 常常不再只是字符串,而会变成块列表,其中可能包含: + +- text block +- `tool_use` +- `tool_result` + +相关章节: + +- `s01` +- `s02` +- `s06` +- `s10` + +### NormalizedMessage + +作用:把不同来源的消息整理成统一、稳定、可送给模型 API 的消息格式。 + +最小形状: + +```python +message = { + "role": "user" | "assistant", + "content": [ + {"type": "text", "text": "..."}, + ], +} +``` + +它和普通 `Message` 的区别是: + +- `Message` 偏“系统内部记录” +- `NormalizedMessage` 偏“准备发给模型之前的统一输入” + +相关章节: + +- `s10` +- [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) + +### CompactSummary + +作用:上下文太长时,用摘要替代旧消息原文。 + +最小形状: + +```python +summary = { + "task_overview": "...", + "current_state": "...", + "key_decisions": ["..."], + "next_steps": ["..."], +} +``` + +相关章节: + +- `s06` +- `s11` + +### SystemPromptBlock + +作用:把 system prompt 从一整段大字符串,拆成若干可管理片段。 + +最小形状: + +```python +block = { + "text": "...", + "cache_scope": None, +} +``` + +你可以把它理解成: + +- `text`:这一段提示词正文 +- `cache_scope`:这一段是否可以复用缓存 + +相关章节: + +- `s10` +- [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) + +### PromptParts + +作用:在真正拼成 system prompt 之前,先把各部分拆开管理。 + +最小形状: + +```python +parts = { + "core": "...", + "tools": "...", + "skills": "...", + "memory": "...", + "claude_md": "...", + "dynamic": "...", +} +``` + +相关章节: + +- `s10` + +### QueryParams + +作用:进入查询主循环时,外部一次性传进来的输入集合。 + +最小形状: + +```python +params = { + "messages": [...], + "system_prompt": "...", + "user_context": {...}, + "system_context": {...}, + "tool_use_context": {...}, + "fallback_model": None, + "max_output_tokens_override": None, + "max_turns": None, +} +``` + +它的重要点在于: + +- 这是“本次 query 的入口输入” +- 它和循环内部不断变化的状态,不是同一层 + +相关章节: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) + +### QueryState + +作用:保存一条 query 在多轮循环之间不断变化的流程状态。 + +最小形状: + +```python +state = { + "messages": [...], + "tool_use_context": {...}, 
+ "turn_count": 1, + "max_output_tokens_recovery_count": 0, + "has_attempted_reactive_compact": False, + "max_output_tokens_override": None, + "pending_tool_use_summary": None, + "stop_hook_active": False, + "transition": None, +} +``` + +这类字段的共同特点是: + +- 它们不是对话内容 +- 它们是“这一轮该怎么继续”的控制状态 + +相关章节: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) +- `s11` + +### TransitionReason + +作用:记录“上一轮为什么继续了,而不是结束”。 + +最小形状: + +```python +transition = { + "reason": "next_turn", +} +``` + +在更完整的 query 状态里,这个 `reason` 常见会有这些类型: + +- `next_turn` +- `reactive_compact_retry` +- `token_budget_continuation` +- `max_output_tokens_recovery` +- `stop_hook_continuation` + +它的价值不是炫技,而是让: + +- 日志更清楚 +- 测试更清楚 +- 恢复链路更清楚 + +相关章节: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) +- `s11` + +## 2. 工具、权限与 hook 执行状态 + +### ToolSpec + +作用:告诉模型“有哪些工具、每个工具要什么输入”。 + +最小形状: + +```python +tool = { + "name": "read_file", + "description": "Read file contents.", + "input_schema": {...}, +} +``` + +相关章节: + +- `s02` +- `s19` + +### ToolDispatchMap + +作用:把工具名映射到真实执行函数。 + +最小形状: + +```python +handlers = { + "read_file": run_read, + "write_file": run_write, + "bash": run_bash, +} +``` + +相关章节: + +- `s02` + +### ToolUseContext + +作用:把工具运行时需要的共享环境打成一个总线。 + +最小形状: + +```python +tool_use_context = { + "tools": handlers, + "permission_context": {...}, + "mcp_clients": [], + "messages": [...], + "app_state": {...}, + "cwd": "...", + "read_file_state": {...}, + "notifications": [], +} +``` + +这层很关键。 +因为在更完整的工具执行环境里,工具拿到的不只是 `tool_input`,还包括: + +- 当前权限环境 +- 当前消息 +- 当前 app state +- 当前 MCP client +- 当前文件读取缓存 + +相关章节: + +- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) +- `s07` +- `s19` + +### PermissionRule + +作用:描述某类工具调用命中后该怎么处理。 + +最小形状: + +```python +rule = { + "tool_name": "bash", + "rule_content": "rm -rf *", + "behavior": "deny", +} +``` + +相关章节: + +- `s07` + +### PermissionRuleSource + +作用:标记一条权限规则是从哪里来的。 + +最小形状: + +```python +source = ( + "userSettings" + | 
"projectSettings" + | "localSettings" + | "flagSettings" + | "policySettings" + | "cliArg" + | "command" + | "session" +) +``` + +这个结构的意义是: + +- 你不只知道“有什么规则” +- 还知道“这条规则是谁加进来的” + +相关章节: + +- `s07` + +### PermissionDecision + +作用:表示一次工具调用当前该允许、拒绝还是提问。 + +最小形状: + +```python +decision = { + "behavior": "allow" | "deny" | "ask", + "reason": "matched deny rule", +} +``` + +在更完整的权限流里,`ask` 结果还可能带: + +- 修改后的输入 +- 建议写回哪些规则更新 +- 一个后台自动分类检查 + +相关章节: + +- `s07` + +### PermissionUpdate + +作用:描述“这次权限确认之后,要把什么改回配置里”。 + +最小形状: + +```python +update = { + "type": "addRules" | "removeRules" | "setMode" | "addDirectories", + "destination": "userSettings" | "projectSettings" | "localSettings" | "session", + "rules": [], +} +``` + +它解决的是一个很容易被漏掉的问题: + +用户这次点了“允许”,到底只是这一次放行,还是要写回会话、项目,甚至用户级配置。 + +相关章节: + +- `s07` + +### HookContext + +作用:把某个 hook 事件发生时的上下文打包给外部脚本。 + +最小形状: + +```python +context = { + "event": "PreToolUse", + "tool_name": "bash", + "tool_input": {...}, + "tool_result": "...", +} +``` + +相关章节: + +- `s08` + +### RecoveryState + +作用:记录恢复流程已经尝试到哪里。 + +最小形状: + +```python +state = { + "continuation_attempts": 0, + "compact_attempts": 0, + "transport_attempts": 0, +} +``` + +相关章节: + +- `s11` + +## 3. 
持久化工作状态 + +### TodoItem + +作用:当前会话里的轻量计划项。 + +最小形状: + +```python +todo = { + "content": "Read parser.py", + "status": "pending" | "completed", +} +``` + +相关章节: + +- `s03` + +### MemoryEntry + +作用:保存跨会话仍然有价值的信息。 + +最小形状: + +```python +memory = { + "name": "prefer_tabs", + "description": "User prefers tabs for indentation", + "type": "user" | "feedback" | "project" | "reference", + "scope": "private" | "team", + "body": "...", +} +``` + +这里最重要的不是字段多,而是边界清楚: + +- 只存不容易从当前项目状态重新推出来的东西 +- 记忆可能会过时,要验证 + +相关章节: + +- `s09` + +### TaskRecord + +作用:磁盘上的工作图任务节点。 + +最小形状: + +```python +task = { + "id": 12, + "subject": "Implement auth module", + "description": "", + "status": "pending", + "blockedBy": [], + "blocks": [], + "owner": "", + "worktree": "", +} +``` + +重点字段: + +- `blockedBy`:谁挡着我 +- `blocks`:我挡着谁 +- `owner`:谁认领了 +- `worktree`:在哪个隔离目录里做 + +相关章节: + +- `s12` +- `s17` +- `s18` +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +### ScheduleRecord + +作用:记录未来要触发的调度任务。 + +最小形状: + +```python +schedule = { + "id": "job_001", + "cron": "0 9 * * 1", + "prompt": "Generate weekly report", + "recurring": True, + "durable": True, + "created_at": 1710000000.0, + "last_fired_at": None, +} +``` + +相关章节: + +- `s14` + +## 4. 
运行时执行状态 + +### RuntimeTaskState + +作用:表示系统里一个“正在运行的执行单元”。 + +最小形状: + +```python +runtime_task = { + "id": "b8k2m1qz", + "type": "local_bash", + "status": "running", + "description": "Run pytest", + "start_time": 1710000000.0, + "end_time": None, + "output_file": ".task_outputs/b8k2m1qz.txt", + "notified": False, +} +``` + +这和 `TaskRecord` 不是一回事: + +- `TaskRecord` 管工作目标 +- `RuntimeTaskState` 管当前执行槽位 + +相关章节: + +- `s13` +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +### TeamMember + +作用:记录一个持久队友是谁、在做什么。 + +最小形状: + +```python +member = { + "name": "alice", + "role": "coder", + "status": "idle", +} +``` + +相关章节: + +- `s15` +- `s17` + +### MessageEnvelope + +作用:队友之间传递结构化消息。 + +最小形状: + +```python +message = { + "type": "message" | "shutdown_request" | "plan_approval", + "from": "lead", + "to": "alice", + "request_id": "req_001", + "content": "...", + "payload": {}, + "timestamp": 1710000000.0, +} +``` + +相关章节: + +- `s15` +- `s16` + +### RequestRecord + +作用:追踪一个协议请求当前走到哪里。 + +最小形状: + +```python +request = { + "request_id": "req_001", + "kind": "shutdown" | "plan_review", + "status": "pending" | "approved" | "rejected" | "expired", + "from": "lead", + "to": "alice", +} +``` + +相关章节: + +- `s16` + +### WorktreeRecord + +作用:记录一个任务绑定的隔离工作目录。 + +最小形状: + +```python +worktree = { + "name": "auth-refactor", + "path": ".worktrees/auth-refactor", + "branch": "wt/auth-refactor", + "task_id": 12, + "status": "active", +} +``` + +相关章节: + +- `s18` + +### WorktreeEvent + +作用:记录 worktree 生命周期事件,便于恢复和排查。 + +最小形状: + +```python +event = { + "event": "worktree.create.after", + "task_id": 12, + "worktree": "auth-refactor", + "ts": 1710000000.0, +} +``` + +相关章节: + +- `s18` + +## 5. 
外部平台与 MCP 状态
+
+### ScopedMcpServerConfig
+
+作用:描述一个 MCP server 应该如何连接,以及它的配置来自哪个作用域。
+
+最小形状:
+
+```python
+config = {
+    "name": "postgres",
+    "type": "stdio",
+    "command": "npx",
+    "args": ["-y", "..."],
+    "scope": "project",
+}
+```
+
+这个 `scope` 很重要,因为 server 配置可能来自:
+
+- 本地
+- 用户
+- 项目
+- 动态注入
+- 插件或托管来源
+
+相关章节:
+
+- `s19`
+- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md)
+- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md)
+
+### MCPServerConnectionState
+
+作用:表示一个 MCP server 当前连到了哪一步。
+
+最小形状:
+
+```python
+server_state = {
+    "name": "postgres",
+    "status": "connected",  # pending / failed / needs-auth / disabled
+    "config": {...},
+}
+```
+
+这层特别重要,因为“有没有接上”不是布尔值,而是多种状态:
+
+- `connected`
+- `pending`
+- `failed`
+- `needs-auth`
+- `disabled`
+
+相关章节:
+
+- `s19`
+- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md)
+
+### MCPToolSpec
+
+作用:把外部 MCP 工具转换成 agent 内部统一工具定义。
+
+最小形状:
+
+```python
+mcp_tool = {
+    "name": "mcp__postgres__query",
+    "description": "Run a SQL query",
+    "input_schema": {...},
+}
+```
+
+相关章节:
+
+- `s19`
+
+### ElicitationRequest
+
+作用:表示 MCP server 反过来向用户请求额外输入。
+
+最小形状:
+
+```python
+request = {
+    "server_name": "some-server",
+    "message": "Please provide additional input",
+    "requested_schema": {...},
+}
+```
+
+它提醒你一件事:
+
+- MCP 不只是“模型主动调工具”
+- 外部 server 也可能反过来请求补充输入
+
+相关章节:
+
+- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md)
+
+## 最后用一句话把它们串起来
+
+如果你只想记一条总线索,可以记这个:
+
+```text
+messages / prompt / query state
+    管本轮输入和继续理由
+
+tools / permissions / hooks
+    管动作怎么安全执行
+
+memory / task / schedule
+    管跨轮、跨会话的持久工作
+
+runtime task / team / worktree
+    管当前执行车道
+
+mcp
+    管系统怎样向外接能力
+```
+
+这份总表最好配合 [`s00-architecture-overview.md`](./s00-architecture-overview.md) 和 [`entity-map.md`](./entity-map.md) 一起看。
+
+## 教学边界
+
+这份总表只负责做两件事:
+
+- 帮你确认一个状态到底属于哪一层
+- 帮你确认这个状态大概长什么样
+
+它不负责穷举真实系统里的每一个字段、每一条兼容分支、每一种产品化补丁。
+
+如果你已经知道某个状态归谁管、什么时候创建、什么时候销毁,再回到对应章节看执行路径,理解会顺很多。 diff --git a/docs/zh/entity-map.md b/docs/zh/entity-map.md new file mode 100644 index 000000000..4df407720 --- /dev/null +++ b/docs/zh/entity-map.md @@ -0,0 +1,199 @@ +# Entity Map (系统实体边界图) + +> 这份文档不是某一章的正文,而是一张“别再混词”的地图。 +> 到了仓库后半程,真正让读者困惑的往往不是代码,而是: +> +> **同一个系统里,为什么会同时出现这么多看起来很像、但其实不是一回事的实体。** + +## 这张图和另外几份桥接文档怎么分工 + +- 这份图先回答:一个词到底属于哪一层。 +- [`glossary.md`](./glossary.md) 先回答:这个词到底是什么意思。 +- [`data-structures.md`](./data-structures.md) 再回答:这个词落到代码里时,状态长什么样。 +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) 专门补“工作图任务”和“运行时任务”的分层。 +- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) 专门补 MCP 平台层不是只有 tools。 + +## 先给一个总图 + +```text +对话层 + - message + - prompt block + - reminder + +动作层 + - tool call + - tool result + - hook event + +工作层 + - work-graph task + - runtime task + - protocol request + +执行层 + - subagent + - teammate + - worktree lane + +平台层 + - mcp server + - mcp capability + - memory record +``` + +## 最容易混淆的 8 对概念 + +### 1. Message vs Prompt Block + +| 实体 | 它是什么 | 它不是什么 | 常见位置 | +|---|---|---|---| +| `Message` | 对话历史中的一条消息 | 不是长期系统规则 | `messages[]` | +| `Prompt Block` | system prompt 内的一段稳定说明 | 不是某一轮刚发生的事件 | prompt builder | + +简单记法: + +- message 更像“对话内容” +- prompt block 更像“系统说明” + +### 2. Todo / Plan vs Task + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `todo / plan` | 当前轮或当前阶段的过程性安排 | 不是长期持久化工作图 | +| `task` | 持久化的工作节点 | 不是某一轮的临时思路 | + +### 3. Work-Graph Task vs Runtime Task + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `work-graph task` | 任务板上的工作节点 | 不是系统里活着的执行单元 | +| `runtime task` | 当前正在执行的后台/agent/monitor 槽位 | 不是依赖图节点 | + +这对概念是整个仓库后半程最关键的区分之一。 + +### 4. Subagent vs Teammate + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `subagent` | 一次性委派执行者 | 不是长期在线成员 | +| `teammate` | 持久存在、可重复接活的队友 | 不是一次性摘要工具 | + +### 5. 
Protocol Request vs Normal Message + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `normal message` | 自由文本沟通 | 不是可追踪的审批流程 | +| `protocol request` | 带 request_id 的结构化请求 | 不是随便说一句话 | + +### 6. Worktree vs Task + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `task` | 说明要做什么 | 不是目录 | +| `worktree` | 说明在哪做 | 不是工作目标 | + +### 7. Memory vs CLAUDE.md + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `memory` | 跨会话仍有价值、但不易从当前代码直接推出来的信息 | 不是项目规则文件 | +| `CLAUDE.md` | 长期规则、约束和说明 | 不是用户偏好或项目动态背景 | + +### 8. MCP Server vs MCP Tool + +| 实体 | 它是什么 | 它不是什么 | +|---|---|---| +| `MCP server` | 外部能力提供者 | 不是单个工具定义 | +| `MCP tool` | 某个 server 暴露出来的一项具体能力 | 不是完整平台连接本身 | + +## 一张“是什么 / 存在哪里”的速查表 + +| 实体 | 主要作用 | 典型存放位置 | +|---|---|---| +| `Message` | 当前对话历史 | `messages[]` | +| `PromptParts` | system prompt 的组装片段 | prompt builder | +| `PermissionRule` | 工具执行前的决策规则 | settings / session state | +| `HookEvent` | 某个时机触发的扩展点 | hook config | +| `MemoryEntry` | 跨会话有价值信息 | `.memory/` | +| `TaskRecord` | 持久化工作节点 | `.tasks/` | +| `RuntimeTaskState` | 正在执行的任务槽位 | runtime task manager | +| `TeamMember` | 持久队友 | `.team/config.json` | +| `MessageEnvelope` | 队友间结构化消息 | `.team/inbox/*.jsonl` | +| `RequestRecord` | 审批/关机等协议状态 | request tracker | +| `WorktreeRecord` | 隔离工作目录记录 | `.worktrees/index.json` | +| `MCPServerConfig` | 外部 server 配置 | plugin / settings | + +## 后半程推荐怎么记 + +如果你到了 `s15` 以后开始觉得名词多,可以只记这条线: + +```text +message / prompt + 管输入 + +tool / permission / hook + 管动作 + +task / runtime task / protocol + 管工作推进 + +subagent / teammate / worktree + 管执行者和执行车道 + +mcp / memory / claude.md + 管平台外延和长期上下文 +``` + +## 初学者最容易心智打结的地方 + +### 1. 把“任务”这个词用在所有层 + +这是最常见的混乱来源。 + +所以建议你在写正文时,尽量直接写全: + +- 工作图任务 +- 运行时任务 +- 后台任务 +- 协议请求 + +不要都叫“任务”。 + +### 2. 把队友和子 agent 混成一类 + +如果生命周期不同,就不是同一类实体。 + +### 3. 把 worktree 当成 task 的别名 + +一个是“做什么”,一个是“在哪做”。 + +### 4. 
把 memory 当成通用笔记本 + +它不是。它只保存很特定的一类长期信息。 + +## 这份图应该怎么用 + +最好的用法不是读一遍背下来,而是: + +- 每次你发现两个词开始混 +- 先来这张图里确认它们是不是一个层级 +- 再回去读对应章节 + +如果你确认“不在一个层级”,下一步最好立刻去找它们对应的数据结构,而不是继续凭感觉读正文。 + +## 教学边界 + +这张图只解决“实体边界”这一个问题。 + +它不负责展开每个实体的全部字段,也不负责把所有产品化分支一起讲完。 + +你可以把它当成一张分层地图: + +- 先确认词属于哪一层 +- 再去对应章节看机制 +- 最后去 [`data-structures.md`](./data-structures.md) 看状态形状 + +## 一句话记住 + +**一个结构完整的系统最怕的不是功能多,而是实体边界不清;边界一清,很多复杂度会自动塌下来。** diff --git a/docs/zh/glossary.md b/docs/zh/glossary.md new file mode 100644 index 000000000..4daa80ee1 --- /dev/null +++ b/docs/zh/glossary.md @@ -0,0 +1,471 @@ +# Glossary (术语表) + +> 这份术语表只收录本仓库主线里最重要、最容易让初学者卡住的词。 +> 如果某个词你看着眼熟但说不清它到底是什么,先回这里。 + +## 推荐联读 + +如果你不是单纯查词,而是已经开始分不清“这些词分别活在哪一层”,建议按这个顺序一起看: + +- 先看 [`entity-map.md`](./entity-map.md):搞清每个实体属于哪一层。 +- 再看 [`data-structures.md`](./data-structures.md):搞清这些词真正落成什么状态结构。 +- 如果你卡在“任务”这个词上,再看 [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md)。 +- 如果你卡在 MCP 不只等于 tools,再看 [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md)。 + +## Agent + +在这套仓库里,`agent` 指的是: +**一个能根据输入做判断,并且会调用工具去完成任务的模型。** + +你可以简单理解成: + +- 模型负责思考 +- harness 负责给模型工作环境 + +## Harness + +`harness` 可以理解成“给 agent 准备好的工作台”。 + +它包括: + +- 工具 +- 文件系统 +- 权限 +- 提示词 +- 记忆 +- 任务系统 + +模型本身不是 harness。 +harness 也不是模型。 + +## Agent Loop + +`agent loop` 是系统反复执行的一条主循环: + +1. 把当前上下文发给模型 +2. 看模型是要直接回答,还是要调工具 +3. 如果调工具,就执行工具 +4. 把工具结果写回上下文 +5. 
再继续下一轮 + +没有这条循环,就没有 agent 系统。 + +## Message / Messages + +`message` 是一条消息。 +`messages` 是消息列表。 + +它通常包含: + +- 用户消息 +- assistant 消息 +- tool_result 消息 + +这份列表就是 agent 最主要的工作记忆。 + +## Tool + +`tool` 是模型可以调用的一种动作。 + +例如: + +- 读文件 +- 写文件 +- 改文件 +- 跑 shell 命令 +- 搜索文本 + +模型并不直接执行系统命令。 +模型只是说“我要调哪个工具、传什么参数”,真正执行的是你的代码。 + +## Tool Schema + +`tool schema` 是工具的输入说明。 + +它告诉模型: + +- 这个工具叫什么 +- 这个工具做什么 +- 需要哪些参数 +- 参数是什么类型 + +可以把它想成“工具使用说明书”。 + +## Dispatch Map + +`dispatch map` 是一张映射表: + +```python +{ + "read_file": read_file_handler, + "write_file": write_file_handler, + "bash": bash_handler, +} +``` + +意思是: + +- 模型说要调用 `read_file` +- 代码就去表里找到 `read_file_handler` +- 然后执行它 + +## Stop Reason + +`stop_reason` 是模型这一轮为什么停下来的原因。 + +常见的有: + +- `end_turn`:模型说完了 +- `tool_use`:模型要调用工具 +- `max_tokens`:模型输出被截断了 + +它决定主循环下一步怎么走。 + +## Context + +`context` 是模型当前能看到的信息总和。 + +包括: + +- `messages` +- system prompt +- 动态补充信息 +- tool_result + +上下文不是永久记忆。 +上下文是“这一轮工作时当前摆在桌上的东西”。 + +## Compact / Compaction + +`compact` 指压缩上下文。 + +因为对话越长,模型能看到的历史就越多,成本和混乱也会一起增加。 + +压缩的目标不是“删除有用信息”,而是: + +- 保留真正关键的内容 +- 去掉重复和噪声 +- 给后面的轮次腾空间 + +## Subagent + +`subagent` 是从当前 agent 派生出来的一个子任务执行者。 + +它最重要的价值是: + +**把一个大任务放到独立上下文里处理,避免污染父上下文。** + +## Fork + +`fork` 在本仓库语境里,指一种子 agent 启动方式: + +- 不是从空白上下文开始 +- 而是先继承父 agent 的已有上下文 + +这适合“子任务必须理解当前讨论背景”的场景。 + +## Permission + +`permission` 就是“这个工具调用能不能执行”。 + +一个好的权限系统通常要回答三件事: + +- 应不应该直接拒绝 +- 能不能自动允许 +- 剩下的是不是要问用户 + +## Permission Mode + +`permission mode` 是权限系统的工作模式。 + +例如: + +- `default`:默认询问 +- `plan`:只允许读,不允许写 +- `auto`:简单安全的操作自动过,危险操作再问 + +## Hook + +`hook` 是一个插入点。 + +意思是: +在不改主循环代码的前提下,在某个时机额外执行一段逻辑。 + +例如: + +- 工具调用前先检查一下 +- 工具调用后追加一条审计信息 + +## Memory + +`memory` 是跨会话保存的信息。 + +但不是所有东西都该存 memory。 + +适合存 memory 的,通常是: + +- 用户长期偏好 +- 多次出现的重要反馈 +- 未来别的会话仍然有价值的信息 + +## System Prompt + +`system prompt` 是系统级说明。 + +它告诉模型: + +- 你是谁 +- 你能做什么 +- 你有哪些规则 +- 你应该如何协作 + +它比普通用户消息更稳定。 + +## System Reminder + +`system reminder` 是每一轮临时追加的动态提醒。 + +例如: + +- 当前目录 +- 当前日期 +- 某个本轮才需要的额外上下文 
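这类每轮重建的动态提醒,可以用一个小函数直观感受一下。以下是教学示意,函数名和输出格式都是假设的,真实系统的组装方式会更复杂:

```python
import datetime
import os

def build_system_reminder() -> str:
    """每一轮调用时重新生成:内容随当前环境和时间变化。"""
    return (
        f"当前目录: {os.getcwd()}\n"
        f"当前日期: {datetime.date.today().isoformat()}"
    )

# 对比:稳定的 system prompt 通常在会话开始时组装一次,之后基本不变
SYSTEM_PROMPT = "你是一个 coding agent,只能通过工具读写文件。"
```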
+ +它和稳定的 system prompt 不是一回事。 + +## Task + +`task` 是持久化任务系统里的一个任务节点。 + +一个 task 通常不只是一句待办事项,还会带: + +- 状态 +- 描述 +- 依赖关系 +- owner + +## Dependency Graph + +`dependency graph` 指任务之间的依赖关系图。 + +最简单的理解: + +- A 做完,B 才能开始 +- C 和 D 可以并行 +- E 要等 C 和 D 都完成 + +这类结构能帮助 agent 判断: + +- 现在能做什么 +- 什么被卡住了 +- 什么能同时做 + +## Worktree + +`worktree` 是 Git 提供的一个机制: + +同一个仓库,可以在多个不同目录里同时展开多个工作副本。 + +它的价值是: + +- 并行做多个任务 +- 不互相污染文件改动 +- 便于多 agent 并行工作 + +## MCP + +`MCP` 是 Model Context Protocol。 + +你可以先把它理解成一套统一接口,让 agent 能接入外部工具。 + +它解决的核心问题是: + +- 工具不必都写死在主程序里 +- 可以通过统一协议接入外部能力 + +如果你已经知道“能接外部工具”,但开始分不清 server、connection、tool、resource、prompt 这些层,继续看: + +- [`data-structures.md`](./data-structures.md) +- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) + +## Runtime Task + +`runtime task` 指的是: + +> 系统当前正在运行、等待完成、或者刚刚结束的一条执行单元。 + +例如: + +- 一个后台 `pytest` +- 一个正在工作的 teammate +- 一个正在运行的 monitor + +它和 `task` 不一样。 + +- `task` 更像工作目标 +- `runtime task` 更像执行槽位 + +如果你总把这两个词混掉,不要只在正文里来回翻,直接去看: + +- [`entity-map.md`](./entity-map.md) +- [`data-structures.md`](./data-structures.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +## Teammate + +`teammate` 是长期存在的队友 agent。 + +它和 `subagent` 的区别是: + +- `subagent`:一次性委派,干完就结束 +- `teammate`:长期存在,可以反复接任务 + +如果你发现自己开始把这两个词混用,说明你需要回看: + +- `s04` +- `s15` +- `entity-map.md` + +## Protocol + +`protocol` 就是一套提前约好的协作规则。 + +它回答的是: + +- 消息应该长什么样 +- 收到以后要怎么处理 +- 批准、拒绝、超时这些状态怎么记录 + +在团队章节里,它最常见的形状是: + +```text +request + -> +response + -> +status update +``` + +## Envelope + +`envelope` 本意是“信封”。 + +在程序里,它表示: + +> 把正文和一些元信息一起包起来的一条结构化记录。 + +例如一条协议消息里,正文之外还会附带: + +- `from` +- `to` +- `request_id` +- `timestamp` + +这整包东西,就可以叫一个 `envelope`。 + +## State Machine + +`state machine` 不是很玄的高级理论。 + +你可以先把它理解成: + +> 一张“状态可以怎么变化”的规则表。 + +例如: + +```text +pending -> approved +pending -> rejected +pending -> expired +``` + +这就是一个最小状态机。 + +## Router + +`router` 可以简单理解成“分发器”。 + +它的任务是: + +- 看请求属于哪一类 +- 把它送去正确的处理路径 + +例如工具层里: + +- 本地工具走本地 handler +- 
`mcp__...` 工具走 MCP client + +## Control Plane + +`control plane` 可以理解成“负责协调和控制的一层”。 + +它通常不直接产出最终业务结果, +而是负责决定: + +- 谁来执行 +- 在什么环境里执行 +- 有没有权限 +- 执行后要不要通知别的模块 + +这个词第一次看到容易怕。 +但在本仓库里,你只需要把它先记成: + +> 不直接干活,负责协调怎么干活的一层。 + +## Capability + +`capability` 就是“能力项”。 + +例如在 MCP 里,能力不只可能是工具,还可能包括: + +- tools +- resources +- prompts +- elicitation + +所以 `capability` 比 `tool` 更宽。 + +## Resource + +`resource` 可以理解成: + +> 一个可读取、可引用、但不一定是“执行动作”的外部内容入口。 + +例如: + +- 一份文档 +- 一个只读配置 +- 一块可被模型读取的数据内容 + +它和 `tool` 的区别是: + +- `tool` 更像动作 +- `resource` 更像可读取内容 + +## Elicitation + +`elicitation` 可以先理解成: + +> 外部系统反过来向用户要补充输入。 + +也就是说,不再只是 agent 主动调用外部能力。 +外部能力也可能说: + +“我还缺一点信息,请你补一下。” + +## 最容易混的几对词 + +如果你是初学者,下面这几对词最值得一起记。 + +| 词对 | 最简单的区分方法 | +|---|---| +| `message` vs `system prompt` | 一个更像对话内容,一个更像系统说明 | +| `todo` vs `task` | 一个更像临时步骤,一个更像持久化工作节点 | +| `task` vs `runtime task` | 一个管目标,一个管执行 | +| `subagent` vs `teammate` | 一个一次性,一个长期存在 | +| `tool` vs `resource` | 一个更像动作,一个更像内容 | +| `permission` vs `hook` | 一个决定能不能做,一个决定要不要额外插入行为 | + +--- + +如果读文档时又遇到新词卡住,优先回这里,不要硬顶着往后读。 diff --git a/docs/zh/s00-architecture-overview.md b/docs/zh/s00-architecture-overview.md new file mode 100644 index 000000000..09fc90ae3 --- /dev/null +++ b/docs/zh/s00-architecture-overview.md @@ -0,0 +1,461 @@ +# s00: Architecture Overview (架构总览) + +> 这一章是全仓库的地图。 +> 如果你只想先知道“整个系统到底由哪些模块组成、为什么是这个学习顺序”,先读这一章。 + +## 先说结论 + +这套仓库的主线是合理的。 + +它最重要的优点,不是“章节数量多”,而是它把学习过程拆成了四个阶段: + +1. 先做出一个真的能工作的 agent。 +2. 再补安全、扩展、记忆和恢复。 +3. 再把临时清单升级成持久化任务系统。 +4. 
最后再进入多 agent、隔离执行和外部工具平台。 + +这个顺序符合初学者的心智。 + +因为一个新手最需要的,不是先知道所有高级细节,而是先建立一条稳定的主线: + +`用户输入 -> 模型思考 -> 调工具 -> 拿结果 -> 继续思考 -> 完成` + +只要这条主线还没真正理解,后面的权限、hook、memory、MCP 都会变成一堆零散名词。 + +## 这套仓库到底要还原什么 + +本仓库的目标不是逐行复制任何一个生产仓库。 + +本仓库真正要还原的是: + +- 主要模块有哪些 +- 模块之间怎么协作 +- 每个模块的核心职责是什么 +- 关键状态存在哪里 +- 一条请求在系统里是怎么流动的 + +也就是说,我们追求的是: + +**设计主脉络高保真,而不是所有外围实现细节 1:1。** + +这很重要。 + +如果你是为了自己从 0 到 1 做一个类似系统,那么你真正需要掌握的是: + +- 核心循环 +- 工具机制 +- 规划与任务 +- 上下文管理 +- 权限与扩展点 +- 持久化 +- 多 agent 协作 +- 工作隔离 +- 外部工具接入 + +而不是打包、跨平台兼容、历史兼容分支或产品化胶水代码。 + +## 三条阅读原则 + +### 1. 先学最小版本,再学结构更完整的版本 + +比如子 agent。 + +最小版本只需要: + +- 父 agent 发一个子任务 +- 子 agent 用自己的 `messages` +- 子 agent 返回一个摘要 + +这已经能解决 80% 的核心问题:上下文隔离。 + +等这个最小版本你真的能写出来,再去补更完整的能力,比如: + +- 继承父上下文的 fork 模式 +- 独立权限 +- 背景运行 +- worktree 隔离 + +### 2. 每个新名词都必须先解释 + +本仓库会经常用到一些词: + +- `state machine` +- `dispatch map` +- `dependency graph` +- `frontmatter` +- `worktree` +- `MCP` + +如果你对这些词不熟,不要硬扛。 +应该立刻去看术语表:[`glossary.md`](./glossary.md) + +如果你想先知道“这套仓库到底教什么、不教什么”,建议配合看: + +- [`teaching-scope.md`](./teaching-scope.md) + +如果你想先把最关键的数据结构建立成整体地图,可以配合看: + +- [`data-structures.md`](./data-structures.md) + +如果你已经知道章节顺序没问题,但一打开本地 `agents/*.py` 就会重新乱掉,建议再配合看: + +- [`s00f-code-reading-order.md`](./s00f-code-reading-order.md) + +### 3. 
不把复杂外围细节伪装成“核心机制” + +好的教学,不是把一切都讲进去。 + +好的教学,是把真正关键的东西讲完整,把不关键但很复杂的东西先拿掉。 + +所以本仓库会刻意省略一些不属于主干的内容,比如: + +- 打包与发布 +- 企业策略接线 +- 遥测 +- 多客户端表层集成 +- 历史兼容层 + +## 建议配套阅读的文档 + +除了主线章节,我建议把下面两份文档当作全程辅助地图: + +| 文档 | 用途 | +|---|---| +| [`teaching-scope.md`](./teaching-scope.md) | 帮你分清哪些内容属于教学主线,哪些只是维护者侧补充 | +| [`data-structures.md`](./data-structures.md) | 帮你集中理解整个系统的关键状态和数据结构 | +| [`s00f-code-reading-order.md`](./s00f-code-reading-order.md) | 帮你把“章节顺序”和“本地代码阅读顺序”对齐,避免重新乱翻源码 | + +如果你已经读到中后半程,想把“章节之间缺的那一层”补上,再加看下面这些桥接文档: + +| 文档 | 它补的是什么 | +|---|---| +| [`s00d-chapter-order-rationale.md`](./s00d-chapter-order-rationale.md) | 为什么这套课要按现在这个顺序讲,哪些重排会把读者心智讲乱 | +| [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) | 参考仓库里真正重要的模块簇,和当前课程章节是怎样一一对应的 | +| [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) | 为什么一个更完整的系统不能只靠 `messages[] + while True` | +| [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) | 一条请求如何从用户输入一路流过 query、tools、permissions、tasks、teams、MCP 再回到主循环 | +| [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) | 为什么工具层不只是 `tool_name -> handler` | +| [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) | 为什么 system prompt 不是模型完整输入的全部 | +| [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) | 为什么任务板里的 task 和正在运行的 task 不是一回事 | +| [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) | 为什么 MCP 正文先讲 tools-first,但平台层还要再补一张地图 | +| [`entity-map.md`](./entity-map.md) | 帮你把 message、task、runtime task、subagent、teammate、worktree、MCP server 这些实体彻底分开 | + +## 四阶段学习路径 + +### 阶段 1:核心单 agent (`s01-s06`) + +目标:先做出一个能干活的 agent。 + +| 章节 | 学什么 | 解决什么问题 | +|---|---|---| +| `s01` | Agent Loop | 没有循环,就没有 agent | +| `s02` | Tool Use | 让模型从“会说”变成“会做” | +| `s03` | Todo / Planning | 防止大任务乱撞 | +| `s04` | Subagent | 防止上下文被大任务污染 | +| `s05` | Skills | 按需拿知识,不把所有知识塞进提示词 | +| `s06` | Context Compact | 防止上下文无限膨胀 | + +这一阶段结束后,你已经有了一个真正可运行的 coding agent 雏形。 + +### 阶段 2:生产加固 (`s07-s11`) + +目标:让 agent 不只是能跑,而是更安全、更稳、更可扩展。 + 
+| 章节 | 学什么 | 解决什么问题 | +|---|---|---| +| `s07` | Permission System | 危险操作先过权限关 | +| `s08` | Hook System | 不改主循环也能扩展行为 | +| `s09` | Memory System | 让真正有价值的信息跨会话存在 | +| `s10` | System Prompt | 把系统说明、工具、约束组装成稳定输入 | +| `s11` | Error Recovery | 出错后能恢复,而不是直接崩溃 | + +### 阶段 3:任务管理 (`s12-s14`) + +目标:把“聊天中的清单”升级成“磁盘上的任务图”。 + +| 章节 | 学什么 | 解决什么问题 | +|---|---|---| +| `s12` | Task System | 大任务要有持久结构 | +| `s13` | Background Tasks | 慢操作不应该卡住前台思考 | +| `s14` | Cron Scheduler | 让系统能在未来自动做事 | + +### 阶段 4:多 agent 与外部系统 (`s15-s19`) + +目标:从单 agent 升级成真正的平台。 + +| 章节 | 学什么 | 解决什么问题 | +|---|---|---| +| `s15` | Agent Teams | 让多个 agent 协作 | +| `s16` | Team Protocols | 让协作有统一规则 | +| `s17` | Autonomous Agents | 让 agent 自己找活、认领任务 | +| `s18` | Worktree Isolation | 并行工作时互不踩目录 | +| `s19` | MCP & Plugin | 接入外部工具与外部能力 | + +## 章节速查表:每章到底新增了哪一层状态 + +很多读者读到中途会开始觉得: + +- 这一章到底是在加工具,还是在加状态 +- 这个机制是“输入层”的,还是“执行层”的 +- 学完这一章以后,我手里到底多了一个什么东西 + +所以这里给一张全局速查表。 +读每章以前,先看这一行;读完以后,再回来检查自己是不是真的吃透了这一行。 + +| 章节 | 新增的核心结构 | 它接在系统哪一层 | 学完你应该会什么 | +|---|---|---|---| +| `s01` | `messages` / `LoopState` | 主循环 | 手写一个最小 agent 闭环 | +| `s02` | `ToolSpec` / `ToolDispatchMap` | 工具层 | 把模型意图路由成真实动作 | +| `s03` | `TodoItem` / `PlanState` | 过程规划层 | 让 agent 按步骤推进,而不是乱撞 | +| `s04` | `SubagentContext` | 执行隔离层 | 把探索性工作丢进干净子上下文 | +| `s05` | `SkillRegistry` / `SkillContent` | 知识注入层 | 只在需要时加载额外知识 | +| `s06` | `CompactSummary` / `PersistedOutput` | 上下文管理层 | 控制上下文大小又不丢主线 | +| `s07` | `PermissionRule` / `PermissionDecision` | 安全控制层 | 让危险动作先经过决策管道 | +| `s08` | `HookEvent` / `HookResult` | 扩展控制层 | 不改主循环也能插入扩展逻辑 | +| `s09` | `MemoryEntry` / `MemoryStore` | 持久上下文层 | 只把真正跨会话有价值的信息留下 | +| `s10` | `PromptParts` / `SystemPromptBlock` | 输入组装层 | 把模型输入拆成可管理的管道 | +| `s11` | `RecoveryState` / `TransitionReason` | 恢复控制层 | 出错后知道为什么继续、怎么继续 | +| `s12` | `TaskRecord` / `TaskStatus` | 工作图层 | 把临时清单升级成持久化任务图 | +| `s13` | `RuntimeTaskState` / `Notification` | 运行时执行层 | 让慢任务后台运行、稍后回送结果 | +| `s14` | `ScheduleRecord` / `CronTrigger` | 定时触发层 | 让时间本身成为工作触发器 | +| `s15` 
| `TeamMember` / `MessageEnvelope` | 多 agent 基础层 | 让队友长期存在、反复接活 | +| `s16` | `ProtocolEnvelope` / `RequestRecord` | 协作协议层 | 让团队从自由聊天升级成结构化协作 | +| `s17` | `ClaimPolicy` / `AutonomyState` | 自治调度层 | 让 agent 空闲时自己找活、恢复工作 | +| `s18` | `WorktreeRecord` / `TaskBinding` | 隔离执行层 | 给并行任务分配独立工作目录 | +| `s19` | `MCPServerConfig` / `CapabilityRoute` | 外部能力层 | 把外部能力并入系统主控制面 | + +## 整个系统的大图 + +先看最重要的一张图: + +```text +User + | + v +messages[] + | + v ++-------------------------+ +| Agent Loop (s01) | +| | +| 1. 组装输入 | +| 2. 调模型 | +| 3. 看 stop_reason | +| 4. 如果要调工具就执行 | +| 5. 把结果写回 messages | +| 6. 继续下一轮 | ++-------------------------+ + | + +------------------------------+ + | | + v v +Tool Pipeline Context / State +(s02, s07, s08) (s03, s06, s09, s10, s11) + | | + v v +Tasks / Teams / Worktree / MCP (s12-s19) +``` + +你可以把它理解成三层: + +### 第一层:主循环 + +这是系统心脏。 + +它只做一件事: +**不停地推动“思考 -> 行动 -> 观察 -> 再思考”的循环。** + +### 第二层:横切机制 + +这些机制不是替代主循环,而是“包在主循环周围”: + +- 权限 +- hooks +- memory +- prompt 组装 +- 错误恢复 +- 上下文压缩 + +它们的作用,是让主循环更安全、更稳定、更聪明。 + +### 第三层:更大的工作平台 + +这些机制把单 agent 升级成更完整的系统: + +- 任务图 +- 后台任务 +- 多 agent 团队 +- worktree 隔离 +- MCP 外部工具 + +## 你真正需要掌握的关键状态 + +理解 agent,最重要的不是背很多功能名,而是知道**状态放在哪里**。 + +下面是这个仓库里最关键的几类状态: + +### 1. 对话状态:`messages` + +这是 agent 当前上下文的主体。 + +它保存: + +- 用户说了什么 +- 模型回复了什么 +- 调用了哪些工具 +- 工具返回了什么 + +你可以把它想成 agent 的“工作记忆”。 + +### 2. 工具注册表:`tools` / `handlers` + +这是一张“工具名 -> Python 函数”的映射表。 + +这类结构常被叫做 `dispatch map`。 + +意思很简单: + +- 模型说“我要调用 `read_file`” +- 代码就去表里找 `read_file` 对应的函数 +- 找到以后执行 + +### 3. 计划与任务状态:`todo` / `tasks` + +这部分保存: + +- 当前有哪些事要做 +- 哪些已经完成 +- 哪些被别的任务阻塞 +- 哪些可以并行 + +### 4. 权限与策略状态 + +这部分保存: + +- 当前权限模式是什么 +- 允许规则有哪些 +- 拒绝规则有哪些 +- 最近是否连续被拒绝 + +### 5. 
持久化状态 + +这部分保存那些“不该跟着一次对话一起消失”的东西: + +- memory 文件 +- task 文件 +- transcript +- background task 输出 +- worktree 绑定信息 + +## 如果你想做出结构完整的版本,至少要有哪些数据结构 + +如果你的目标是自己写一个结构完整、接近真实主脉络的类似系统,最低限度要把下面这些数据结构设计清楚: + +```python +class AppState: + messages: list + tools: dict + tool_schemas: list + + todo: object | None + tasks: object | None + + permissions: object | None + hooks: object | None + memories: object | None + prompt_builder: object | None + + compact_state: dict + recovery_state: dict + + background: object | None + cron: object | None + + teammates: object | None + worktree_session: dict | None + mcp_clients: dict +``` + +这不是要求你一开始就把这些全写完。 + +这张表的作用只是告诉你: + +**一个像样的 agent 系统,不只是 `messages + tools`。** + +它最终会长成一个带很多子模块的状态系统。 + +## 一条请求是怎么流动的 + +```text +1. 用户发来任务 +2. 系统组装 prompt 和上下文 +3. 模型返回普通文本,或者返回 tool_use +4. 如果返回 tool_use: + - 先过 permission + - 再过 hook + - 然后执行工具 + - 把 tool_result 写回 messages +5. 主循环继续 +6. 如果任务太大: + - 可能写入 todo / tasks + - 可能派生 subagent + - 可能触发 compact + - 可能走 background / team / worktree / MCP +7. 直到模型结束这一轮 +``` + +这条流是全仓库最重要的主脉络。 + +你在后面所有章节里看到的机制,本质上都只是插在这条流的不同位置。 + +## 读者最容易混淆的几组概念 + +### `Todo` 和 `Task` 不是一回事 + +- `Todo`:轻量、临时、偏会话内 +- `Task`:持久化、带状态、带依赖关系 + +### `Memory` 和 `Context` 不是一回事 + +- `Context`:这一轮工作临时需要的信息 +- `Memory`:未来别的会话也可能仍然有价值的信息 + +### `Subagent` 和 `Teammate` 不是一回事 + +- `Subagent`:通常是当前 agent 派生出来的一次性帮手 +- `Teammate`:更偏向长期存在于团队中的协作角色 + +### `Prompt` 和 `System Reminder` 不是一回事 + +- `System Prompt`:较稳定的系统级输入 +- `System Reminder`:每轮动态变化的补充上下文 + +## 这套仓库刻意省略了什么 + +为了让初学者能顺着学下去,本仓库不会把下面这些内容塞进主线: + +- 产品级启动流程里的全部外围初始化 +- 真实商业产品中的账号、策略、遥测、灰度等逻辑 +- 只服务于兼容性和历史负担的复杂分支 +- 某些非常复杂但教学收益很低的边角机制 + +这不是因为这些东西“不存在”。 + +而是因为对一个从 0 到 1 造类似系统的读者来说,主干先于枝叶。 + +## 这一章之后怎么读 + +推荐顺序: + +1. 先读 `s01` 和 `s02` +2. 然后读 `s03` 到 `s06` +3. 进入 `s07` 到 `s10` +4. 接着补 `s11` +5. 
最后再读 `s12` 到 `s19` + +如果你在某一章觉得名词开始打结,回来看这一章和术语表就够了。 + +--- + +**一句话记住全仓库:** + +先做出能工作的最小循环,再一层一层给它补上规划、隔离、安全、记忆、任务、协作和外部能力。 diff --git a/docs/zh/s00a-query-control-plane.md b/docs/zh/s00a-query-control-plane.md new file mode 100644 index 000000000..8f61f2a36 --- /dev/null +++ b/docs/zh/s00a-query-control-plane.md @@ -0,0 +1,318 @@ +# s00a: Query Control Plane (查询控制平面) + +> 这不是新的主线章节,而是一份桥接文档。 +> 它用来回答一个问题: +> +> **为什么一个结构更完整的 agent,不会只靠 `messages[]` 和一个 `while True` 就够了?** + +## 这一篇为什么要存在 + +主线里的 `s01` 会先教你做出一个最小可运行循环: + +```text +用户输入 + -> +模型回复 + -> +如果要调工具就执行 + -> +把结果喂回去 + -> +继续下一轮 +``` + +这条主线是对的,而且必须先学这个。 + +但当系统开始长功能以后,真正支撑一个完整 harness 的,不再只是“循环”本身,而是: + +**一层专门负责管理查询过程的控制平面。** + +这一层在真实系统里通常会统一处理: + +- 当前对话消息 +- 当前轮次 +- 为什么继续下一轮 +- 是否正在恢复错误 +- 是否已经压缩过上下文 +- 是否需要切换输出预算 +- hook 是否暂时影响了结束条件 + +如果不把这层讲出来,读者虽然能做出一个能跑的 demo,但很难自己把系统推到接近 95%-99% 的完成度。 + +## 先解释几个名词 + +### 什么是 query + +这里的 `query` 不是“数据库查询”。 + +这里说的 query,更接近: + +> 系统为了完成用户当前这一次请求,而运行的一整段主循环过程。 + +也就是说: + +- 用户说一句话 +- 系统可能要经过很多轮模型调用和工具调用 +- 最后才结束这一次请求 + +这整段过程,就可以看成一条 query。 + +### 什么是控制平面 + +`控制平面` 这个词第一次看会有点抽象。 + +它的意思其实很简单: + +> 不是直接做业务动作,而是负责协调、调度、决定流程怎么往下走的一层。 + +在这里: + +- 模型回复内容,算“业务内容” +- 工具执行结果,算“业务动作” +- 决定“要不要继续下一轮、为什么继续、现在属于哪种继续”,这层就是控制平面 + +### 什么是 transition + +`transition` 可以翻成“转移原因”。 + +它回答的是: + +> 上一轮为什么没有结束,而是继续下一轮了? + +例如: + +- 因为工具刚执行完 +- 因为输出被截断,要续写 +- 因为刚做完压缩,要重试 +- 因为 hook 要求继续 +- 因为预算还允许继续 + +## 最小心智模型 + +先把 query 控制平面想成 3 层: + +```text +1. 输入层 + - messages + - system prompt + - user/system context + +2. 控制层 + - 当前状态 state + - 当前轮 turn + - 当前继续原因 transition + - 恢复/压缩/预算等标记 + +3. 执行层 + - 调模型 + - 执行工具 + - 写回消息 +``` + +它的工作不是“替代主循环”,而是: + +**让主循环从一个小 demo,升级成一个能管理很多分支和状态的系统。** + +## 为什么只靠 `messages[]` 不够 + +很多初学者第一次实现 agent 时,会把所有状态都堆进 `messages[]`。 + +这在最小 demo 里没问题。 + +但一旦系统长出下面这些能力,就不够了: + +- 你要知道自己是不是已经做过一次 reactive compact +- 你要知道输出被截断已经续写了几次 +- 你要知道这次继续是因为工具,还是因为错误恢复 +- 你要知道当前轮是否启用了特殊输出预算 + +这些信息不是“对话内容”,而是“流程控制状态”。 + +所以它们不该都硬塞进 `messages[]` 里。 + +## 关键数据结构 + +### 1. 
QueryParams + +这是进入 query 引擎时的外部输入。 + +最小形状可以这样理解: + +```python +params = { + "messages": [...], + "system_prompt": "...", + "user_context": {...}, + "system_context": {...}, + "tool_use_context": {...}, + "fallback_model": None, + "max_output_tokens_override": None, + "max_turns": None, +} +``` + +它的作用是: + +- 带进来这次查询一开始已知的输入 +- 这些值大多不在每轮里随便乱改 + +### 2. QueryState + +这才是跨迭代真正会变化的部分。 + +最小教学版建议你把它显式做成一个结构: + +```python +state = { + "messages": [...], + "tool_use_context": {...}, + "continuation_count": 0, + "has_attempted_compact": False, + "max_output_tokens_override": None, + "stop_hook_active": False, + "turn_count": 1, + "transition": None, +} +``` + +它的价值在于: + +- 把“会变的流程状态”集中放在一起 +- 让每个 continue site 修改的是同一份 state,而不是散落在很多局部变量里 + +### 3. TransitionReason + +建议你单独定义一组继续原因: + +```python +TRANSITIONS = ( + "tool_result_continuation", + "max_tokens_recovery", + "compact_retry", + "transport_retry", + "stop_hook_continuation", + "budget_continuation", +) +``` + +这不是为了炫技。 + +它的作用很实在: + +- 日志更清楚 +- 调试更清楚 +- 测试更清楚 +- 教学更清楚 + +## 最小实现 + +### 第一步:把外部输入和内部状态分开 + +```python +def query(params): + state = { + "messages": params["messages"], + "tool_use_context": params["tool_use_context"], + "continuation_count": 0, + "has_attempted_compact": False, + "max_output_tokens_override": params.get("max_output_tokens_override"), + "turn_count": 1, + "transition": None, + } +``` + +### 第二步:每一轮先读 state,再决定如何执行 + +```python +while True: + messages = state["messages"] + transition = state["transition"] + turn_count = state["turn_count"] + + response = call_model(...) + ... +``` + +### 第三步:所有“继续下一轮”的地方都写回 state + +```python +if response.stop_reason == "tool_use": + state["messages"] = append_tool_results(...) 
+ state["transition"] = "tool_result_continuation" + state["turn_count"] += 1 + continue + +if response.stop_reason == "max_tokens": + state["messages"].append({"role": "user", "content": CONTINUE_MESSAGE}) + state["continuation_count"] += 1 + state["transition"] = "max_tokens_recovery" + continue +``` + +这一点非常关键。 + +**不要只做 `continue`,要知道自己为什么 continue。** + +## 一张真正清楚的心智图 + +```text +params + | + v +init state + | + v +query loop + | + +-- normal assistant end --------------> terminal + | + +-- tool_use --------------------------> write tool_result -> transition=tool_result_continuation + | + +-- max_tokens ------------------------> inject continue -> transition=max_tokens_recovery + | + +-- prompt too long -------------------> compact -> transition=compact_retry + | + +-- transport error -------------------> backoff -> transition=transport_retry + | + +-- stop hook asks to continue --------> transition=stop_hook_continuation +``` + +## 它和 `s01`、`s11` 的关系 + +- `s01` 负责建立“最小主循环” +- `s11` 负责建立“错误恢复分支” +- 这一篇负责把两者再往上抽象一层,解释为什么一个更完整的系统会出现一个 query control plane + +所以这篇不是替代主线,而是把主线补完整。 + +## 初学者最容易犯的错 + +### 1. 把所有控制状态都塞进消息里 + +这样日志和调试都会很难看,也会让消息层和控制层混在一起。 + +### 2. `continue` 了,但没有记录为什么继续 + +短期看起来没问题,系统一复杂就会变成黑盒。 + +### 3. 每个分支都直接改很多局部变量 + +这样后面你很难看出“哪些状态是跨轮共享的”。 + +### 4. 
把 query loop 讲成“只是一个 while True” + +这对最小 demo 是真话,对一个正在长出控制面的 harness 就不是完整真话了。 + +## 教学边界 + +这篇最重要的,不是把所有控制状态一次列满,而是先让你守住三件事: + +- query loop 不只是 `while True`,而是一条带着共享状态往前推进的控制面 +- 每次 `continue` 都应该有明确原因,而不是黑盒跳转 +- 消息层、工具回写、压缩恢复、重试恢复,最终都要回到同一份 query 状态上 + +更细的 `transition taxonomy`、预算跟踪、prefetch 等扩展,可以放到你把这条最小控制面真正手搓稳定以后再补。 + +## 一句话记住 + +**更完整的 query loop 不只是“循环”,而是“拿着一份跨轮状态不断推进的查询控制平面”。** diff --git a/docs/zh/s00b-one-request-lifecycle.md b/docs/zh/s00b-one-request-lifecycle.md new file mode 100644 index 000000000..e9fcb3edb --- /dev/null +++ b/docs/zh/s00b-one-request-lifecycle.md @@ -0,0 +1,424 @@ +# s00b: One Request Lifecycle (一次请求的完整生命周期) + +> 这是一份桥接文档。 +> 它不替代主线章节,而是把整套系统串成一条真正连续的执行链。 +> +> 它要回答的问题是: +> +> **用户的一句话,进入系统以后,到底是怎样一路流动、分发、执行、再回到主循环里的?** + +## 为什么必须补这一篇 + +很多读者在按顺序看教程时,会逐章理解: + +- `s01` 讲循环 +- `s02` 讲工具 +- `s03` 讲规划 +- `s07` 讲权限 +- `s09` 讲 memory +- `s12-s19` 讲任务、多 agent、MCP + +每章单看都能懂。 + +但一旦开始自己实现,就会很容易卡住: + +- 这些模块到底谁先谁后? +- 一条请求进来时,先走 prompt,还是先走 memory? +- 工具执行前,权限和 hook 在哪一层? +- task、runtime task、teammate、worktree、MCP 到底是在一次请求里的哪个阶段介入? + +所以你需要一张“纵向流程图”。 + +## 先给一条最重要的总图 + +```text +用户请求 + | + v +Query State 初始化 + | + v +组装 system prompt / messages / reminders + | + v +调用模型 + | + +-- 普通回答 -------------------------------> 结束本次请求 + | + +-- tool_use + | + v + Tool Router + | + +-- 权限判断 + +-- Hook 拦截/注入 + +-- 本地工具 / MCP / agent / task / team + | + v + 执行结果 + | + +-- 可能写入 task / runtime task / memory / worktree 状态 + | + v + tool_result 写回 messages + | + v + Query State 更新 + | + v + 下一轮继续 +``` + +你可以把整条链先理解成三层: + +1. `Query Loop` +2. `Tool Control Plane` +3. 
`Platform State` + +## 第 1 段:用户请求进入查询控制平面 + +当用户说: + +```text +修复 tests/test_auth.py 的失败,并告诉我原因 +``` + +系统最先做的,不是立刻跑工具,而是先为这次请求建立一份查询状态。 + +最小可以理解成: + +```python +query_state = { + "messages": [{"role": "user", "content": user_text}], + "turn_count": 1, + "transition": None, + "tool_use_context": {...}, +} +``` + +这里的重点是: + +**这次请求不是“单次 API 调用”,而是一段可能包含很多轮的查询过程。** + +如果你对这一层还不够熟,先回看: + +- [`s01-the-agent-loop.md`](./s01-the-agent-loop.md) +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) + +## 第 2 段:组装本轮真正送给模型的输入 + +主循环不会直接把原始 `messages` 裸发出去。 + +在更完整的系统里,它通常会先组装: + +- system prompt blocks +- 规范化后的 messages +- memory section +- 当前轮 reminder +- 工具清单 + +也就是说,真正发给模型的通常是: + +```text +system prompt ++ normalized messages ++ tools ++ optional reminders / attachments +``` + +这里涉及的章节是: + +- `s09` memory +- `s10` system prompt +- `s10a` message & prompt pipeline + +这一段的核心心智是: + +**system prompt 不是全部输入,它只是输入管道中的一段。** + +## 第 3 段:模型产出两类东西 + +模型这一轮的输出,最关键地分成两种: + +### 第一种:普通回复 + +如果模型直接给出结论或说明,本次请求可能就结束了。 + +### 第二种:动作意图 + +也就是工具调用。 + +例如: + +```text +read_file("tests/test_auth.py") +bash("pytest tests/test_auth.py -q") +todo([...]) +load_skill("code-review") +task_create(...) +mcp__postgres__query(...) +``` + +这时候系统真正收到的,不只是“文本”,而是: + +> 模型想让真实世界发生某些动作。 + +## 第 4 段:工具路由层接管动作意图 + +一旦出现 `tool_use`,系统就进入工具控制平面。 + +这一层至少要回答: + +1. 这是什么工具? +2. 它应该路由到哪类能力来源? +3. 执行前要不要先过权限? +4. hook 有没有要拦截或补充? +5. 它执行时能访问哪些共享状态? 
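其中第 1、2 个问题,可以先用一个极小的路由函数来感受。工具名沿用正文例子,分类名和 handler 名都是假设的示意,不是真实实现:

```python
# 教学示意:按工具名把动作意图路由到不同能力来源。
# 只覆盖正文提到的几个例子;真实路由层还要叠加权限和 hook。
def route_tool(tool_name: str) -> str:
    if tool_name.startswith("mcp__"):
        return "mcp_client"        # 外部 MCP 能力
    if tool_name in {"task_create", "task_update", "delegate"}:
        return "task_handler"      # 任务 / 团队相关处理
    if tool_name in {"read_file", "write_file", "bash", "todo", "load_skill"}:
        return "native_handler"    # 本地工具
    return "unknown"
```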
+ +最小图可以这样看: + +```text +tool_use + | + v +Tool Router + | + +-- native tool handler + +-- MCP client + +-- agent/team/task handler +``` + +如果你对这一层不够清楚,回看: + +- [`s02-tool-use.md`](./s02-tool-use.md) +- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) + +## 第 5 段:权限系统决定“能不能执行” + +不是所有动作意图都应该直接变成真实执行。 + +例如: + +- 写文件 +- 跑 bash +- 改工作目录 +- 调外部服务 + +这时会先进入权限判断: + +```text +deny rules + -> mode + -> allow rules + -> ask user +``` + +权限系统处理的是: + +> 这次动作是否允许发生。 + +相关章节: + +- [`s07-permission-system.md`](./s07-permission-system.md) + +## 第 6 段:Hook 可以在边上做扩展 + +通过权限检查以后,系统还可能在工具执行前后跑 hook。 + +你可以把 hook 理解成: + +> 不改主循环主干,也能插入自定义行为的扩展点。 + +例如: + +- 执行前记录日志 +- 执行后做额外检查 +- 根据结果注入额外提醒 + +相关章节: + +- [`s08-hook-system.md`](./s08-hook-system.md) + +## 第 7 段:真正执行动作,并影响不同层的状态 + +这是很多人最容易低估的一段。 + +工具执行结果,不只是“一段文本输出”。 + +它还可能修改系统别的状态层。 + +### 例子 1:规划状态 + +如果工具是 `todo`,它会更新的是当前会话计划。 + +相关章节: + +- [`s03-todo-write.md`](./s03-todo-write.md) + +### 例子 2:持久任务图 + +如果工具是 `task_create` / `task_update`,它会修改磁盘上的任务板。 + +相关章节: + +- [`s12-task-system.md`](./s12-task-system.md) + +### 例子 3:运行时任务 + +如果工具启动了后台 bash、后台 agent 或监控任务,它会创建 runtime task。 + +相关章节: + +- [`s13-background-tasks.md`](./s13-background-tasks.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +### 例子 4:多 agent / teammate + +如果工具是 `delegate`、`spawn_agent` 一类,它会在平台层生成新的执行单元。 + +相关章节: + +- [`s15-agent-teams.md`](./s15-agent-teams.md) +- [`s16-team-protocols.md`](./s16-team-protocols.md) +- [`s17-autonomous-agents.md`](./s17-autonomous-agents.md) + +### 例子 5:worktree + +如果系统要为某个任务提供隔离工作目录,这会影响文件系统级执行环境。 + +相关章节: + +- [`s18-worktree-task-isolation.md`](./s18-worktree-task-isolation.md) + +### 例子 6:MCP + +如果调用的是外部 MCP 能力,那么执行主体可能根本不在本地 handler,而在外部能力端。 + +相关章节: + +- [`s19-mcp-plugin.md`](./s19-mcp-plugin.md) +- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) + +## 第 8 段:执行结果被包装回消息流 + +不管执行落在哪一层,最后都要回到同一个位置: + +```text +tool_result -> messages +``` + +这是整个系统最核心的闭环。 + 
+因为无论工具背后多复杂,模型下一轮真正能继续工作的依据,仍然是: + +> 系统把执行结果重新写回了它可见的消息流。 + +这也是为什么 `s01` 永远是根。 + +## 第 9 段:主循环根据结果决定下一轮是否继续 + +当 `tool_result` 写回以后,查询状态也会一起更新: + +- `messages` 变了 +- `turn_count` 增加了 +- `transition` 被记录成某种续行原因 + +这时系统就进入下一轮。 + +如果中间发生下面这些情况,控制平面还会继续介入: + +- 上下文太长,需要压缩 +- 输出被截断,需要续写 +- 请求失败,需要恢复 + +相关章节: + +- [`s06-context-compact.md`](./s06-context-compact.md) +- [`s11-error-recovery.md`](./s11-error-recovery.md) + +## 第 10 段:哪些信息不会跟着一次请求一起结束 + +这也是非常容易混的地方。 + +一次请求结束后,并不是所有状态都随之消失。 + +### 会跟着当前请求结束的 + +- 当前轮 messages 中的临时推进过程 +- 会话内 todo 状态 +- 当前轮 reminder + +### 可能跨请求继续存在的 + +- memory +- 持久任务图 +- runtime task 输出 +- worktree +- MCP 连接状态 + +所以你要逐渐学会区分: + +```text +query-scope state +session-scope state +project-scope state +platform-scope state +``` + +## 用一个完整例子串一次 + +还是用这个请求: + +```text +修复 tests/test_auth.py 的失败,并告诉我原因 +``` + +系统可能会这样流动: + +1. 用户请求进入 `QueryState` +2. system prompt + memory + tools 被组装好 +3. 模型先调用 `todo`,写出三步计划 +4. 模型调用 `read_file("tests/test_auth.py")` +5. 工具路由到本地文件读取 handler +6. 读取结果包装成 `tool_result` 写回消息流 +7. 下一轮模型调用 `bash("pytest tests/test_auth.py -q")` +8. 权限系统判断这条命令是否可执行 +9. 执行测试,输出太长则先落盘并留预览 +10. 失败日志回到消息流 +11. 模型再读实现文件并修改代码 +12. 修改后再跑测试 +13. 如果对话变长,`s06` 触发压缩 +14. 如果任务被拆给子 agent,`s15-s17` 介入 +15. 最后模型输出结论,本次请求结束 + +你会发现: + +**整套系统再复杂,也始终没有脱离“输入 -> 动作意图 -> 执行 -> 结果写回 -> 下一轮”这条主骨架。** + +## 读这篇时最该记住的三件事 + +### 1. 所有模块都不是平铺摆在那里的 + +它们是在一次请求的不同阶段依次介入的。 + +### 2. 真正的闭环只有一个 + +那就是: + +```text +tool_result 回到 messages +``` + +### 3. 
很多高级机制,本质上只是围绕这条闭环加的保护层 + +例如: + +- 权限是执行前保护层 +- hook 是扩展层 +- compact 是上下文预算保护层 +- recovery 是出错后的恢复层 +- task/team/worktree/MCP 是更大的平台能力层 + +## 一句话记住 + +**一次请求的完整生命周期,本质上就是:系统围绕同一条主循环,把不同模块按阶段接进来,最终持续把真实执行结果送回模型继续推理。** diff --git a/docs/zh/s00c-query-transition-model.md b/docs/zh/s00c-query-transition-model.md new file mode 100644 index 000000000..cbd036282 --- /dev/null +++ b/docs/zh/s00c-query-transition-model.md @@ -0,0 +1,331 @@ +# s00c: Query Transition Model (查询转移模型) + +> 这篇桥接文档专门解决一个问题: +> +> **为什么一个只会 `continue` 的 agent,不足以支撑完整系统,而必须显式知道“为什么继续到下一轮”?** + +## 这一篇为什么要存在 + +主线里: + +- `s01` 先教你最小循环 +- `s06` 开始教上下文压缩 +- `s11` 开始教错误恢复 + +这些都对。 +但如果你只分别学这几章,脑子里很容易还是停留在一种过于粗糙的理解: + +> “反正 `continue` 了就继续呗。” + +这在最小 demo 里能跑。 +但当系统开始长出恢复、压缩和外部控制以后,这样理解会很快失灵。 + +因为系统继续下一轮的原因其实很多,而且这些原因不是一回事: + +- 工具刚执行完,要把结果喂回模型 +- 输出被截断了,要续写 +- 上下文刚压缩完,要重试 +- 运输层刚超时了,要退避后重试 +- stop hook 要求当前 turn 先不要结束 +- token budget 还允许继续推进 + +如果你不把这些“继续原因”从一开始拆开,后面会出现三个大问题: + +- 日志看不清 +- 测试不好写 +- 教学心智会越来越模糊 + +## 先解释几个名词 + +### 什么叫 transition + +这里的 `transition`,你可以先把它理解成: + +> 上一轮为什么转移到了下一轮。 + +它不是“消息内容”,而是“流程原因”。 + +### 什么叫 continuation + +continuation 就是: + +> 这条 query 当前还没有结束,要继续推进。 + +但 continuation 不止一种。 + +### 什么叫 query boundary + +query boundary 就是一轮和下一轮之间的边界。 + +每次跨过这个边界,系统最好都知道: + +- 这次为什么继续 +- 这次继续前有没有修改状态 +- 这次继续后应该怎么读主循环 + +## 最小心智模型 + +先不要把 query 想成一条线。 + +更接近真实情况的理解是: + +```text +一条 query + = 一组“继续原因”串起来的状态转移 +``` + +例如: + +```text +用户输入 + -> +模型产生 tool_use + -> +工具执行完 + -> +tool_result_continuation + -> +模型输出过长 + -> +max_tokens_recovery + -> +压缩后继续 + -> +compact_retry + -> +最终结束 +``` + +这样看,你会更容易理解: + +**系统不是单纯在 while loop 里转圈,而是在一串显式的转移原因里推进。** + +## 关键数据结构 + +### 1. QueryState 里的 `transition` + +最小版建议就把这类字段显式放进状态里: + +```python +state = { + "messages": [...], + "turn_count": 3, + "has_attempted_compact": False, + "continuation_count": 1, + "transition": None, +} +``` + +这里的 `transition` 不是可有可无。 + +它的意义是: + +- 当前这轮为什么会出现 +- 下一轮日志应该怎么解释 +- 测试时应该断言哪条路径被走到 + +### 2. 
TransitionReason + +教学版最小可以先这样分: + +```python +TRANSITIONS = ( + "tool_result_continuation", + "max_tokens_recovery", + "compact_retry", + "transport_retry", + "stop_hook_continuation", + "budget_continuation", +) +``` + +这几种原因的本质不一样: + +- `tool_result_continuation` + 是正常主线继续 +- `max_tokens_recovery` + 是输出被截断后的恢复继续 +- `compact_retry` + 是上下文处理后的恢复继续 +- `transport_retry` + 是基础设施抖动后的恢复继续 +- `stop_hook_continuation` + 是外部控制逻辑阻止本轮结束 +- `budget_continuation` + 是系统主动利用预算继续推进 + +### 3. Continuation Budget + +更完整的 query 状态不只会说“继续”,还会限制: + +- 最多续写几次 +- 最多压缩后重试几次 +- 某类恢复是不是已经尝试过 + +例如: + +```python +state = { + "max_output_tokens_recovery_count": 2, + "has_attempted_reactive_compact": True, +} +``` + +这些字段的本质都是: + +> continuation 不是无限制的。 + +## 最小实现 + +### 第一步:把 continue site 显式化 + +很多初学者写主循环时,所有继续逻辑都长这样: + +```python +continue +``` + +教学版应该往前走一步: + +```python +state["transition"] = "tool_result_continuation" +continue +``` + +### 第二步:不同继续原因,配不同状态修改 + +```python +if response.stop_reason == "tool_use": + state["messages"] = append_tool_results(...) 
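    # 注: append_tool_results(...) 是本节假设的占位辅助函数, 不是真实 API,
    # 含义是"把本轮全部 tool_result 块包装成一条 user 消息追加进 messages"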
+ state["turn_count"] += 1 + state["transition"] = "tool_result_continuation" + continue + +if response.stop_reason == "max_tokens": + state["messages"].append({ + "role": "user", + "content": CONTINUE_MESSAGE, + }) + state["max_output_tokens_recovery_count"] += 1 + state["transition"] = "max_tokens_recovery" + continue +``` + +重点不是“多写一行”。 + +重点是: + +**每次继续之前,你都要知道自己做了什么状态更新,以及为什么继续。** + +### 第三步:把恢复继续和正常继续分开 + +```python +if should_retry_transport(error): + time.sleep(backoff(...)) + state["transition"] = "transport_retry" + continue + +if should_recompact(error): + state["messages"] = compact_messages(state["messages"]) + state["transition"] = "compact_retry" + continue +``` + +这时候你就开始得到一条非常清楚的控制链: + +```text +继续 + 不再是一个动作 + 而是一类带原因的转移 +``` + +## 一张真正应该建立的图 + +```text +query loop + | + +-- tool executed --------------------> transition = tool_result_continuation + | + +-- output truncated -----------------> transition = max_tokens_recovery + | + +-- compact just happened -----------> transition = compact_retry + | + +-- network / transport retry -------> transition = transport_retry + | + +-- stop hook blocked termination ---> transition = stop_hook_continuation + | + +-- budget says keep going ----------> transition = budget_continuation +``` + +## 它和逆向仓库主脉络为什么对得上 + +如果你去看更完整系统的查询入口,会发现它真正难的地方从来不是: + +- 再调一次模型 + +而是: + +- 什么时候该继续 +- 继续前改哪份状态 +- 继续属于哪一种路径 + +所以这篇桥接文档讲的,不是额外装饰,而是完整 query engine 的主骨架之一。 + +## 它和主线章节怎么接 + +- `s01` 让你先把 loop 跑起来 +- `s06` 让你知道为什么上下文管理会介入继续路径 +- `s11` 让你知道为什么恢复路径不是一种 +- 这篇则把“继续原因”统一抬成显式状态 + +所以你可以把它理解成: + +> 给前后几章之间补上一条“为什么继续”的统一主线。 + +## 初学者最容易犯的错 + +### 1. 只有 `continue`,没有 `transition` + +这样日志和测试都会越来越难看。 + +### 2. 把所有继续都当成一种 + +这样会把: + +- 正常主线继续 +- 错误恢复继续 +- 压缩后重试 + +全部混成一锅。 + +### 3. 没有 continuation budget + +没有预算,系统就会在某些坏路径里无限试下去。 + +### 4. 把 `transition` 写进消息文本,而不是流程状态 + +消息是给模型看的。 +`transition` 是给系统自己看的。 + +### 5. 
压缩、恢复、hook 都发生了,却没有统一的查询状态 + +这会导致控制逻辑散落在很多局部变量里,越长越乱。 + +## 教学边界 + +这篇最重要的,不是一次枚举完所有 transition 名字,而是先让你守住三件事: + +- `continue` 最好总能对应一个显式的 `transition reason` +- 正常继续、恢复继续、压缩后重试,不应该被混成同一种路径 +- continuation 需要预算和状态,而不是无限重来 + +只要这三点成立,你就已经能把 `s01 / s06 / s11` 真正串成一条完整主线。 +更细的 transition taxonomy、预算策略和日志分类,可以放到你把最小 query 状态机写稳以后再补。 + +## 读完这一篇你应该能说清楚 + +至少能完整说出这句话: + +> 一条 query 不是简单 while loop,而是一串显式 continuation reason 驱动的状态转移。 + +如果这句话你已经能稳定说清,那么你再回头看 `s11`、`s19`,心智会顺很多。 diff --git a/docs/zh/s00d-chapter-order-rationale.md b/docs/zh/s00d-chapter-order-rationale.md new file mode 100644 index 000000000..487c4e3a6 --- /dev/null +++ b/docs/zh/s00d-chapter-order-rationale.md @@ -0,0 +1,513 @@ +# s00d: Chapter Order Rationale (为什么是这个章节顺序) + +> 这份文档不讲某一个机制本身。 +> 它专门回答一个更基础的问题: +> +> **为什么这套仓库要按现在这个顺序教,而不是按源码目录顺序、功能热闹程度,或者“哪里复杂先讲哪里”。** + +## 先说结论 + +当前这套 `s01 -> s19` 的主线顺序,整体上是合理的。 + +它最大的优点不是“覆盖面广”,而是: + +- 先建立最小闭环 +- 再补横切控制面 +- 再补持久化工作层 +- 最后才扩成多 agent 平台和外部能力总线 + +这个顺序适合教学,因为它遵守的不是“源码文件先后”,而是: + +**机制依赖顺序。** + +也就是: + +- 后一章需要建立在前一章已经清楚的心智之上 +- 同一层的新概念尽量一起讲完 +- 不把高阶平台能力提前压给还没建立主闭环的读者 + +如果要把这套课程改到更接近满分,一个很重要的标准不是“加更多内容”,而是: + +**让读者始终知道这一章为什么现在学,而不是上一章或下一章。** + +这份文档就是干这件事的。 + +## 这份顺序到底按什么排 + +不是按这些排: + +- 不是按逆向源码里文件顺序排 +- 不是按实现难度排 +- 不是按功能看起来酷不酷排 +- 不是按产品里出现得早不早排 + +它真正按的是四条依赖线: + +1. `主闭环依赖` +2. `控制面依赖` +3. `工作状态依赖` +4. 
`平台边界依赖` + +你可以先把整套课粗暴地看成下面这条线: + +```text +先让 agent 能跑 + -> 再让它不乱跑 + -> 再让它能长期跑 + -> 最后让它能分工跑、隔离跑、接外部能力跑 +``` + +这才是当前章节顺序最核心的逻辑。 + +## 一张总图:章节之间真正的依赖关系 + +```text +s00 总览与地图 + | + v +s01 主循环 + -> +s02 工具执行 + -> +s03 会话计划 + -> +s04 子任务隔离 + -> +s05 按需知识注入 + -> +s06 上下文压缩 + +s06 之后,单 agent 主骨架成立 + | + v +s07 权限闸门 + -> +s08 生命周期 Hook + -> +s09 跨会话记忆 + -> +s10 Prompt / 输入装配 + -> +s11 恢复与续行 + +s11 之后,单 agent 的高完成度控制面成立 + | + v +s12 持久任务图 + -> +s13 运行时后台槽位 + -> +s14 时间触发器 + +s14 之后,工作系统从“聊天过程”升级成“可持续运行时” + | + v +s15 持久队友 + -> +s16 协议化协作 + -> +s17 自治认领 + -> +s18 worktree 执行车道 + -> +s19 外部能力总线 +``` + +如果你记不住所有章节,只记住每段结束后的“系统里多了什么”: + +- `s06` 结束:你有了能工作的单 agent +- `s11` 结束:你有了更稳、更可控的单 agent +- `s14` 结束:你有了能长期推进工作的运行时 +- `s19` 结束:你有了接近完整的平台边界 + +## 为什么 `s01-s06` 必须先成一整段 + +### `s01` 必须最先 + +因为它定义的是: + +- 这套系统的最小入口 +- 每一轮到底怎么推进 +- 工具结果为什么能再次进入模型 + +如果连这一条都没建立,后面所有内容都会变成“往空气里挂功能”。 + +### `s02` 必须紧跟 `s01` + +因为没有工具,agent 只是会说,不是真的会做。 + +开发者第一次真正感受到“harness 在做什么”,往往就是在 `s02`: + +- 模型产出 `tool_use` +- 系统找到 handler +- 执行工具 +- 回写 `tool_result` + +这是整个仓库第一条真正的“行动回路”。 + +### `s03` 放在 `s04` 前面是对的 + +很多人会直觉上想先讲 subagent,因为它更“高级”。 + +但教学上不该这样排。 + +原因很简单: + +- `s03` 先解决“当前 agent 自己怎么不乱撞” +- `s04` 再解决“哪些工作要交给别的执行者” + +如果主 agent 连本地计划都没有,就提前进入子 agent,读者只会觉得: + +- 为什么要委派 +- 委派和待办到底是什么关系 +- 哪些是主流程,哪些是探索性流程 + +都不清楚。 + +所以: + +**先有本地计划,再有上下文隔离委派。** + +### `s05` 放在 `s06` 前面是对的 + +这两个章节很多人会低估。 + +实际上它们解决的是同一类问题的前后两半: + +- `s05` 解决:知识不要一开始全塞进来 +- `s06` 解决:已经塞进来的上下文怎么控制体积 + +如果先讲压缩,再讲技能加载,读者容易误会成: + +- 上下文膨胀主要靠“事后压缩”解决 + +但更合理的心智应该是: + +1. 先减少不必要进入上下文的东西 +2. 再处理已经进入上下文、且必须继续保留的东西 + +所以 `s05 -> s06` 的顺序很合理。 + +## 为什么 `s07-s11` 应该成一整段“控制面加固” + +这五章看起来分散,实际上它们共同在回答同一个问题: + +**主循环已经能跑了,但要怎样才能跑得稳、跑得可控、跑得更像一个完整系统。** + +### `s07` 权限必须早于 `s08` Hook + +因为权限是在问: + +- 这件事能不能做 +- 这件事做到哪一步要停 +- 这件事要不要先问用户 + +Hook 是在问: + +- 系统这个时刻要不要额外做点什么 + +如果先讲 Hook,再讲权限,读者很容易误会: + +- 安全判断也只是某个 hook + +但实际上不是。 + +更清楚的教学顺序应该是: + +1. 先建立“执行前必须先过闸门”的概念 +2. 
再建立“主循环周围可以挂扩展点”的概念 + +也就是: + +**先 gate,再 extend。** + +### `s09` 记忆放在 `s10` Prompt 前面是对的 + +这是整套课程里很关键的一条顺序。 + +很多人容易反过来讲,先讲 prompt,再讲 memory。 + +但对开发者心智更友好的顺序其实是现在这样: + +- `s09` 先讲“长期信息从哪里来、哪些值得留下” +- `s10` 再讲“这些来源最终怎样被组装进模型输入” + +也就是说: + +- `memory` 先回答“内容源是什么” +- `prompt pipeline` 再回答“这些内容源怎么装配” + +如果反过来,读者会在 `s10` 里不断追问: + +- 为什么这里会有 memory block +- 这块内容到底是谁准备的 +- 它和 messages、CLAUDE.md、skills 的边界在哪里 + +所以这一条顺序不要乱换。 + +### `s11` 放在这一段结尾很合理 + +因为恢复与续行不是单独一层业务功能,而是: + +- 对前面所有输入、执行、状态、权限、压缩分支的总回收 + +它天然适合做“控制面阶段的收口章”。 + +只有当读者已经知道: + +- 一轮输入怎么组装 +- 执行时会走哪些分支 +- 发生什么状态变化 + +他才真正看得懂恢复系统在恢复什么。 + +## 为什么 `s12-s14` 必须先讲“任务图”,再讲“后台运行”,最后讲“定时触发” + +这是后半程最容易排错的一段。 + +### `s12` 必须先于 `s13` + +因为 `s12` 解决的是: + +- 事情本身是什么 +- 依赖关系是什么 +- 哪个工作节点已完成、未完成、阻塞中 + +而 `s13` 解决的是: + +- 某个执行单元现在是不是正在后台跑 +- 跑到什么状态 +- 结果怎么回流 + +也就是: + +- `task` 是工作目标 +- `runtime task` 是执行槽位 + +如果没有 `s12` 先铺开 durable work graph,读者到了 `s13` 会把后台任务误当成任务系统本体。 + +这会直接导致后面: + +- cron 概念混乱 +- teammate 认领概念混乱 +- worktree lane 概念混乱 + +所以这里一定要守住: + +**先有目标,再有执行体。** + +### `s14` 必须紧跟 `s13` + +因为 cron 本质上不是又一种任务。 + +它只是回答: + +**如果现在不是用户当场触发,而是由时间触发一次执行,该怎么接到现有运行时里。** + +也就是说: + +- 没有 runtime slot,cron 没地方发车 +- 没有 task graph,cron 不知道在触发什么工作 + +所以最合理顺序一定是: + +`task graph -> runtime slot -> schedule trigger` + +## 为什么 `s15-s19` 要按“队友 -> 协议 -> 自治 -> 隔离车道 -> 外部能力”排 + +这一段如果顺序乱了,读者最容易开始觉得: + +- 队友、协议、任务、worktree、MCP 全都像“高级功能堆叠” + +但其实它们之间有很强的前后依赖。 + +### `s15` 先定义“谁在系统里长期存在” + +这一章先把对象立起来: + +- 队友是谁 +- 他们有没有身份 +- 他们是不是可以持续存在 + +如果连 actor 都还没清楚,协议对象就无从谈起。 + +### `s16` 再定义“这些 actor 之间按什么规则说话” + +协议层不应该早于 actor 层。 + +因为协议不是凭空存在的。 + +它一定是服务于: + +- 请求谁 +- 谁审批 +- 谁响应 +- 如何回执 + +所以: + +**先有队友,再有协议。** + +### `s17` 再进入“队友自己找活” + +自治不是“又多一种 agent 功能”。 + +自治其实是建立在前两章之上的: + +- 前提 1:队友是长期存在的 +- 前提 2:队友之间有可追踪的协作规则 + +只有这两个前提都建立了,自治认领才不会讲成一团雾。 + +### `s18` 为什么在 `s19` 前面 + +因为在平台层里,worktree 是执行隔离边界,MCP 是能力边界。 + +对开发者自己手搓系统来说,更应先搞清: + +- 多个执行者如何不互相踩目录 +- 一个任务与一个执行车道如何绑定 + +这些是“本地多执行者平台”先要解决的问题。 + +把这个问题讲完后,再去讲: + +- 外部 server +- 外部 tool +- 
capability route + +开发者才不会把“MCP 很强”误解成“本地平台边界可以先不管”。 + +### `s19` 放最后是对的 + +因为它本质上是平台边界的最外层。 + +它关心的是: + +- 本地系统之外的能力如何并入 +- 外部 server 和本地 tool 如何统一纳入 capability bus + +这个东西只有在前面这些边界都已经清楚后,读者才真的能吸收: + +- 本地 actor +- 本地 work lane +- 本地 task / runtime state +- 外部 capability provider + +分别是什么。 + +## 五种最容易让课程变差的“错误重排” + +### 错误 1:把 `s04` 提到 `s03` 前面 + +坏处: + +- 读者先学会“把活丢出去” +- 却还没学会“本地怎么规划” + +最后 subagent 只会变成“遇事就开新 agent”的逃避按钮。 + +### 错误 2:把 `s10` 提到 `s09` 前面 + +坏处: + +- 输入装配先讲了 +- 但输入源的边界还没立住 + +结果 prompt pipeline 会看起来像一堆神秘字符串拼接。 + +### 错误 3:把 `s13` 提到 `s12` 前面 + +坏处: + +- 读者会把后台执行槽位误认成工作任务本体 +- 后面 cron、自治认领、worktree 都会越来越混 + +### 错误 4:把 `s17` 提到 `s15` 或 `s16` 前面 + +坏处: + +- 还没定义持久队友 +- 也还没定义结构化协作规则 +- 就先讲自治认领 + +最后“自治”会被理解成模糊的自动轮询魔法。 + +### 错误 5:把 `s19` 提到 `s18` 前面 + +坏处: + +- 读者会先被外部能力系统吸引注意力 +- 却还没真正看清本地多执行者平台怎么稳定成立 + +这会让整个课程后半程“看起来很大”,但“落到自己实现时没有抓手”。 + +## 如果你自己手搓,可以在哪些地方先停 + +这套课不是说一定要一次把 `s01-s19` 全做完。 + +更稳的实现节奏是: + +### 里程碑 A:先做到 `s06` + +你已经有: + +- 主循环 +- 工具 +- 计划 +- 子任务隔离 +- 技能按需注入 +- 上下文压缩 + +这已经足够做出一个“能用的单 agent 原型”。 + +### 里程碑 B:再做到 `s11` + +你多了: + +- 权限 +- Hook +- Memory +- Prompt pipeline +- 错误恢复 + +到这里,单 agent 系统已经接近“高完成度教学实现”。 + +### 里程碑 C:做到 `s14` + +你多了: + +- durable task +- background runtime slot +- cron trigger + +到这里,系统开始脱离“只会跟着当前会话走”的状态。 + +### 里程碑 D:做到 `s19` + +这时再进入: + +- persistent teammate +- protocol +- autonomy +- worktree lane +- MCP / plugin + +这时你手里才是接近完整的平台结构。 + +## 维护者在重排章节前该问自己什么 + +如果你准备改顺序,先问下面这些问题: + +1. 这一章依赖的前置概念,前面有没有已经讲清? +2. 这次重排会不会让两个同名但不同层的概念更容易混? +3. 这一章新增的是“目标状态”“运行状态”“执行者”还是“外部能力”? +4. 如果把它提前,读者会不会只记住名词,反而抓不到最小实现? +5. 这次重排是在服务开发者实现路径,还是只是在模仿某个源码目录顺序? +6. 读者按当前章节学完以后,本地代码到底该按什么顺序打开,这条代码阅读顺序有没有一起讲清? 
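上面这份检查清单里,有几条其实可以机械化。下面是一个示意小脚本(教学假设,不是本仓库源码),把正文"五种错误重排"点名的几条"不能反"的硬依赖写成数据,对候选章节顺序做一次检查:

```python
# 示意脚本(教学假设, 不对应仓库源码): 把正文"五种错误重排"
# 点名的硬依赖写成数据, 重排章节前先机械检查一遍。
HARD_PREREQS = {
    "s04": ["s03"],          # 先有本地计划, 再有隔离委派
    "s10": ["s09"],          # memory 先回答内容源, prompt 再回答装配
    "s13": ["s12"],          # 先有工作目标, 再有执行槽位
    "s17": ["s15", "s16"],   # 自治建立在队友与协议之上
    "s19": ["s18"],          # 先讲本地执行边界, 再讲外部能力

}

def violations(order):
    """返回候选章节顺序违反的 (章节, 被排到它后面的前置) 列表。"""
    pos = {ch: i for i, ch in enumerate(order)}
    bad = []
    for ch, prereqs in HARD_PREREQS.items():
        for p in prereqs:
            if ch in pos and p in pos and pos[p] > pos[ch]:
                bad.append((ch, p))
    return bad

current = [f"s{i:02d}" for i in range(1, 20)]
print(violations(current))   # 空列表: 当前顺序满足全部硬依赖

swapped = list(current)
swapped[8], swapped[9] = swapped[9], swapped[8]   # 把 s10 提到 s09 前面
print(violations(swapped))   # 报告 s10 缺了前置 s09
```

这种检查当然代替不了第 1、5、6 这类判断题,但至少能在重排前自动拦住最明显的依赖倒置。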
+ +如果第 5 个问题的答案偏向后者,那大概率不该改。 + +## 一句话记住 + +**好的章节顺序,不是把所有机制排成一列,而是让每一章都像前一章自然长出来的下一层。** diff --git a/docs/zh/s00e-reference-module-map.md b/docs/zh/s00e-reference-module-map.md new file mode 100644 index 000000000..dedfcd1ae --- /dev/null +++ b/docs/zh/s00e-reference-module-map.md @@ -0,0 +1,215 @@ +# s00e: 参考仓库模块映射图 + +> 这是一份给维护者和认真学习者用的校准文档。 +> 它不是让读者逐行读逆向源码。 +> +> 它只回答一个很关键的问题: +> +> **如果把参考仓库里真正重要的模块簇,和当前教学仓库的章节顺序对照起来看,现在这套课程顺序到底合不合理?** + +## 先说结论 + +合理。 + +当前这套 `s01 -> s19` 的顺序,整体上是对的,而且比“按源码目录顺序讲”更接近真实系统的设计主干。 + +原因很简单: + +- 参考仓库里目录很多 +- 但真正决定系统骨架的,是少数几簇控制、状态、任务、团队、隔离执行和外部能力模块 +- 这些高信号模块,和当前教学仓库的四阶段主线基本是对齐的 + +所以正确动作不是把教程改成“跟着源码树走”。 + +正确动作是: + +- 保留现在这条按依赖关系展开的主线 +- 把它和参考仓库的映射关系讲明白 +- 继续把低价值的产品外围细节挡在主线外 + +## 这份对照是怎么做的 + +这次对照主要看的是参考仓库里真正决定系统骨架的部分,例如: + +- `Tool.ts` +- `state/AppStateStore.ts` +- `coordinator/coordinatorMode.ts` +- `memdir/*` +- `services/SessionMemory/*` +- `services/toolUseSummary/*` +- `constants/prompts.ts` +- `tasks/*` +- `tools/TodoWriteTool/*` +- `tools/AgentTool/*` +- `tools/ScheduleCronTool/*` +- `tools/EnterWorktreeTool/*` +- `tools/ExitWorktreeTool/*` +- `tools/MCPTool/*` +- `services/mcp/*` +- `plugins/*` +- `hooks/toolPermission/*` + +这些已经足够判断“设计主脉络”。 + +没有必要为了教学,再把每个命令目录、兼容分支、UI 细节和产品接线全部拖进正文。 + +## 真正的映射关系 + +| 参考仓库模块簇 | 典型例子 | 对应教学章节 | 为什么这样放是对的 | +|---|---|---|---| +| 查询主循环 + 控制状态 | `Tool.ts`、`AppStateStore.ts`、query / coordinator 状态 | `s00`、`s00a`、`s00b`、`s01`、`s11` | 真实系统绝不只是 `messages[] + while True`。教学上先讲最小循环,再补控制平面,是对的。 | +| 工具路由与执行面 | `Tool.ts`、原生 tools、tool context、执行辅助逻辑 | `s02`、`s02a`、`s02b` | 参考仓库明确把 tools 做成统一执行面,不只是玩具版分发表。当前拆法是合理的。 | +| 会话规划 | `TodoWriteTool` | `s03` | 这是“当前会话怎么不乱撞”的小结构,应该早于持久任务图。 | +| 一次性委派 | `AgentTool` 的最小子集 | `s04` | 参考仓库的 agent 体系很大,但教学仓库先教“新上下文 + 子任务 + 摘要返回”这个最小正确版本,是对的。 | +| 技能发现与按需加载 | `DiscoverSkillsTool`、`skills/*`、相关 prompt 片段 | `s05` | 技能不是花哨外挂,而是知识注入层,所以应早于 prompt 复杂化和上下文压力。 | +| 上下文压力与压缩 | `services/toolUseSummary/*`、`services/contextCollapse/*`、compact 逻辑 | `s06` | 
参考仓库明确存在显式压缩机制,把这一层放在平台化能力之前完全正确。 | +| 权限闸门 | `types/permissions.ts`、`hooks/toolPermission/*`、审批处理器 | `s07` | 执行安全是明确闸门,不是“某个 hook 顺手干的事”,所以必须早于 hook。 | +| Hook 与侧边扩展 | `types/hooks.ts`、hook runner、生命周期接线 | `s08` | 参考仓库把扩展点和权限分开。教学顺序保持“先 gate,再 extend”是对的。 | +| 持久记忆选择 | `memdir/*`、`services/SessionMemory/*`、记忆提取与筛选 | `s09` | 参考仓库把 memory 处理成“跨会话、选择性装配”的层,不是通用笔记本。 | +| Prompt 组装 | `constants/prompts.ts`、prompt sections、memory prompt 注入 | `s10`、`s10a` | 参考仓库明显把输入拆成多个 section。教学版把 prompt 讲成流水线,而不是一段大字符串,是正确的。 | +| 恢复与续行 | query transition、retry 分支、compact retry、token recovery | `s11`、`s00c` | 真实系统里“为什么继续下一轮”是显式存在的,所以恢复应当晚于 loop / tools / compact / permissions / memory / prompt。 | +| 持久工作图 | 任务记录、任务板、依赖解锁 | `s12` | 当前教程把“持久任务目标”和“会话内待办”分开,是对的。 | +| 活着的运行时任务 | `tasks/types.ts`、`LocalShellTask`、`LocalAgentTask`、`RemoteAgentTask`、`MonitorMcpTask` | `s13`、`s13a` | 参考仓库里 runtime task 是明确的联合类型,这强烈证明 `TaskRecord` 和 `RuntimeTaskState` 必须分开教。 | +| 定时触发 | `ScheduleCronTool/*`、`useScheduledTasks` | `s14` | 调度是建在 runtime work 之上的新启动条件,放在 `s13` 后非常合理。 | +| 持久队友 | `InProcessTeammateTask`、team tools、agent registry | `s15` | 参考仓库清楚地从一次性 subagent 继续长成长期 actor。把 teammate 放到后段是对的。 | +| 结构化团队协作 | send-message 流、request tracking、coordinator mode | `s16` | 协议必须建立在“已有持久 actor”之上,所以不能提前。 | +| 自治认领与恢复 | coordinator mode、任务认领、异步 worker 生命周期、resume 逻辑 | `s17` | 参考仓库里的 autonomy 不是魔法,而是建立在 actor、任务和协议之上的。 | +| Worktree 执行车道 | `EnterWorktreeTool`、`ExitWorktreeTool`、agent worktree 辅助逻辑 | `s18` | 参考仓库把 worktree 当作执行边界 + 收尾状态来处理。当前放在 tasks / teams 后是正确的。 | +| 外部能力总线 | `MCPTool`、`services/mcp/*`、`plugins/*`、MCP resources / prompts / tools | `s19`、`s19a` | 参考仓库把 MCP / plugin 放在平台最外层边界。把它放最后是合理的。 | + +## 这份对照最能证明的 5 件事 + +### 1. `s03` 应该继续放在 `s12` 前面 + +参考仓库里同时存在: + +- 小范围的会话计划 +- 大范围的持久任务 / 运行时系统 + +它们不是一回事。 + +所以教学顺序应当继续保持: + +`会话内计划 -> 持久任务图` + +### 2. 
`s09` 应该继续放在 `s10` 前面 + +参考仓库里的输入装配,明确把 memory 当成输入来源之一。 + +也就是说: + +- `memory` 先回答“内容从哪里来” +- `prompt pipeline` 再回答“这些内容怎么组装进去” + +所以先讲 `s09`,再讲 `s10`,顺序不要反过来。 + +### 3. `s12` 必须早于 `s13` + +`tasks/types.ts` 这类运行时任务联合类型,是这次对照里最强的证据之一。 + +它非常清楚地说明: + +- 持久化的工作目标 +- 当前活着的执行槽位 + +必须是两层不同状态。 + +如果先讲 `s13`,读者几乎一定会把这两层混掉。 + +### 4. `s15 -> s16 -> s17` 的顺序是对的 + +参考仓库里明确能看到: + +- 持久 actor +- 结构化协作 +- 自治认领 / 恢复 + +自治必须建立在前两者之上,所以当前顺序合理。 + +### 5. `s18` 应该继续早于 `s19` + +参考仓库把 worktree 当作本地执行边界机制。 + +这应该先于: + +- 外部能力提供者 +- MCP server +- plugin 装配面 + +被讲清。 + +否则读者会误以为“外部能力系统比本地执行边界更核心”。 + +## 这套教学仓库仍然不该抄进主线的内容 + +参考仓库里有很多真实但不应该占据主线的内容,例如: + +- CLI 命令面的完整铺开 +- UI 渲染细节 +- 遥测与分析分支 +- 远程 / 企业产品接线 +- 平台兼容层 +- 文件名、函数名、行号级 trivia + +这些不是假的。 + +但它们不该成为 0 到 1 教学路径的中心。 + +## 当前教学最容易漂掉的地方 + +### 1. 不要把 subagent 和 teammate 混成一个模糊概念 + +参考仓库里的 `AgentTool` 横跨了: + +- 一次性委派 +- 后台 worker +- 持久 worker / teammate +- worktree 隔离 worker + +这恰恰说明教学仓库应该继续拆开讲: + +- `s04` +- `s15` +- `s17` +- `s18` + +不要在早期就把这些东西混成一个“大 agent 能力”。 + +### 2. 不要把 worktree 教成“只是 git 小技巧” + +参考仓库里有 closeout、resume、cleanup、dirty-check 等状态。 + +所以 `s18` 必须继续讲清: + +- lane 身份 +- task 绑定 +- keep / remove 收尾 +- 恢复与清理 + +而不是只讲 `git worktree add`。 + +### 3. 
不要把 MCP 缩成“远程 tools” + +参考仓库里明显不只有工具,还有: + +- resources +- prompts +- elicitation / connection state +- plugin 中介层 + +所以 `s19` 可以继续用 tools-first 的教学路径切入,但一定要补平台边界那一层地图。 + +## 最终判断 + +如果只拿“章节顺序是否贴近参考仓库的设计主干”这个问题来打分,那么当前这套顺序是过关而且方向正确的。 + +真正还能继续加分的地方,不再是再做一次大重排,而是: + +- 把桥接文档补齐 +- 把实体边界讲得更硬 +- 把多语言内容统一到同一个心智层次 +- 让 web 页面把这套学习地图展示得更清楚 + +## 一句话记住 + +**最好的教学顺序,不是源码文件出现的顺序,而是一个初学实现者真正能顺着依赖关系把系统重建出来的顺序。** diff --git a/docs/zh/s00f-code-reading-order.md b/docs/zh/s00f-code-reading-order.md new file mode 100644 index 000000000..e02c1a37e --- /dev/null +++ b/docs/zh/s00f-code-reading-order.md @@ -0,0 +1,275 @@ +# s00f: 本仓库代码阅读顺序 + +> 这份文档不是让你“多看代码”。 +> 它专门解决另一个问题: +> +> **当你已经知道章节顺序是对的以后,本仓库代码到底应该按什么顺序读,才不会把心智重新读乱。** + +## 先说结论 + +不要这样读代码: + +- 不要从文件最长的那一章开始 +- 不要随机点一个你觉得“高级”的章节开始 +- 不要先钻 `web/` 再回头猜主线 +- 不要把 19 个 `agents/*.py` 当成一个源码池乱翻 + +最稳的读法只有一句话: + +**文档顺着章节读,代码也顺着章节读。** + +而且每一章的代码,都先按同一个模板看: + +1. 先看状态结构 +2. 再看工具定义或注册表 +3. 再看“这一轮怎么推进”的主函数 +4. 最后才看 CLI 入口和试运行方式 + +## 为什么需要这份文档 + +很多读者不是看不懂某一章文字,而是会在真正打开代码以后重新乱掉。 + +典型症状是: + +- 一上来先盯住 300 行以上的文件底部 +- 先看一堆 `run_*` 函数,却不知道它们挂在哪条主线上 +- 先看“最复杂”的平台章节,然后觉得前面的章节好像都太简单 +- 把 `task`、`runtime task`、`teammate`、`worktree` 在代码里重新混成一团 + +这份阅读顺序就是为了防止这种情况。 + +## 读每个 agent 文件时,都先按同一个模板 + +不管你打开的是哪一章,本仓库里的 `agents/sXX_*.py` 都建议先按下面顺序读: + +### 第一步:先看文件头注释 + +先回答两个问题: + +- 这一章到底在教什么 +- 它故意没有教什么 + +如果连这一步都没建立,后面你会把每个函数都看成同等重要。 + +### 第二步:先看状态结构或管理器类 + +优先找这些东西: + +- `LoopState` +- `PlanningState` +- `CompactState` +- `TaskManager` +- `BackgroundManager` +- `TeammateManager` +- `WorktreeManager` + +原因很简单: + +**先知道系统到底记住了什么,后面才看得懂它为什么要这样流动。** + +### 第三步:再看工具列表或注册表 + +优先找这些入口: + +- `TOOLS` +- `TOOL_HANDLERS` +- 各种 `run_*` +- `build_tool_pool()` + +这一层回答的是: + +- 模型到底能调用什么 +- 这些调用会落到哪条执行面上 + +### 第四步:最后才看主推进函数 + +重点函数通常长这样: + +- `run_one_turn(...)` +- `agent_loop(...)` +- 某个 `handle_*` + +这一步要回答的是: + +- 这一章新机制到底接在主循环哪一环 +- 哪个分支是新增的 +- 新状态是在哪里写入、回流、继续的 + +### 第五步:最后再看 `if __name__ == "__main__"` + +CLI 入口当然有用,但它不应该成为第一屏。 + +因为它通常只是在做: + +- 
读用户输入 +- 初始化状态 +- 调用 `agent_loop` + +真正决定一章心智主干的,不在这里。 + +## 阶段 1:`s01-s06` 应该怎样读代码 + +这一段不是在学“很多功能”,而是在学: + +**一个单 agent 主骨架到底怎样成立。** + +| 章节 | 文件 | 先看什么 | 再看什么 | 读完要确认什么 | +|---|---|---|---|---| +| `s01` | `agents/s01_agent_loop.py` | `LoopState` | `TOOLS` -> `execute_tool_calls()` -> `run_one_turn()` -> `agent_loop()` | 你已经能看懂 `messages -> model -> tool_result -> next turn` | +| `s02` | `agents/s02_tool_use.py` | `safe_path()` | `run_read()` / `run_write()` / `run_edit()` -> `TOOL_HANDLERS` -> `agent_loop()` | 你已经能看懂“主循环不变,工具靠分发面增长” | +| `s03` | `agents/s03_todo_write.py` | `PlanItem` / `PlanningState` / `TodoManager` | `todo` 相关 handler -> reminder 注入 -> `agent_loop()` | 你已经能看懂“会话计划状态”怎么外显化 | +| `s04` | `agents/s04_subagent.py` | `AgentTemplate` | `run_subagent()` -> 父 `agent_loop()` | 你已经能看懂“子智能体首先是上下文隔离” | +| `s05` | `agents/s05_skill_loading.py` | `SkillManifest` / `SkillDocument` / `SkillRegistry` | `get_descriptions()` / `get_content()` -> `agent_loop()` | 你已经能看懂“先发现、再按需加载” | +| `s06` | `agents/s06_context_compact.py` | `CompactState` | `persist_large_output()` -> `micro_compact()` -> `compact_history()` -> `agent_loop()` | 你已经能看懂“压缩不是删历史,而是转移细节” | + +### 这一段最值得反复看的 3 个代码点 + +1. `state` 在哪里第一次从“聊天内容”升级成“显式系统状态” +2. `tool_result` 是怎么一直保持为统一回流接口的 +3. 
新机制是怎样接进 `agent_loop()` 而不是把 `agent_loop()` 重写烂的 + +### 这一段读完后,最好的动作 + +不要立刻去看 `s07`。 + +先自己从空目录手写一遍下面这些最小件: + +- 一个 loop +- 一个 dispatch map +- 一个会话计划状态 +- 一个一次性子任务隔离 +- 一个按需技能加载 +- 一个最小压缩层 + +## 阶段 2:`s07-s11` 应该怎样读代码 + +这一段不是在学“又多了五种功能”。 + +它真正是在学: + +**单 agent 的控制面是怎样长出来的。** + +| 章节 | 文件 | 先看什么 | 再看什么 | 读完要确认什么 | +|---|---|---|---|---| +| `s07` | `agents/s07_permission_system.py` | `BashSecurityValidator` / `PermissionManager` | 权限判定入口 -> `run_bash()` -> `agent_loop()` | 你已经能看懂“先 gate,再 execute” | +| `s08` | `agents/s08_hook_system.py` | `HookManager` | hook 注册与触发 -> `agent_loop()` | 你已经能看懂 hook 是固定时机的插口,不是散落 if | +| `s09` | `agents/s09_memory_system.py` | `MemoryManager` / `DreamConsolidator` | `run_save_memory()` -> `build_system_prompt()` -> `agent_loop()` | 你已经能看懂 memory 是长期信息层,不是上下文垃圾桶 | +| `s10` | `agents/s10_system_prompt.py` | `SystemPromptBuilder` | `build_system_reminder()` -> `agent_loop()` | 你已经能看懂输入是流水线,不是单块 prompt | +| `s11` | `agents/s11_error_recovery.py` | `estimate_tokens()` / `auto_compact()` / `backoff_delay()` | 各恢复分支 -> `agent_loop()` | 你已经能看懂“恢复以后怎样继续下一轮” | + +### 这一段读代码时,最容易重新读乱的地方 + +1. 把权限和 hook 混成一类 +2. 把 memory 和 prompt 装配混成一类 +3. 
把 `s11` 看成很多异常判断,而不是“续行控制” + +如果你开始混,先回: + +- `docs/zh/s00a-query-control-plane.md` +- `docs/zh/s10a-message-prompt-pipeline.md` +- `docs/zh/s00c-query-transition-model.md` + +## 阶段 3:`s12-s14` 应该怎样读代码 + +这一段开始,代码理解的关键不再是“工具多了什么”,而是: + +**系统第一次真正长出会话外工作状态和运行时槽位。** + +| 章节 | 文件 | 先看什么 | 再看什么 | 读完要确认什么 | +|---|---|---|---|---| +| `s12` | `agents/s12_task_system.py` | `TaskManager` | 任务创建、依赖、解锁 -> `agent_loop()` | 你已经能看懂 task 是持久工作图,不是 todo | +| `s13` | `agents/s13_background_tasks.py` | `NotificationQueue` / `BackgroundManager` | 后台执行登记 -> 通知排空 -> `agent_loop()` | 你已经能看懂 background task 是运行槽位 | +| `s14` | `agents/s14_cron_scheduler.py` | `CronLock` / `CronScheduler` | `cron_matches()` -> schedule 触发 -> `agent_loop()` | 你已经能看懂调度器只负责“未来何时开始” | + +### 这一段读代码时一定要守住的边界 + +- `task` 是工作目标 +- `runtime task` 是正在跑的执行槽位 +- `schedule` 是何时触发工作 + +只要这三层在代码里重新混掉,后面 `s15-s19` 会一起变难。 + +## 阶段 4:`s15-s19` 应该怎样读代码 + +这一段不要当成“功能狂欢”去读。 + +它真正建立的是: + +**平台边界。** + +| 章节 | 文件 | 先看什么 | 再看什么 | 读完要确认什么 | +|---|---|---|---|---| +| `s15` | `agents/s15_agent_teams.py` | `MessageBus` / `TeammateManager` | 队友名册、邮箱、独立循环 -> `agent_loop()` | 你已经能看懂 teammate 是长期 actor,不是一次性 subagent | +| `s16` | `agents/s16_team_protocols.py` | `RequestStore` / `TeammateManager` | `handle_shutdown_request()` / `handle_plan_review()` -> `agent_loop()` | 你已经能看懂 request-response + `request_id` | +| `s17` | `agents/s17_autonomous_agents.py` | `RequestStore` / `TeammateManager` | `is_claimable_task()` / `claim_task()` / `ensure_identity_context()` -> `agent_loop()` | 你已经能看懂自治主线:空闲检查 -> 安全认领 -> 恢复工作 | +| `s18` | `agents/s18_worktree_task_isolation.py` | `TaskManager` / `WorktreeManager` / `EventBus` | `worktree_enter` 相关生命周期 -> `agent_loop()` | 你已经能看懂 task 管目标,worktree 管执行车道 | +| `s19` | `agents/s19_mcp_plugin.py` | `CapabilityPermissionGate` / `MCPClient` / `PluginLoader` / `MCPToolRouter` | `build_tool_pool()` / `handle_tool_call()` / `normalize_tool_result()` -> `agent_loop()` | 你已经能看懂外部能力如何接回同一控制面 | + +### 这一段最容易误读的地方 + 
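在逐条看之前,先用一段极简示意代码(教学假设,不对应本仓库源码)把最容易混的一对对象钉住:`s04` 的一次性 subagent 和 `s15` 的长期 teammate。

```python
# 示意(教学假设): 一次性 subagent 与长期 teammate 的最小边界对比
def run_subagent(task: str) -> str:
    """类似 s04: 在全新隔离上下文里完成任务, 父循环只拿回摘要。"""
    sub_messages = [{"role": "user", "content": task}]  # 独立的新上下文
    # ... 子循环在这里完整跑完 ...
    return f"摘要(隔离上下文共 {len(sub_messages)} 条消息)"  # 返回后, 子上下文被丢弃

class Teammate:
    """类似 s15: 有身份、有邮箱、可以跨请求持续存在的 actor。"""
    def __init__(self, name: str):
        self.name = name   # 长期身份
        self.mailbox = []  # 跨请求留存的消息队列

    def receive(self, message: str) -> None:
        self.mailbox.append(message)

alice = Teammate("alice")
alice.receive("请帮我 review 这份 plan")
print(run_subagent("整理 tests/ 目录"))  # 父循环只看到摘要
print(alice.mailbox)                     # 状态留存, 下次请求还在
```

关键差别不在"谁更强",而在生命周期:subagent 随返回值消失,teammate 的身份和邮箱会留下来。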
+1. 把 `s15` 的 teammate 当成 `s04` 的 subagent 放大版 +2. 把 `s17` 自治看成“agent 自己乱跑” +3. 把 `s18` worktree 看成一个 git 小技巧 +4. 把 `s19` MCP 缩成“只是远程 tools” + +## 代码阅读时,哪些文件不要先看 + +如果你的目标是建立主线心智,下面这些内容不要先看: + +- `web/` 里的可视化实现细节 +- `web/src/data/generated/*` +- `.next/` 或其他构建产物 +- `agents/s_full.py` + +原因不是它们没价值。 + +而是: + +- `web/` 解决的是展示与学习界面 +- `generated` 是抽取结果,不是机制本身 +- `s_full.py` 是整合参考,不适合第一次建立边界 + +## 最推荐的“文档 + 代码 + 运行”循环 + +每一章最稳的学习动作不是只看文档,也不是只看代码。 + +推荐固定走这一套: + +1. 先读这一章正文 +2. 再读这一章的桥接资料 +3. 再打开对应 `agents/sXX_*.py` +4. 按“状态 -> 工具 -> 主推进函数 -> CLI 入口”的顺序看 +5. 跑一次这章的 demo +6. 自己从空目录重写一个最小版本 + +只要你每章都这样走一次,代码理解会非常稳。 + +## 初学者最容易犯的 6 个代码阅读错误 + +### 1. 先看最长文件 + +这通常只会先把自己看晕。 + +### 2. 先盯 `run_bash()` 这种工具细节 + +工具实现细节不是主干。 + +### 3. 不先找状态结构 + +这样你永远不知道系统到底记住了什么。 + +### 4. 把 `agent_loop()` 当成唯一重点 + +主循环当然重要,但每章真正新增的边界,往往在状态容器和分支入口。 + +### 5. 读完代码不跑 demo + +不实际跑一次,很难建立“这一章到底新增了哪条回路”的感觉。 + +### 6. 一口气连看三四章代码,不停下来自己重写 + +这样最容易出现“我好像都看过,但其实自己不会写”的错觉。 + +## 一句话记住 + +**代码阅读顺序也必须服从教学顺序:先看边界,再看状态,再看主线如何推进,而不是随机翻源码。** diff --git a/docs/zh/s01-the-agent-loop.md b/docs/zh/s01-the-agent-loop.md index 86788dc98..bf2241b5a 100644 --- a/docs/zh/s01-the-agent-loop.md +++ b/docs/zh/s01-the-agent-loop.md @@ -1,56 +1,214 @@ -# s01: The Agent Loop (Agent 循环) +# s01: The Agent Loop (智能体循环) -`[ s01 ] s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > [ s01 ] > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"One loop & Bash is all you need"* -- 一个工具 + 一个循环 = 一个 Agent。 -> -> **Harness 层**: 循环 -- 模型与真实世界的第一道连接。 +> *没有循环,就没有 agent。* +> 这一章先教你做出一个最小但正确的循环,再告诉你为什么后面还需要更完整的控制平面。 -## 问题 +## 这一章要解决什么问题 -语言模型能推理代码, 但碰不到真实世界 -- 不能读文件、跑测试、看报错。没有循环, 每次工具调用你都得手动把结果粘回去。你自己就是那个循环。 +语言模型本身只会“生成下一段内容”。 -## 解决方案 +它不会自己: +- 打开文件 +- 运行命令 +- 观察报错 +- 把工具结果再接着用于下一步推理 + +如果没有一层代码在中间反复做这件事: + +```text +发请求给模型 + -> 发现模型想调工具 + -> 真的去执行工具 + -> 把结果再喂回模型 + -> 继续下一轮 ``` -+--------+ +-------+ +---------+ -| User | ---> | LLM | ---> | Tool | -| 
prompt | | | | execute | -+--------+ +---+---+ +----+----+ - ^ | - | tool_result | - +----------------+ - (loop until stop_reason != "tool_use") + +那模型就只是一个“会说话的程序”,还不是一个“会干活的 agent”。 + +所以这一章的核心目标只有一个: + +**把“模型 + 工具”连接成一个能持续推进任务的主循环。** + +## 先解释几个名词 + +### 什么是 loop + +`loop` 就是循环。 + +这里的意思不是“程序死循环”,而是: + +> 只要任务还没做完,系统就继续重复同一套步骤。 + +### 什么是 turn + +`turn` 可以理解成“一轮”。 + +最小版本里,一轮通常包含: + +1. 把当前消息发给模型 +2. 读取模型回复 +3. 如果模型调用了工具,就执行工具 +4. 把工具结果写回消息历史 + +然后才进入下一轮。 + +### 什么是 tool_result + +`tool_result` 就是工具执行结果。 + +它不是随便打印在终端上的日志,而是: + +> 要重新写回对话历史、让模型下一轮真的能看见的结果块。 + +### 什么是 state + +`state` 是“当前运行状态”。 + +第一次看到这个词时,你可以先把它理解成: + +> 主循环继续往下走时,需要一直带着走的那份数据。 + +最小版本里,最重要的状态就是: + +- `messages` +- 当前是第几轮 +- 这一轮结束后为什么还要继续 + +## 最小心智模型 + +先把整个 agent 想成下面这条回路: + +```text +user message + | + v +LLM + | + +-- 普通回答 ----------> 结束 + | + +-- tool_use ----------> 执行工具 + | + v + tool_result + | + v + 写回 messages + | + v + 下一轮继续 ``` -一个退出条件控制整个流程。循环持续运行, 直到模型不再调用工具。 +这条图里最关键的,不是“有一个 while True”。 -## 工作原理 +真正关键的是这句: -1. 用户 prompt 作为第一条消息。 +**工具结果必须重新进入消息历史,成为下一轮推理的输入。** + +如果少了这一步,模型就无法基于真实观察继续工作。 + +## 关键数据结构 + +### 1. Message + +最小教学版里,可以先把消息理解成: ```python -messages.append({"role": "user", "content": query}) +{"role": "user", "content": "..."} +{"role": "assistant", "content": [...]} ``` -2. 将消息和工具定义一起发给 LLM。 +这里最重要的不是字段名字,而是你要记住: + +**消息历史不是聊天记录展示层,而是模型下一轮要读的工作上下文。** + +### 2. Tool Result Block + +当工具执行完后,你要把它包装回消息流: + +```python +{ + "type": "tool_result", + "tool_use_id": "...", + "content": "...", +} +``` + +`tool_use_id` 的作用很简单: + +> 告诉模型“这条结果对应的是你刚才哪一次工具调用”。 + +### 3. 
LoopState + +这章建议你不要只用一堆零散局部变量。 + +最小也应该显式收拢出一个循环状态: + +```python +state = { + "messages": [...], + "turn_count": 1, + "transition_reason": None, +} +``` + +这里的 `transition_reason` 先只需要理解成: + +> 这一轮结束后,为什么要继续下一轮。 + +最小教学版只用一种原因就够了: + +```python +"tool_result" +``` + +也就是: + +> 因为刚执行完工具,所以要继续。 + +后面到了控制面更完整的章节里,你会看到它逐渐长成更多种原因。 +如果你想先看完整一点的形状,可以配合读: + +- [`s00a-query-control-plane.md`](./s00a-query-control-plane.md) + +## 最小实现 + +### 第一步:准备初始消息 + +用户的请求先进入 `messages`: + +```python +messages = [{"role": "user", "content": query}] +``` + +### 第二步:调用模型 + +把消息历史、system prompt 和工具定义一起发给模型: ```python response = client.messages.create( - model=MODEL, system=SYSTEM, messages=messages, - tools=TOOLS, max_tokens=8000, + model=MODEL, + system=SYSTEM, + messages=messages, + tools=TOOLS, + max_tokens=8000, ) ``` -3. 追加助手响应。检查 `stop_reason` -- 如果模型没有调用工具, 结束。 +### 第三步:追加 assistant 回复 ```python messages.append({"role": "assistant", "content": response.content}) -if response.stop_reason != "tool_use": - return ``` -4. 
执行每个工具调用, 收集结果, 作为 user 消息追加。回到第 2 步。 +这一步非常重要。 + +很多初学者会只关心“最后有没有答案”,忽略把 assistant 回复本身写回历史。 +这样一来,下一轮上下文就会断掉。 + +### 第四步:如果模型调用了工具,就执行 ```python results = [] @@ -62,57 +220,135 @@ for block in response.content: "tool_use_id": block.id, "content": output, }) +``` + +### 第五步:把工具结果作为新消息写回去 + +```python messages.append({"role": "user", "content": results}) ``` -组装为一个完整函数: +然后下一轮重新发给模型。 + +### 组合成一个完整循环 ```python -def agent_loop(query): - messages = [{"role": "user", "content": query}] +def agent_loop(state): while True: response = client.messages.create( - model=MODEL, system=SYSTEM, messages=messages, - tools=TOOLS, max_tokens=8000, + model=MODEL, + system=SYSTEM, + messages=state["messages"], + tools=TOOLS, + max_tokens=8000, ) - messages.append({"role": "assistant", "content": response.content}) + + state["messages"].append({ + "role": "assistant", + "content": response.content, + }) if response.stop_reason != "tool_use": + state["transition_reason"] = None return results = [] for block in response.content: if block.type == "tool_use": - output = run_bash(block.input["command"]) + output = run_tool(block) results.append({ "type": "tool_result", "tool_use_id": block.id, "content": output, }) - messages.append({"role": "user", "content": results}) + + state["messages"].append({"role": "user", "content": results}) + state["turn_count"] += 1 + state["transition_reason"] = "tool_result" ``` -不到 30 行, 这就是整个 Agent。后面 11 个章节都在这个循环上叠加机制 -- 循环本身始终不变。 +这就是最小 agent loop。 + +## 它如何接进整个系统 + +从现在开始,后面所有章节本质上都在做同一件事: + +**往这个循环里增加新的状态、新的分支判断和新的执行能力。** + +例如: + +- `s02` 往里面接工具路由 +- `s03` 往里面接规划状态 +- `s06` 往里面接上下文压缩 +- `s07` 往里面接权限判断 +- `s11` 往里面接错误恢复 + +所以请把这一章牢牢记成一句话: -## 变更内容 +> agent 的核心不是“模型很聪明”,而是“系统持续把现实结果喂回模型”。 -| 组件 | 之前 | 之后 | -|---------------|------------|--------------------------------| -| Agent loop | (无) | `while True` + stop_reason | -| Tools | (无) | `bash` (单一工具) | -| Messages | (无) | 累积式消息列表 | -| Control flow | (无) | `stop_reason != "tool_use"` | +## 为什么教学版先接受 
`stop_reason == "tool_use"` 这个简化 -## 试一试 +这一章里,我们先用: -```sh -cd learn-claude-code -python agents/s01_agent_loop.py +```python +if response.stop_reason != "tool_use": + return ``` -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): +这完全合理。 + +因为初学者在第一章真正要学会的,不是所有复杂边界,而是: + +1. assistant 回复要写回历史 +2. tool_result 要写回历史 +3. 主循环要持续推进 + +但你也要知道,这只是第一层简化。 + +更完整的系统不会只依赖 `stop_reason`,还会自己维护更明确的续行状态。 +这是后面要补的,不是这一章一开始就要背下来的东西。 + +## 初学者最容易犯的错 + +### 1. 把工具结果打印出来,但不写回 `messages` + +这样模型下一轮根本看不到真实执行结果。 + +### 2. 只保存用户消息,不保存 assistant 消息 + +这样上下文会断层,模型会越来越不像“接着刚才做”。 + +### 3. 不给工具结果绑定 `tool_use_id` + +模型会分不清哪条结果对应哪次调用。 + +### 4. 一上来就把流式、并发、恢复、压缩全塞进第一章 + +这会让主线变得非常难学。 + +第一章最重要的是先把最小回路搭起来。 + +### 5. 以为 `messages` 只是聊天展示 + +不是。 + +在 agent 里,`messages` 更像“下一轮工作输入”。 + +## 教学边界 + +这一章只需要先讲透一件事: + +**Agent 之所以从“会说”变成“会做”,是因为模型输出能走到工具,工具结果又能回到下一轮模型输入。** + +所以教学仓库在这里要刻意停住: + +- 不要一开始就拉进 streaming、retry、budget、recovery +- 不要一开始就混入权限、Hook、任务系统 +- 不要把第一章写成整套系统所有后续机制的总图 + +如果读者已经能凭记忆写出 `messages -> model -> tool_result -> next turn` 这条回路,这一章就已经达标了。 + +## 一句话记住 -1. `Create a file called hello.py that prints "Hello, World!"` -2. `List all Python files in this directory` -3. `What is the current git branch?` -4. `Create a directory called test_output and write 3 files in it` +**Agent Loop 的本质,是把“模型的动作意图”变成“真实执行结果”,再把结果送回模型继续推理。** diff --git a/docs/zh/s02-tool-use.md b/docs/zh/s02-tool-use.md index a26d0a190..aee04179e 100644 --- a/docs/zh/s02-tool-use.md +++ b/docs/zh/s02-tool-use.md @@ -1,6 +1,6 @@ # s02: Tool Use (工具使用) -`s01 > [ s02 ] s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > [ s02 ] > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` > *"加一个工具, 只加一个 handler"* -- 循环不用动, 新工具注册进 dispatch map 就行。 > @@ -99,3 +99,122 @@ python agents/s02_tool_use.py 2. `Create a file called greet.py with a greet(name) function` 3. `Edit greet.py to add a docstring to the function` 4. 
`Read greet.py to verify the edit worked` + +## 如果你开始觉得“工具不只是 handler map” + +到这里为止,教学主线先把工具讲成: + +- schema +- handler +- `tool_result` + +这是对的,而且必须先这么学。 + +但如果你继续把系统做大,很快就会发现工具层还会继续长出: + +- 权限环境 +- 当前消息和 app state +- MCP client +- 文件读取缓存 +- 通知与 query 跟踪 + +也就是说,在一个结构更完整的系统里,工具层最后会更像一条“工具控制平面”,而不只是一张分发表。 + +这层不要抢正文主线。 +你先把这一章吃透,再继续看: + +- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) + +## 消息规范化 + +教学版的 `messages` 列表直接发给 API, 所见即所发。但当系统变复杂后 (工具超时、用户取消、压缩替换), 内部消息列表会出现 API 不接受的格式问题。需要在发送前做一次规范化。 + +### 为什么需要 + +API 协议有三条硬性约束: +1. 每个 `tool_use` 块**必须**有匹配的 `tool_result` (通过 `tool_use_id` 关联) +2. `user` / `assistant` 消息必须**严格交替** (不能连续两条同角色) +3. 只接受协议定义的字段 (内部元数据会导致 400 错误) + +### 实现 + +```python +def normalize_messages(messages: list) -> list: + """将内部消息列表规范化为 API 可接受的格式。""" + normalized = [] + + for msg in messages: + # Step 1: 剥离内部字段 + clean = {"role": msg["role"]} + if isinstance(msg.get("content"), str): + clean["content"] = msg["content"] + elif isinstance(msg.get("content"), list): + clean["content"] = [ + {k: v for k, v in block.items() + if k not in ("_internal", "_source", "_timestamp")} + for block in msg["content"] + ] + normalized.append(clean) + + # Step 2: tool_result 配对补齐 + # 收集所有已有的 tool_result ID + existing_results = set() + for msg in normalized: + if isinstance(msg.get("content"), list): + for block in msg["content"]: + if block.get("type") == "tool_result": + existing_results.add(block.get("tool_use_id")) + + # 找出缺失配对的 tool_use, 插入占位 result + for msg in normalized: + if msg["role"] == "assistant" and isinstance(msg.get("content"), list): + for block in msg["content"]: + if (block.get("type") == "tool_use" + and block.get("id") not in existing_results): + # 在下一条 user 消息中补齐 + normalized.append({"role": "user", "content": [{ + "type": "tool_result", + "tool_use_id": block["id"], + "content": "(cancelled)", + }]}) + + # Step 3: 合并连续同角色消息 + merged = [normalized[0]] if normalized else [] + for msg in normalized[1:]: + if msg["role"] == 
merged[-1]["role"]: + # 合并内容 + prev = merged[-1] + prev_content = prev["content"] if isinstance(prev["content"], list) \ + else [{"type": "text", "text": prev["content"]}] + curr_content = msg["content"] if isinstance(msg["content"], list) \ + else [{"type": "text", "text": msg["content"]}] + prev["content"] = prev_content + curr_content + else: + merged.append(msg) + + return merged +``` + +在 agent loop 中, 每次 API 调用前运行: + +```python +response = client.messages.create( + model=MODEL, system=system, + messages=normalize_messages(messages), # 规范化后再发送 + tools=TOOLS, max_tokens=8000, +) +``` + +**关键洞察**: `messages` 列表是系统的内部表示, API 看到的是规范化后的副本。两者不是同一个东西。 + +## 教学边界 + +这一章最重要的,不是把完整工具运行时一次讲全,而是先讲清 3 个稳定点: + +- tool schema 是给模型看的说明 +- handler map 是代码里的分发入口 +- `tool_result` 是结果回流到主循环的统一出口 + +只要这三点稳住,读者就已经能自己在不改主循环的前提下新增工具。 + +权限、hook、并发、流式执行、外部工具来源这些后续层次当然重要,但都应该建立在这层最小分发模型之后。 diff --git a/docs/zh/s02a-tool-control-plane.md b/docs/zh/s02a-tool-control-plane.md new file mode 100644 index 000000000..abd430ed7 --- /dev/null +++ b/docs/zh/s02a-tool-control-plane.md @@ -0,0 +1,296 @@ +# s02a: Tool Control Plane (工具控制平面) + +> 这篇桥接文档用来回答另一个关键问题: +> +> **为什么“工具系统”不只是一个 `tool_name -> handler` 的映射表?** + +## 这一篇为什么要存在 + +`s02` 先教你工具注册和分发,这完全正确。 +因为如果你一开始连工具调用都没做出来,后面的一切都无从谈起。 + +但当系统长大以后,工具层会逐渐承载越来越多的责任: + +- 权限判断 +- MCP 接入 +- 通知发送 +- subagent / teammate 共享状态 +- file state cache +- 当前消息和当前会话环境 +- 某些工具专属限制 + +这时候,“工具层”就已经不是一张函数表了。 + +它更像一条总线: + +**模型通过工具名发出动作意图,系统通过工具控制平面决定这条意图在什么环境里执行。** + +## 先解释几个名词 + +### 什么是工具控制平面 + +这里的“控制平面”可以继续沿用上一份桥接文档的理解: + +> 不直接做业务结果,而是负责协调工具如何执行的一层。 + +它关心的问题不是“这个工具最后返回了什么”,而是: + +- 它在哪执行 +- 它有没有权限 +- 它可不可以访问某些共享状态 +- 它是本地工具还是外部工具 + +### 什么是执行上下文 + +执行上下文,就是工具运行时能看到的环境。 + +例如: + +- 当前工作目录 +- 当前 app state +- 当前消息列表 +- 当前权限模式 +- 当前可用 MCP client + +### 什么是能力来源 + +不是所有工具都来自同一个地方。 + +系统里常见的能力来源有: + +- 本地原生工具 +- MCP 外部工具 +- agent 工具 +- task / worktree / team 这类平台工具 + +## 最小心智模型 + +工具系统可以先画成 4 层: + +```text +1. ToolSpec + 模型看见的工具名字、描述、输入 schema + +2. 
Tool Router + 根据工具名把请求送去正确的能力来源 + +3. ToolUseContext + 工具运行时能访问的共享环境 + +4. Tool Result Envelope + 把输出包装回主循环 +``` + +最重要的升级点在第三层: + +**更完整系统的核心,不是 tool table,而是 ToolUseContext。** + +## 关键数据结构 + +### 1. ToolSpec + +这还是最基础的结构: + +```python +tool = { + "name": "read_file", + "description": "Read file contents.", + "input_schema": {...}, +} +``` + +### 2. ToolDispatchMap + +```python +handlers = { + "read_file": read_file, + "write_file": write_file, + "bash": run_bash, +} +``` + +这依旧需要,但它不是全部。 + +### 3. ToolUseContext + +教学版可以先做一个简化版本: + +```python +tool_use_context = { + "tools": handlers, + "permission_context": {...}, + "mcp_clients": {}, + "messages": [...], + "app_state": {...}, + "notifications": [], + "cwd": "...", +} +``` + +这个结构的关键点是: + +- 工具不再只拿到“输入参数” +- 工具还能拿到“共享运行环境” + +### 4. ToolResultEnvelope + +不要把返回值只想成字符串。 + +更稳妥的形状是: + +```python +result = { + "ok": True, + "content": "...", + "is_error": False, + "attachments": [], +} +``` + +这样后面你才能平滑承接: + +- 普通文本结果 +- 结构化结果 +- 错误结果 +- 附件类结果 + +## 为什么更完整的系统一定会出现 ToolUseContext + +想象两个系统。 + +### 系统 A:只有 dispatch map + +```python +output = handlers[tool_name](**tool_input) +``` + +这适合最小 demo。 + +### 系统 B:有 ToolUseContext + +```python +output = handlers[tool_name](tool_input, tool_use_context) +``` + +这个版本才更接近一个真实平台。 + +因为工具现在不只是“做一个动作”,而是在一个复杂系统里做动作。 + +例如: + +- `bash` 要看权限 +- `mcp__postgres__query` 要找对应 client +- `agent` 工具要创建子执行环境 +- `task_output` 工具可能要写磁盘并发通知 + +这些都要求它们共享同一个上下文总线。 + +## 最小实现 + +### 第一步:仍然保留 ToolSpec 和 handler + +这个主线不要丢。 + +### 第二步:引入一个统一 context + +```python +class ToolUseContext: + def __init__(self): + self.handlers = {} + self.permission_context = {} + self.mcp_clients = {} + self.messages = [] + self.app_state = {} + self.notifications = [] +``` + +### 第三步:让所有 handler 都能看到 context + +```python +def run_tool(tool_name: str, tool_input: dict, ctx: ToolUseContext): + handler = ctx.handlers[tool_name] + return handler(tool_input, ctx) +``` + +### 第四步:在 router 层分不同能力来源 + +```python +def 
route_tool(tool_name: str, tool_input: dict, ctx: ToolUseContext): + if tool_name.startswith("mcp__"): + return run_mcp_tool(tool_name, tool_input, ctx) + return run_native_tool(tool_name, tool_input, ctx) +``` + +## 一张应该讲清楚的图 + +```text +LLM tool call + | + v +Tool Router + | + +-- native tools ----------> local handlers + | + +-- mcp tools -------------> mcp client + | + +-- agent/task/team tools --> platform handlers + | + v + ToolUseContext + - permissions + - messages + - app state + - notifications + - mcp clients +``` + +## 它和 `s02`、`s19` 的关系 + +- `s02` 先教你工具调用为什么成立 +- 这篇解释更完整的系统里工具层为什么会长成一个控制平面 +- `s19` 再把 MCP 作为外部能力来源接进来 + +也就是说: + +**MCP 不是另一套独立系统,而是 Tool Control Plane 的一个能力来源。** + +## 初学者最容易犯的错 + +### 1. 以为工具上下文只是 `cwd` + +不是。 + +更完整的系统里,工具上下文往往还包含权限、状态、外部连接和通知接口。 + +### 2. 让每个工具自己去全局变量里找环境 + +这样工具层会变得非常散。 + +更清楚的做法,是显式传一个统一 context。 + +### 3. 把本地工具和 MCP 工具拆成完全不同体系 + +这会让系统边界越来越乱。 + +更好的方式是: + +- 能力来源不同 +- 但都汇入统一 router 和统一 result envelope + +### 4. 把 tool result 永远当成纯字符串 + +这样后面接附件、错误、结构化信息时会很别扭。 + +## 教学边界 + +这篇最重要的,不是把工具层做成一个庞大的企业总线,而是先把下面三层边界讲清: + +- tool call 不是直接执行,而是先进入统一调度入口 +- 工具 handler 不应该各自去偷拿环境,而应该共享一份显式 `ToolUseContext` +- 本地工具、插件工具、MCP 工具可以来源不同,但结果都应该回到统一控制面 + +类型化上下文、能力注册中心、大结果存储和更细的工具限额,都是你把这条最小控制总线讲稳以后再补的扩展。 + +## 一句话记住 + +**最小工具系统靠 dispatch map,更完整的工具系统靠 ToolUseContext 这条控制总线。** diff --git a/docs/zh/s02b-tool-execution-runtime.md b/docs/zh/s02b-tool-execution-runtime.md new file mode 100644 index 000000000..fe6eac5ac --- /dev/null +++ b/docs/zh/s02b-tool-execution-runtime.md @@ -0,0 +1,332 @@ +# s02b: Tool Execution Runtime (工具执行运行时) + +> 这篇桥接文档解决的不是“工具怎么注册”,而是: +> +> **当模型一口气发出多个工具调用时,系统到底按什么规则执行、并发、回写、合并上下文?** + +## 这一篇为什么要存在 + +`s02` 先教你: + +- 工具 schema +- dispatch map +- tool_result 回流 + +这完全正确。 +因为工具调用先得成立,后面才谈得上复杂度。 + +但系统一旦长大,真正棘手的问题会变成下面这些: + +- 多个工具能不能并行执行 +- 哪些工具必须串行 +- 工具执行过程中要不要先发进度消息 +- 并发工具的结果应该按完成顺序回写,还是按原始出现顺序回写 +- 工具执行会不会改共享上下文 +- 多个并发工具如果都要改上下文,最后怎么合并 + +这些问题已经不是“工具注册”能解释的了。 + +它们属于更深一层: + +**工具执行运行时。** + +## 先解释几个名词 
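在继续拆执行运行时之前,可以先用一个可运行的小例子,把上一篇 (s02a) 的 router + context 闭环完整过一遍。下面的 `mcp__` 前缀路由规则沿用上文;handler 签名、`ok/content` 结果形状都只是教学假设:

```python
# 教学示意: 统一 ToolUseContext + 按能力来源路由 (非真实实现)

class ToolUseContext:
    def __init__(self):
        self.handlers = {}      # 本地原生工具
        self.mcp_clients = {}   # 外部 MCP client, 按 server 名索引
        self.notifications = []

def run_native_tool(name, tool_input, ctx):
    return ctx.handlers[name](tool_input, ctx)

def run_mcp_tool(name, tool_input, ctx):
    # 假设工具名形如 mcp__<server>__<tool>
    _, server, tool = name.split("__", 2)
    return ctx.mcp_clients[server](tool, tool_input)

def route_tool(name, tool_input, ctx):
    if name.startswith("mcp__"):
        return run_mcp_tool(name, tool_input, ctx)
    return run_native_tool(name, tool_input, ctx)

ctx = ToolUseContext()
ctx.handlers["echo"] = lambda inp, c: {"ok": True, "content": inp["text"]}
ctx.mcp_clients["db"] = lambda tool, inp: {"ok": True,
                                           "content": f"{tool}:{inp['q']}"}

r1 = route_tool("echo", {"text": "hi"}, ctx)
r2 = route_tool("mcp__db__query", {"q": "select 1"}, ctx)
```

注意两条路径最后都汇入同一个 result envelope 形状,这正是“能力来源不同,但统一回到控制面”的最小体现。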
+ +### 什么叫工具执行运行时 + +这里的运行时,不是指编程语言 runtime。 + +这里说的是: + +> 当工具真正开始执行时,系统用什么规则去调度、并发、跟踪和回写这些工具。 + +### 什么叫 concurrency safe + +你可以先把它理解成: + +> 这个工具能不能和别的同类工具同时跑,而不会把共享状态搞乱。 + +例如很多只读工具常常是 concurrency safe: + +- `read_file` +- 某些搜索工具 +- 某些纯查询类 MCP 工具 + +而很多写操作不是: + +- `write_file` +- `edit_file` +- 某些会改全局状态的工具 + +### 什么叫 progress message + +有些工具跑得慢,不适合一直静默。 + +progress message 就是: + +> 工具还没结束,但系统先把“它正在做什么”告诉上层。 + +### 什么叫 context modifier + +有些工具执行完不只是返回结果,还会修改共享环境。 + +例如: + +- 更新通知队列 +- 更新 app state +- 更新“哪些工具正在运行” + +这种“对共享上下文的修改动作”,就可以理解成 context modifier。 + +## 最小心智模型 + +先不要把工具执行想成: + +```text +tool_use -> handler -> result +``` + +更接近真实可扩展系统的理解是: + +```text +tool_use blocks + -> +按执行安全性分批 + -> +每批决定串行还是并行 + -> +执行过程中可能产出 progress + -> +最终按稳定顺序回写结果 + -> +必要时再合并 context modifiers +``` + +这里最关键的升级点有两个: + +- 并发不是默认全开 +- 上下文修改不是谁先跑完谁先直接乱写 + +## 关键数据结构 + +### 1. ToolExecutionBatch + +教学版最小可以先用这样一个概念: + +```python +batch = { + "is_concurrency_safe": True, + "blocks": [tool_use_1, tool_use_2, tool_use_3], +} +``` + +它的意义是: + +- 不是每个工具都单独处理 +- 系统会先把工具调用按可否并发分成一批一批 + +### 2. TrackedTool + +如果你准备把执行层做得更稳、更清楚,建议显式跟踪每个工具: + +```python +tracked_tool = { + "id": "toolu_01", + "name": "read_file", + "status": "queued", # queued / executing / completed / yielded + "is_concurrency_safe": True, + "pending_progress": [], + "results": [], + "context_modifiers": [], +} +``` + +这类结构的价值很大。 + +因为系统终于开始能回答: + +- 哪些工具还在排队 +- 哪些已经开始 +- 哪些已经完成 +- 哪些已经先吐出了中间进度 + +### 3. MessageUpdate + +工具执行过程中,不一定只有最终结果。 + +最小可以先理解成: + +```python +update = { + "message": maybe_message, + "new_context": current_context, +} +``` + +更完整的执行层里,一个工具执行运行时往往会产出两类更新: + +- 要立刻往上游发的消息更新 +- 只影响内部共享环境的 context 更新 + +### 4. 
Queued Context Modifiers + +这是最容易被忽略、但很关键的一层。 + +在并发工具批次里,更稳的策略不是“谁先完成谁先改 context”,而是: + +> 先把 context modifier 暂存起来,最后按原始工具顺序统一合并。 + +最小理解方式: + +```python +queued_context_modifiers = { + "toolu_01": [modify_ctx_a], + "toolu_02": [modify_ctx_b], +} +``` + +## 最小实现 + +### 第一步:先分清哪些工具能并发 + +```python +def is_concurrency_safe(tool_name: str, tool_input: dict) -> bool: + return tool_name in {"read_file", "search_files"} +``` + +### 第二步:先分批,再执行 + +```python +batches = partition_tool_calls(tool_uses) + +for batch in batches: + if batch["is_concurrency_safe"]: + run_concurrently(batch["blocks"]) + else: + run_serially(batch["blocks"]) +``` + +### 第三步:并发批次先吐进度,再收最终结果 + +```python +for update in run_concurrently(...): + if update.get("message"): + yield update["message"] +``` + +### 第四步:context modifier 不要乱序落地 + +```python +queued_modifiers = {} + +for update in concurrent_updates: + if update.get("context_modifier"): + queued_modifiers[update["tool_id"]].append(update["context_modifier"]) + +for tool in original_batch_order: + for modifier in queued_modifiers.get(tool["id"], []): + context = modifier(context) +``` + +这一步是整篇里最容易被低估,但其实最接近真实系统开始长出执行运行时的点之一。 + +## 一张真正应该建立的图 + +```text +tool_use blocks + | + v +partition by concurrency safety + | + +-- read-only / safe batch -----> concurrent execution + | | + | +-- progress updates + | +-- final results + | +-- queued context modifiers + | + +-- exclusive batch ------------> serial execution + | + +-- direct result + direct context update +``` + +## 为什么这层比“dispatch map”更接近真实系统主脉络 + +最小 demo 里: + +```python +handlers[tool_name](tool_input) +``` + +就够了。 + +但在更完整系统里,真正复杂的不是“找到 handler”。 + +真正复杂的是: + +- 多工具之间如何共存 +- 哪些能并发 +- 并发时如何保证回写顺序稳定 +- 并发时如何避免共享 context 被抢写 +- 工具报错时是否中止其他工具 + +所以这层讲的不是边角优化,而是: + +> 工具系统从“可调用”升级到“可调度”的关键一步。 + +## 它和前后章节怎么接 + +- `s02` 先教你工具为什么能被调用 +- [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) 讲工具为什么会长成统一控制面 +- 这篇继续讲,工具真的开始运行以后,系统如何调度它们 +- `s07`、`s13`、`s19` 往后都还会继续用到这层心智 + +尤其是: + +- 
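上面“先分批、再按原始顺序合并 context modifier”的两步,可以压缩成一段可独立运行的示意代码。这里 `SAFE` 名单、`modifier` 用纯函数表示,都是演示假设,不对应任何真实产品:

```python
# 教学示意: 按并发安全性分批 + 乱序完成但按序合并 context
from collections import defaultdict

SAFE = {"read_file", "search_files"}  # 演示用的只读工具名单

def partition(tool_uses):
    # 连续同类 (可并发 / 不可并发) 的调用归入同一批
    batches = []
    for tu in tool_uses:
        safe = tu["name"] in SAFE
        if batches and batches[-1]["is_concurrency_safe"] == safe:
            batches[-1]["blocks"].append(tu)
        else:
            batches.append({"is_concurrency_safe": safe, "blocks": [tu]})
    return batches

def merge_modifiers(batch, finished_updates, context):
    # finished_updates 可能乱序到达; 落地时按 batch 原始顺序合并
    queued = defaultdict(list)
    for up in finished_updates:
        queued[up["tool_id"]].append(up["modifier"])
    for tu in batch["blocks"]:
        for mod in queued[tu["id"]]:
            context = mod(context)
    return context

calls = [
    {"id": "t1", "name": "read_file"},
    {"id": "t2", "name": "search_files"},
    {"id": "t3", "name": "write_file"},
]
batches = partition(calls)

# 模拟 t2 先完成、t1 后完成, 但 context 仍按 t1 -> t2 顺序合并
updates = [
    {"tool_id": "t2", "modifier": lambda ctx: ctx + ["t2"]},
    {"tool_id": "t1", "modifier": lambda ctx: ctx + ["t1"]},
]
ctx = merge_modifiers(batches[0], updates, [])
```

哪怕 `t2` 先跑完,合并后的 context 依然是 `["t1", "t2"]`:这就是“执行顺序”和“落地顺序”分离的最小演示。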
权限系统会影响工具能不能执行 +- 后台任务会影响工具是否立即结束 +- MCP / plugin 会让工具来源更多、执行形态更复杂 + +## 初学者最容易犯的错 + +### 1. 看到多个工具调用,就默认全部并发 + +这样很容易把共享状态搞乱。 + +### 2. 只按完成顺序回写结果 + +如果你完全按“谁先跑完谁先写”,主循环看到的顺序会越来越不稳定。 + +### 3. 并发工具直接同时改共享 context + +这会制造很多很难解释的隐性状态问题。 + +### 4. 认为 progress message 是“可有可无的 UI 装饰” + +它其实会影响: + +- 上层何时知道工具还活着 +- 长工具调用期间用户是否困惑 +- streaming 执行体验是否稳定 + +### 5. 只讲工具 schema,不讲工具调度 + +这样读者最后只会“注册工具”,却不理解真实 agent 为什么还要长出工具执行运行时。 + +## 教学边界 + +这篇最重要的,不是把工具调度层一次讲成一个庞大 runtime,而是先让读者守住三件事: + +- 工具调用要先分批,而不是默认看到多个 `tool_use` 就全部并发 +- 并发执行和稳定回写是两件事,不应该混成一个动作 +- 共享 context 的修改最好先排队,再按稳定顺序统一合并 + +只要这三条边界已经清楚,后面的权限、后台任务和 MCP 接入就都有地方挂。 +更细的队列模型、取消策略、流式输出协议,都可以放到你把这条最小运行时自己手搓出来以后再补。 + +## 读完这一篇你应该能说清楚 + +至少能完整说出这句话: + +> 工具系统不只是 `tool_name -> handler`,它还需要一层执行运行时来决定哪些工具并发、哪些串行、结果如何回写、共享上下文如何稳定合并。 + +如果这句话你已经能稳定说清,那么你对 agent 工具层的理解,就已经比“会注册几个工具”深一大层了。 diff --git a/docs/zh/s03-todo-write.md b/docs/zh/s03-todo-write.md index e593233a6..f89935294 100644 --- a/docs/zh/s03-todo-write.md +++ b/docs/zh/s03-todo-write.md @@ -1,98 +1,325 @@ -# s03: TodoWrite (待办写入) +# s03: TodoWrite (会话内规划) -`s01 > s02 > [ s03 ] s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > s02 > [ s03 ] > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"没有计划的 agent 走哪算哪"* -- 先列步骤再动手, 完成率翻倍。 -> -> **Harness 层**: 规划 -- 让模型不偏航, 但不替它画航线。 +> *计划不是替模型思考,而是把“正在做什么”明确写出来。* -## 问题 +## 这一章要解决什么问题 -多步任务中, 模型会丢失进度 -- 重复做过的事、跳步、跑偏。对话越长越严重: 工具结果不断填满上下文, 系统提示的影响力逐渐被稀释。一个 10 步重构可能做完 1-3 步就开始即兴发挥, 因为 4-10 步已经被挤出注意力了。 +到了 `s02`,agent 已经会读文件、写文件、跑命令。 -## 解决方案 +问题也马上出现了: +- 多步任务容易走一步忘一步 +- 明明已经做过的检查,会重复再做 +- 一口气列出很多步骤后,很快又回到即兴发挥 + +这是因为模型虽然“能想”,但它的当前注意力始终受上下文影响。 +如果没有一块**显式、稳定、可反复更新**的计划状态,大任务就很容易漂。 + +所以这一章要补上的,不是“更强的工具”,而是: + +**让 agent 把当前会话里的计划外显出来,并且持续更新。** + +## 先解释几个名词 + +### 什么是会话内规划 + +这里说的规划,不是长期项目管理,也不是磁盘上的任务系统。 + +它更像: + +> 为了完成当前这次请求,先把接下来几步写出来,并在过程中不断更新。 + +### 什么是 todo + +`todo` 在这一章里只是一个载体。 + +你不要把它理解成“某个特定产品里的某个工具名”,更应该把它理解成: + +> 模型用来写入当前计划的一条入口。 + +### 什么是 
active step + +`active step` 可以理解成“当前正在做的那一步”。 + +教学版里我们用 `in_progress` 表示它。 +这么做的目的不是形式主义,而是帮助模型维持焦点: + +> 同一时间,先把一件事做完,再进入下一件。 + +### 什么是提醒 + +提醒不是替模型规划,而是当它连续几轮都忘记更新计划时,轻轻拉它回来。 + +## 先立清边界:这章不是任务系统 + +这是这一章最重要的边界。 + +`s03` 讲的是: + +- 当前会话里的轻量计划 +- 用来帮助模型聚焦下一步 +- 可以随任务推进不断改写 + +它**不是**: + +- 持久化任务板 +- 依赖图 +- 多 agent 共用的工作图 +- 后台运行时任务管理 + +这些会在 `s12-s14` 再系统展开。 + +如果你现在就把 `s03` 讲成完整任务平台,初学者会很快混淆: + +- “当前这一步要做什么” +- “整个系统长期还有哪些工作项” + +## 最小心智模型 + +把这一章先想成一个很简单的结构: + +```text +用户提出大任务 + | + v +模型先写一份当前计划 + | + v +计划状态 + - [ ] 还没做 + - [>] 正在做 + - [x] 已完成 + | + v +每做完一步,就更新计划 ``` -+--------+ +-------+ +---------+ -| User | ---> | LLM | ---> | Tools | -| prompt | | | | + todo | -+--------+ +---+---+ +----+----+ - ^ | - | tool_result | - +----------------+ - | - +-----------+-----------+ - | TodoManager state | - | [ ] task A | - | [>] task B <- doing | - | [x] task C | - +-----------------------+ - | - if rounds_since_todo >= 3: - inject into tool_result + +更具体一点: + +```text +1. 先拆几步 +2. 选一项作为当前 active step +3. 做完后标记 completed +4. 把下一项改成 in_progress +5. 如果好几轮没更新,系统提醒一下 +``` + +这就是最小版本最该教清楚的部分。 + +## 关键数据结构 + +### 1. PlanItem + +最小条目可以长这样: + +```python +{ + "content": "Read the failing test", + "status": "pending" | "in_progress" | "completed", + "activeForm": "Reading the failing test", +} ``` -## 工作原理 +这里的字段分别表示: -1. TodoManager 存储带状态的项目。同一时间只允许一个 `in_progress`。 +- `content`:这一步要做什么 +- `status`:这一步现在处在什么状态 +- `activeForm`:当它正在进行中时,可以用更自然的进行时描述 + +### 2. PlanningState + +除了计划条目本身,还应该有一点最小运行状态: + +```python +{ + "items": [...], + "rounds_since_update": 0, +} +``` + +`rounds_since_update` 的意思很简单: + +> 连续多少轮过去了,模型还没有更新这份计划。 + +### 3. 
状态约束 + +教学版推荐先立一条简单规则: + +```text +同一时间,最多一个 in_progress +``` + +这不是宇宙真理。 +它只是一个非常适合初学者的教学约束: + +**强制模型聚焦当前一步。** + +## 最小实现 + +### 第一步:准备一个计划管理器 ```python class TodoManager: - def update(self, items: list) -> str: - validated, in_progress_count = [], 0 - for item in items: - status = item.get("status", "pending") - if status == "in_progress": - in_progress_count += 1 - validated.append({"id": item["id"], "text": item["text"], - "status": status}) - if in_progress_count > 1: - raise ValueError("Only one task can be in_progress") - self.items = validated - return self.render() + def __init__(self): + self.items = [] +``` + +### 第二步:允许模型整体更新当前计划 + +```python +def update(self, items: list) -> str: + validated = [] + in_progress_count = 0 + + for item in items: + status = item.get("status", "pending") + if status == "in_progress": + in_progress_count += 1 + validated.append({ + "content": item["content"], + "status": status, + "activeForm": item.get("activeForm", ""), + }) + + if in_progress_count > 1: + raise ValueError("Only one item can be in_progress") + + self.items = validated + return self.render() +``` + +教学版让模型“整份重写”当前计划,比做一堆局部增删改更容易理解。 + +### 第三步:把计划渲染成可读文本 + +```python +def render(self) -> str: + lines = [] + for item in self.items: + marker = { + "pending": "[ ]", + "in_progress": "[>]", + "completed": "[x]", + }[item["status"]] + lines.append(f"{marker} {item['content']}") + return "\n".join(lines) ``` -2. `todo` 工具和其他工具一样加入 dispatch map。 +### 第四步:把 `todo` 接成一个工具 ```python TOOL_HANDLERS = { - # ...base tools... + "read_file": run_read, + "write_file": run_write, + "edit_file": run_edit, + "bash": run_bash, "todo": lambda **kw: TODO.update(kw["items"]), } ``` -3. 
nag reminder: 模型连续 3 轮以上不调用 `todo` 时注入提醒。 +### 第五步:如果连续几轮没更新计划,就提醒 ```python -if rounds_since_todo >= 3 and messages: - last = messages[-1] - if last["role"] == "user" and isinstance(last.get("content"), list): - last["content"].insert(0, { - "type": "text", - "text": "Update your todos.", - }) +if rounds_since_update >= 3: + results.insert(0, { + "type": "text", + "text": "Refresh your plan before continuing.", + }) ``` -"同时只能有一个 in_progress" 强制顺序聚焦。nag reminder 制造问责压力 -- 你不更新计划, 系统就追着你问。 +这一步的核心意义不是“催促”本身,而是: + +> 系统开始把“计划状态是否失活”也看成主循环的一部分。 + +## 它如何接到主循环里 + +这一章以后,主循环不再只维护: -## 相对 s02 的变更 +- `messages` -| 组件 | 之前 (s02) | 之后 (s03) | -|----------------|------------------|--------------------------------| -| Tools | 4 | 5 (+todo) | -| 规划 | 无 | 带状态的 TodoManager | -| Nag 注入 | 无 | 3 轮后注入 `` | -| Agent loop | 简单分发 | + rounds_since_todo 计数器 | +还开始维护一份额外的会话状态: -## 试一试 +- `PlanningState` -```sh -cd learn-claude-code -python agents/s03_todo_write.py +也就是说,agent loop 现在不只是在“对话”。 + +它还在维持一块当前工作面板: + +```text +messages -> 模型看到的历史 +planning state -> 当前计划的显式外部状态 ``` -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): +这就是这一章真正想让你学会的升级: + +**把“当前要做什么”从模型脑内,移到系统可观察的状态里。** + +## 为什么这章故意不讲成任务图 + +因为这里的重点是: + +- 帮模型聚焦下一步 +- 让当前进度变得外显 +- 给主循环一个“过程性状态” + +而不是: + +- 任务依赖 +- 长期持久化 +- 多人协作任务板 +- 后台运行槽位 + +如果你已经开始关心这些问题,说明你快进入: + +- [`s12-task-system.md`](./s12-task-system.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) + +## 初学者最容易犯的错 + +### 1. 把计划写得过长 + +计划不是越多越好。 + +如果一上来列十几步,模型很快就会失去维护意愿。 + +### 2. 不区分“当前一步”和“未来几步” + +如果同时有很多个 `in_progress`,焦点就会散。 + +### 3. 把会话计划当成长期任务系统 + +这会让 `s03` 和 `s12` 的边界完全混掉。 + +### 4. 只在开始时写一次计划,后面从不更新 + +那这份计划就失去价值了。 + +### 5. 
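上面的提醒机制本身也可以写成一段可独立运行的小示意。阈值 3、提示文案、注入位置都沿用本章教学约定,属于可以随意调整的假设:

```python
# 教学示意: 跟踪计划失活轮数, 超过阈值时注入提醒

class PlanningState:
    def __init__(self):
        self.rounds_since_update = 0

def on_tool_round(state, called_todo, results):
    # 每一轮结束时调用: 维护计数, 必要时在结果最前面插入提醒
    if called_todo:
        state.rounds_since_update = 0
    else:
        state.rounds_since_update += 1
    if state.rounds_since_update >= 3:
        results.insert(0, {
            "type": "text",
            "text": "Refresh your plan before continuing.",
        })
        state.rounds_since_update = 0  # 提醒过一次就重新计数
    return results

state = PlanningState()
injected = []
for turn in range(6):
    results = on_tool_round(state, called_todo=False, results=[])
    injected.append(len(results) == 1)
```

连续 6 轮都不更新计划时,提醒会在第 3、第 6 轮出现:系统不是每轮催促,而是只在计划确实失活时拉一下。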
以为 reminder 是可有可无的小装饰 + +不是。 + +提醒机制说明了一件很重要的事: + +> 主循环不仅要执行动作,还要维护动作过程中的结构化状态。 + +## 教学边界 + +这一章讲的是: + +**会话里的外显计划状态。** + +它还不是后面那种持久任务系统,所以边界要守住: + +- 这里的 `todo` 只服务当前会话,不负责跨阶段持久化 +- `{id, text, status}` 这种小结构已经够教会核心模式 +- reminder 直接一点没问题,重点是让模型持续更新计划 + +这一章真正要让读者看见的是: + +**当计划进入结构化状态,而不是散在自然语言里时,agent 的漂移会明显减少。** + +## 一句话记住 -1. `Refactor the file hello.py: add type hints, docstrings, and a main guard` -2. `Create a Python package with __init__.py, utils.py, and tests/test_utils.py` -3. `Review all Python files and fix any style issues` +**`s03` 的 todo,不是任务平台,而是当前会话里的“外显计划状态”。** diff --git a/docs/zh/s04-subagent.md b/docs/zh/s04-subagent.md index 708be1f60..b215a37b6 100644 --- a/docs/zh/s04-subagent.md +++ b/docs/zh/s04-subagent.md @@ -1,96 +1,306 @@ -# s04: Subagents (Subagent) +# s04: Subagents (子智能体) -`s01 > s02 > s03 > [ s04 ] s05 > s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > s02 > s03 > [ s04 ] > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"大任务拆小, 每个小任务干净的上下文"* -- Subagent 用独立 messages[], 不污染主对话。 -> -> **Harness 层**: 上下文隔离 -- 守护模型的思维清晰度。 +> *一个大任务,不一定要塞进一个上下文里做完。* -## 问题 +## 这一章到底要解决什么问题 -Agent 工作越久, messages 数组越臃肿。每次读文件、跑命令的输出都永久留在上下文里。"这个项目用什么测试框架?" 可能要读 5 个文件, 但父 Agent 只需要一个词: "pytest。" +当 agent 连续做很多事时,`messages` 会越来越长。 -## 解决方案 +比如用户只问: +> “这个项目用什么测试框架?” + +但 agent 可能为了回答这个问题: + +- 读了 `pyproject.toml` +- 读了 `requirements.txt` +- 搜了 `pytest` +- 跑了测试命令 + +真正有价值的最终答案,可能只有一句话: + +> “这个项目主要用 `pytest`。” + +如果这些中间过程都永久堆在父对话里,后面的问题会越来越难回答,因为上下文被大量局部任务的噪声填满了。 + +这就是子智能体要解决的问题: + +**把局部任务放进独立上下文里做,做完只把必要结果带回来。** + +## 先解释几个名词 + +### 什么是“父智能体” + +当前正在和用户对话、持有主 `messages` 的 agent,就是父智能体。 + +### 什么是“子智能体” + +父智能体临时派生出来,专门处理某个子任务的 agent,就是子智能体。 + +### 什么叫“上下文隔离” + +意思是: + +- 父智能体有自己的 `messages` +- 子智能体也有自己的 `messages` +- 子智能体的中间过程不会自动写回父智能体 + +## 最小心智模型 + +```text +Parent agent + | + | 1. 决定把一个局部任务外包出去 + v +Subagent + | + | 2. 在自己的上下文里读文件 / 搜索 / 执行工具 + v +Summary + | + | 3. 
只把最终摘要或结果带回父智能体 + v +Parent agent continues ``` -Parent agent Subagent -+------------------+ +------------------+ -| messages=[...] | | messages=[] | <-- fresh -| | dispatch | | -| tool: task | ----------> | while tool_use: | -| prompt="..." | | call tools | -| | summary | append results | -| result = "..." | <---------- | return last text | -+------------------+ +------------------+ - -Parent context stays clean. Subagent context is discarded. -``` -## 工作原理 +最重要的点只有一个: + +**子智能体的价值,不是“多一个模型实例”本身,而是“多一个干净上下文”。** + +## 最小实现长什么样 -1. 父 Agent 有一个 `task` 工具。Subagent 拥有除 `task` 外的所有基础工具 (禁止递归生成)。 +### 第一步:给父智能体一个 `task` 工具 + +父智能体需要一个工具,让模型可以主动说: + +> “这个子任务我想交给一个独立上下文去做。” + +最小 schema 可以非常简单: ```python -PARENT_TOOLS = CHILD_TOOLS + [ - {"name": "task", - "description": "Spawn a subagent with fresh context.", - "input_schema": { - "type": "object", - "properties": {"prompt": {"type": "string"}}, - "required": ["prompt"], - }}, -] +{ + "name": "task", + "description": "Run a subtask in a clean context and return a summary.", + "input_schema": { + "type": "object", + "properties": { + "prompt": {"type": "string"} + }, + "required": ["prompt"] + } +} ``` -2. 
Subagent 以 `messages=[]` 启动, 运行自己的循环。只有最终文本返回给父 Agent。 +### 第二步:子智能体使用自己的消息列表 ```python def run_subagent(prompt: str) -> str: sub_messages = [{"role": "user", "content": prompt}] - for _ in range(30): # safety limit - response = client.messages.create( - model=MODEL, system=SUBAGENT_SYSTEM, - messages=sub_messages, - tools=CHILD_TOOLS, max_tokens=8000, - ) - sub_messages.append({"role": "assistant", - "content": response.content}) - if response.stop_reason != "tool_use": - break - results = [] - for block in response.content: - if block.type == "tool_use": - handler = TOOL_HANDLERS.get(block.name) - output = handler(**block.input) - results.append({"type": "tool_result", - "tool_use_id": block.id, - "content": str(output)[:50000]}) - sub_messages.append({"role": "user", "content": results}) - return "".join( - b.text for b in response.content if hasattr(b, "text") - ) or "(no summary)" + ... ``` -Subagent 可能跑了 30+ 次工具调用, 但整个消息历史直接丢弃。父 Agent 收到的只是一段摘要文本, 作为普通 `tool_result` 返回。 +这就是隔离的关键。 + +不是共享父智能体的 `messages`,而是从一份新的列表开始。 + +### 第三步:子智能体只拿必要工具 + +子智能体通常不需要拥有和父智能体完全一样的能力。 + +最小版本里,常见做法是: + +- 给它文件读取、搜索、bash 之类的基础工具 +- 不给它继续派生子智能体的能力 + +这样可以防止它无限递归。 + +### 第四步:只把结果带回父智能体 + +子智能体做完事后,不把全部内部历史写回去,而是返回一段总结。 + +```python +return { + "type": "tool_result", + "tool_use_id": block.id, + "content": summary_text, +} +``` + +## 这一章最关键的数据结构 + +如果你只记一个结构,就记这个: + +```python +class SubagentContext: + messages: list + tools: list + handlers: dict + max_turns: int +``` + +解释一下: + +- `messages`:子智能体自己的上下文 +- `tools`:子智能体可以调用哪些工具 +- `handlers`:这些工具到底对应哪些 Python 函数 +- `max_turns`:防止子智能体无限跑 + +这就是最小子智能体的骨架。 + +## 为什么它真的有用 + +### 用处 1:给父上下文减负 + +局部任务的中间噪声不会全都留在主对话里。 + +### 用处 2:让任务描述更清楚 + +一个子智能体接到的 prompt 可以非常聚焦: + +- “读完这几个文件,给我一句总结” +- “检查这个目录里有没有测试” +- “对这个函数写一个最小修复” + +### 用处 3:让后面的多 agent 协作有基础 + +你可以把子智能体理解成多 agent 系统的最小起点。 + +先把一次性子任务外包做明白,后面再升级到长期 teammate、任务认领、团队协议,会顺很多。 -## 相对 s03 的变更 +## 从 0 到 1 的实现顺序 -| 组件 | 之前 (s03) | 之后 (s04) | 
-|----------------|------------------|-------------------------------| -| Tools | 5 | 5 (基础) + task (仅父端) | -| 上下文 | 单一共享 | 父 + 子隔离 | -| Subagent | 无 | `run_subagent()` 函数 | -| 返回值 | 不适用 | 仅摘要文本 | +推荐按这个顺序写: -## 试一试 +### 版本 1:空白上下文子智能体 -```sh -cd learn-claude-code -python agents/s04_subagent.py +先只实现: + +- 一个 `task` 工具 +- 一个 `run_subagent(prompt)` 函数 +- 子智能体自己的 `messages` +- 子智能体最后返回摘要 + +这已经够了。 + +### 版本 2:限制工具集 + +给子智能体一个更小、更安全的工具集。 + +比如: + +- 允许 `read_file` +- 允许 `grep` +- 允许只读 bash +- 不允许 `task` + +### 版本 3:加入最大轮数和失败保护 + +至少补两个保护: + +- 最多跑多少轮 +- 工具出错时怎么退出 + +### 版本 4:再考虑 fork + +只有当你已经稳定跑通前面三步,才考虑 fork。 + +## 什么是 fork,为什么它是“下一步”,不是“起步” + +前面的最小实现是: + +- 子智能体从空白上下文开始 + +这叫最朴素的子智能体。 + +但有时一个子任务必须知道父智能体之前在聊什么。 + +例如: + +> “基于我们刚才已经讨论出来的方案,去补测试。” + +这时可以用 `fork`: + +- 不是从空白 `messages` 开始 +- 而是先复制父智能体的已有上下文,再追加子任务 prompt + +```python +sub_messages = list(parent_messages) +sub_messages.append({"role": "user", "content": prompt}) ``` -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): +这就是 fork 的本质: + +**继承上下文,而不是重头开始。** + +## 初学者最容易踩的坑 + +### 坑 1:把子智能体当成“为了炫技的并发” + +子智能体首先是为了解决上下文问题,不是为了展示“我有很多 agent”。 + +### 坑 2:把父历史全部原样灌回去 + +如果你最后又把子智能体全量历史粘回父对话,那隔离价值就几乎没了。 + +### 坑 3:一上来就做特别复杂的角色系统 + +比如一开始就加: + +- explorer +- reviewer +- planner +- tester +- implementer + +这些都可以做,但不应该先做。 + +先把“一个干净上下文的子任务执行器”做对,后面角色化只是在它上面再包一层。 + +### 坑 4:忘记给子智能体设置停止条件 + +如果没有: + +- 最大轮数 +- 异常处理 +- 工具过滤 + +子智能体很容易无限转。 + +## 教学边界 + +这章要先打牢的,不是“多 agent 很高级”,而是: + +**子智能体首先是一个上下文边界。** + +所以教学版先停在这里就够了: + +- 一次性子任务就够 +- 摘要返回就够 +- 新 `messages` + 工具过滤就够 + +不要提前把 `fork`、后台运行、transcript 持久化、worktree 绑定一起塞进来。 + +真正该守住的顺序仍然是: + +**先做隔离,再做高级化。** + +## 和后续章节的关系 + +- `s04` 解决的是“局部任务的上下文隔离” +- `s15-s17` 解决的是“多个长期角色如何协作” +- `s18` 解决的是“多个执行者如何在文件系统层面隔离” + +它们不是重复关系,而是递进关系。 + +## 这一章学完后,你应该能回答 + +- 为什么大任务不应该总塞在一个 `messages` 里? +- 子智能体最小版为什么只需要独立上下文和摘要返回? +- fork 是什么,为什么它不该成为第一步? +- 为什么子智能体的第一价值是“减噪”,而不是“炫多 agent”? + +--- -1. `Use a subtask to find what testing framework this project uses` -2. 
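fork 和空白上下文的差别,用一个不依赖任何 API 的小例子就能说明白。这里只演示消息列表层面的语义,不涉及真实的模型调用:

```python
# 教学示意: 空白子上下文 vs fork 子上下文

def fresh_context(prompt):
    # 最朴素的子智能体: 从空白 messages 开始
    return [{"role": "user", "content": prompt}]

def fork_context(parent_messages, prompt):
    # fork: 先复制父上下文, 再追加子任务 prompt
    sub = list(parent_messages)
    sub.append({"role": "user", "content": prompt})
    return sub

parent = [
    {"role": "user", "content": "We decided to use pytest."},
    {"role": "assistant", "content": "Noted: pytest it is."},
]

a = fresh_context("Add tests for greet().")
b = fork_context(parent, "Add tests for greet().")

# fork 出来的子上下文能看到父对话; 空白版看不到。
# list(parent_messages) 复制的是列表本身: 子方追加消息不会写回父列表
b.append({"role": "assistant", "content": "(sub work...)"})
```

这个例子同时展示了 fork 的两个关键性质:继承父对话,以及子方后续追加不污染父历史。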
`Delegate: read all .py files and summarize what each one does` -3. `Use a task to create a new module, then verify it from here` +**一句话记住:子智能体的核心,不是多一个角色,而是多一个干净上下文。** diff --git a/docs/zh/s05-skill-loading.md b/docs/zh/s05-skill-loading.md index 29790d4bd..726ea29bd 100644 --- a/docs/zh/s05-skill-loading.md +++ b/docs/zh/s05-skill-loading.md @@ -1,110 +1,309 @@ -# s05: Skills (Skill 加载) +# s05: Skills (按需知识加载) -`s01 > s02 > s03 > s04 > [ s05 ] s06 | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > s02 > s03 > s04 > [ s05 ] > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"用到什么知识, 临时加载什么知识"* -- 通过 tool_result 注入, 不塞 system prompt。 -> -> **Harness 层**: 按需知识 -- 模型开口要时才给的领域专长。 +> *不是把所有知识永远塞进 prompt,而是在需要的时候再加载正确那一份。* -## 问题 +## 这一章要解决什么问题 -你希望 Agent 遵循特定领域的工作流: git 约定、测试模式、代码审查清单。全塞进系统提示太浪费 -- 10 个 Skill, 每个 2000 token, 就是 20,000 token, 大部分跟当前任务毫无关系。 +到了 `s04`,你的 agent 已经会: -## 解决方案 +- 调工具 +- 做会话内规划 +- 把大任务分给子 agent +接下来很自然会遇到另一个问题: + +> 不同任务需要的领域知识不一样。 + +例如: + +- 做代码审查,需要一套审查清单 +- 做 Git 操作,需要一套提交约定 +- 做 MCP 集成,需要一套专门步骤 + +如果你把这些知识包全部塞进 system prompt,就会出现两个问题: + +1. 大部分 token 都浪费在当前用不到的说明上 +2. prompt 越来越臃肿,主线规则越来越不清楚 + +所以这一章真正要做的是: + +**把“长期可选知识”从 system prompt 主体里拆出来,改成按需加载。** + +## 先解释几个名词 + +### 什么是 skill + +这里的 `skill` 可以先简单理解成: + +> 一份围绕某类任务的可复用说明书。 + +它通常会告诉 agent: + +- 什么时候该用它 +- 做这类任务时有哪些步骤 +- 有哪些注意事项 + +### 什么是 discovery + +`discovery` 指“发现有哪些 skill 可用”。 + +这一层只需要很轻量的信息,例如: + +- skill 名字 +- 一句描述 + +### 什么是 loading + +`loading` 指“把某个 skill 的完整正文真正读进来”。 + +这一层才是昂贵的,因为它会把完整内容放进当前上下文。 + +## 最小心智模型 + +把这一章先理解成两层: + +```text +第 1 层:轻量目录 + - skill 名称 + - skill 描述 + - 让模型知道“有哪些可用” + +第 2 层:按需正文 + - 只有模型真正需要时才加载 + - 通过工具结果注入当前上下文 ``` -System prompt (Layer 1 -- always present): -+--------------------------------------+ -| You are a coding agent. 
| -| Skills available: | -| - git: Git workflow helpers | ~100 tokens/skill -| - test: Testing best practices | -+--------------------------------------+ - -When model calls load_skill("git"): -+--------------------------------------+ -| tool_result (Layer 2 -- on demand): | -| | -| Full git workflow instructions... | ~2000 tokens -| Step 1: ... | -| | -+--------------------------------------+ + +可以画成这样: + +```text +system prompt + | + +-- Skills available: + - code-review: review checklist + - git-workflow: branch and commit guidance + - mcp-builder: build an MCP server +``` + +当模型判断自己需要某份知识时: + +```text +load_skill("code-review") + | + v +tool_result + | + v + +完整审查说明 + +``` + +这就是这一章最核心的设计。 + +## 关键数据结构 + +### 1. SkillManifest + +先准备一份很轻的元信息: + +```python +{ + "name": "code-review", + "description": "Checklist for reviewing code changes", +} ``` -第一层: 系统提示中放 Skill 名称 (低成本)。第二层: tool_result 中按需放完整内容。 +它的作用只是让模型知道: -## 工作原理 +> 这份 skill 存在,并且大概是干什么的。 -1. 每个 Skill 是一个目录, 包含 `SKILL.md` 文件和 YAML frontmatter。 +### 2. SkillDocument +真正被加载时,再读取完整内容: + +```python +{ + "manifest": {...}, + "body": "... full skill text ...", +} +``` + +### 3. SkillRegistry + +你最好不要把 skill 散着读取。 + +更清楚的方式是做一个统一注册表: + +```python +registry = { + "code-review": SkillDocument(...), + "git-workflow": SkillDocument(...), +} ``` + +它至少要能回答两个问题: + +1. 有哪些 skill 可用 +2. 某个 skill 的完整内容是什么 + +## 最小实现 + +### 第一步:把每个 skill 放成一个目录 + +最小结构可以这样: + +```text skills/ - pdf/ - SKILL.md # ---\n name: pdf\n description: Process PDF files\n ---\n ... code-review/ - SKILL.md # ---\n name: code-review\n description: Review code\n ---\n ... + SKILL.md + git-workflow/ + SKILL.md ``` -2. 
SkillLoader 递归扫描 `SKILL.md` 文件, 用目录名作为 Skill 标识。 +### 第二步:从 `SKILL.md` 里读取最小元信息 ```python -class SkillLoader: - def __init__(self, skills_dir: Path): +class SkillRegistry: + def __init__(self, skills_dir): self.skills = {} - for f in sorted(skills_dir.rglob("SKILL.md")): - text = f.read_text() - meta, body = self._parse_frontmatter(text) - name = meta.get("name", f.parent.name) - self.skills[name] = {"meta": meta, "body": body} - - def get_descriptions(self) -> str: - lines = [] - for name, skill in self.skills.items(): - desc = skill["meta"].get("description", "") - lines.append(f" - {name}: {desc}") - return "\n".join(lines) - - def get_content(self, name: str) -> str: - skill = self.skills.get(name) - if not skill: - return f"Error: Unknown skill '{name}'." - return f"\n{skill['body']}\n" + self._load_all() + + def _load_all(self): + for path in skills_dir.rglob("SKILL.md"): + meta, body = parse_frontmatter(path.read_text()) + name = meta.get("name", path.parent.name) + self.skills[name] = { + "manifest": { + "name": name, + "description": meta.get("description", ""), + }, + "body": body, + } ``` -3. 第一层写入系统提示。第二层不过是 dispatch map 中的又一个工具。 +这里的 `frontmatter` 你可以先简单理解成: + +> 放在正文前面的一小段结构化元数据。 + +### 第三步:把 skill 目录放进 system prompt ```python -SYSTEM = f"""You are a coding agent at {WORKDIR}. +SYSTEM = f"""You are a coding agent. Skills available: -{SKILL_LOADER.get_descriptions()}""" +{SKILL_REGISTRY.describe_available()} +""" +``` + +注意这里放的是**目录信息**,不是完整正文。 +### 第四步:提供一个 `load_skill` 工具 + +```python TOOL_HANDLERS = { - # ...base tools... 
- "load_skill": lambda **kw: SKILL_LOADER.get_content(kw["name"]), + "load_skill": lambda **kw: SKILL_REGISTRY.load_full_text(kw["name"]), } ``` -模型知道有哪些 Skill (便宜), 需要时再加载完整内容 (贵)。 +当模型调用它时,把完整 skill 正文作为 `tool_result` 返回。 + +### 第五步:让 skill 正文只在当前需要时进入上下文 + +这一步的核心思想就是: + +> 平时只展示“有哪些知识包”,真正工作时才把那一包展开。 + +## skill、memory、CLAUDE.md 的边界 + +这三个概念很容易混。 + +### skill + +可选知识包。 +只有在某类任务需要时才加载。 + +### memory + +跨会话仍然有价值的信息。 +它是系统记住的东西,不是任务手册。 + +### CLAUDE.md + +更稳定、更长期的规则说明。 +它通常比单个 skill 更“全局”。 + +一个简单判断法: + +- 这是某类任务才需要的做法或知识:`skill` +- 这是需要长期记住的事实或偏好:`memory` +- 这是更稳定的全局规则:`CLAUDE.md` -## 相对 s04 的变更 +## 它如何接到主循环里 -| 组件 | 之前 (s04) | 之后 (s05) | -|----------------|------------------|--------------------------------| -| Tools | 5 (基础 + task) | 5 (基础 + load_skill) | -| 系统提示 | 静态字符串 | + Skill 描述列表 | -| 知识库 | 无 | skills/\*/SKILL.md 文件 | -| 注入方式 | 无 | 两层 (系统提示 + result) | +这一章以后,system prompt 不再只是一段固定身份说明。 -## 试一试 +它开始长出一个很重要的新段落: -```sh -cd learn-claude-code -python agents/s05_skill_loading.py +- 可用技能目录 + +而消息流里则会出现新的按需注入内容: + +- 某个 skill 的完整正文 + +也就是说,系统输入现在开始分成两层: + +```text +稳定层: + 身份、规则、工具、skill 目录 + +按需层: + 当前真的加载进来的 skill 正文 ``` -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): +这也是 `s10` 会继续系统化展开的东西。 + +## 初学者最容易犯的错 + +### 1. 把所有 skill 正文永远塞进 system prompt + +这样会让 prompt 很快臃肿到难以维护。 + +### 2. skill 目录信息写得太弱 + +如果只有名字,没有描述,模型就不知道什么时候该加载它。 + +### 3. 把 skill 当成“绝对规则” + +skill 更像“可选工作手册”,不是所有轮次都必须用。 + +### 4. 把 skill 和 memory 混成一类 + +skill 解决的是“怎么做一类事”,memory 解决的是“记住长期事实”。 + +### 5. 一上来就讲太多多源加载细节 + +教学主线真正要先讲清的是: + +**轻量发现,重内容按需加载。** + +## 教学边界 + +这章只要先守住两层就够了: + +- 轻量发现:先告诉模型有哪些 skill +- 按需深加载:真正需要时再把正文放进输入 + +所以这里不用提前扩到: + +- 多来源收集 +- 条件激活 +- skill 参数化 +- fork 式执行 +- 更复杂的 prompt 管道拼装 + +如果读者已经明白“为什么不能把所有 skill 永远塞进 system prompt,而应该先列目录、再按需加载”,这章就已经讲到位了。 + +## 一句话记住 -1. `What skills are available?` -2. `Load the agent-builder skill and follow its instructions` -3. `I need to do a code review -- load the relevant skill first` -4. 
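正文里 `SkillRegistry` 依赖的 `parse_frontmatter` 没有展开。下面给一个十几行的朴素占位版本:只处理最简单的 `---` 包裹 + `key: value` 形式,完整 YAML 解析不在本章范围,生产里应换成真正的 YAML 库:

```python
# 教学示意: 朴素 frontmatter 解析, 只支持 "---\nkey: value\n---\n正文"

def parse_frontmatter(text):
    meta = {}
    body = text
    if text.startswith("---"):
        head, sep, rest = text[3:].partition("\n---")
        if sep:  # 找到了闭合的 "---"
            for line in head.strip().splitlines():
                if ":" in line:
                    key, _, value = line.partition(":")
                    meta[key.strip()] = value.strip()
            body = rest.lstrip("\n")
    return meta, body

doc = """---
name: code-review
description: Checklist for reviewing code changes
---
Step 1: read the diff.
"""
meta, body = parse_frontmatter(doc)
```

没有 frontmatter 的文件会原样落入 `body`,`meta` 为空字典,这正好配合 `name = meta.get("name", path.parent.name)` 的目录名兜底。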
`Build an MCP server using the mcp-builder skill` +**Skill 系统的核心,不是“多一个工具”,而是“把可选知识从常驻 prompt 里拆出来,改成按需加载”。** diff --git a/docs/zh/s06-context-compact.md b/docs/zh/s06-context-compact.md index 40108e2ed..95bb1f1ec 100644 --- a/docs/zh/s06-context-compact.md +++ b/docs/zh/s06-context-compact.md @@ -1,126 +1,330 @@ # s06: Context Compact (上下文压缩) -`s01 > s02 > s03 > s04 > s05 > [ s06 ] | s07 > s08 > s09 > s10 > s11 > s12` +`s00 > s01 > s02 > s03 > s04 > s05 > [ s06 ] > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` -> *"上下文总会满, 要有办法腾地方"* -- 三层压缩策略, 换来无限会话。 -> -> **Harness 层**: 压缩 -- 干净的记忆, 无限的会话。 +> *上下文不是越多越好,而是要把“仍然有用的部分”留在活跃工作面里。* -## 问题 +## 这一章要解决什么问题 -上下文窗口是有限的。读一个 1000 行的文件就吃掉 ~4000 token; 读 30 个文件、跑 20 条命令, 轻松突破 100k token。不压缩, Agent 根本没法在大项目里干活。 +到了 `s05`,agent 已经会: -## 解决方案 +- 读写文件 +- 规划步骤 +- 派子 agent +- 按需加载 skill -三层压缩, 激进程度递增: +也正因为它会做的事情更多了,上下文会越来越快膨胀: +- 读一个大文件,会塞进很多文本 +- 跑一条长命令,会得到大段输出 +- 多轮任务推进后,旧结果会越来越多 + +如果没有压缩机制,很快就会出现这些问题: + +1. 模型注意力被旧结果淹没 +2. API 请求越来越重,越来越贵 +3. 
最终直接撞上上下文上限,任务中断 + +所以这一章真正要解决的是: + +**怎样在不丢掉主线连续性的前提下,把活跃上下文重新腾出空间。** + +## 先解释几个名词 + +### 什么是上下文窗口 + +你可以把上下文窗口理解成: + +> 模型这一轮真正能一起看到的输入容量。 + +它不是无限的。 + +### 什么是活跃上下文 + +并不是历史上出现过的所有内容,都必须一直留在窗口里。 + +活跃上下文更像: + +> 当前这几轮继续工作时,最值得模型马上看到的那一部分。 + +### 什么是压缩 + +这里的压缩,不是 ZIP 压缩文件。 + +它的意思是: + +> 用更短的表示方式,保留继续工作真正需要的信息。 + +例如: + +- 大输出只保留预览,全文写到磁盘 +- 很久以前的工具结果改成占位提示 +- 整段长历史总结成一份摘要 + +## 最小心智模型 + +这一章建议你先记三层,不要一上来记八层十层: + +```text +第 1 层:大结果不直接塞进上下文 + -> 写到磁盘,只留预览 + +第 2 层:旧结果不一直原样保留 + -> 替换成简短占位 + +第 3 层:整体历史太长时 + -> 生成一份连续性摘要 +``` + +可以画成这样: + +```text +tool output + | + +-- 太大 -----------------> 保存到磁盘 + 留预览 + | + v +messages + | + +-- 太旧 -----------------> 替换成占位提示 + | + v +if whole context still too large: + | + v +compact history -> summary ``` -Every turn: -+------------------+ -| Tool call result | -+------------------+ - | - v -[Layer 1: micro_compact] (silent, every turn) - Replace tool_result > 3 turns old - with "[Previous: used {tool_name}]" - | - v -[Check: tokens > 50000?] - | | - no yes - | | - v v -continue [Layer 2: auto_compact] - Save transcript to .transcripts/ - LLM summarizes conversation. - Replace all messages with [summary]. - | - v - [Layer 3: compact tool] - Model calls compact explicitly. - Same summarization as auto_compact. + +手动触发 `/compact` 或 `compact` 工具,本质上也是走第 3 层。 + +## 关键数据结构 + +### 1. Persisted Output Marker + +当工具输出太大时,不要把全文强塞进当前对话。 + +最小标记可以长这样: + +```text + +Full output saved to: .task_outputs/tool-results/abc123.txt +Preview: +... + +``` + +这个结构表达的是: + +- 全文没有丢 +- 只是搬去了磁盘 +- 当前上下文里只保留一个足够让模型继续判断的预览 + +### 2. CompactState + +最小教学版建议你显式维护一份压缩状态: + +```python +{ + "has_compacted": False, + "last_summary": "", + "recent_files": [], +} +``` + +这里的字段分别表示: + +- `has_compacted`:这一轮之前是否已经做过完整压缩 +- `last_summary`:最近一次压缩得到的摘要 +- `recent_files`:最近碰过哪些文件,压缩后方便继续追踪 + +### 3. 
Micro-Compact Boundary + +教学版可以先设一条简单规则: + +```text +只保留最近 3 个工具结果的完整内容 +更旧的改成占位提示 +``` + +这就已经足够让初学者理解: + +**不是所有历史都要原封不动地一直带着跑。** + +## 最小实现 + +### 第一步:大工具结果先写磁盘 + +```python +def persist_large_output(tool_use_id: str, output: str) -> str: + if len(output) <= PERSIST_THRESHOLD: + return output + + stored_path = save_to_disk(tool_use_id, output) + preview = output[:2000] + return ( + "\n" + f"Full output saved to: {stored_path}\n" + f"Preview:\n{preview}\n" + "" + ) ``` -## 工作原理 +这一步的关键思想是: + +> 让模型知道“发生了什么”,但不强迫它一直背着整份原始大输出。 -1. **第一层 -- micro_compact**: 每次 LLM 调用前, 将旧的 tool result 替换为占位符。 +### 第二步:旧工具结果做微压缩 ```python def micro_compact(messages: list) -> list: - tool_results = [] - for i, msg in enumerate(messages): - if msg["role"] == "user" and isinstance(msg.get("content"), list): - for j, part in enumerate(msg["content"]): - if isinstance(part, dict) and part.get("type") == "tool_result": - tool_results.append((i, j, part)) - if len(tool_results) <= KEEP_RECENT: - return messages - for _, _, part in tool_results[:-KEEP_RECENT]: - if len(part.get("content", "")) > 100: - part["content"] = f"[Previous: used {tool_name}]" + tool_results = collect_tool_results(messages) + for result in tool_results[:-3]: + result["content"] = "[Earlier tool result omitted for brevity]" return messages ``` -2. **第二层 -- auto_compact**: token 超过阈值时, 保存完整对话到磁盘, 让 LLM 做摘要。 +这一步不是为了优雅,而是为了防止上下文被旧结果持续霸占。 + +### 第三步:整体历史过长时,做一次完整压缩 ```python -def auto_compact(messages: list) -> list: - # Save transcript for recovery - transcript_path = TRANSCRIPT_DIR / f"transcript_{int(time.time())}.jsonl" - with open(transcript_path, "w") as f: - for msg in messages: - f.write(json.dumps(msg, default=str) + "\n") - # LLM summarizes - response = client.messages.create( - model=MODEL, - messages=[{"role": "user", "content": - "Summarize this conversation for continuity..." 
- + json.dumps(messages, default=str)[:80000]}], - max_tokens=2000, - ) - return [ - {"role": "user", "content": f"[Compressed]\n\n{response.content[0].text}"}, - ] +def compact_history(messages: list) -> list: + summary = summarize_conversation(messages) + return [{ + "role": "user", + "content": ( + "This conversation was compacted for continuity.\n\n" + + summary + ), + }] ``` -3. **第三层 -- manual compact**: `compact` 工具按需触发同样的摘要机制。 +这里最重要的不是摘要格式多么复杂,而是你要保住这几类信息: + +- 当前目标是什么 +- 已经做了什么 +- 改过哪些文件 +- 还有什么没完成 +- 哪些决定不能丢 -4. 循环整合三层: +### 第四步:在主循环里接入压缩 ```python -def agent_loop(messages: list): +def agent_loop(state): while True: - micro_compact(messages) # Layer 1 - if estimate_tokens(messages) > THRESHOLD: - messages[:] = auto_compact(messages) # Layer 2 - response = client.messages.create(...) - # ... tool execution ... - if manual_compact: - messages[:] = auto_compact(messages) # Layer 3 + state["messages"] = micro_compact(state["messages"]) + + if estimate_context_size(state["messages"]) > CONTEXT_LIMIT: + state["messages"] = compact_history(state["messages"]) + state["has_compacted"] = True + + response = call_model(...) + ... ``` -完整历史通过 transcript 保存在磁盘上。信息没有真正丢失, 只是移出了活跃上下文。 +### 第五步:手动压缩和自动压缩复用同一条机制 + +教学版里,`compact` 工具不需要重新发明另一套逻辑。 + +它只需要表达: + +> 用户或模型现在主动要求执行一次完整压缩。 + +## 压缩后,真正要保住什么 + +这是这章最容易讲虚的地方。 + +压缩不是“把历史缩短”这么简单。 + +真正重要的是: + +**让模型还能继续接着干活。** + +所以一份合格的压缩结果,至少要保住下面这些东西: + +1. 当前任务目标 +2. 已完成的关键动作 +3. 已修改或重点查看过的文件 +4. 关键决定与约束 +5. 
下一步应该做什么 + +如果这些没有保住,那压缩虽然腾出了空间,却打断了工作连续性。 + +## 它如何接到主循环里 + +从这一章开始,主循环不再只是: + +- 收消息 +- 调模型 +- 跑工具 -## 相对 s05 的变更 +它还多了一个很关键的责任: -| 组件 | 之前 (s05) | 之后 (s06) | -|----------------|------------------|--------------------------------| -| Tools | 5 | 5 (基础 + compact) | -| 上下文管理 | 无 | 三层压缩 | -| Micro-compact | 无 | 旧结果 -> 占位符 | -| Auto-compact | 无 | token 阈值触发 | -| Transcripts | 无 | 保存到 .transcripts/ | +- 管理活跃上下文的预算 -## 试一试 +也就是说,agent loop 现在开始同时维护两件事: -```sh -cd learn-claude-code -python agents/s06_context_compact.py +```text +任务推进 +上下文预算 ``` -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): +这一步非常重要,因为后面的很多机制都会和它联动: + +- `s09` memory 决定什么信息值得长期保存 +- `s10` prompt pipeline 决定哪些块应该重新注入 +- `s11` error recovery 会处理压缩不足时的恢复分支 + +## 初学者最容易犯的错 + +### 1. 以为压缩等于删除 + +不是。 + +更准确地说,是把“不必常驻活跃上下文”的内容换一种表示。 + +### 2. 只在撞到上限后才临时乱补 + +更好的做法是从一开始就有三层思路: + +- 大结果先落盘 +- 旧结果先缩短 +- 整体过长再摘要 + +### 3. 摘要只写成一句空话 + +如果摘要没有保住文件、决定、下一步,它对继续工作没有帮助。 + +### 4. 把压缩和 memory 混成一类 + +压缩解决的是: + +- 当前会话太长了怎么办 + +memory 解决的是: + +- 哪些信息跨会话仍然值得保留 + +### 5. 一上来就给初学者讲过多产品化层级 + +教学主线先讲清最小正确模型,比堆很多层名词更重要。 + +## 教学边界 + +这章不要滑成“所有产品化压缩技巧大全”。 + +教学版只需要讲清三件事: + +1. 什么该留在活跃上下文里 +2. 什么该搬到磁盘或占位标记里 +3. 完整压缩后,哪些连续性信息一定不能丢 + +这已经足够建立稳定心智: + +**压缩不是删历史,而是把细节搬走,好让系统继续工作。** + +如果读者已经能用 `persisted output + micro compact + summary compact` 保住长会话连续性,这章就已经够深了。 + +## 一句话记住 -1. `Read every Python file in the agents/ directory one by one` (观察 micro-compact 替换旧结果) -2. `Keep reading files until compression triggers automatically` -3. 
`Use the compact tool to manually compress the conversation` +**上下文压缩的核心,不是尽量少字,而是让模型在更短的活跃上下文里,仍然保住继续工作的连续性。** diff --git a/docs/zh/s07-permission-system.md b/docs/zh/s07-permission-system.md new file mode 100644 index 000000000..dbb97f0d0 --- /dev/null +++ b/docs/zh/s07-permission-system.md @@ -0,0 +1,314 @@ +# s07: Permission System (权限系统) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > [ s07 ] > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *模型可以提出行动建议,但真正执行之前,必须先过安全关。* + +## 这一章的核心目标 + +到了 `s06`,你的 agent 已经能读文件、改文件、跑命令、做规划、压缩上下文。 + +问题也随之出现了: + +- 模型可能会写错文件 +- 模型可能会执行危险命令 +- 模型可能会在不该动手的时候动手 + +所以从这一章开始,系统需要一条新的管道: + +**“意图”不能直接变成“执行”,中间必须经过权限检查。** + +## 建议联读 + +- 如果你开始把“模型提议动作”和“系统真的执行动作”混成一件事,先回 [`s00a-query-control-plane.md`](./s00a-query-control-plane.md),重新确认 query 是怎么进入控制面的。 +- 如果你还没彻底稳住“工具请求为什么不能直接落到 handler”,建议把 [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md) 放在手边一起读。 +- 如果你在 `PermissionRule / PermissionDecision / tool_result` 这几层对象上开始打结,先回 [`data-structures.md`](./data-structures.md),把状态边界重新拆开。 + +## 先解释几个名词 + +### 什么是权限系统 + +权限系统不是“有没有权限”这样一个布尔值。 + +它更像一条管道,用来回答: + +1. 这次调用要不要直接拒绝? +2. 能不能自动放行? +3. 剩下的要不要问用户? + +### 什么是权限模式 + +权限模式是系统当前的总体风格。 + +例如: + +- 谨慎一点:大多数操作都问用户 +- 保守一点:只允许读,不允许写 +- 流畅一点:简单安全的操作自动放行 + +### 什么是规则 + +规则就是“遇到某种工具调用时,该怎么处理”的小条款。 + +最小规则通常包含三部分: + +```python +{ + "tool": "bash", + "content": "sudo *", + "behavior": "deny", +} +``` + +意思是: + +- 针对 `bash` +- 如果命令内容匹配 `sudo *` +- 就拒绝 + +## 最小权限系统应该长什么样 + +如果你是从 0 开始手写,一个最小但正确的权限系统只需要四步: + +```text +tool_call + | + v +1. deny rules -> 命中了就拒绝 + | + v +2. mode check -> 根据当前模式决定 + | + v +3. allow rules -> 命中了就放行 + | + v +4. 
ask user -> 剩下的交给用户确认 +``` + +这四步已经能覆盖教学仓库 80% 的核心需要。 + +## 为什么顺序是这样 + +### 第 1 步先看 deny rules + +因为有些东西不应该交给“模式”去决定。 + +比如: + +- 明显危险的命令 +- 明显越界的路径 + +这些应该优先挡掉。 + +### 第 2 步看 mode + +因为模式决定当前会话的大方向。 + +例如在 `plan` 模式下,系统就应该天然更保守。 + +### 第 3 步看 allow rules + +有些安全、重复、常见的操作可以直接过。 + +比如: + +- 读文件 +- 搜索代码 +- 查看 git 状态 + +### 第 4 步才 ask + +前面都没命中的灰区,才交给用户。 + +## 推荐先实现的 3 种模式 + +不要一上来就做特别多模式。 +先把下面三种做稳: + +| 模式 | 含义 | 适合什么场景 | +|---|---|---| +| `default` | 未命中规则时问用户 | 日常交互 | +| `plan` | 只允许读,不允许写 | 计划、审查、分析 | +| `auto` | 简单安全操作自动过,危险操作再问 | 高流畅度探索 | + +先有这三种,你就已经有了一个可用的权限系统。 + +## 这一章最重要的数据结构 + +### 1. 权限规则 + +```python +PermissionRule = { + "tool": str, + "behavior": "allow" | "deny" | "ask", + "path": str | None, + "content": str | None, +} +``` + +你不一定一开始就需要 `path` 和 `content` 都支持。 +但规则至少要能表达: + +- 针对哪个工具 +- 命中后怎么处理 + +### 2. 权限模式 + +```python +mode = "default" | "plan" | "auto" +``` + +### 3. 权限决策结果 + +```python +{ + "behavior": "allow" | "deny" | "ask", + "reason": "why this decision was made" +} +``` + +这三个结构已经足够搭起最小系统。 + +## 最小实现怎么写 + +```python +def check_permission(tool_name: str, tool_input: dict) -> dict: + # 1. deny rules + for rule in deny_rules: + if matches(rule, tool_name, tool_input): + return {"behavior": "deny", "reason": "matched deny rule"} + + # 2. mode + if mode == "plan" and tool_name in WRITE_TOOLS: + return {"behavior": "deny", "reason": "plan mode blocks writes"} + if mode == "auto" and tool_name in READ_ONLY_TOOLS: + return {"behavior": "allow", "reason": "auto mode allows reads"} + + # 3. allow rules + for rule in allow_rules: + if matches(rule, tool_name, tool_input): + return {"behavior": "allow", "reason": "matched allow rule"} + + # 4. fallback + return {"behavior": "ask", "reason": "needs confirmation"} +``` + +然后在执行工具前接进去: + +```python +decision = perms.check(tool_name, tool_input) + +if decision["behavior"] == "deny": + return f"Permission denied: {decision['reason']}" +if decision["behavior"] == "ask": + ok = ask_user(...) 
+ if not ok: + return "Permission denied by user" + +return handler(**tool_input) +``` + +## Bash 为什么值得单独讲 + +所有工具里,`bash` 通常最危险。 + +因为: + +- `read_file` 只能读文件 +- `write_file` 只能写文件 +- 但 `bash` 几乎能做任何事 + +所以你不能只把 bash 当成一个普通字符串。 + +一个更成熟的系统,通常会把 bash 当成一门小语言来检查。 + +哪怕教学版不做完整语法分析,也建议至少先挡住这些明显危险点: + +- `sudo` +- `rm -rf` +- 命令替换 +- 可疑重定向 +- 明显的 shell 元字符拼接 + +这背后的核心思想只有一句: + +**bash 不是普通文本,而是可执行动作描述。** + +## 初学者怎么把这章做对 + +### 第一步:先做 3 个模式 + +不要一开始就做 6 个模式、10 个来源、复杂 classifier。 + +先稳稳做出: + +- `default` +- `plan` +- `auto` + +### 第二步:先做 deny / allow 两类规则 + +这已经足够表达很多现实需求。 + +### 第三步:给 bash 加最小安全检查 + +哪怕只是模式匹配版,也比完全裸奔好很多。 + +### 第四步:加拒绝计数 + +如果 agent 连续多次被拒绝,说明它可能卡住了。 + +这时可以: + +- 给出提示 +- 建议切到 `plan` +- 让用户重新澄清目标 + +## 教学边界 + +这一章先只讲透一条权限管道就够了: + +- 工具意图先进入权限判断 +- 权限结果只分成 `allow / ask / deny` +- 通过以后才真的执行 + +先把这条主线做稳,比一开始塞进很多模式名、规则来源、写回配置、额外目录、自动分类器都更重要。 + +换句话说,这章要先让读者真正理解的是: + +**任何工具调用,都不应该直接执行;中间必须先过一条权限管道。** + +## 这章不应该讲太多什么 + +为了不打乱初学者心智,这章不应该过早陷入: + +- 企业策略源的全部优先级 +- 非常复杂的自动分类器 +- 产品环境里的所有无头模式细节 +- 某个特定生产代码里的全部 validator 名称 + +这些东西存在,但不属于第一层理解。 + +第一层理解只有一句话: + +**任何工具调用,都不应该直接执行;中间必须先过一条权限管道。** + +## 这一章和后续章节的关系 + +- `s07` 决定“能不能执行” +- `s08` 决定“执行前后还能不能插入额外逻辑” +- `s10` 会把当前模式和权限说明放进 prompt 组装里 + +所以这章是后面很多机制的安全前提。 + +## 学完这章后,你应该能回答 + +- 为什么权限系统不是一个简单开关? +- 为什么 deny 要先于 allow? +- 为什么要先做 3 个模式,而不是一上来做很复杂? +- 为什么 bash 要被特殊对待? 
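上面说的"给 bash 加最小安全检查",哪怕只做模式匹配版,也可以先落成一个很小的草图。下面的函数名和模式列表都只是教学假设,远不是完整清单:

```python
import re

# 教学假设:一个极简的危险模式清单,真实系统需要远比这完整
DANGEROUS_PATTERNS = [
    r"\bsudo\b",            # 提权
    r"\brm\s+-\w*r\w*f",    # rm -rf 及类似变体
    r"\brm\s+-\w*f\w*r",    # rm -fr
    r"\$\(",                # $( ) 命令替换
    r"`",                   # 反引号命令替换
    r">\s*/dev/sd",         # 直接写块设备的可疑重定向
]

def is_dangerous_bash(command: str) -> bool:
    """模式匹配版最小检查:命中任意危险模式,就交给 deny 分支。"""
    return any(re.search(p, command) for p in DANGEROUS_PATTERNS)
```

接入方式也很直接:在 `check_permission` 走到 deny rules 之前,先对 `bash` 的命令内容跑一遍这个检查,命中即拒绝。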
+ +--- + +**一句话记住:权限系统不是为了让 agent 更笨,而是为了让 agent 的行动先经过一道可靠的安全判断。** diff --git a/docs/zh/s07-task-system.md b/docs/zh/s07-task-system.md deleted file mode 100644 index 4b9be120a..000000000 --- a/docs/zh/s07-task-system.md +++ /dev/null @@ -1,133 +0,0 @@ -# s07: Task System (任务系统) - -`s01 > s02 > s03 > s04 > s05 > s06 | [ s07 ] s08 > s09 > s10 > s11 > s12` - -> *"大目标要拆成小任务, 排好序, 记在磁盘上"* -- 文件持久化的任务图, 为多 agent 协作打基础。 -> -> **Harness 层**: 持久化任务 -- 比任何一次对话都长命的目标。 - -## 问题 - -s03 的 TodoManager 只是内存中的扁平清单: 没有顺序、没有依赖、状态只有做完没做完。真实目标是有结构的 -- 任务 B 依赖任务 A, 任务 C 和 D 可以并行, 任务 E 要等 C 和 D 都完成。 - -没有显式的关系, Agent 分不清什么能做、什么被卡住、什么能同时跑。而且清单只活在内存里, 上下文压缩 (s06) 一跑就没了。 - -## 解决方案 - -把扁平清单升级为持久化到磁盘的**任务图**。每个任务是一个 JSON 文件, 有状态、前置依赖 (`blockedBy`)。任务图随时回答三个问题: - -- **什么可以做?** -- 状态为 `pending` 且 `blockedBy` 为空的任务。 -- **什么被卡住?** -- 等待前置任务完成的任务。 -- **什么做完了?** -- 状态为 `completed` 的任务, 完成时自动解锁后续任务。 - -``` -.tasks/ - task_1.json {"id":1, "status":"completed"} - task_2.json {"id":2, "blockedBy":[1], "status":"pending"} - task_3.json {"id":3, "blockedBy":[1], "status":"pending"} - task_4.json {"id":4, "blockedBy":[2,3], "status":"pending"} - -任务图 (DAG): - +----------+ - +--> | task 2 | --+ - | | pending | | -+----------+ +----------+ +--> +----------+ -| task 1 | | task 4 | -| completed| --> +----------+ +--> | blocked | -+----------+ | task 3 | --+ +----------+ - | pending | - +----------+ - -顺序: task 1 必须先完成, 才能开始 2 和 3 -并行: task 2 和 3 可以同时执行 -依赖: task 4 要等 2 和 3 都完成 -状态: pending -> in_progress -> completed -``` - -这个任务图是 s07 之后所有机制的协调骨架: 后台执行 (s08)、多 agent 团队 (s09+)、worktree 隔离 (s12) 都读写这同一个结构。 - -## 工作原理 - -1. 
**TaskManager**: 每个任务一个 JSON 文件, CRUD + 依赖图。 - -```python -class TaskManager: - def __init__(self, tasks_dir: Path): - self.dir = tasks_dir - self.dir.mkdir(exist_ok=True) - self._next_id = self._max_id() + 1 - - def create(self, subject, description=""): - task = {"id": self._next_id, "subject": subject, - "status": "pending", "blockedBy": [], - "owner": ""} - self._save(task) - self._next_id += 1 - return json.dumps(task, indent=2) -``` - -2. **依赖解除**: 完成任务时, 自动将其 ID 从其他任务的 `blockedBy` 中移除, 解锁后续任务。 - -```python -def _clear_dependency(self, completed_id): - for f in self.dir.glob("task_*.json"): - task = json.loads(f.read_text()) - if completed_id in task.get("blockedBy", []): - task["blockedBy"].remove(completed_id) - self._save(task) -``` - -3. **状态变更 + 依赖关联**: `update` 处理状态转换和依赖边。 - -```python -def update(self, task_id, status=None, - add_blocked_by=None, remove_blocked_by=None): - task = self._load(task_id) - if status: - task["status"] = status - if status == "completed": - self._clear_dependency(task_id) - if add_blocked_by: - task["blockedBy"] = list(set(task["blockedBy"] + add_blocked_by)) - if remove_blocked_by: - task["blockedBy"] = [x for x in task["blockedBy"] if x not in remove_blocked_by] - self._save(task) -``` - -4. 四个任务工具加入 dispatch map。 - -```python -TOOL_HANDLERS = { - # ...base tools... 
- "task_create": lambda **kw: TASKS.create(kw["subject"]), - "task_update": lambda **kw: TASKS.update(kw["task_id"], kw.get("status")), - "task_list": lambda **kw: TASKS.list_all(), - "task_get": lambda **kw: TASKS.get(kw["task_id"]), -} -``` - -从 s07 起, 任务图是多步工作的默认选择。s03 的 Todo 仍可用于单次会话内的快速清单。 - -## 相对 s06 的变更 - -| 组件 | 之前 (s06) | 之后 (s07) | -|---|---|---| -| Tools | 5 | 8 (`task_create/update/list/get`) | -| 规划模型 | 扁平清单 (仅内存) | 带依赖关系的任务图 (磁盘) | -| 关系 | 无 | `blockedBy` 边 | -| 状态追踪 | 做完没做完 | `pending` -> `in_progress` -> `completed` | -| 持久化 | 压缩后丢失 | 压缩和重启后存活 | - -## 试一试 - -```sh -cd learn-claude-code -python agents/s07_task_system.py -``` - -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): - -1. `Create 3 tasks: "Setup project", "Write code", "Write tests". Make them depend on each other in order.` -2. `List all tasks and show the dependency graph` -3. `Complete task 1 and then list tasks to see task 2 unblocked` -4. `Create a task board for refactoring: parse -> transform -> emit -> test, where transform and emit can run in parallel after parse` diff --git a/docs/zh/s08-background-tasks.md b/docs/zh/s08-background-tasks.md deleted file mode 100644 index 2931c31b9..000000000 --- a/docs/zh/s08-background-tasks.md +++ /dev/null @@ -1,109 +0,0 @@ -# s08: Background Tasks (后台任务) - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > [ s08 ] s09 > s10 > s11 > s12` - -> *"慢操作丢后台, agent 继续想下一步"* -- 后台线程跑命令, 完成后注入通知。 -> -> **Harness 层**: 后台执行 -- 模型继续思考, harness 负责等待。 - -## 问题 - -有些命令要跑好几分钟: `npm install`、`pytest`、`docker build`。阻塞式循环下模型只能干等。用户说 "装依赖, 顺便建个配置文件", Agent 却只能一个一个来。 - -## 解决方案 - -``` -Main thread Background thread -+-----------------+ +-----------------+ -| agent loop | | subprocess runs | -| ... | | ... 
| -| [LLM call] <---+------- | enqueue(result) | -| ^drain queue | +-----------------+ -+-----------------+ - -Timeline: -Agent --[spawn A]--[spawn B]--[other work]---- - | | - v v - [A runs] [B runs] (parallel) - | | - +-- results injected before next LLM call --+ -``` - -## 工作原理 - -1. BackgroundManager 用线程安全的通知队列追踪任务。 - -```python -class BackgroundManager: - def __init__(self): - self.tasks = {} - self._notification_queue = [] - self._lock = threading.Lock() -``` - -2. `run()` 启动守护线程, 立即返回。 - -```python -def run(self, command: str) -> str: - task_id = str(uuid.uuid4())[:8] - self.tasks[task_id] = {"status": "running", "command": command} - thread = threading.Thread( - target=self._execute, args=(task_id, command), daemon=True) - thread.start() - return f"Background task {task_id} started" -``` - -3. 子进程完成后, 结果进入通知队列。 - -```python -def _execute(self, task_id, command): - try: - r = subprocess.run(command, shell=True, cwd=WORKDIR, - capture_output=True, text=True, timeout=300) - output = (r.stdout + r.stderr).strip()[:50000] - except subprocess.TimeoutExpired: - output = "Error: Timeout (300s)" - with self._lock: - self._notification_queue.append({ - "task_id": task_id, "result": output[:500]}) -``` - -4. 每次 LLM 调用前排空通知队列。 - -```python -def agent_loop(messages: list): - while True: - notifs = BG.drain_notifications() - if notifs: - notif_text = "\n".join( - f"[bg:{n['task_id']}] {n['result']}" for n in notifs) - messages.append({"role": "user", - "content": f"\n{notif_text}\n" - f""}) - response = client.messages.create(...) -``` - -循环保持单线程。只有子进程 I/O 被并行化。 - -## 相对 s07 的变更 - -| 组件 | 之前 (s07) | 之后 (s08) | -|----------------|------------------|------------------------------------| -| Tools | 8 | 6 (基础 + background_run + check) | -| 执行方式 | 仅阻塞 | 阻塞 + 后台线程 | -| 通知机制 | 无 | 每轮排空的队列 | -| 并发 | 无 | 守护线程 | - -## 试一试 - -```sh -cd learn-claude-code -python agents/s08_background_tasks.py -``` - -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): - -1. 
`Run "sleep 5 && echo done" in the background, then create a file while it runs` -2. `Start 3 background tasks: "sleep 2", "sleep 4", "sleep 6". Check their status.` -3. `Run pytest in the background and keep working on other things` diff --git a/docs/zh/s08-hook-system.md b/docs/zh/s08-hook-system.md new file mode 100644 index 000000000..fd5c0a43d --- /dev/null +++ b/docs/zh/s08-hook-system.md @@ -0,0 +1,296 @@ +# s08: Hook System (Hook 系统) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > [ s08 ] > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *不改主循环代码,也能在关键时机插入额外行为。* + +## 这章要解决什么问题 + +到了 `s07`,我们已经能在工具执行前做权限判断。 + +但很多真实需求并不属于“允许 / 拒绝”这条线,而属于: + +- 在某个固定时机顺手做一点事 +- 不改主循环主体,也能接入额外规则 +- 让用户或插件在系统边缘扩展能力 + +例如: + +- 会话开始时打印欢迎信息 +- 工具执行前做一次额外检查 +- 工具执行后补一条审计日志 + +如果每增加一个需求,你都去修改主循环,主循环就会越来越重,最后谁都不敢动。 + +所以这一章要引入的机制是: + +**主循环只负责暴露“时机”,真正的附加行为交给 hook。** + +## 建议联读 + +- 如果你还在把 hook 想成“往主循环里继续塞 if/else”,先回 [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md),重新确认主循环和控制面的边界。 +- 如果你开始把主循环、tool handler、hook side effect 混成一层,建议先看 [`entity-map.md`](./entity-map.md),把谁负责推进主状态、谁只是旁路观察分开。 +- 如果你准备继续读后面的 prompt、recovery、teams,可以把 [`s00e-reference-module-map.md`](./s00e-reference-module-map.md) 一起放在旁边,因为从这一章开始“控制面 + 侧车扩展”会反复一起出现。 + +## 什么是 hook + +你可以把 `hook` 理解成一个“预留插口”。 + +意思是: + +1. 主系统运行到某个固定时机 +2. 把当前上下文交给 hook +3. hook 返回结果 +4. 
主系统再决定下一步怎么继续 + +最重要的一句话是: + +**hook 让系统可扩展,但不要求主循环理解每个扩展需求。** + +主循环只需要知道三件事: + +- 现在是什么事件 +- 要把哪些上下文交出去 +- 收到结果以后怎么处理 + +## 最小心智模型 + +教学版先只讲 3 个事件: + +- `SessionStart` +- `PreToolUse` +- `PostToolUse` + +这样做不是因为系统永远只有 3 个事件, +而是因为初学者先把这 3 个事件学明白,就已经能自己做出一套可用的 hook 机制。 + +可以把它想成这条流程: + +```text +主循环继续往前跑 + | + +-- 到了某个预留时机 + | + +-- 调用 hook runner + | + +-- 收到 hook 返回结果 + | + +-- 决定继续、阻止、还是补充说明 +``` + +## 教学版统一返回约定 + +这一章最容易把人讲乱的地方,就是“不同 hook 事件的返回语义”。 + +教学版建议先统一成下面这套规则: + +| 退出码 | 含义 | +|---|---| +| `0` | 正常继续 | +| `1` | 阻止当前动作 | +| `2` | 注入一条补充消息,再继续 | + +这套规则的价值不在于“最真实”,而在于“最容易学会”。 + +因为它让你先记住 hook 最核心的 3 种作用: + +- 观察 +- 拦截 +- 补充 + +等教学版跑通以后,再去做“不同事件采用不同语义”的细化,也不会乱。 + +## 关键数据结构 + +### 1. HookEvent + +```python +event = { + "name": "PreToolUse", + "payload": { + "tool_name": "bash", + "input": {"command": "pytest"}, + }, +} +``` + +它回答的是: + +- 现在发生了什么事 +- 这件事的上下文是什么 + +### 2. HookResult + +```python +result = { + "exit_code": 0, + "message": "", +} +``` + +它回答的是: + +- hook 想不想阻止主流程 +- 要不要向模型补一条说明 + +### 3. HookRunner + +```python +class HookRunner: + def run(self, event_name: str, payload: dict) -> dict: + ... +``` + +主循环不直接关心“每个 hook 的细节实现”。 +它只把事件交给统一的 runner。 + +这就是这一章的关键抽象边界: + +**主循环知道事件名,hook runner 知道怎么调扩展逻辑。** + +## 最小执行流程 + +先看最重要的 `PreToolUse` / `PostToolUse`: + +```text +model 发起 tool_use + | + v +run_hook("PreToolUse", ...) + | + +-- exit 1 -> 阻止工具执行 + +-- exit 2 -> 先补一条消息给模型,再继续 + +-- exit 0 -> 直接继续 + | + v +执行工具 + | + v +run_hook("PostToolUse", ...) 
+ | + +-- exit 2 -> 追加补充说明 + +-- exit 0 -> 正常结束 +``` + +再加上 `SessionStart`,一整套最小 hook 机制就立住了。 + +## 最小实现 + +### 第一步:准备一个事件到处理器的映射 + +```python +HOOKS = { + "SessionStart": [on_session_start], + "PreToolUse": [pre_tool_guard], + "PostToolUse": [post_tool_log], +} +``` + +这里先用“一个事件对应一组处理函数”的最小结构就够了。 + +### 第二步:统一运行 hook + +```python +def run_hooks(event_name: str, payload: dict) -> dict: + for handler in HOOKS.get(event_name, []): + result = handler(payload) + if result["exit_code"] in (1, 2): + return result + return {"exit_code": 0, "message": ""} +``` + +教学版里先用“谁先返回阻止/注入,谁就优先”的简单规则。 + +### 第三步:接进主循环 + +```python +pre = run_hooks("PreToolUse", { + "tool_name": block.name, + "input": block.input, +}) + +if pre["exit_code"] == 1: + results.append(blocked_tool_result(pre["message"])) + continue + +if pre["exit_code"] == 2: + messages.append({"role": "user", "content": pre["message"]}) + +output = run_tool(...) + +post = run_hooks("PostToolUse", { + "tool_name": block.name, + "input": block.input, + "output": output, +}) +``` + +这一步最关键的不是代码量,而是心智: + +**hook 不是主循环的替代品,hook 是主循环在固定时机对外发出的调用。** + +## 这一章的教学边界 + +如果你后面继续扩展平台,hook 事件面当然会继续扩大。 + +常见扩展方向包括: + +- 生命周期事件:开始、结束、配置变化 +- 工具事件:执行前、执行后、失败后 +- 压缩事件:压缩前、压缩后 +- 多 agent 事件:子 agent 启动、任务完成、队友空闲 + +但教学仓这里要守住一个原则: + +**先把 hook 的统一模型讲清,再慢慢增加事件种类。** + +不要一开始就把几十种事件、几十套返回语义全部灌给读者。 + +## 初学者最容易犯的错 + +### 1. 把 hook 当成“到处插 if” + +如果还是散落在主循环里写条件分支,那还不是真正的 hook 设计。 + +### 2. 没有统一的返回结构 + +今天返回字符串,明天返回布尔值,后天返回整数,最后主循环一定会变乱。 + +### 3. 一上来就把所有事件做全 + +教学顺序应该是: + +1. 先学会 3 个事件 +2. 再学会统一返回协议 +3. 最后才扩事件面 + +### 4. 忘了说明“教学版统一语义”和“高完成度细化语义”的区别 + +如果这层不提前说清,读者后面看到更复杂实现时会以为前面学错了。 + +其实不是学错了,而是: + +**先学统一模型,再学事件细化。** + +## 学完这一章,你应该真正掌握什么 + +学完以后,你应该能自己清楚说出下面几句话: + +1. hook 的作用,是在固定时机扩展系统,而不是改写主循环。 +2. hook 至少需要“事件名 + payload + 返回结果”这三样东西。 +3. 教学版可以先用统一的 `0 / 1 / 2` 返回约定。 +4. 
`PreToolUse` 和 `PostToolUse` 已经足够支撑最核心的扩展能力。 + +如果这 4 句话你已经能独立复述,说明这一章的核心心智已经建立起来了。 + +## 下一章学什么 + +这一章解决的是: + +> 在固定时机插入行为。 + +下一章 `s09` 要解决的是: + +> 哪些信息应该跨会话留下,哪些不该留。 + +也就是从“扩展点”进一步走向“持久状态”。 diff --git a/docs/zh/s09-agent-teams.md b/docs/zh/s09-agent-teams.md deleted file mode 100644 index d43be9448..000000000 --- a/docs/zh/s09-agent-teams.md +++ /dev/null @@ -1,127 +0,0 @@ -# s09: Agent Teams (Agent 团队) - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > [ s09 ] s10 > s11 > s12` - -> *"任务太大一个人干不完, 要能分给队友"* -- 持久化队友 + JSONL 邮箱。 -> -> **Harness 层**: 团队邮箱 -- 多个模型, 通过文件协调。 - -## 问题 - -Subagent (s04) 是一次性的: 生成、干活、返回摘要、消亡。没有身份, 没有跨调用的记忆。Background Tasks (s08) 能跑 shell 命令, 但做不了 LLM 引导的决策。 - -真正的团队协作需要三样东西: (1) 能跨多轮对话存活的持久 Agent, (2) 身份和生命周期管理, (3) Agent 之间的通信通道。 - -## 解决方案 - -``` -Teammate lifecycle: - spawn -> WORKING -> IDLE -> WORKING -> ... -> SHUTDOWN - -Communication: - .team/ - config.json <- team roster + statuses - inbox/ - alice.jsonl <- append-only, drain-on-read - bob.jsonl - lead.jsonl - - +--------+ send("alice","bob","...") +--------+ - | alice | -----------------------------> | bob | - | loop | bob.jsonl << {json_line} | loop | - +--------+ +--------+ - ^ | - | BUS.read_inbox("alice") | - +---- alice.jsonl -> read + drain ---------+ -``` - -## 工作原理 - -1. TeammateManager 通过 config.json 维护团队名册。 - -```python -class TeammateManager: - def __init__(self, team_dir: Path): - self.dir = team_dir - self.dir.mkdir(exist_ok=True) - self.config_path = self.dir / "config.json" - self.config = self._load_config() - self.threads = {} -``` - -2. `spawn()` 创建队友并在线程中启动 agent loop。 - -```python -def spawn(self, name: str, role: str, prompt: str) -> str: - member = {"name": name, "role": role, "status": "working"} - self.config["members"].append(member) - self._save_config() - thread = threading.Thread( - target=self._teammate_loop, - args=(name, role, prompt), daemon=True) - thread.start() - return f"Spawned teammate '{name}' (role: {role})" -``` - -3. 
MessageBus: append-only 的 JSONL 收件箱。`send()` 追加一行; `read_inbox()` 读取全部并清空。 - -```python -class MessageBus: - def send(self, sender, to, content, msg_type="message", extra=None): - msg = {"type": msg_type, "from": sender, - "content": content, "timestamp": time.time()} - if extra: - msg.update(extra) - with open(self.dir / f"{to}.jsonl", "a") as f: - f.write(json.dumps(msg) + "\n") - - def read_inbox(self, name): - path = self.dir / f"{name}.jsonl" - if not path.exists(): return "[]" - msgs = [json.loads(l) for l in path.read_text().strip().splitlines() if l] - path.write_text("") # drain - return json.dumps(msgs, indent=2) -``` - -4. 每个队友在每次 LLM 调用前检查收件箱, 将消息注入上下文。 - -```python -def _teammate_loop(self, name, role, prompt): - messages = [{"role": "user", "content": prompt}] - for _ in range(50): - inbox = BUS.read_inbox(name) - if inbox != "[]": - messages.append({"role": "user", - "content": f"{inbox}"}) - response = client.messages.create(...) - if response.stop_reason != "tool_use": - break - # execute tools, append results... - self._find_member(name)["status"] = "idle" -``` - -## 相对 s08 的变更 - -| 组件 | 之前 (s08) | 之后 (s09) | -|----------------|------------------|------------------------------------| -| Tools | 6 | 9 (+spawn/send/read_inbox) | -| Agent 数量 | 单一 | 领导 + N 个队友 | -| 持久化 | 无 | config.json + JSONL 收件箱 | -| 线程 | 后台命令 | 每线程完整 agent loop | -| 生命周期 | 一次性 | idle -> working -> idle | -| 通信 | 无 | message + broadcast | - -## 试一试 - -```sh -cd learn-claude-code -python agents/s09_agent_teams.py -``` - -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): - -1. `Spawn alice (coder) and bob (tester). Have alice send bob a message.` -2. `Broadcast "status update: phase 1 complete" to all teammates` -3. `Check the lead inbox for any messages` -4. 输入 `/team` 查看团队名册和状态 -5. 
输入 `/inbox` 手动检查领导的收件箱 diff --git a/docs/zh/s09-memory-system.md b/docs/zh/s09-memory-system.md new file mode 100644 index 000000000..e4c755959 --- /dev/null +++ b/docs/zh/s09-memory-system.md @@ -0,0 +1,408 @@ +# s09: Memory System (记忆系统) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > [ s09 ] > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *不是所有信息都该进入 memory;只有跨会话仍然有价值的信息,才值得留下。* + +## 这一章在解决什么问题 + +如果一个 agent 每次新会话都完全从零开始,它就会不断重复忘记这些事情: + +- 用户长期偏好 +- 用户多次纠正过的错误 +- 某些不容易从代码直接看出来的项目约定 +- 某些外部资源在哪里找 + +这会让系统显得“每次都像第一次合作”。 + +所以需要 memory。 + +## 但先立一个边界:memory 不是什么都存 + +这是这一章最容易讲歪的地方。 + +memory 不是“把一切有用信息都记下来”。 + +如果你这样做,很快就会出现两个问题: + +1. memory 变成垃圾堆,越存越乱 +2. agent 开始依赖过时记忆,而不是读取当前真实状态 + +所以这章必须先立一个原则: + +**只有那些跨会话仍然有价值,而且不能轻易从当前仓库状态直接推出来的信息,才适合进入 memory。** + +## 建议联读 + +- 如果你还把 memory 想成“更长一点的上下文窗口”,先回 [`s06-context-compact.md`](./s06-context-compact.md),重新确认 compact 和长期记忆是两套机制。 +- 如果你在 `messages[]`、摘要块、memory store 这三层之间开始读混,建议边看边对照 [`data-structures.md`](./data-structures.md)。 +- 如果你准备继续读 `s10`,最好把 [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) 放在旁边,因为 memory 真正重要的是它怎样重新进入下一轮输入。 + +## 先解释几个名词 + +### 什么是“跨会话” + +意思是: + +- 当前对话结束了 +- 下次重新开始一个新对话 +- 这条信息仍然可能有用 + +### 什么是“不可轻易重新推导” + +例如: + +- 用户明确说“我讨厌这种写法” +- 某个架构决定背后的真实原因是合规要求 +- 某个团队总在某个外部看板里跟踪问题 + +这些东西,往往不是你重新扫一遍代码就能立刻知道的。 + +## 最适合先教的 4 类 memory + +### 1. `user` + +用户偏好。 + +例如: + +- 喜欢什么代码风格 +- 回答希望简洁还是详细 +- 更偏好什么工具链 + +### 2. `feedback` + +用户明确纠正过你的地方。 + +例如: + +- “不要这样改” +- “这个判断方式之前错过” +- “以后遇到这种情况要先做 X” + +### 3. `project` + +这里只保存**不容易从代码直接重新看出来**的项目约定或背景。 + +例如: + +- 某个设计决定是因为合规而不是技术偏好 +- 某个目录虽然看起来旧,但短期内不能动 +- 某条规则是团队故意定下来的,不是历史残留 + +### 4. 
`reference` + +外部资源指针。 + +例如: + +- 某个问题单在哪个看板里 +- 某个监控面板在哪里 +- 某个资料库在哪个 URL + +## 哪些东西不要存进 memory + +这是比“该存什么”更重要的一张表: + +| 不要存的东西 | 为什么 | +|---|---| +| 文件结构、函数签名、目录布局 | 这些可以重新读代码得到 | +| 当前任务进度 | 这属于 task / plan,不属于 memory | +| 临时分支名、当前 PR 号 | 很快会过时 | +| 修 bug 的具体代码细节 | 代码和提交记录才是准确信息 | +| 密钥、密码、凭证 | 安全风险 | + +这条边界一定要稳。 + +否则 memory 会从“帮助系统长期变聪明”变成“帮助系统长期产生幻觉”。 + +## 最小心智模型 + +```text +conversation + | + | 用户提到一个长期重要信息 + v +save_memory + | + v +.memory/ + ├── MEMORY.md # 索引 + ├── prefer_tabs.md + ├── feedback_tests.md + └── incident_board.md + | + v +下次新会话开始时重新加载 +``` + +## 这一章最关键的数据结构 + +### 1. 单条 memory 文件 + +最简单也最清晰的做法,是每条 memory 一个文件。 + +```md +--- +name: prefer_tabs +description: User prefers tabs for indentation +type: user +--- +The user explicitly prefers tabs over spaces when editing source files. +``` + +这里的 `frontmatter` 可以理解成: + +**放在正文前面的结构化元数据。** + +它让系统先知道: + +- 这条 memory 叫什么 +- 大致是什么 +- 属于哪一类 + +### 2. 索引文件 `MEMORY.md` + +最小实现里,再加一个索引文件就够了: + +```md +# Memory Index + +- prefer_tabs: User prefers tabs for indentation [user] +- avoid_mock_heavy_tests: User dislikes mock-heavy tests [feedback] +``` + +索引的作用不是重复保存全部内容。 +它只是帮系统快速知道“有哪些 memory 可用”。 + +## 最小实现步骤 + +### 第一步:定义 memory 类型 + +```python +MEMORY_TYPES = ("user", "feedback", "project", "reference") +``` + +### 第二步:写一个 `save_memory` 工具 + +最小参数就四个: + +- `name` +- `description` +- `type` +- `content` + +### 第三步:每条 memory 独立落盘 + +```python +def save_memory(name, description, mem_type, content): + path = memory_dir / f"{safe_name}.md" + path.write_text(frontmatter + content) + rebuild_index() +``` + +### 第四步:会话开始时重新加载 + +把 memory 文件重新读出来,拼成一段 memory section。 + +### 第五步:把 memory section 接进系统输入 + +这一步会在 `s10` 的 prompt 组装里系统化。 + +## memory、task、plan、CLAUDE.md 的边界 + +这是最值得初学者反复区分的一组概念。 + +### memory + +保存跨会话仍有价值的信息。 + +### task + +保存当前工作要做什么、依赖关系如何、进度如何。 + +### plan + +保存“这一轮我要怎么做”的过程性安排。 + +### CLAUDE.md + +保存更稳定、更像长期规则的说明文本。 + +一个简单判断法: + +- 只对这次任务有用:`task / plan` +- 以后很多会话可能都还会有用:`memory` +- 
属于长期系统级或项目级固定说明:`CLAUDE.md` + +## 初学者最容易犯的错 + +### 错误 1:把代码结构也存进 memory + +例如: + +- “这个项目有 `src/` 和 `tests/`” +- “这个函数在 `app.py`” + +这些都不该存。 + +因为系统完全可以重新去读。 + +### 错误 2:把当前任务状态存进 memory + +例如: + +- “我现在正在改认证模块” +- “这个 PR 还有两项没做” + +这些是 task / plan,不是 memory。 + +### 错误 3:把 memory 当成绝对真相 + +memory 可能过时。 + +所以更稳妥的规则是: + +**memory 用来提供方向,不用来替代当前观察。** + +如果 memory 和当前代码状态冲突,优先相信你现在看到的真实状态。 + +## 从教学版到高完成度版:记忆系统还要补的 6 条边界 + +最小教学版只要先把“该存什么 / 不该存什么”讲清楚。 +但如果你要把系统做到更稳、更像真实工作平台,下面这 6 条边界也必须讲清。 + +### 1. 不是所有 memory 都该放在同一个作用域 + +更完整系统里,至少要分清: + +- `private`:只属于当前用户或当前 agent 的记忆 +- `team`:整个项目团队都该共享的记忆 + +一个很稳的教学判断法是: + +- `user` 类型,几乎总是 `private` +- `feedback` 类型,默认 `private`;只有它明确是团队规则时才升到 `team` +- `project` 和 `reference`,通常更偏向 `team` + +这样做的价值是: + +- 不把个人偏好误写成团队规范 +- 不把团队规范只锁在某一个人的私有记忆里 + +### 2. 不只保存“你做错了”,也要保存“这样做是对的” + +很多人讲 memory 时,只会想到纠错。 + +这不够。 + +因为真正能长期使用的系统,还需要记住: + +- 哪种不明显的做法,用户已经明确认可 +- 哪个判断方式,项目里已经被验证有效 + +也就是说,`feedback` 不只来自负反馈,也来自被验证的正反馈。 + +如果只存纠错,不存被确认有效的做法,系统会越来越保守,却不一定越来越聪明。 + +### 3. 有些东西即使用户要求你存,也不该直接存 + +这条边界一定要说死。 + +就算用户说“帮我记住”,下面这些东西也不应该直接写进 memory: + +- 本周 PR 列表 +- 当前分支名 +- 今天改了哪些文件 +- 某个函数现在在什么路径 +- 当前正在做哪两个子任务 + +这些内容的问题不是“没有价值”,而是: + +- 太容易过时 +- 更适合存在代码、任务板、git 记录里 +- 会把 memory 变成活动日志 + +更好的做法是追问一句: + +> 这里面真正值得长期留下的、非显然的信息到底是什么? + +### 4. memory 会漂移,所以回答前要先核对当前状态 + +memory 记录的是“曾经成立过的事实”,不是永久真理。 + +所以更稳的工作方式是: + +1. 先把 memory 当作方向提示 +2. 再去读当前文件、当前资源、当前配置 +3. 如果冲突,优先相信你刚观察到的真实状态 + +这点对初学者尤其重要。 +因为他们最容易把 memory 当成“已经查证过的答案”。 + +### 5. 用户说“忽略 memory”时,就当它是空的 + +这是一个很容易漏讲的行为边界。 + +如果用户明确说: + +- “这次不要参考 memory” +- “忽略之前的记忆” + +那系统更合理的处理不是: + +- 一边继续用 memory +- 一边嘴上说“我知道但先忽略” + +而是: + +**在这一轮里,按 memory 为空来工作。** + +### 6. 
推荐具体路径、函数、外部资源前,要再验证一次 + +memory 很适合保存: + +- 哪个看板通常有上下文 +- 哪个目录以前是关键入口 +- 某种项目约定为什么存在 + +但在你真的要对用户说: + +- “去改 `src/auth.py`” +- “调用 `AuthManager`” +- “看这个 URL 就对了” + +之前,最好再核对一次。 + +因为命名、路径、系统入口、外部链接,都是会变的。 + +所以更稳妥的做法不是: + +> memory 里写过,就直接复述。 + +而是: + +> memory 先告诉我去哪里验证;验证完,再给用户结论。 + +## 教学边界 + +这章最重要的,不是 memory 以后还能多自动、多复杂,而是先把存储边界讲清楚: + +- 什么值得跨会话留下 +- 什么只是当前任务状态,不该进 memory +- memory 和 task / plan / CLAUDE.md 各自负责什么 + +只要这几层边界清楚,教学目标就已经达成了。 + +更复杂的自动整合、作用域分层、自动抽取,都应该放在这个最小边界之后。 + +## 学完这章后,你应该能回答 + +- 为什么 memory 不是“什么都记”? +- 什么样的信息适合跨会话保存? +- 为什么代码结构和当前任务状态不应该进 memory? +- memory 和 task / plan / CLAUDE.md 的边界是什么? + +--- + +**一句话记住:memory 保存的是“以后还可能有价值、但当前代码里不容易直接重新看出来”的信息。** diff --git a/docs/zh/s10-system-prompt.md b/docs/zh/s10-system-prompt.md new file mode 100644 index 000000000..c6394bc58 --- /dev/null +++ b/docs/zh/s10-system-prompt.md @@ -0,0 +1,308 @@ +# s10: System Prompt Construction (系统提示词构建) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > [ s10 ] > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *系统提示词不是一整块大字符串,而是一条可维护的组装流水线。* + +## 这一章为什么重要 + +很多初学者一开始会把 system prompt 写成一大段固定文本。 + +这样在最小 demo 里当然能跑。 + +但一旦系统开始长功能,你很快会遇到这些问题: + +- 工具列表会变 +- skills 会变 +- memory 会变 +- 当前目录、日期、模式会变 +- 某些提醒只在这一轮有效,不该永远塞进系统说明 + +所以到了这个阶段,system prompt 不能再当成一块硬编码文本。 + +它应该升级成: + +**由多个来源共同组装出来的一条流水线。** + +## 建议联读 + +- 如果你还习惯把 prompt 看成“神秘大段文本”,先回 [`s00a-query-control-plane.md`](./s00a-query-control-plane.md),重新确认模型输入在进模型前经历了哪些控制层。 +- 如果你想真正稳住“哪些内容先拼、哪些后拼”,建议把 [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) 放在手边,这页就是本章最关键的桥。 +- 如果你开始把 system rules、工具说明、memory、runtime state 混成一个大块,先看 [`data-structures.md`](./data-structures.md),把这些输入片段的来源重新拆开。 + +## 先解释几个名词 + +### 什么是 system prompt + +system prompt 是给模型的系统级说明。 + +它通常负责告诉模型: + +- 你是谁 +- 你能做什么 +- 你应该遵守什么规则 +- 你现在处在什么环境里 + +### 什么是“组装流水线” + +意思是: + +- 不同信息来自不同地方 +- 最后按顺序拼接成一份输入 + +它不是一个死字符串,而是一条构建过程。 + +### 什么是动态信息 + +有些信息经常变化,例如: + +- 当前日期 +- 当前工作目录 +- 本轮新增的提醒 + 
+这些信息不适合和所有稳定说明混在一起。 + +## 最小心智模型 + +最容易理解的方式,是把 system prompt 想成 6 段: + +```text +1. 核心身份和行为说明 +2. 工具列表 +3. skills 元信息 +4. memory 内容 +5. CLAUDE.md 指令链 +6. 动态环境信息 +``` + +然后按顺序拼起来: + +```text +core ++ tools ++ skills ++ memory ++ claude_md ++ dynamic_context += final system prompt +``` + +## 为什么不能把所有东西都硬塞进一个大字符串 + +因为这样会有三个问题: + +### 1. 不好维护 + +你很难知道: + +- 哪一段来自哪里 +- 该修改哪一部分 +- 哪一段是固定说明,哪一段是临时上下文 + +### 2. 不好测试 + +如果 system prompt 是一大坨文本,你很难分别测试: + +- 工具说明生成得对不对 +- memory 是否被正确拼进去 +- CLAUDE.md 是否被正确读取 + +### 3. 不好做缓存和动态更新 + +一些稳定内容其实不需要每轮大变。 +一些临时内容又只该活一轮。 + +这就要求你把“稳定块”和“动态块”分开思考。 + +## 最小实现结构 + +### 第一步:做一个 builder + +```python +class SystemPromptBuilder: + def build(self) -> str: + parts = [] + parts.append(self._build_core()) + parts.append(self._build_tools()) + parts.append(self._build_skills()) + parts.append(self._build_memory()) + parts.append(self._build_claude_md()) + parts.append(self._build_dynamic()) + return "\n\n".join(p for p in parts if p) +``` + +这就是这一章最核心的设计。 + +### 第二步:每一段只负责一种来源 + +例如: + +- `_build_tools()` 只负责把工具说明生成出来 +- `_build_memory()` 只负责拿 memory +- `_build_claude_md()` 只负责读指令文件 + +这样每一段的职责就很清楚。 + +## 这一章最关键的结构化边界 + +### 边界 1:稳定说明 vs 动态提醒 + +最重要的一组边界是: + +- 稳定的系统说明 +- 每轮临时变化的提醒 + +这两类东西不应该混为一谈。 + +### 边界 2:system prompt vs system reminder + +system prompt 适合放: + +- 身份 +- 规则 +- 工具 +- 长期约束 + +system reminder 适合放: + +- 这一轮才临时需要的补充上下文 +- 当前变动的状态 + +所以更清晰的做法是: + +- 主 system prompt 保持相对稳定 +- 每轮额外变化的内容,用单独的 reminder 方式追加 + +## 一个实用的教学版本 + +教学版可以先这样分: + +```text +静态部分: +- core +- tools +- skills +- memory +- CLAUDE.md + +动态部分: +- date +- cwd +- model +- current mode +``` + +如果你还想再清楚一点,可以加一个边界标记: + +```text +=== DYNAMIC_BOUNDARY === +``` + +它的作用不是神秘魔法。 + +它只是提醒你: + +**上面更稳定,下面更容易变。** + +## CLAUDE.md 为什么要单独一段 + +因为它的角色不是“某一次任务的临时上下文”,而是更稳定的长期说明。 + +教学仓里,最容易理解的链条是: + +1. 用户全局级 +2. 项目根目录级 +3. 
当前子目录级 + +然后全部拼进去,而不是互相覆盖。 + +这样读者更容易理解“规则来源可以分层叠加”这个思想。 + +## memory 为什么要和 system prompt 有关系 + +因为 memory 的本质是: + +**把跨会话仍然有价值的信息,重新带回模型当前的工作环境。** + +如果保存了 memory,却从来不在系统输入中重新呈现,那它就等于没被真正用起来。 + +所以 memory 最终一定要进入 prompt 组装链条。 + +## 初学者最容易混淆的点 + +### 1. 把 system prompt 讲成一个固定字符串 + +这会让读者看不到系统是如何长大的。 + +### 2. 把所有变化信息都塞进 system prompt + +这会把稳定说明和临时提醒搅在一起。 + +### 3. 把 CLAUDE.md、memory、skills 写成同一种东西 + +它们都可能进入 prompt,但来源和职责不同: + +- `skills`:可选能力或知识包 +- `memory`:跨会话记住的信息 +- `CLAUDE.md`:长期规则说明 + +## 教学边界 + +这一章先只建立一个核心心智: + +**prompt 不是一整块静态文本,而是一条被逐段组装出来的输入流水线。** + +所以这里先不要扩到太多外层细节: + +- 不要先讲复杂的 section 注册系统 +- 不要先讲缓存与预算 +- 不要先讲所有外部能力如何追加 prompt 说明 + +只要读者已经能把稳定规则、动态提醒、memory、skills 这些来源看成不同输入段,而不是同一种“大 prompt”,这一章就已经讲到位了。 + +## 如果你开始分不清 prompt、message、reminder + +这是非常正常的。 + +因为到了这一章,系统输入已经不再只有一个 system prompt 了。 +它至少会同时出现: + +- system prompt blocks +- 普通对话消息 +- tool_result 消息 +- memory attachment +- 当前轮 reminder + +如果你开始有这类困惑: + +- “这个信息到底该放 prompt 里,还是放 message 里?” +- “为什么 system prompt 不是全部输入?” +- “reminder 和长期规则到底差在哪?” + +建议继续看: + +- [`s10a-message-prompt-pipeline.md`](./s10a-message-prompt-pipeline.md) +- [`entity-map.md`](./entity-map.md) + +## 这章和后续章节的关系 + +这一章像一个汇合点: + +- `s05` skills 会汇进来 +- `s09` memory 会汇进来 +- `s07` 的当前模式也可能汇进来 +- `s19` MCP 以后也可能给 prompt 增加说明 + +所以 `s10` 的价值不是“新加一个功能”, +而是“把前面长出来的功能组织成一份清楚的系统输入”。 + +## 学完这章后,你应该能回答 + +- 为什么 system prompt 不能只是一整块硬编码文本? +- 为什么要把不同来源拆成独立 section? +- system prompt 和 system reminder 的边界是什么? +- memory、skills、CLAUDE.md 为什么都可能进入 prompt,但又不是一回事? 
+ +--- + +**一句话记住:system prompt 的关键不是“写一段很长的话”,而是“把不同来源的信息按清晰边界组装起来”。** diff --git a/docs/zh/s10-team-protocols.md b/docs/zh/s10-team-protocols.md deleted file mode 100644 index a57c926b7..000000000 --- a/docs/zh/s10-team-protocols.md +++ /dev/null @@ -1,108 +0,0 @@ -# s10: Team Protocols (团队协议) - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > [ s10 ] s11 > s12` - -> *"队友之间要有统一的沟通规矩"* -- 一个 request-response 模式驱动所有协商。 -> -> **Harness 层**: 协议 -- 模型之间的结构化握手。 - -## 问题 - -s09 中队友能干活能通信, 但缺少结构化协调: - -**关机**: 直接杀线程会留下写了一半的文件和过期的 config.json。需要握手 -- 领导请求, 队友批准 (收尾退出) 或拒绝 (继续干)。 - -**计划审批**: 领导说 "重构认证模块", 队友立刻开干。高风险变更应该先过审。 - -两者结构一样: 一方发带唯一 ID 的请求, 另一方引用同一 ID 响应。 - -## 解决方案 - -``` -Shutdown Protocol Plan Approval Protocol -================== ====================== - -Lead Teammate Teammate Lead - | | | | - |--shutdown_req-->| |--plan_req------>| - | {req_id:"abc"} | | {req_id:"xyz"} | - | | | | - |<--shutdown_resp-| |<--plan_resp-----| - | {req_id:"abc", | | {req_id:"xyz", | - | approve:true} | | approve:true} | - -Shared FSM: - [pending] --approve--> [approved] - [pending] --reject---> [rejected] - -Trackers: - shutdown_requests = {req_id: {target, status}} - plan_requests = {req_id: {from, plan, status}} -``` - -## 工作原理 - -1. 领导生成 request_id, 通过收件箱发起关机请求。 - -```python -shutdown_requests = {} - -def handle_shutdown_request(teammate: str) -> str: - req_id = str(uuid.uuid4())[:8] - shutdown_requests[req_id] = {"target": teammate, "status": "pending"} - BUS.send("lead", teammate, "Please shut down gracefully.", - "shutdown_request", {"request_id": req_id}) - return f"Shutdown request {req_id} sent (status: pending)" -``` - -2. 
队友收到请求后, 用 approve/reject 响应。 - -```python -if tool_name == "shutdown_response": - req_id = args["request_id"] - approve = args["approve"] - shutdown_requests[req_id]["status"] = "approved" if approve else "rejected" - BUS.send(sender, "lead", args.get("reason", ""), - "shutdown_response", - {"request_id": req_id, "approve": approve}) -``` - -3. 计划审批遵循完全相同的模式。队友提交计划 (生成 request_id), 领导审查 (引用同一个 request_id)。 - -```python -plan_requests = {} - -def handle_plan_review(request_id, approve, feedback=""): - req = plan_requests[request_id] - req["status"] = "approved" if approve else "rejected" - BUS.send("lead", req["from"], feedback, - "plan_approval_response", - {"request_id": request_id, "approve": approve}) -``` - -一个 FSM, 两种用途。同样的 `pending -> approved | rejected` 状态机可以套用到任何请求-响应协议上。 - -## 相对 s09 的变更 - -| 组件 | 之前 (s09) | 之后 (s10) | -|----------------|------------------|--------------------------------------| -| Tools | 9 | 12 (+shutdown_req/resp +plan) | -| 关机 | 仅自然退出 | 请求-响应握手 | -| 计划门控 | 无 | 提交/审查与审批 | -| 关联 | 无 | 每个请求一个 request_id | -| FSM | 无 | pending -> approved/rejected | - -## 试一试 - -```sh -cd learn-claude-code -python agents/s10_team_protocols.py -``` - -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): - -1. `Spawn alice as a coder. Then request her shutdown.` -2. `List teammates to see alice's status after shutdown approval` -3. `Spawn bob with a risky refactoring task. Review and reject his plan.` -4. `Spawn charlie, have him submit a plan, then approve it.` -5. 
输入 `/team` 监控状态 diff --git a/docs/zh/s10a-message-prompt-pipeline.md b/docs/zh/s10a-message-prompt-pipeline.md new file mode 100644 index 000000000..2ab35c428 --- /dev/null +++ b/docs/zh/s10a-message-prompt-pipeline.md @@ -0,0 +1,298 @@ +# s10a: Message & Prompt Pipeline (消息与提示词管道) + +> 这篇桥接文档是 `s10` 的扩展。 +> 它要补清一个很关键的心智: +> +> **system prompt 很重要,但它不是模型完整输入的全部。** + +## 为什么要补这一篇 + +`s10` 已经把 system prompt 从“大字符串”升级成“可维护的组装流水线”,这一步非常重要。 + +但当系统开始长出更多输入来源时,还会继续往前走一步: + +它会发现,真正送给模型的输入,不只包含: + +- system prompt + +还包含: + +- 规范化后的 messages +- memory attachments +- hook 注入消息 +- system reminder +- 当前轮次的动态上下文 + +也就是说,真正的输入更像一条完整管道: + +**Prompt Pipeline,而不只是 Prompt Builder。** + +## 先解释几个名词 + +### 什么是 prompt block + +你可以把 `prompt block` 理解成: + +> system prompt 内部的一段结构化片段。 + +例如: + +- 核心身份说明 +- 工具说明 +- memory section +- CLAUDE.md section + +### 什么是 normalized message + +`normalized message` 的意思是: + +> 把不同来源、不同格式的消息整理成统一、稳定、可发给模型的消息形式。 + +为什么需要这一步? + +因为系统里可能出现: + +- 普通用户消息 +- assistant 回复 +- tool_result +- 系统提醒 +- attachment 包裹消息 + +如果不先整理,模型输入层会越来越乱。 + +### 什么是 system reminder + +这在 `s10` 已经提到过。 + +它不是长期规则,而是: + +> 只在当前轮或当前阶段临时追加的一小段系统信息。 + +## 最小心智模型 + +把完整输入先理解成下面这条流水线: + +```text +多种输入来源 + | + +-- system prompt blocks + +-- messages + +-- attachments + +-- reminders + | + v +normalize + | + v +final api payload +``` + +这条图里最重要的不是“normalize”这个词有多高级,而是: + +**所有来源先分清边界,再在最后一步统一整理。** + +## system prompt 为什么不是全部 + +这是初学者非常容易混的一个点。 + +system prompt 适合放: + +- 身份 +- 规则 +- 工具能力描述 +- 长期说明 + +但有些东西不适合放进去: + +- 这一轮刚发生的 tool_result +- 某个 hook 刚注入的补充说明 +- 某条 memory attachment +- 当前临时提醒 + +这些更适合存在消息流里,而不是塞进 prompt block。 + +## 关键数据结构 + +### 1. SystemPromptBlock + +```python +block = { + "text": "...", + "cache_scope": None, +} +``` + +最小教学版可以只理解成: + +- 一段文本 +- 可选的缓存信息 + +### 2. PromptParts + +```python +parts = { + "core": "...", + "tools": "...", + "skills": "...", + "memory": "...", + "claude_md": "...", + "dynamic": "...", +} +``` + +### 3. 
NormalizedMessage + +```python +message = { + "role": "user" | "assistant", + "content": [...], +} +``` + +这里的 `content` 建议直接理解成“块列表”,而不是只是一段字符串。 +因为后面你会自然遇到: + +- text block +- tool_use block +- tool_result block +- attachment-like block + +### 4. ReminderMessage + +```python +reminder = { + "role": "system", + "content": "Current mode: plan", +} +``` + +教学版里你不一定真的要用 `system` role 单独传,但心智上要区分: + +- 这是长期 prompt block +- 还是当前轮临时 reminder + +## 最小实现 + +### 第一步:继续保留 `SystemPromptBuilder` + +这一步不能丢。 + +### 第二步:把消息输入做成独立管道 + +```python +def build_messages(raw_messages, attachments, reminders): + messages = normalize_messages(raw_messages) + messages = attach_memory(messages, attachments) + messages = append_reminders(messages, reminders) + return messages +``` + +### 第三步:在最后一层统一生成 API payload + +```python +payload = { + "system": build_system_prompt(), + "messages": build_messages(...), + "tools": build_tools(...), +} +``` + +这一步特别关键。 + +它会让读者明白: + +**system prompt、messages、tools 是并列输入面,而不是互相替代。** + +## 一张更完整但仍然容易理解的图 + +```text +Prompt Blocks + - core + - tools + - memory + - CLAUDE.md + - dynamic context + +Messages + - user messages + - assistant messages + - tool_result messages + - injected reminders + +Attachments + - memory attachment + - hook attachment + + | + v + normalize + assemble + | + v + final API payload +``` + +## 什么时候该放在 prompt,什么时候该放在 message + +可以先记这个简单判断法: + +### 更适合放在 prompt block + +- 长期稳定规则 +- 工具列表 +- 长期身份说明 +- CLAUDE.md + +### 更适合放在 message 流 + +- 当前轮 tool_result +- 刚发生的提醒 +- 当前轮追加的上下文 +- 某次 hook 输出 + +### 更适合做 attachment + +- 大块但可选的补充信息 +- 需要按需展开的说明 + +## 初学者最容易犯的错 + +### 1. 把所有东西都塞进 system prompt + +这样会让 prompt 越来越脏,也会模糊稳定信息和动态信息的边界。 + +### 2. 完全不做 normalize + +随着消息来源增多,输入格式会越来越不稳定。 + +### 3. 把 memory、hook、tool_result 都当成一类东西 + +它们都能影响模型,但进入输入层的方式并不相同。 + +### 4. 
忽略“临时 reminder”这一层 + +这会让很多本该只活一轮的信息,被错误地塞进长期 system prompt。 + +## 它和 `s10`、`s19` 的关系 + +- `s10` 讲 prompt builder +- 这篇讲 message + prompt 的完整输入管道 +- `s19` 则会把 MCP 带来的额外说明和外部能力继续接入这条管道 + +也就是说: + +**builder 是 prompt 的内部结构,pipeline 是模型输入的整体结构。** + +## 教学边界 + +这篇最重要的,不是罗列所有输入来源,而是先把三条管线边界讲稳: + +- 什么该进 system blocks +- 什么该进 normalized messages +- 什么只应该作为临时 reminder 或 attachment + +只要这三层边界清楚,读者就已经能自己搭出一条可靠输入管道。 +更细的 cache scope、attachment 去重和大结果外置,都可以放到后续扩展里再补。 + +## 一句话记住 + +**真正送给模型的,不只是一个 prompt,而是“prompt blocks + normalized messages + attachments + reminders”组成的输入管道。** diff --git a/docs/zh/s11-autonomous-agents.md b/docs/zh/s11-autonomous-agents.md deleted file mode 100644 index b1f51278b..000000000 --- a/docs/zh/s11-autonomous-agents.md +++ /dev/null @@ -1,144 +0,0 @@ -# s11: Autonomous Agents (Autonomous Agent) - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > [ s11 ] s12` - -> *"队友自己看看板, 有活就认领"* -- 不需要领导逐个分配, 自组织。 -> -> **Harness 层**: 自治 -- 模型自己找活干, 无需指派。 - -## 问题 - -s09-s10 中, 队友只在被明确指派时才动。领导得给每个队友写 prompt, 任务看板上 10 个未认领的任务得手动分配。这扩展不了。 - -真正的自治: 队友自己扫描任务看板, 认领没人做的任务, 做完再找下一个。 - -一个细节: Context Compact (s06) 后 Agent 可能忘了自己是谁。身份重注入解决这个问题。 - -## 解决方案 - -``` -Teammate lifecycle with idle cycle: - -+-------+ -| spawn | -+---+---+ - | - v -+-------+ tool_use +-------+ -| WORK | <------------- | LLM | -+---+---+ +-------+ - | - | stop_reason != tool_use (or idle tool called) - v -+--------+ -| IDLE | poll every 5s for up to 60s -+---+----+ - | - +---> check inbox --> message? ----------> WORK - | - +---> scan .tasks/ --> unclaimed? -------> claim -> WORK - | - +---> 60s timeout ----------------------> SHUTDOWN - -Identity re-injection after compression: - if len(messages) <= 3: - messages.insert(0, identity_block) -``` - -## 工作原理 - -1. 
队友循环分两个阶段: WORK 和 IDLE。LLM 停止调用工具 (或调用了 `idle`) 时, 进入 IDLE。 - -```python -def _loop(self, name, role, prompt): - while True: - # -- WORK PHASE -- - messages = [{"role": "user", "content": prompt}] - for _ in range(50): - response = client.messages.create(...) - if response.stop_reason != "tool_use": - break - # execute tools... - if idle_requested: - break - - # -- IDLE PHASE -- - self._set_status(name, "idle") - resume = self._idle_poll(name, messages) - if not resume: - self._set_status(name, "shutdown") - return - self._set_status(name, "working") -``` - -2. 空闲阶段循环轮询收件箱和任务看板。 - -```python -def _idle_poll(self, name, messages): - for _ in range(IDLE_TIMEOUT // POLL_INTERVAL): # 60s / 5s = 12 - time.sleep(POLL_INTERVAL) - inbox = BUS.read_inbox(name) - if inbox: - messages.append({"role": "user", - "content": f"{inbox}"}) - return True - unclaimed = scan_unclaimed_tasks() - if unclaimed: - claim_task(unclaimed[0]["id"], name) - messages.append({"role": "user", - "content": f"Task #{unclaimed[0]['id']}: " - f"{unclaimed[0]['subject']}"}) - return True - return False # timeout -> shutdown -``` - -3. 任务看板扫描: 找 pending 状态、无 owner、未被阻塞的任务。 - -```python -def scan_unclaimed_tasks() -> list: - unclaimed = [] - for f in sorted(TASKS_DIR.glob("task_*.json")): - task = json.loads(f.read_text()) - if (task.get("status") == "pending" - and not task.get("owner") - and not task.get("blockedBy")): - unclaimed.append(task) - return unclaimed -``` - -4. 身份重注入: 上下文过短 (说明发生了压缩) 时, 在开头插入身份块。 - -```python -if len(messages) <= 3: - messages.insert(0, {"role": "user", - "content": f"You are '{name}', role: {role}, " - f"team: {team_name}. Continue your work."}) - messages.insert(1, {"role": "assistant", - "content": f"I am {name}. 
Continuing."}) -``` - -## 相对 s10 的变更 - -| 组件 | 之前 (s10) | 之后 (s11) | -|----------------|------------------|----------------------------------| -| Tools | 12 | 14 (+idle, +claim_task) | -| 自治性 | 领导指派 | 自组织 | -| 空闲阶段 | 无 | 轮询收件箱 + 任务看板 | -| 任务认领 | 仅手动 | 自动认领未分配任务 | -| 身份 | 系统提示 | + 压缩后重注入 | -| 超时 | 无 | 60 秒空闲 -> 自动关机 | - -## 试一试 - -```sh -cd learn-claude-code -python agents/s11_autonomous_agents.py -``` - -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): - -1. `Create 3 tasks on the board, then spawn alice and bob. Watch them auto-claim.` -2. `Spawn a coder teammate and let it find work from the task board itself` -3. `Create tasks with dependencies. Watch teammates respect the blocked order.` -4. 输入 `/tasks` 查看带 owner 的任务看板 -5. 输入 `/team` 监控谁在工作、谁在空闲 diff --git a/docs/zh/s11-error-recovery.md b/docs/zh/s11-error-recovery.md new file mode 100644 index 000000000..81da62625 --- /dev/null +++ b/docs/zh/s11-error-recovery.md @@ -0,0 +1,391 @@ +# s11: Error Recovery (错误恢复) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > [ s11 ] > s12 > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *错误不是例外,而是主循环必须预留出来的一条正常分支。* + +## 这一章要解决什么问题 + +到了 `s10`,你的 agent 已经有了: + +- 主循环 +- 工具调用 +- 规划 +- 上下文压缩 +- 权限、hook、memory、system prompt + +这时候系统已经不再是一个“只会聊天”的 demo,而是一个真的在做事的程序。 + +问题也随之出现: + +- 模型输出写到一半被截断 +- 上下文太长,请求直接失败 +- 网络暂时抖动,API 超时或限流 + +如果没有恢复机制,主循环会在第一个错误上直接停住。 +这对初学者很危险,因为他们会误以为“agent 不稳定是模型的问题”。 + +实际上,很多失败并不是“任务真的失败了”,而只是: + +**这一轮需要换一种继续方式。** + +所以这一章的目标只有一个: + +**把“报错就崩”升级成“先判断错误类型,再选择恢复路径”。** + +## 建议联读 + +- 如果你开始分不清“为什么这一轮还在继续”,先回 [`s00c-query-transition-model.md`](./s00c-query-transition-model.md),重新确认 transition reason 为什么是独立状态。 +- 如果你在恢复逻辑里又把上下文压缩和错误恢复混成一团,建议顺手回看 [`s06-context-compact.md`](./s06-context-compact.md),区分“为了缩上下文而压缩”和“因为失败而恢复”。 +- 如果你准备继续往 `s12` 走,建议把 [`data-structures.md`](./data-structures.md) 放在旁边,因为后面任务系统会在“恢复状态之外”再引入新的 durable work 状态。 + +## 先解释几个名词 + +### 什么叫恢复 + +恢复,不是把所有错误都藏起来。 + +恢复的意思是: + +- 先判断这是不是临时问题 +- 如果是,就尝试一个有限次数的补救动作 +- 
如果补救失败,再把失败明确告诉用户
+
+### 什么叫重试预算
+
+重试预算,就是“最多试几次”。
+
+比如:
+
+- 续写最多 3 次
+- 网络重连最多 3 次
+
+如果没有这个预算,程序就可能无限循环。
+
+### 什么叫状态机
+
+状态机这个词听起来很大,其实意思很简单:
+
+> 一个东西会在几个明确状态之间按规则切换。
+
+在这一章里,主循环就从“普通执行”变成了:
+
+- 正常执行
+- 续写恢复
+- 压缩恢复
+- 退避重试
+- 最终失败
+
+## 最小心智模型
+
+不要把错误恢复想得太神秘。
+
+教学版只需要先区分 3 类问题:
+
+```text
+1. 输出被截断
+   模型还没说完,但 token 用完了
+
+2. 上下文太长
+   请求装不进模型窗口了
+
+3. 临时连接失败
+   网络、超时、限流、服务抖动
+```
+
+对应 3 条恢复路径:
+
+```text
+LLM call
+  |
+  +-- stop_reason == "max_tokens"
+  |     -> 注入续写提示
+  |     -> 再试一次
+  |
+  +-- prompt too long
+  |     -> 压缩旧上下文
+  |     -> 再试一次
+  |
+  +-- timeout / rate limit / transient API error
+        -> 等一会儿
+        -> 再试一次
+```
+
+这就是最小但正确的恢复模型。
+
+## 关键数据结构
+
+### 1. 恢复状态
+
+```python
+recovery_state = {
+    "continuation_attempts": 0,
+    "compact_attempts": 0,
+    "transport_attempts": 0,
+}
+```
+
+它的作用不是“记录一切”,而是:
+
+- 防止无限重试
+- 让每种恢复路径各算各的次数
+
+### 2. 恢复决策
+
+```python
+{
+    "kind": "none" | "continue" | "compact" | "backoff" | "fail",
+    "reason": "why this branch was chosen",
+}
+```
+
+把“错误长什么样”和“接下来怎么做”分开,会更清楚。
+
+其中 `none` 表示这一轮是正常响应:主循环不做任何恢复动作,直接进入正常工具分支。
+
+### 3. 续写提示
+
+```python
+CONTINUE_MESSAGE = (
+    "Output limit hit. Continue directly from where you stopped. "
+    "Do not restart or repeat."
+)
+```
+
+这条提示非常重要。
+
+因为如果你只说“继续”,模型经常会:
+
+- 重新总结
+- 重新开头
+- 重复已经输出过的内容
+
+## 最小实现
+
+先写一个恢复选择器:
+
+```python
+def choose_recovery(stop_reason: str | None, error_text: str | None) -> dict:
+    if stop_reason == "max_tokens":
+        return {"kind": "continue", "reason": "output truncated"}
+
+    if error_text is None:
+        # 没有报错、也没被截断:正常响应,不需要任何恢复
+        return {"kind": "none", "reason": "normal response"}
+
+    if "prompt" in error_text and "long" in error_text:
+        return {"kind": "compact", "reason": "context too large"}
+
+    if any(word in error_text for word in [
+        "timeout", "rate", "unavailable", "connection"
+    ]):
+        return {"kind": "backoff", "reason": "transient transport failure"}
+
+    return {"kind": "fail", "reason": "unknown or non-recoverable error"}
+```
+
+再把它接进主循环:
+
+```python
+while True:
+    try:
+        response = client.messages.create(...)
+ decision = choose_recovery(response.stop_reason, None) + except Exception as e: + response = None + decision = choose_recovery(None, str(e).lower()) + + if decision["kind"] == "continue": + messages.append({"role": "user", "content": CONTINUE_MESSAGE}) + continue + + if decision["kind"] == "compact": + messages = auto_compact(messages) + continue + + if decision["kind"] == "backoff": + time.sleep(backoff_delay(...)) + continue + + if decision["kind"] == "fail": + break + + # 正常工具处理 +``` + +注意这里的重点不是代码花哨,而是: + +- 先分类 +- 再选动作 +- 每条动作有自己的预算 + +## 三条恢复路径分别在补什么洞 + +### 路径 1:输出被截断时,做续写 + +这个问题的本质不是“模型不会”,而是“这一轮输出空间不够”。 + +所以最小补法是: + +1. 追加一条续写消息 +2. 告诉模型不要重来,不要重复 +3. 让主循环继续 + +```python +if response.stop_reason == "max_tokens": + if state["continuation_attempts"] >= 3: + return "Error: output recovery exhausted" + state["continuation_attempts"] += 1 + messages.append({"role": "user", "content": CONTINUE_MESSAGE}) + continue +``` + +### 路径 2:上下文太长时,先压缩再重试 + +这里要先明确一点: + +压缩不是“把历史删掉”,而是: + +**把旧对话从原文,变成一份仍然可继续工作的摘要。** + +最小压缩结果建议至少保留: + +- 当前任务是什么 +- 已经做了什么 +- 关键决定是什么 +- 下一步准备做什么 + +```python +def auto_compact(messages: list) -> list: + summary = summarize_messages(messages) + return [{ + "role": "user", + "content": "This session was compacted. Continue from this summary:\n" + summary, + }] +``` + +### 路径 3:连接抖动时,退避重试 + +“退避”这个词的意思是: + +> 别立刻再打一次,而是等一小会儿再试。 + +为什么要等? + +因为这类错误往往是临时拥堵: + +- 刚超时 +- 刚限流 +- 服务器刚好抖了一下 + +如果你瞬间连续重打,只会更容易失败。 + +```python +def backoff_delay(attempt: int) -> float: + return min(1.0 * (2 ** attempt), 30.0) + random.uniform(0, 1) +``` + +## 如何接到主循环里 + +最干净的接法,是把恢复逻辑放在两个位置: + +### 位置 1:模型调用外层 + +负责处理: + +- API 报错 +- 网络错误 +- 超时 + +### 位置 2:拿到 response 以后 + +负责处理: + +- `stop_reason == "max_tokens"` +- 正常的 `tool_use` +- 正常的结束 + +也就是说,主循环现在不只是“调模型 -> 执行工具”,而是: + +```text +1. 调模型 +2. 如果调用报错,判断是否可以恢复 +3. 如果拿到响应,判断是否被截断 +4. 如果需要恢复,就修改 messages 或等待 +5. 如果不需要恢复,再进入正常工具分支 +``` + +## 初学者最容易犯的错 + +### 1. 
把所有错误都当成一种错误 + +这样会导致: + +- 该续写的去压缩 +- 该等待的去重试 +- 该失败的却无限拖延 + +### 2. 没有重试预算 + +没有预算,主循环就可能永远卡在“继续”“继续”“继续”。 + +### 3. 续写提示写得太模糊 + +只写一个“continue”通常不够。 +你要明确告诉模型: + +- 不要重复 +- 不要重新总结 +- 直接从中断点接着写 + +### 4. 压缩后没有告诉模型“这是续场” + +如果压缩后只给一份摘要,不告诉模型“这是前文摘要”,模型很可能重新向用户提问。 + +### 5. 恢复过程完全没有日志 + +教学系统最好打印类似: + +- `[Recovery] continue` +- `[Recovery] compact` +- `[Recovery] backoff` + +这样读者才看得见主循环到底做了什么。 + +## 这一章和前后章节怎么衔接 + +- `s06` 讲的是“什么时候该压缩” +- `s10` 讲的是“系统提示词怎么组装” +- `s11` 讲的是“当执行失败时,主循环怎么续下去” +- `s12` 开始,恢复机制会保护更长、更复杂的任务流 + +所以 `s11` 的位置非常关键。 + +它不是外围小功能,而是: + +**把 agent 从“能跑”推进到“遇到问题也能继续跑”。** + +## 教学边界 + +这一章先把 3 条最小恢复路径讲稳就够了: + +- 输出截断后续写 +- 上下文过长后压缩再试 +- 请求抖动后退避重试 + +对教学主线来说,重点不是把所有“为什么继续下一轮”的原因一次讲全,而是先让读者明白: + +**恢复不是简单 try/except,而是系统知道该怎么续下去。** + +更大的 query 续行模型、预算续行、hook 介入这些内容,应该放回控制平面的桥接文档里看,而不是抢掉这章主线。 + +## 试一试 + +```sh +cd learn-claude-code +python agents/s11_error_recovery.py +``` + +可以试试这些任务: + +1. 让模型生成一段特别长的内容,观察它是否会自动续写。 +2. 连续读取一些大文件,观察上下文压缩是否会介入。 +3. 临时制造一次请求失败,观察系统是否会退避重试。 + +读这一章时,你真正要记住的不是某个具体异常名,而是这条主线: + +**错误先分类,恢复再执行,失败最后才暴露给用户。** diff --git a/docs/zh/s12-task-system.md b/docs/zh/s12-task-system.md new file mode 100644 index 000000000..10b68f172 --- /dev/null +++ b/docs/zh/s12-task-system.md @@ -0,0 +1,349 @@ +# s12: Task System (任务系统) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > [ s12 ] > s13 > s14 > s15 > s16 > s17 > s18 > s19` + +> *Todo 只能提醒你“有事要做”,任务系统才能告诉你“先做什么、谁在等谁、哪一步还卡着”。* + +## 这一章要解决什么问题 + +`s03` 的 todo 已经能帮 agent 把大目标拆成几步。 + +但 todo 仍然有两个明显限制: + +- 它更像当前会话里的临时清单 +- 它不擅长表达“谁先谁后、谁依赖谁” + +例如下面这组工作: + +```text +1. 先写解析器 +2. 再写语义检查 +3. 测试和文档可以并行 +4. 
最后整体验收 +``` + +这已经不是单纯的列表,而是一张“依赖关系图”。 + +如果没有专门的任务系统,agent 很容易出现这些问题: + +- 前置工作没做完,就贸然开始后面的任务 +- 某个任务完成以后,不知道解锁了谁 +- 多个 agent 协作时,没有统一任务板可读 + +所以这一章要做的升级是: + +**把“会话里的 todo”升级成“可持久化的任务图”。** + +## 建议联读 + +- 如果你刚从 `s03` 过来,先回 [`data-structures.md`](./data-structures.md),重新确认 `TodoItem / PlanState` 和 `TaskRecord` 不是同一层状态。 +- 如果你开始把“对象边界”读混,先回 [`entity-map.md`](./entity-map.md),把 message、task、runtime task、teammate 这几层拆开。 +- 如果你准备继续读 `s13`,建议把 [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) 先放在手边,因为从这里开始最容易把 durable task 和 runtime task 混成一个词。 + +## 先把几个词讲明白 + +### 什么是任务 + +这里的 `task` 指的是: + +> 一个可以被跟踪、被分配、被完成、被阻塞的小工作单元。 + +它不是整段用户需求,而是用户需求拆出来的一小块工作。 + +### 什么是依赖 + +依赖的意思是: + +> 任务 B 必须等任务 A 完成,才能开始。 + +### 什么是任务图 + +任务图就是: + +> 任务节点 + 依赖连线 + +你可以把它理解成: + +- 点:每个任务 +- 线:谁依赖谁 + +### 什么是 ready + +`ready` 的意思很简单: + +> 这条任务现在已经满足开工条件。 + +也就是: + +- 自己还没开始 +- 前置依赖已经全部完成 + +## 最小心智模型 + +本章最重要的,不是复杂调度算法,而是先回答 4 个问题: + +1. 现在有哪些任务? +2. 每个任务是什么状态? +3. 哪些任务还被卡住? +4. 哪些任务已经可以开始? + +只要这 4 个问题能稳定回答,一个最小任务系统就已经成立了。 + +## 关键数据结构 + +### 1. TaskRecord + +```python +task = { + "id": 1, + "subject": "Write parser", + "description": "", + "status": "pending", + "blockedBy": [], + "blocks": [], + "owner": "", +} +``` + +每个字段都对应一个很实用的问题: + +- `id`:怎么唯一找到这条任务 +- `subject`:这条任务一句话在做什么 +- `description`:还有哪些补充说明 +- `status`:现在走到哪一步 +- `blockedBy`:还在等谁 +- `blocks`:它完成后会解锁谁 +- `owner`:现在由谁来做 + +### 2. TaskStatus + +教学版先只保留最少 4 个状态: + +```text +pending -> in_progress -> completed +deleted +``` + +解释如下: + +- `pending`:还没开始 +- `in_progress`:已经有人在做 +- `completed`:已经做完 +- `deleted`:逻辑删除,不再参与工作流 + +### 3. 
Ready Rule + +这是本章最关键的一条判断规则: + +```python +def is_ready(task: dict) -> bool: + return task["status"] == "pending" and not task["blockedBy"] +``` + +如果你把这条规则讲明白,读者就会第一次真正明白: + +**任务系统的核心不是“保存清单”,而是“判断什么时候能开工”。** + +## 最小实现 + +### 第一步:让任务落盘 + +不要只把任务放在 `messages` 里。 +教学版最简单的做法,就是“一任务一文件”: + +```text +.tasks/ + task_1.json + task_2.json + task_3.json +``` + +创建任务时,直接写成一条 JSON 记录: + +```python +class TaskManager: + def create(self, subject: str, description: str = "") -> dict: + task = { + "id": self._next_id(), + "subject": subject, + "description": description, + "status": "pending", + "blockedBy": [], + "blocks": [], + "owner": "", + } + self._save(task) + return task +``` + +### 第二步:把依赖关系写成双向 + +如果任务 A 完成后会解锁任务 B,最好同时维护两边: + +- A 的 `blocks` 里有 B +- B 的 `blockedBy` 里有 A + +```python +def add_dependency(self, task_id: int, blocks_id: int): + task = self._load(task_id) + blocked = self._load(blocks_id) + + if blocks_id not in task["blocks"]: + task["blocks"].append(blocks_id) + if task_id not in blocked["blockedBy"]: + blocked["blockedBy"].append(task_id) + + self._save(task) + self._save(blocked) +``` + +这样做的好处是: + +- 从前往后读得懂 +- 从后往前也读得懂 + +### 第三步:完成任务时自动解锁后续任务 + +```python +def complete(self, task_id: int): + task = self._load(task_id) + task["status"] = "completed" + self._save(task) + + for other in self._all_tasks(): + if task_id in other["blockedBy"]: + other["blockedBy"].remove(task_id) + self._save(other) +``` + +这一步非常关键。 + +因为它说明: + +**任务系统不是静态记录表,而是会随着完成事件自动推进的工作图。** + +### 第四步:把任务工具接给模型 + +教学版最小工具集建议先只做这 4 个: + +- `task_create` +- `task_update` +- `task_get` +- `task_list` + +这样模型就能: + +- 新建任务 +- 更新状态 +- 看单条任务 +- 看整张任务板 + +## 如何接到主循环里 + +从 `s12` 开始,主循环第一次拥有了“会话外状态”。 + +典型流程是: + +```text +用户提出复杂目标 + -> +模型决定先拆任务 + -> +调用 task_create / task_update + -> +任务落到 .tasks/ + -> +后续轮次继续读取并推进 +``` + +这里要牢牢记住一句话: + +**todo 更像本轮计划,task 更像长期工作板。** + +## 这一章和 s03、s13 的边界 + +这一层边界必须讲清楚,不然后面一定会混。 + +### 和 `s03` 的区别 + +| 机制 | 更适合什么 | +|---|---| +| `todo` | 当前会话里快速列步骤 | +| 
`task` | 持久化工作、依赖关系、多人协作 | + +如果只是“先看文件,再改代码,再跑测试”,todo 往往就够。 +如果是“跨很多轮、多人协作、还要管依赖”,就要上 task。 + +### 和 `s13` 的区别 + +本章的 `task` 指的是: + +> 一条工作目标 + +它回答的是: + +- 要做什么 +- 现在做到哪一步 +- 谁在等谁 + +它不是: + +- 某个正在后台跑的 `pytest` +- 某个正在执行的 worker +- 某条当前活着的执行线程 + +后面这些属于下一章要讲的: + +> 运行中的执行任务 + +## 初学者最容易犯的错 + +### 1. 只会创建任务,不会维护依赖 + +那最后得到的还是一张普通清单,不是任务图。 + +### 2. 任务只放内存,不落盘 + +系统一重启,整个工作结构就没了。 + +### 3. 完成任务后不自动解锁后续任务 + +这样系统永远不知道下一步谁可以开工。 + +### 4. 把工作目标和运行中的执行混成一层 + +这会导致后面 `s13` 的后台任务系统很难讲清。 + +## 教学边界 + +这一章先要守住的,不是任务平台以后还能长出多少管理功能,而是任务记录本身的最小主干: + +- `TaskRecord` +- 依赖关系 +- 持久化 +- 就绪判断 + +只要读者已经能把 todo 和 task、工作目标和运行执行明确分开,并且能手写一个会解锁后续任务的最小任务图,这章就已经讲到位了。 + +## 学完这一章,你应该真正掌握什么 + +学完以后,你应该能独立说清这几件事: + +1. 任务系统比 todo 多出来的核心能力,是“依赖关系”和“持久化”。 +2. `TaskRecord` 是本章最关键的数据结构。 +3. `blockedBy` / `blocks` 让系统能看懂前后关系。 +4. `is_ready()` 让系统能判断“谁现在可以开始”。 + +如果这 4 件事都已经清楚,说明你已经能从 0 到 1 手写一个最小任务系统。 + +## 下一章学什么 + +这一章解决的是: + +> 工作目标如何被长期组织。 + +下一章 `s13` 要解决的是: + +> 某个慢命令正在后台跑时,主循环怎么继续前进。 + +也就是从“工作图”走向“运行时执行层”。 diff --git a/docs/zh/s12-worktree-task-isolation.md b/docs/zh/s12-worktree-task-isolation.md deleted file mode 100644 index 31bddba23..000000000 --- a/docs/zh/s12-worktree-task-isolation.md +++ /dev/null @@ -1,123 +0,0 @@ -# s12: Worktree + Task Isolation (Worktree 任务隔离) - -`s01 > s02 > s03 > s04 > s05 > s06 | s07 > s08 > s09 > s10 > s11 > [ s12 ]` - -> *"各干各的目录, 互不干扰"* -- 任务管目标, worktree 管目录, 按 ID 绑定。 -> -> **Harness 层**: 目录隔离 -- 永不碰撞的并行执行通道。 - -## 问题 - -到 s11, Agent 已经能自主认领和完成任务。但所有任务共享一个目录。两个 Agent 同时重构不同模块 -- A 改 `config.py`, B 也改 `config.py`, 未提交的改动互相污染, 谁也没法干净回滚。 - -任务板管 "做什么" 但不管 "在哪做"。解法: 给每个任务一个独立的 git worktree 目录, 用任务 ID 把两边关联起来。 - -## 解决方案 - -``` -Control plane (.tasks/) Execution plane (.worktrees/) -+------------------+ +------------------------+ -| task_1.json | | auth-refactor/ | -| status: in_progress <------> branch: wt/auth-refactor -| worktree: "auth-refactor" | task_id: 1 | -+------------------+ +------------------------+ -| task_2.json | | ui-login/ | -| status: 
pending <------> branch: wt/ui-login -| worktree: "ui-login" | task_id: 2 | -+------------------+ +------------------------+ - | - index.json (worktree registry) - events.jsonl (lifecycle log) - -State machines: - Task: pending -> in_progress -> completed - Worktree: absent -> active -> removed | kept -``` - -## 工作原理 - -1. **创建任务。** 先把目标持久化。 - -```python -TASKS.create("Implement auth refactor") -# -> .tasks/task_1.json status=pending worktree="" -``` - -2. **创建 worktree 并绑定任务。** 传入 `task_id` 自动将任务推进到 `in_progress`。 - -```python -WORKTREES.create("auth-refactor", task_id=1) -# -> git worktree add -b wt/auth-refactor .worktrees/auth-refactor HEAD -# -> index.json gets new entry, task_1.json gets worktree="auth-refactor" -``` - -绑定同时写入两侧状态: - -```python -def bind_worktree(self, task_id, worktree): - task = self._load(task_id) - task["worktree"] = worktree - if task["status"] == "pending": - task["status"] = "in_progress" - self._save(task) -``` - -3. **在 worktree 中执行命令。** `cwd` 指向隔离目录。 - -```python -subprocess.run(command, shell=True, cwd=worktree_path, - capture_output=True, text=True, timeout=300) -``` - -4. **收尾。** 两种选择: - - `worktree_keep(name)` -- 保留目录供后续使用。 - - `worktree_remove(name, complete_task=True)` -- 删除目录, 完成绑定任务, 发出事件。一个调用搞定拆除 + 完成。 - -```python -def remove(self, name, force=False, complete_task=False): - self._run_git(["worktree", "remove", wt["path"]]) - if complete_task and wt.get("task_id") is not None: - self.tasks.update(wt["task_id"], status="completed") - self.tasks.unbind_worktree(wt["task_id"]) - self.events.emit("task.completed", ...) -``` - -5. 
**事件流。** 每个生命周期步骤写入 `.worktrees/events.jsonl`: - -```json -{ - "event": "worktree.remove.after", - "task": {"id": 1, "status": "completed"}, - "worktree": {"name": "auth-refactor", "status": "removed"}, - "ts": 1730000000 -} -``` - -事件类型: `worktree.create.before/after/failed`, `worktree.remove.before/after/failed`, `worktree.keep`, `task.completed`。 - -崩溃后从 `.tasks/` + `.worktrees/index.json` 重建现场。会话记忆是易失的; 磁盘状态是持久的。 - -## 相对 s11 的变更 - -| 组件 | 之前 (s11) | 之后 (s12) | -|--------------------|----------------------------|----------------------------------------------| -| 协调 | 任务板 (owner/status) | 任务板 + worktree 显式绑定 | -| 执行范围 | 共享目录 | 每个任务独立目录 | -| 可恢复性 | 仅任务状态 | 任务状态 + worktree 索引 | -| 收尾 | 任务完成 | 任务完成 + 显式 keep/remove | -| 生命周期可见性 | 隐式日志 | `.worktrees/events.jsonl` 显式事件流 | - -## 试一试 - -```sh -cd learn-claude-code -python agents/s12_worktree_task_isolation.py -``` - -试试这些 prompt (英文 prompt 对 LLM 效果更好, 也可以用中文): - -1. `Create tasks for backend auth and frontend login page, then list tasks.` -2. `Create worktree "auth-refactor" for task 1, then bind task 2 to a new worktree "ui-login".` -3. `Run "git status --short" in worktree "auth-refactor".` -4. `Keep worktree "ui-login", then list worktrees and inspect events.` -5. 
`Remove worktree "auth-refactor" with complete_task=true, then list tasks/worktrees/events.` diff --git a/docs/zh/s13-background-tasks.md b/docs/zh/s13-background-tasks.md new file mode 100644 index 000000000..0327565a6 --- /dev/null +++ b/docs/zh/s13-background-tasks.md @@ -0,0 +1,367 @@ +# s13: Background Tasks (后台任务) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > [ s13 ] > s14 > s15 > s16 > s17 > s18 > s19` + +> *慢命令可以在旁边等,主循环不必陪着发呆。* + +## 这一章要解决什么问题 + +前面几章里,工具调用基本都是: + +```text +模型发起 + -> +立刻执行 + -> +立刻返回结果 +``` + +这对短命令没有问题。 +但一旦遇到这些慢操作,就会卡住: + +- `npm install` +- `pytest` +- `docker build` +- 大型代码生成或检查任务 + +如果主循环一直同步等待,会出现两个坏处: + +- 模型在等待期间什么都做不了 +- 用户明明还想继续别的工作,却被整轮流程堵住 + +所以这一章要解决的是: + +**把“慢执行”移到后台,让主循环继续推进别的事情。** + +## 建议联读 + +- 如果你还没有彻底稳住“任务目标”和“执行槽位”是两层对象,先看 [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md)。 +- 如果你开始分不清哪些状态该落在 `RuntimeTaskRecord`、哪些还应留在任务板,回看 [`data-structures.md`](./data-structures.md)。 +- 如果你开始把后台执行理解成“另一条主循环”,先看 [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md),重新校正“并行的是执行与等待,不是主循环本身”。 + +## 先把几个词讲明白 + +### 什么叫前台 + +前台指的是: + +> 主循环这轮发起以后,必须立刻等待结果的执行路径。 + +### 什么叫后台 + +后台不是神秘系统。 +后台只是说: + +> 命令先在另一条执行线上跑,主循环先去做别的事。 + +### 什么叫通知队列 + +通知队列就是一条“稍后再告诉主循环”的收件箱。 + +后台任务完成以后,不是直接把全文硬塞回模型, +而是先写一条摘要通知,等下一轮再统一带回去。 + +## 最小心智模型 + +这一章最关键的句子是: + +**主循环仍然只有一条,并行的是等待,不是主循环本身。** + +可以把结构画成这样: + +```text +主循环 + | + +-- background_run("pytest") + | -> 立刻返回 task_id + | + +-- 继续别的工作 + | + +-- 下一轮模型调用前 + -> drain_notifications() + -> 把摘要注入 messages + +后台执行线 + | + +-- 真正执行 pytest + +-- 完成后写入通知队列 +``` + +如果读者能牢牢记住这张图,后面扩展成更复杂的异步系统也不会乱。 + +## 关键数据结构 + +### 1. 
RuntimeTaskRecord + +```python +task = { + "id": "a1b2c3d4", + "command": "pytest", + "status": "running", + "started_at": 1710000000.0, + "result_preview": "", + "output_file": "", +} +``` + +这些字段分别表示: + +- `id`:唯一标识 +- `command`:正在跑什么命令 +- `status`:运行中、完成、失败、超时 +- `started_at`:什么时候开始 +- `result_preview`:先给模型看的简短摘要 +- `output_file`:完整输出写到了哪里 + +教学版再往前走一步时,建议把它直接落成两份文件: + +```text +.runtime-tasks/ + a1b2c3d4.json # RuntimeTaskRecord + a1b2c3d4.log # 完整输出 +``` + +这样读者会更容易理解: + +- `json` 记录的是运行状态 +- `log` 保存的是完整产物 +- 通知只负责把 `preview` 带回主循环 + +### 2. Notification + +```python +notification = { + "type": "background_completed", + "task_id": "a1b2c3d4", + "status": "completed", + "preview": "tests passed", +} +``` + +通知只负责做一件事: + +> 告诉主循环“有结果回来了,你要不要看”。 + +它不是完整日志本体。 + +## 最小实现 + +### 第一步:登记后台任务 + +```python +class BackgroundManager: + def __init__(self): + self.tasks = {} + self.notifications = [] + self.lock = threading.Lock() +``` + +这里最少要有两块状态: + +- `tasks`:当前有哪些后台任务 +- `notifications`:哪些结果已经回来,等待主循环领取 + +### 第二步:启动后台执行线 + +“线程”这个词第一次见可能会有点紧张。 +你可以先把它理解成: + +> 同一个程序里,另一条可以独立往前跑的执行线。 + +```python +def run(self, command: str) -> str: + task_id = new_id() + self.tasks[task_id] = { + "id": task_id, + "command": command, + "status": "running", + } + + thread = threading.Thread( + target=self._execute, + args=(task_id, command), + daemon=True, + ) + thread.start() + return task_id +``` + +这一步最重要的不是线程本身,而是: + +**主循环拿到 `task_id` 后就可以先继续往前走。** + +### 第三步:完成后写通知 + +```python +def _execute(self, task_id: str, command: str): + try: + result = subprocess.run(..., timeout=300) + status = "completed" + preview = (result.stdout + result.stderr)[:500] + except subprocess.TimeoutExpired: + status = "timeout" + preview = "command timed out" + + with self.lock: + self.tasks[task_id]["status"] = status + self.notifications.append({ + "type": "background_completed", + "task_id": task_id, + "status": status, + "preview": preview, + }) +``` + +这里体现的思想很重要: + +**后台执行负责产出结果,通知队列负责把结果送回主循环。** 
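
把第一到第三步拼在一起,可以得到一个能直接运行的最小验证版本(`MiniBackground` 这个类名是教学假设;`drain_notifications` 就是后面“排空通知”一步要用到的取走动作):

```python
import subprocess
import threading
import uuid


class MiniBackground:
    """教学用最小后台执行器:任务表 + 通知队列,两块共享状态由同一把锁保护。"""

    def __init__(self):
        self.tasks = {}          # task_id -> 简化版 RuntimeTaskRecord
        self.notifications = []  # 等待主循环领取的通知队列
        self.lock = threading.Lock()

    def run(self, command: str) -> str:
        # 登记任务后立刻返回 task_id,主循环不在这里等待
        task_id = uuid.uuid4().hex[:8]
        with self.lock:
            self.tasks[task_id] = {"id": task_id, "command": command,
                                   "status": "running"}
        threading.Thread(target=self._execute, args=(task_id, command),
                         daemon=True).start()
        return task_id

    def _execute(self, task_id: str, command: str):
        # 后台执行线:真正跑命令,结束后只往通知队列写一条摘要
        try:
            result = subprocess.run(command, shell=True, capture_output=True,
                                    text=True, timeout=30)
            status = "completed"
            preview = (result.stdout + result.stderr)[:500]
        except subprocess.TimeoutExpired:
            status, preview = "timeout", "command timed out"
        with self.lock:
            self.tasks[task_id]["status"] = status
            self.notifications.append({
                "type": "background_completed",
                "task_id": task_id,
                "status": status,
                "preview": preview,
            })

    def drain_notifications(self) -> list:
        # 一次性取走并清空队列,保证同一条通知不会被重复注入
        with self.lock:
            drained, self.notifications = self.notifications, []
        return drained
```

`run("echo hello")` 会立刻返回一个 8 位 `task_id`;稍等片刻后,`drain_notifications()` 会交出一条 `completed` 摘要,再调一次就是空列表。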
+ +### 第四步:下一轮前排空通知 + +```python +def before_model_call(messages: list): + notifications = bg.drain_notifications() + if not notifications: + return + + text = "\n".join( + f"[bg:{n['task_id']}] {n['status']} - {n['preview']}" + for n in notifications + ) + messages.append({"role": "user", "content": text}) +``` + +这样模型在下一轮就会知道: + +- 哪个后台任务完成了 +- 是成功、失败还是超时 +- 如果要看全文,该再去读文件 + +## 为什么完整输出不要直接塞回 prompt + +这是本章必须讲透的点。 + +如果后台任务输出几万行日志,你不能每次都把全文塞回上下文。 +更稳的做法是: + +1. 完整输出写磁盘 +2. 通知里只放简短摘要 +3. 模型真的要看全文时,再调用 `read_file` + +这背后的心智很重要: + +**通知负责提醒,文件负责存原文。** + +## 如何接到主循环里 + +从 `s13` 开始,主循环多出一个标准前置步骤: + +```text +1. 先排空通知队列 +2. 再调用模型 +3. 普通工具照常同步执行 +4. 如果模型调用 background_run,就登记后台任务并立刻返回 task_id +5. 下一轮再把后台结果带回模型 +``` + +教学版最小工具建议先做两个: + +- `background_run` +- `background_check` + +这样已经足够支撑最小异步执行闭环。 + +## 这一章和任务系统的边界 + +这是本章最容易和 `s12` 混掉的地方。 + +### `s12` 的 task 是什么 + +`s12` 里的 `task` 是: + +> 工作目标 + +它关心的是: + +- 要做什么 +- 谁依赖谁 +- 现在总体进度如何 + +### `s13` 的 background task 是什么 + +本章里的后台任务是: + +> 正在运行的执行单元 + +它关心的是: + +- 哪个命令正在跑 +- 跑到什么状态 +- 结果什么时候回来 + +所以最稳的记法是: + +- `task` 更像工作板 +- `background task` 更像运行中的作业 + +两者相关,但不是同一个东西。 + +## 初学者最容易犯的错 + +### 1. 以为“后台”就是更复杂的主循环 + +不是。 +主循环仍然尽量保持单主线。 + +### 2. 只开线程,不登记状态 + +这样任务一多,你根本不知道: + +- 谁还在跑 +- 谁已经完成 +- 谁失败了 + +### 3. 把长日志全文塞进上下文 + +上下文很快就会被撑爆。 + +### 4. 把 `s12` 的工作目标和本章的运行任务混为一谈 + +这会让后面多 agent 和调度章节全部打结。 + +## 教学边界 + +这一章只需要先把一个最小运行时模式讲清楚: + +- 慢工作在后台跑 +- 主循环继续保持单主线 +- 结果通过通知路径在后面回到模型 + +只要这条模式稳了,线程池、更多 worker 类型、更复杂的事件系统都可以后补。 + +这章真正要让读者守住的是: + +**并行的是等待与执行槽位,不是主循环本身。** + +## 学完这一章,你应该真正掌握什么 + +学完以后,你应该能独立复述下面几句话: + +1. 主循环只有一条,并行的是等待,不是主循环本身。 +2. 后台任务至少需要“任务表 + 通知队列”两块状态。 +3. `background_run` 应该立刻返回 `task_id`,而不是同步卡住。 +4. 
Notifications carry only summaries; full output lives in files.
+
+If these 4 statements are already crystal clear, you have grasped the core of the background task system.
+
+## What the next chapter covers
+
+This chapter answers:
+
+> How do slow commands run in the background?
+
+The next chapter, `s14`, answers:
+
+> What if even "starting a background task" is not necessarily triggered by the current user, but by time?
+
+In other words, we move from "asynchronous execution" on to "time-based triggering".
diff --git a/docs/zh/s13a-runtime-task-model.md b/docs/zh/s13a-runtime-task-model.md
new file mode 100644
index 000000000..ee107fb9b
--- /dev/null
+++ b/docs/zh/s13a-runtime-task-model.md
@@ -0,0 +1,276 @@
+# s13a: Runtime Task Model
+
+> This bridging document tackles one especially confusing question:
+>
+> **The tasks on the task board, and the "running tasks" behind background work, teammates, and monitors, are not the same thing.**
+
+## How to read this alongside other docs
+
+This piece is best read sandwiched between the following documents:
+
+- Read [`s12-task-system.md`](./s12-task-system.md) first to confirm what work-graph tasks are about.
+- Then read [`s13-background-tasks.md`](./s13-background-tasks.md) to confirm what background execution is about.
+- If the terms start to blur, go back to [`glossary.md`](./glossary.md).
+- To fully line up the fields and states, cross-reference [`data-structures.md`](./data-structures.md) and [`entity-map.md`](./entity-map.md).
+
+## Why this deserves its own document
+
+In the main line:
+
+- `s12` covers the task system
+- `s13` covers background tasks
+
+Neither chapter is wrong on its own.
+But without an extra bridging layer, many readers will soon conflate the two kinds of "task".
+
+For example:
+
+- "Implement the auth module" on the task board
+- "pytest is running" in background execution
+- "alice is making code changes" in teammate execution
+
+All of these can be called "tasks", but they do not live on the same layer.
+
+For the whole repository to approach full marks, this layer has to be nailed down.
+
+## Two completely different kinds of "task"
+
+### Kind one: work-graph tasks
+
+This is the task-board node from `s12`.
+
+It answers:
+
+- what needs to be done
+- who depends on whom
+- who has claimed it
+- what the current progress is
+
+It is more like:
+
+> a trackable unit of work in a work plan.
+
+### Kind two: runtime tasks
+
+This kind of task answers:
+
+- what execution units are running right now
+- what type each one is
+- whether it is running, completed, failed, or killed
+- where its output file lives
+
+It is more like:
+
+> a live execution slot in the system right now.
+
+## Minimal mental model
+
+You can start by drawing the two as separate tables:
+
+```text
+Work-graph task
+  - durable
+  - oriented around goals and dependencies
+  - longer lifecycle
+
+Runtime task
+  - runtime
+  - oriented around execution and output
+  - shorter lifecycle
+```
+
+Their relationship is not "pick one", but:
+
+```text
+one work-graph task
+  can spawn
+one or more runtime tasks
+```
+
+For example:
+
+```text
+Work-graph task:
+  "Implement the auth module"
+
+Runtime tasks:
+  1. run the tests in the background
+  2. spawn a coder teammate
+  3. monitor an MCP service for results
+```
+
+## Why this distinction matters so much
+
+Without separating these two layers, many later chapters start to tangle:
+
+- the background tasks of `s13` get confused with the task board of `s12`
+- the teammate tasks of `s15-s17` have nowhere obvious to hang
+- which layer of task the worktrees of `s18` bind to also becomes fuzzy
+
+So first memorize one sentence:
+
+**Work-graph tasks manage "goals"; runtime tasks manage "execution".**
+
+## Key data structures
+
+### 1. 
WorkGraphTaskRecord
+
+This is exactly the durable task from `s12`.
+
+```python
+task = {
+    "id": 12,
+    "subject": "Implement auth module",
+    "status": "in_progress",
+    "blockedBy": [],
+    "blocks": [13],
+    "owner": "alice",
+    "worktree": "auth-refactor",
+}
+```
+
+### 2. RuntimeTaskState
+
+The teaching version can start with this minimal shape:
+
+```python
+runtime_task = {
+    "id": "b8k2m1qz",
+    "type": "local_bash",
+    "status": "running",
+    "description": "Run pytest",
+    "start_time": 1710000000.0,
+    "end_time": None,
+    "output_file": ".task_outputs/b8k2m1qz.txt",
+    "notified": False,
+}
+```
+
+The fields to focus on here:
+
+- `type`: what kind of execution unit it is
+- `status`: whether it is in a running state or a terminal state
+- `output_file`: where its output lives
+- `notified`: whether the result has gone back through the notification system
+
+### 3. RuntimeTaskType
+
+You do not have to implement every type at once in the teaching version,
+but readers should know that "runtime task" is a family of types, not just `background shell`.
+
+A minimal type table can start like this:
+
+```text
+local_bash
+local_agent
+remote_agent
+in_process_teammate
+monitor
+workflow
+```
+
+## Minimal implementation
+
+### Step 1: keep the task board from `s12` as-is
+
+Do not touch this layer.
+
+### Step 2: add a separate RuntimeTaskManager
+
+```python
+class RuntimeTaskManager:
+    def __init__(self):
+        self.tasks = {}
+```
+
+### Step 3: create a runtime task when running in the background
+
+```python
+def spawn_bash_task(command: str):
+    task_id = new_runtime_id()
+    runtime_tasks[task_id] = {
+        "id": task_id,
+        "type": "local_bash",
+        "status": "running",
+        "description": command,
+    }
+```
+
+### Step 4: link the runtime task back to a work-graph task when needed
+
+```python
+runtime_tasks[task_id]["work_graph_task_id"] = 12
+```
+
+You do not have to do this from day one, but once the system enters the multi-agent / worktree stage, it becomes increasingly important.
+
+## One genuinely clear diagram
+
+```text
+Work Graph
+  task #12: Implement auth module
+     |
+     +-- spawns runtime task A: local_bash (pytest)
+     +-- spawns runtime task B: local_agent (coder worker)
+     +-- spawns runtime task C: monitor (watch service status)
+
+Runtime Task Layer
+  A/B/C each have:
+    - own runtime ID
+    - own status
+    - own output
+    - own lifecycle
+```
+
+## How it connects to later chapters
+
+Once this layer is clear, the next few chapters flow much better:
+
+- the background commands of `s13` are essentially runtime tasks
+- the teammates/agents of `s15-s17` can also be seen as a kind of runtime task
+- the worktrees of `s18` mainly bind to work-graph tasks, but also affect the runtime execution environment
+- some external monitors or async calls in `s19` may also land as runtime tasks
+ 
+So later on, whenever you see "something alive in the background pushing work forward", first ask yourself two questions:
+
+- Is it an execution slot spawned by some durable work-graph task?
+- Should its state live in the runtime layer rather than in a task-board node?
+
+## Mistakes beginners make most often
+
+### 1. Writing background shell state directly into the task board
+
+This mixes durable tasks and runtime state together.
+
+### 2. Assuming one work-graph task maps to exactly one runtime task
+
+In practice it is very common for one work goal to spawn multiple execution units.
+
+### 3. Using one set of status names for both layers
+
+For example:
+
+- a work-graph task's `pending / in_progress / completed`
+- a runtime task's `running / completed / failed / killed`
+
+These two sets of states are best kept apart.
+
+### 4. Ignoring runtime fields like output file and notified
+
+Work-graph tasks barely care about these; runtime tasks care a great deal.
+
+## Teaching boundary
+
+The most important thing here is not adding every runtime field at once, but first fully separating these three kinds of object:
+
+- a durable task is a long-term work goal
+- a runtime task is a currently live execution slot
+- notifications / output are only the channel through which the runtime brings results back
+
+Runtime task type enums, incremental output offsets, and slot-cleanup policies can all wait until you have hand-written these three boundaries clearly.
+
+## One sentence to remember
+
+**Work-graph tasks manage "long-term goals and dependencies"; runtime tasks manage "currently live execution units and their output".**
+
+**The task of `s12` is a work-graph node; the runtime task of `s13+` is an execution unit that actually runs in the system.**
diff --git a/docs/zh/s14-cron-scheduler.md b/docs/zh/s14-cron-scheduler.md
new file mode 100644
index 000000000..044f4e86c
--- /dev/null
+++ b/docs/zh/s14-cron-scheduler.md
@@ -0,0 +1,288 @@
+# s14: Cron Scheduler
+
+`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > [ s14 ] > s15 > s16 > s17 > s18 > s19`
+
+> *If background tasks answer "come back later for the result", scheduling answers "start doing something at a future time".*
+
+## What problem this chapter solves
+
+`s13` already taught the system to push slow commands into the background.
+
+But background tasks still default to "start right now".
+
+Many real needs are not about doing it now, but:
+
+- run the tests once every night
+- generate a report every Monday morning
+- remind me in 30 minutes to go check a result
+
+Without scheduling, the user has to repeat the request manually every time.
+That makes the system look like it "can only respond to the present" rather than "can plan future work".
+
+So the capability this chapter adds is:
+
+**record an intent to execute in the future, and trigger it when the time comes.**
+
+## Suggested companion reading
+
+- If you have not fully separated what `schedule`, `task`, and `runtime task` each mean, go back to [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md).
+- To re-trace "how a trigger ultimately flows back into the main loop", read [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) alongside this.
+- If you start mistaking "future triggering" for "yet another execution system", go back to [`data-structures.md`](./data-structures.md) and confirm that schedule records and runtime records are not the same table.
+
+## A few terms first
+
+### What is a scheduler
+
+A scheduler is a piece of code dedicated to "watching the clock, checking the jobs, and deciding whether to fire".
+
+### What is a cron expression
+
+`cron` is a very common way to write schedules.
+
+The minimal 5-field version looks like this:
+
+```text
+minute hour day month weekday
+```
+
+For example:
+
+```text
+*/5 * * * *     every 5 minutes
+0 9 * * 1       every Monday at 9:00
+30 14 * 
* *     every day at 14:30
+```
+
+If you are a beginner, you do not need to memorize all of it up front.
+
+What really matters in this chapter is not the syntax details, but:
+
+> "How the system remembers a future job and puts it back into the main loop at the right moment."
+
+### What durable scheduling means
+
+Durable means:
+
+> even if the program restarts, the schedule record is still there.
+
+## Minimal mental model
+
+First view scheduling as 3 parts:
+
+```text
+1. schedule records
+2. a periodic checker
+3. the notification queue
+```
+
+They relate like this:
+
+```text
+schedule_create(...)
+  ->
+write the record to a list or file
+  ->
+a background checker looks once a minute at "does now match"
+  ->
+if it matches, put the prompt into the notification queue
+  ->
+on its next round, the main loop feeds it to the model as a new user message
+```
+
+This chain matters.
+
+Because it makes one point:
+
+**scheduling is not a second agent. It ultimately comes back to the same main loop.**
+
+## Key data structures
+
+### 1. ScheduleRecord
+
+```python
+schedule = {
+    "id": "job_001",
+    "cron": "0 9 * * 1",
+    "prompt": "Run the weekly status report.",
+    "recurring": True,
+    "durable": True,
+    "created_at": 1710000000.0,
+    "last_fired_at": None,
+}
+```
+
+Field meanings:
+
+- `id`: unique identifier
+- `cron`: the timing rule
+- `prompt`: the prompt to inject into the main loop when it fires
+- `recurring`: whether it fires repeatedly
+- `durable`: whether it is persisted to disk
+- `created_at`: creation time
+- `last_fired_at`: last firing time
+
+### 2. Schedule notifications
+
+```python
+{
+    "type": "scheduled_prompt",
+    "schedule_id": "job_001",
+    "prompt": "Run the weekly status report.",
+}
+```
+
+### 3. Check interval
+
+For the teaching version, think in "minute-level" terms rather than "strict second-level precision".
+
+Most cron jobs were never meant to fire on the exact second anyway.
+
+## Minimal implementation
+
+### Step 1: allow creating a schedule record
+
+```python
+def create(self, cron_expr: str, prompt: str, recurring: bool = True):
+    job = {
+        "id": new_id(),
+        "cron": cron_expr,
+        "prompt": prompt,
+        "recurring": recurring,
+        "created_at": time.time(),
+        "last_fired_at": None,
+    }
+    self.jobs.append(job)
+    return job
+```
+
+### Step 2: write a periodic check loop
+
+```python
+def check_loop(self):
+    while True:
+        now = datetime.now()
+        self.check_jobs(now)
+        time.sleep(60)
+```
+
+Checking once a minute is plenty for the minimal teaching version.
+
+### Step 3: send a notification when the time comes
+
+```python
+def check_jobs(self, now):
+    for job in self.jobs:
+        if cron_matches(job["cron"], now):
+            self.queue.put({
+                "type": "scheduled_prompt",
+                "schedule_id": job["id"],
+                "prompt": job["prompt"],
+            })
+            job["last_fired_at"] = now.timestamp()
+```
+
+### Step 4: the main loop handles schedule notifications just like background notifications
+
+```python
+notifications = scheduler.drain()
+for item in notifications:
+    messages.append({
+        "role": "user",
+        "content": f"[scheduled:{item['schedule_id']}] 
{item['prompt']}",
+    })
+```
+
+This way, scheduled jobs are ultimately still picked up and carried forward by the model.
+
+## Why this chapter comes after background tasks
+
+The two chapters solve closely related problems, but not the same one.
+
+You can separate them like this:
+
+| Mechanism | Question it answers |
+|---|---|
+| Background tasks | "When will the result of an already-started slow operation come back?" |
+| Cron scheduling | "When in the future should something start?" |
+
+This ordering is friendly to beginners.
+
+Understanding "async results coming back" first, then "future triggers of new intents", is the smoother mental path.
+
+## Mistakes beginners make most often
+
+### 1. Obsessing over cron syntax details up front
+
+This chapter can easily drift into a pile of expression rules.
+
+But the teaching main line is not "memorize the syntax"; it is:
+
+**how schedule records enter the notification queue and return to the main loop.**
+
+### 2. No `last_fired_at`
+
+Without this field, the system can easily fire the same job repeatedly within a short window.
+
+### 3. Memory-only, no persistence
+
+If the user asks "remind me tomorrow" and one restart wipes it out, that is not real scheduling.
+
+### 4. Silently executing triggered results in the background
+
+The clearer teaching approach is:
+
+- the time arrives
+- send a notification first
+- let the main loop decide how to handle it
+
+The system's behavior stays more transparent, and readers understand it more easily.
+
+### 5. Assuming scheduled jobs must fire exactly on time
+
+Many beginners picture the scheduler as a stopwatch.
+
+What matters here is "firing according to plan", not millisecond precision.
+
+## How it plugs into the whole system
+
+By this chapter, the system has two important "external event inputs":
+
+- background task completion notifications
+- scheduled trigger notifications
+
+The best way to unify them is:
+
+**both go through the notification queue and are injected together before the next model call.**
+
+That keeps the main loop structure from degrading into a mess.
+
+## Teaching boundary
+
+One main line is enough for this chapter:
+
+**the scheduler "remembers the future"; it does not "replace the main loop".**
+
+So the teaching version only needs readers to see that:
+
+- the schedule record remembers when future work should start
+- when work actually executes, it still goes back through the task system and the notification queue
+- it merely adds another "entry point for starting", not another main loop
+
+Multi-process locks, missed-trigger catch-up, and natural-language time syntax all belong after this main line.
+
+## Try it
+
+```sh
+cd learn-claude-code
+python agents/s14_cron_scheduler.py
+```
+
+Things to try:
+
+1. Create a small job that fires every minute, and watch whether it enters the notification queue on time.
+2. Create a one-shot job and confirm it disappears after firing.
+3. 
Restart the program and check whether the durable schedule records are still there.
+
+After this chapter, you should be able to state this sentence on your own:
+
+**Background tasks are "waiting for results"; cron scheduling is "waiting to start".**
diff --git a/docs/zh/s15-agent-teams.md b/docs/zh/s15-agent-teams.md
new file mode 100644
index 000000000..1f82cef3f
--- /dev/null
+++ b/docs/zh/s15-agent-teams.md
@@ -0,0 +1,358 @@
+# s15: Agent Teams
+
+`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > [ s15 ] > s16 > s17 > s18 > s19`
+
+> *Subagents suit one-off delegation; a team system answers "someone stays online long-term, keeps taking work, and collaborates".*
+
+## What problem this chapter solves
+
+The subagents of `s04` can already help the lead agent break off small tasks.
+
+But a subagent has an obvious boundary:
+
+```text
+create -> execute -> return summary -> disappear
+```
+
+That is great for small one-off delegation.
+It falls short if you want to:
+
+- keep a testing agent on standby long-term
+- have two agents divide work over a long period
+- let an agent keep working when new tasks arrive in the future
+
+In other words, what the system lacks now is not "another model call", but:
+
+**a group of teammates with identities that persist and collaborate repeatedly.**
+
+## Suggested companion reading
+
+- If you still lump teammates together with the subagents of `s04`, go back to [`entity-map.md`](./entity-map.md).
+- If you plan to read on to `s16-s18`, keep [`team-task-lane-model.md`](./team-task-lane-model.md) handy; it pulls apart the five layers of teammate, protocol request, task, runtime slot, and worktree lane.
+- If you start wondering how "long-lived teammates" relate to "live execution slots", read [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) alongside.
+
+## A few words made clear first
+
+### What is a teammate
+
+Here, a `teammate` means:
+
+> a persistent agent with a name, a role, a message inbox, and a lifecycle.
+
+### What is a roster
+
+The roster is the team member list.
+
+It answers:
+
+- who is on the team right now
+- what role each member has
+- whether each member is idle, working, or shut down
+
+### What is an inbox
+
+The inbox is each teammate's mailbox.
+
+Others drop messages into it,
+and the teammate collects them before its next round of work.
+
+## Minimal mental model
+
+The simplest way to understand this chapter is to picture each teammate as:
+
+> a person with their own loop, their own inbox, and their own context.
+
+```text
+lead
+ |
+ +-- spawn alice (coder)
+ +-- spawn bob (tester)
+ |
+ +-- send message --> alice inbox
+ +-- send message --> bob inbox
+
+alice
+ |
+ +-- own messages
+ +-- own inbox
+ +-- own agent loop
+
+bob
+ |
+ +-- own messages
+ +-- own inbox
+ +-- own agent loop
+```
+
+The biggest difference from `s04`:
+
+**a subagent is a one-off execution unit; a teammate is a long-lived member of a collaboration.**
+
+## Key data structures
+
+### 1. TeamMember
+
+```python
+member = {
+    "name": "alice",
+    "role": "coder",
+    "status": "working",
+}
+```
+
+For the teaching version, these 3 fields are enough:
+
+- `name`: the name
+- `role`: the role
+- `status`: the state
+
+### 2. 
TeamConfig
+
+```python
+config = {
+    "team_name": "default",
+    "members": [member1, member2],
+}
+```
+
+It usually lives at:
+
+```text
+.team/config.json
+```
+
+With this roster, even after a restart the system still knows:
+
+- who has been on the team
+- what role each member currently has
+
+### 3. MessageEnvelope
+
+```python
+message = {
+    "type": "message",
+    "from": "lead",
+    "content": "Please review auth module.",
+    "timestamp": 1710000000.0,
+}
+```
+
+The word `envelope` originally means a paper envelope.
+In code it means:
+
+> a record that wraps the message body together with its metadata.
+
+## Minimal implementation
+
+### Step 1: start with a team roster
+
+```python
+class TeammateManager:
+    def __init__(self, team_dir: Path):
+        self.team_dir = team_dir
+        self.config_path = team_dir / "config.json"
+        self.config = self._load_config()
+```
+
+The roster is the starting point of this chapter.
+Without a roster, there is no real "team entity".
+
+### Step 2: spawn a persistent teammate
+
+```python
+def spawn(self, name: str, role: str, prompt: str):
+    member = {"name": name, "role": role, "status": "working"}
+    self.config["members"].append(member)
+    self._save_config()
+
+    thread = threading.Thread(
+        target=self._teammate_loop,
+        args=(name, role, prompt),
+        daemon=True,
+    )
+    thread.start()
+```
+
+The key here is not the thread itself, but this:
+
+**once spawned, a teammate is no longer a one-off tool call but a member with an ongoing lifecycle.**
+
+### Step 3: give every teammate an inbox
+
+The simplest teaching-version approach is plain JSONL files:
+
+```text
+.team/inbox/alice.jsonl
+.team/inbox/bob.jsonl
+```
+
+Sending a message appends one line:
+
+```python
+def send(self, sender: str, to: str, content: str):
+    with open(f"{to}.jsonl", "a") as f:
+        f.write(json.dumps({
+            "type": "message",
+            "from": sender,
+            "content": content,
+            "timestamp": time.time(),
+        }) + "\n")
+```
+
+Receiving messages:
+
+1. read everything out
+2. parse into a message list
+3. clear the inbox
+
+### Step 4: each round, a teammate checks its inbox first, then keeps working
+
+```python
+def teammate_loop(name: str, role: str, prompt: str):
+    messages = [{"role": "user", "content": prompt}]
+
+    while True:
+        inbox = bus.read_inbox(name)
+        for item in inbox:
+            messages.append({"role": "user", "content": json.dumps(item)})
+
+        response = client.messages.create(...)
+        ... 
+
+```
+
+This step must be driven home.
+
+Because it shows that:
+
+**teammates do not get new tasks by "being recreated"; they receive new work by "checking the inbox before the next round".**
+
+## How it plugs into the system from earlier chapters
+
+The most common misunderstanding in this chapter is:
+
+> it feels like the system suddenly "gained a few people", without it being clear which layer they attach to.
+
+The more accurate wiring is:
+
+```text
+user goal / lead decides long-term division of labor is needed
+  ->
+spawn teammate
+  ->
+write to .team/config.json
+  ->
+dispatch messages, summaries, and task leads through the inbox
+  ->
+teammate drains its inbox first
+  ->
+enters its own agent loop and tool calls
+  ->
+sends results back to the lead, or waits for the next round of work
+```
+
+Three things to see clearly here:
+
+1. `s12-s14` already gave you the "work layer": the task board, background execution, and time triggers.
+2. `s15` now adds the "long-lived executors": who stays online and keeps taking work.
+3. This chapter does not yet cover "finding work by yourself" or "auto-claiming".
+
+That is, the default working mode in `s15` is still:
+
+- the lead spawns teammates manually
+- the lead dispatches things through inboxes
+- teammates keep processing inside their own loops
+
+Real autonomous claiming only unfolds in `s17`.
+
+## Telling teammate, subagent, and runtime task apart
+
+This is the easiest point to muddle in this group of chapters.
+
+Just memorize this table:
+
+| Mechanism | It is more like | Lifecycle | Key boundary |
+|---|---|---|---|
+| subagent | a one-off contractor | ends when the job ends | about "isolating a small slice of exploratory context" |
+| runtime task | a running background execution slot | ends when the task finishes or is cancelled | about "slow work coming back later", not long-term identity |
+| teammate | a long-term online colleague | can take tasks repeatedly | about "having a name, an inbox, and its own loop" |
+
+In plainer words:
+
+- a subagent suits "look this up for me and report back"
+- a runtime task suits "run this slowly in the background and notify me later"
+- a teammate suits "you own the testing area from now on"
+
+## This chapter's teaching boundary
+
+This chapter only needs to make 3 things solid:
+
+- the roster
+- the inbox
+- the independent loop
+
+That is enough to establish the "long-lived teammate" as an entity.
+
+It does not yet unfold the next two layers of capability:
+
+### Layer one: structured protocols
+
+That is:
+
+- which messages are just ordinary communication
+- which messages are structured requests carrying a `request_id`
+
+That belongs to the next chapter, `s16`.
+
+### Layer two: autonomous claiming
+
+That is:
+
+- whether an idle teammate can find work on its own
+- whether it can resume work on its own
+
+That belongs to `s17`.
+
+## Mistakes beginners make most often
+
+### 1. Treating teammates as "subagents with different names"
+
+If the lifecycle is still "destroyed after execution", it is not really a teammate yet.
+
+### 2. Teammates sharing one `messages` list
+
+Their contexts will contaminate each other.
+
+Every teammate should have its own conversation state.
+
+### 3. No durable roster
+
+If the system forgets "who has been on the team" after shutting down, long-term collaboration becomes very hard.
+
+### 4. No inboxes, shouting through shared variables instead
+
+Not recommended as a starting point for teaching.
+
+It ties "teammate communication" too tightly to in-process details.
+
+## What you should truly take away from this chapter
+
+Afterwards, you should be able to explain on your own:
+
+1. The core of a teammate is not "one more model call" but "one more long-lived executor".
+2. A team system needs at least "roster + inbox + independent loop".
+3. Every teammate should have its own `messages` and its own inbox.
+4. 
The fundamental difference between a subagent and a teammate is lifecycle, not the name.
+
+If these 4 points are solid, you have truly understood how a multi-agent team evolves out of a single agent.
+
+## What the next chapter covers
+
+This chapter answers:
+
+> How do team members persist and message each other?
+
+The next chapter, `s16`, answers:
+
+> How do we design things when messages are no longer free-form chat, but a collaboration flow that must be trackable, approvable, and rejectable?
+
+In other words, from "having a team" we move on to "team protocols".
diff --git a/docs/zh/s16-team-protocols.md b/docs/zh/s16-team-protocols.md
new file mode 100644
index 000000000..0f4f13f02
--- /dev/null
+++ b/docs/zh/s16-team-protocols.md
@@ -0,0 +1,401 @@
+# s16: Team Protocols
+
+`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > [ s16 ] > s17 > s18 > s19`
+
+> *With inboxes, the team can already talk; with protocols, the team starts to "collaborate by the rules".*
+
+## What problem this chapter solves
+
+`s15` already lets teammates message each other.
+
+But if everything relies on free text, two problems show up quickly:
+
+- some actions need explicit approval or rejection, not a vague one-line reply
+- once several requests coexist, the system struggles to know "which reply belongs to which matter"
+
+The two most typical scenarios are:
+
+1. should a teammate shut down gracefully
+2. should a high-risk plan be approved first
+
+These look different but share the same structure:
+
+```text
+one side sends a request
+the other side replies explicitly
+both sides match them up with the same request_id
+```
+
+So what this chapter adds is not more free-form chat, but:
+
+**a layer of structured protocol.**
+
+## Suggested companion reading
+
+- If ordinary messages and protocol requests start to blur, go back to [`glossary.md`](./glossary.md) and [`entity-map.md`](./entity-map.md).
+- Before moving on to `s17` and `s18`, read [`team-task-lane-model.md`](./team-task-lane-model.md) first, so autonomous claiming and worktree lanes do not tangle all at once later.
+- To re-confirm how protocol requests ultimately flow back into the main system, read [`s00b-one-request-lifecycle.md`](./s00b-one-request-lifecycle.md) alongside.
+
+## A few words made clear first
+
+### What is a protocol
+
+A protocol can simply be understood as:
+
+> both sides agreeing up front on "what messages look like and what to do upon receiving them".
+
+### What is a request_id
+
+A `request_id` is a request number.
+
+Its purpose:
+
+- once sent, a request has a unique identity
+- later approvals, rejections, and timeouts can point precisely at that one request
+
+### What is the request-response pattern
+
+The term sounds advanced, but it is simple:
+
+```text
+requester: I initiate something
+responder: I answer explicitly, agree or not
+```
+
+This chapter upgrades that pattern from "spoken words" to "structured data".
+
+## Minimal mental model
+
+Pedagogically, view protocols as two layers:
+
+```text
+1. protocol messages
+2. a request-tracking table
+```
+
+### Protocol messages
+
+```python
+{
+    "type": "shutdown_request",
+    "from": "lead",
+    "to": "alice",
+    "request_id": "req_001",
+    "payload": {},
+}
+```
+
+### The request-tracking table
+
+```python
+requests = {
+    "req_001": {
+        "kind": "shutdown",
+        "status": "pending",
+    }
+}
+```
+
+As long as both layers exist, the system can answer at once:
+
+- what is happening right now
+- how far each matter has progressed
+
+## Key data structures
+
+### 1. 
ProtocolEnvelope
+
+```python
+message = {
+    "type": "shutdown_request",
+    "from": "lead",
+    "to": "alice",
+    "request_id": "req_001",
+    "payload": {},
+    "timestamp": 1710000000.0,
+}
+```
+
+What it adds over an ordinary message is exactly:
+
+- `type`
+- `request_id`
+- `payload`
+
+### 2. RequestRecord
+
+```python
+request = {
+    "request_id": "req_001",
+    "kind": "shutdown",
+    "from": "lead",
+    "to": "alice",
+    "status": "pending",
+}
+```
+
+It records:
+
+- what kind of request this is
+- who sent it to whom
+- what its current status is
+
+To push the teaching version one step closer to a real system, do not keep it only in an in-memory dict; write it straight to disk:
+
+```text
+.team/requests/
+  req_001.json
+  req_002.json
+```
+
+The system can then guarantee:
+
+- request state is recoverable
+- the protocol flow is inspectable
+- even as the main loop moves on, request records are not lost
+
+### 3. The state machine
+
+The state machine in this chapter is very simple:
+
+```text
+pending -> approved
+pending -> rejected
+pending -> expired
+```
+
+A reminder for readers here:
+
+a `state machine` is not heavy theory,
+just "a rule table of how states may change".
+
+## Minimal implementation
+
+### Protocol 1: graceful shutdown
+
+"Graceful shutdown" does not mean hard-killing the thread.
+It means:
+
+1. send a shutdown request first
+2. the teammate explicitly approves or rejects
+3. if approved, wrap up first, then exit
+
+Sending the request:
+
+```python
+def request_shutdown(target: str):
+    request_id = new_id()
+    requests[request_id] = {
+        "kind": "shutdown",
+        "target": target,
+        "status": "pending",
+    }
+    bus.send(
+        "lead",
+        target,
+        msg_type="shutdown_request",
+        extra={"request_id": request_id},
+        content="Please shut down gracefully.",
+    )
+```
+
+Receiving the response:
+
+```python
+def handle_shutdown_response(request_id: str, approve: bool):
+    record = requests[request_id]
+    record["status"] = "approved" if approve else "rejected"
+```
+
+### Protocol 2: plan approval
+
+This is still the same request-response template.
+
+For instance, a teammate about to make a high-risk change can submit a plan first:
+
+```python
+def submit_plan(name: str, plan_text: str):
+    request_id = new_id()
+    requests[request_id] = {
+        "kind": "plan_approval",
+        "from": name,
+        "status": "pending",
+        "plan": plan_text,
+    }
+    bus.send(
+        name,
+        "lead",
+        msg_type="plan_approval",
+        extra={"request_id": request_id, "plan": plan_text},
+        content="Requesting review.",
+    )
+```
+
+The lead reviews it:
+
+```python
+def review_plan(request_id: str, approve: bool, feedback: str = ""):
+    record = requests[request_id]
+    record["status"] = "approved" if approve else "rejected"
+    
bus.send(
+        "lead",
+        record["from"],
+        msg_type="plan_approval_response",
+        extra={"request_id": request_id, "approve": approve},
+        content=feedback,
+    )
+```
+
+By this point readers should start to realize:
+
+**what matters most in this chapter is not "shutdown" or "plans" themselves, but that the same protocol template can be reused over and over.**
+
+## Protocol requests are not ordinary messages
+
+This point must be driven home.
+
+Everything in the inbox is called a "message", but from `s16` onward there are really two kinds:
+
+### 1. Ordinary messages
+
+Good for:
+
+- discussion
+- reminders
+- clarifications
+
+### 2. Protocol messages
+
+Good for:
+
+- approvals
+- shutdowns
+- handovers
+- sign-offs
+
+They must carry at least:
+
+- `type`
+- `request_id`
+- `from`
+- `to`
+- `payload`
+
+The simplest way to remember it:
+
+- ordinary messages answer "what was said"
+- protocol messages answer "how far this matter has progressed"
+
+## How it plugs into the team system
+
+What this chapter really adds is not just two new tool names, but a new collaboration loop:
+
+```text
+a teammate / the lead initiates a request
+  ->
+write a RequestRecord
+  ->
+deliver a ProtocolEnvelope into the other side's inbox
+  ->
+the other side drains its inbox next round
+  ->
+updates the request status by request_id
+  ->
+sends back a response when needed
+  ->
+the requester continues based on approved / rejected
+```
+
+You can think of it as:
+
+- `s15` gave the team "inboxes"
+- `s16` now gives certain inbox messages "numbers + a state machine + receipts"
+
+Without this structured loop, the team can communicate but cannot collaborate reliably.
+
+## Boundaries between MessageEnvelope, ProtocolEnvelope, RequestRecord, and TaskRecord
+
+These 4 objects tangle together easily. The most reliable way to remember them:
+
+| Object | Question it answers | Typical fields |
+|---|---|---|
+| `MessageEnvelope` | who said what to whom | `from` / `to` / `content` |
+| `ProtocolEnvelope` | whether this is a structured request or response | `type` / `request_id` / `payload` |
+| `RequestRecord` | how far this collaboration flow has progressed | `kind` / `status` / `from` / `to` |
+| `TaskRecord` | what the actual work item is, who is doing it, what blocks it | `subject` / `status` / `blockedBy` / `owner` |
+
+Hold firmly onto this:
+
+- a protocol request is not the task itself
+- the request-status table is not the task board
+- protocols only handle the "collaboration flow"
+- the task system handles "actual work progress"
+
+## This chapter's teaching boundary
+
+The teaching version only needs 2 protocol kinds:
+
+- `shutdown`
+- `plan_approval`
+
+These two are already enough to make the following clear:
+
+- what a structured message is
+- what a request_id is
+- why a request-status table is needed
+- why a protocol is not free text
+
+Once this template is solid, you can extend it with:
+
+- task-claiming protocols
+- handover protocols
+- result sign-off protocols
+
+All of them should build on this chapter's unified template.
+
+## Mistakes beginners make most often
+
+### 1. No `request_id`
+
+Without numbers, things fall apart quickly once several requests coexist.
+
+### 2. Replying to requests with a single line of natural language
+
+For example:
+
+```text
+OK, got it
+```
+
+A human can read it, but the system cannot handle it reliably.
+
+### 3. No request-status table
+
+If the system does not record `pending` / `approved` / `rejected`, the protocol never really landed.
+
+### 4. Mixing protocol messages and ordinary messages into one structure
+
+Once volume grows, the handling logic gets messier and messier.
+
+## What you should truly take away from this chapter
+
+Afterwards, you should be able to restate:
+
+1. The core of team protocols is "request-response + request_id + a status table".
+2. Protocol messages and ordinary chat messages are not the same thing.
+3. 
Shutdown and plan approval differ in business meaning, but the underlying template is reusable.
+4. Once a team enters structured collaboration, it must rely on protocols, not just natural language.
+
+If these 4 points are rock solid, this chapter has truly landed.
+
+## What the next chapter covers
+
+This chapter answers:
+
+> How does a team collaborate by the rules?
+
+The next chapter, `s17`, answers:
+
+> If nobody dispatches work by hand every time, can idle teammates find tasks and resume work on their own?
+
+In other words, from "protocol-driven collaboration" we move on to "autonomous behavior".
diff --git a/docs/zh/s17-autonomous-agents.md b/docs/zh/s17-autonomous-agents.md
new file mode 100644
index 000000000..3a7f0efe4
--- /dev/null
+++ b/docs/zh/s17-autonomous-agents.md
@@ -0,0 +1,540 @@
+# s17: Autonomous Agents
+
+`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > [ s17 ] > s18 > s19`
+
+> *A team truly starts to "run itself" not because the agent count grows, but because idle teammates go find their next piece of work.*
+
+## What problem this chapter solves
+
+By `s16`, the team already has:
+
+- persistent teammates
+- inboxes
+- protocols
+- a task board
+
+But one obvious bottleneck remains:
+
+**many things still depend on the lead assigning work by hand.**
+
+Say the task board holds 10 ready tasks; if the lead still has to name names one by one:
+
+- Alice takes 1
+- Bob takes 2
+- Charlie takes 3
+
+then as the team grows, the lead becomes the bottleneck.
+
+So the core problem of this chapter is:
+
+**let idle teammates scan the task board themselves, find workable tasks, and claim them.**
+
+## Suggested companion reading
+
+- If you start blurring the three layers of teammate, task, and runtime slot, go back to [`team-task-lane-model.md`](./team-task-lane-model.md).
+- If "auto-claim" makes you wonder where the "live execution slots" belong, continue with [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md).
+- If you start forgetting the fundamental difference between a "long-lived teammate" and a "one-off subagent", revisit [`entity-map.md`](./entity-map.md).
+
+## A few terms first
+
+### What "autonomous" means
+
+Autonomy here does not mean nobody is in charge.
+
+It means:
+
+> given rules set in advance, a teammate can decide on its own which piece of work to take next.
+
+### What "claiming" means
+
+Claiming means marking a previously unowned task as "now my responsibility".
+
+### What the idle phase means
+
+The idle phase is not shutdown, nor disappearance.
+
+It means:
+
+> this teammate has nothing in hand right now, but stays alive, ready to take new work at any moment.
+
+## Minimal mental model
+
+The clearest way to see it: each teammate switches between two phases:
+
+```text
+WORK
+  |
+  |  current round of work done, or voluntarily goes idle
+  v
+IDLE
+  |
+  +-- check inbox, new message -> back to WORK
+  |
+  +-- check task board, ready task available -> claim -> back to WORK
+  |
+  +-- nothing at all for a long time -> shutdown
+```
+
+The key is not "keep it thinking forever", but:
+
+**while idle, check two kinds of new input by rule: the inbox and the task board.**
+
+## Key data structures
+
+### 1. 
Claimable Predicate
+
+As in `s12`, the most important thing here is:
+
+**which tasks count as "safe for this teammate to claim right now".**
+
+In the current teaching code, the predicate is no longer just "is it `pending`", but:
+
+```python
+def is_claimable_task(task: dict, role: str | None = None) -> bool:
+    return (
+        task.get("status") == "pending"
+        and not task.get("owner")
+        and not task.get("blockedBy")
+        and _task_allows_role(task, role)
+    )
+```
+
+All 4 conditions are required:
+
+- the task has not started
+- nobody has claimed it
+- it has no upstream blockers
+- the teammate's role satisfies the claim policy
+
+The last one is crucial.
+
+Because tasks can now carry:
+
+- `claim_role`
+- `required_role`
+
+For example:
+
+```python
+task = {
+    "id": 7,
+    "subject": "Implement login page",
+    "status": "pending",
+    "owner": "",
+    "blockedBy": [],
+    "claim_role": "frontend",
+}
+```
+
+This means:
+
+> this task is not "whoever is free grabs it"; the role condition comes first.
+
+### 2. The task record after claiming
+
+Once a claim succeeds, the task record changes at least like this:
+
+```python
+{
+    "id": 7,
+    "owner": "alice",
+    "status": "in_progress",
+    "claimed_at": 1710000000.0,
+    "claim_source": "auto",
+}
+```
+
+The two new fields deserve special attention:
+
+- `claimed_at`: when it was claimed
+- `claim_source`: whether this claim was `auto` or `manual`
+
+Because at this point the system no longer only knows "someone is on it now"; it also knows:
+
+- who took it
+- whether it was taken by active scanning or by manual assignment
+
+### 3. Claim Event Log
+
+Besides writing back the task file, this chapter also appends claim actions to:
+
+```text
+.tasks/claim_events.jsonl
+```
+
+Each event looks roughly like:
+
+```python
+{
+    "event": "task.claimed",
+    "task_id": 7,
+    "owner": "alice",
+    "role": "frontend",
+    "source": "auto",
+    "ts": 1710000000.0,
+}
+```
+
+Why does this log layer matter?
+
+Because it answers "what the autonomous system just did".
+
+From the final task file alone, you know:
+
+- who the owner is now
+
+Only from the event log do you learn:
+
+- when it was taken
+- who took it
+- whether it was taken automatically while idle, or by a manual `claim_task` call
+
+### 4. Durable Request Record
+
+Although this chapter focuses on autonomy, it **must not regress from `s16` back to "protocol requests live only in memory"**.
+
+So the current code still keeps durable request records:
+
+```text
+.team/requests/{request_id}.json
+```
+
+It stores:
+
+- shutdown requests
+- plan approval requests
+- the corresponding status updates
+
+This boundary matters, because autonomous teammates are not "abandoning the protocol system to go their own way"; rather:
+
+> on top of the existing team protocols, they additionally gain the ability to "find work while idle".
+
+### 5. The identity block
+
+After context compaction, a teammate sometimes "forgets who it is".
+
+The minimal fix is to re-inject an identity prompt:
+
+```python
+identity = {
+    "role": "user",
+    "content": "You are 'alice', role: frontend, team: default. Continue your work.",
+}
+```
+
+The current implementation also adds a very short acknowledgment:
+
+```python
+{"role": "assistant", "content": "I am alice. 
Continuing."}
+```
+
+The point is not cosmetics; it is so that the next round after recovery still knows:
+
+- who I am
+- what my role is
+- which team I belong to
+
+## Minimal implementation
+
+### Step 1: give teammates a `WORK -> IDLE` loop
+
+```python
+while True:
+    run_work_phase(...)
+    should_resume = run_idle_phase(...)
+    if not should_resume:
+        break
+```
+
+### Step 2: in IDLE, check the inbox first
+
+```python
+def idle_phase(name: str, messages: list) -> bool:
+    inbox = bus.read_inbox(name)
+    if inbox:
+        messages.append({
+            "role": "user",
+            "content": json.dumps(inbox),
+        })
+        return True
+```
+
+This step means:
+
+if someone is explicitly asking for me, I prioritize "work explicitly sent to me".
+
+### Step 3: if the inbox is empty, scan claimable tasks "by current role"
+
+```python
+    unclaimed = scan_unclaimed_tasks(role)
+    if unclaimed:
+        task = unclaimed[0]
+        claim_result = claim_task(
+            task["id"],
+            name,
+            role=role,
+            source="auto",
+        )
+```
+
+The current code makes two crucial upgrades here:
+
+- `scan_unclaimed_tasks(role)` does not scan tasks indiscriminately; it filters by role
+- `claim_task(..., source="auto")` explicitly writes "this was an autonomous claim" into the task and the event log
+
+That is, autonomy is not "grab whatever when idle", but:
+
+> picking, by the teammate's role, the task's status, and its blockers, one piece of work it is genuinely allowed to take over.
+
+### Step 4: after claiming, restore identity first, then push the task prompt back into the main loop
+
+```python
+    ensure_identity_context(messages, name, role, team_name)
+    messages.append({
+        "role": "user",
+        "content": f"Task #{task['id']}: {task['subject']}",
+    })
+    messages.append({
+        "role": "assistant",
+        "content": f"{claim_result}. Working on it.",
+    })
+    return True
+```
+
+This step is critical.
+
+Because "claim succeeded" does not yet mean "the teammate can actually carry on smoothly".
+
+Two things still have to be wired back into the context:
+
+- the identity context
+- the new task prompt
+
+Only then is the next `WORK` round not flying blind, but:
+
+> resuming work with a clear identity and a clear task.
+
+### Step 5: exit after a long stretch with nothing to do
+
+```python
+    time.sleep(POLL_INTERVAL)
+    ...
+    return False
+```
+
+Why does this exit path exist?
+
+Because idle teammates need not hold resources forever.
+For the teaching version, "shut down after being idle for a while" is enough.
+
+## Why claiming must be an atomic action
+
+The word "atomic" may be unfamiliar at first.
+
+Here it means:
+
+> the claim either fully succeeds or does not happen at all; it cannot half-succeed.
+
+Why? 
+
+Because two teammates may scan and spot the same ready task at the same time.
+
+Without a lock, you can get:
+
+- Alice sees task 3 has no owner
+- Bob also sees task 3 has no owner
+- both write themselves in as owner
+
+So even the minimal teaching version should add a claim lock:
+
+```python
+with claim_lock:
+    task = load(task_id)
+    if task["owner"]:
+        return "already claimed"
+    task["owner"] = name
+    task["status"] = "in_progress"
+    save(task)
+```
+
+## Why identity re-injection matters
+
+This is an easily overlooked yet crucial point of the chapter.
+
+After context compaction, a teammate may lose key information:
+
+- who am I
+- what is my role
+- which team do I belong to
+
+Without this information, the teammate's subsequent behavior drifts easily.
+
+So a very practical rule:
+
+if the identity block is no longer at the front of the messages, insert it back.
+
+You can treat this as a recovery rule:
+
+> on any resume from idle, or any resume after compaction, whenever the identity context may have thinned out, restore the identity first, then continue working.
+
+## Why s17 must not regress from s16 to "in-memory protocols"
+
+This point is easy to skip, yet genuinely important.
+
+At the word "autonomy", many people fixate only on:
+
+- idle
+- auto-claim
+- polling
+
+and forget the other main line `s16` already established:
+
+- requests must be trackable
+- protocol state must be recoverable
+
+So in the current teaching code, things like:
+
+- shutdown requests
+- plan approvals
+
+are still written to:
+
+```text
+.team/requests/{request_id}.json
+```
+
+That is, `s17` does not overthrow `s16`; it adds a new capability on top of it:
+
+```text
+the protocol system keeps existing
+    +
+autonomous scanning and claiming start existing
+```
+
+Only with both lines in place does the team behave like a real platform rather than a pile of workers each running wild.
+
+## How it plugs into the earlier chapters
+
+This chapter is where the earlier ones truly "link up" for the first time:
+
+- `s12` provides the task board
+- `s15` provides persistent teammates
+- `s16` provides structured protocols
+- `s17` lets teammates find work even when nobody names them explicitly
+
+So you can read `s17` as:
+
+**the upgrade from "passive collaboration" to "active collaboration".**
+
+## What is autonomous is the "long-lived teammate", not the "one-off subagent"
+
+Without this boundary made clear, readers easily conflate `s04` and `s17`.
+
+The autonomous executor in `s17` is still the long-lived teammate of `s15`:
+
+- it has a name
+- it has a role
+- it has an inbox
+- it has an idle phase
+- it can take work repeatedly
+
+It is not the kind of thing that:
+
+- takes one subtask
+- returns a summary
+- then immediately disappears
+
+as a one-off subagent does.
+
+Likewise, what gets claimed here is:
+
+- a work-graph task from `s12`
+
+and not:
+
+- a background execution slot from `s13`
+
+So this chapter really pushes two existing main lines one step further:
+
+- long-lived teammates
+- work-graph tasks
+
+and connects them with "autonomous claiming".
+
+If you start mixing up these words:
+
+- teammate
+- protocol request
+- task
+- runtime task
+
+revisit:
+
+- [`team-task-lane-model.md`](./team-task-lane-model.md)
+- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md)
+
+## Mistakes beginners make most often
+
+### 1. Checking only `pending`, ignoring `blockedBy`
+
+A task that is `pending` but whose upstream tasks are unfinished should not be claimed.
+
+### 2. Checking only status, ignoring `claim_role` / `required_role`
+
+This lets the wrong teammate walk off with the wrong task.
+
+The teaching version is simple, but from this chapter onward readers should be told explicitly:
+
+- not every ready task suits every teammate
+- role conditions are themselves part of the claim policy
+
+### 3. No claim lock
+
+This directly causes double-claiming of the same task.
+
+### 4. 
Polling only the task board while idle, never checking the inbox
+
+The teammate will then miss messages explicitly sent to it.
+
+### 5. Claiming a task without writing a claim event
+
+In the end you can only see "who is doing the task now", but not:
+
+- when it was taken
+- whether the claim was automatic or manual
+
+### 6. Teammates that never exit
+
+In the teaching version, exiting after a long stretch of nothing to do is reasonable.
+Otherwise readers struggle to see when resources get released.
+
+### 7. Not re-injecting identity after context compaction
+
+This easily makes the teammate behave less and less like "its original role".
+
+## Teaching boundary
+
+This chapter only needs the autonomy main line made clear:
+
+**idle check -> safe claim -> resume work.**
+
+Once this chain is solid, readers have truly understood what "autonomy" means.
+
+Finer claim policies, fair scheduling, event-driven wakeups, and long-term keep-alive should all build after this minimal autonomy chain, not before it.
+
+## Try it
+
+```sh
+cd learn-claude-code
+python agents/s17_autonomous_agents.py
+```
+
+Things to try:
+
+1. Create a few ready tasks, then spawn two teammates, and watch whether they divide the work automatically.
+2. Create some blocked tasks and confirm teammates do not claim them by mistake.
+3. Let a teammate go idle, then send it a message, and watch whether it wakes back up.
+
+The core mindset this chapter builds is:
+
+**autonomy is not letting agents run wild; it is letting them catch the next piece of work under clear rules.**
diff --git a/docs/zh/s18-worktree-task-isolation.md b/docs/zh/s18-worktree-task-isolation.md
new file mode 100644
index 000000000..33f811725
--- /dev/null
+++ b/docs/zh/s18-worktree-task-isolation.md
@@ -0,0 +1,499 @@
+# s18: Worktree + Task Isolation
+
+`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > [ s18 ] > s19`
+
+> *The task board answers "what to do"; worktrees answer "where to do it without stepping on each other".*
+
+## What problem this chapter solves
+
+By `s17`, the system can already:
+
+- break down tasks
+- claim tasks
+- let multiple agents push different work forward in parallel
+
+But if everyone edits files in the same working directory, problems appear fast:
+
+- two tasks modify the same file at once
+- before one task finishes, another task's changes have already polluted the directory
+- when you want to review one task's change scope in isolation, it is hard to tell apart
+
+In other words, the task system answers "who does what", but not yet:
+
+**which isolated workspace each task should execute in.**
+
+That is the problem worktrees solve.
+
+## Suggested companion reading
+
+- If the three layers of task, runtime slot, and worktree lane blur into one word for you, read [`team-task-lane-model.md`](./team-task-lane-model.md) first.
+- To confirm which fields worktree records and task records should each keep, revisit [`data-structures.md`](./data-structures.md).
+- To see, from the "reference repo mainline" angle, why this chapter must come after tasks / teams, see [`s00e-reference-module-map.md`](./s00e-reference-module-map.md).
+
+## A few terms first
+
+### What is a worktree
+
+If you know git, think of a worktree as:
+
+> another independent checkout directory of the same repository.
+
+If you do not know git yet, think of it as:
+
+> an independent working lane that belongs to one task.
+
+### What isolated execution means
+
+Isolated execution means:
+
+> task A runs in its own directory, task B in its own, and by default they do not share uncommitted changes.
+
+### What binding means
+
+Binding means:
+
+> explicitly associating a task ID with a worktree record.
+
+## Minimal mental model
+
+The easiest way to understand this chapter is to split it into two tables:
+ 
+```text
+task board
+  answers: what to do, who is doing it, what state it is in
+
+worktree registry
+  answers: where it is done, where the directory is, which task it maps to
+```
+
+The two connect through `task_id`:
+
+```text
+.tasks/task_12.json
+  {
+    "id": 12,
+    "subject": "Refactor auth flow",
+    "status": "in_progress",
+    "worktree": "auth-refactor"
+  }
+
+.worktrees/index.json
+  {
+    "worktrees": [
+      {
+        "name": "auth-refactor",
+        "path": ".worktrees/auth-refactor",
+        "branch": "wt/auth-refactor",
+        "task_id": 12,
+        "status": "active"
+      }
+    ]
+  }
+```
+
+Understand these two records and you have already caught the chapter's main line:
+
+**tasks record work goals; worktrees record execution lanes.**
+
+## Key data structures
+
+### 1. TaskRecord no longer records just `worktree`
+
+By this point in the teaching code, the task record has more than one lane-related field:
+
+```python
+task = {
+    "id": 12,
+    "subject": "Refactor auth flow",
+    "status": "in_progress",
+    "owner": "alice",
+    "worktree": "auth-refactor",
+    "worktree_state": "active",
+    "last_worktree": "auth-refactor",
+    "closeout": None,
+}
+```
+
+These 4 fields answer different questions:
+
+- `worktree`: which lane is currently bound
+- `worktree_state`: whether the binding is currently `active`, `kept`, `removed`, or `unbound`
+- `last_worktree`: which lane was used most recently
+- `closeout`: what the last closeout action was
+
+Why split so finely?
+
+Because at the multi-agent parallel stage, the system needs more than "where is this being done now"; it also needs:
+
+- whether this lane is still alive
+- whether it was ultimately kept or reclaimed
+- which historical lane to inspect if you later recover or debug
+
+### 2. A WorktreeRecord is more than a path mapping
+
+```python
+worktree = {
+    "name": "auth-refactor",
+    "path": ".worktrees/auth-refactor",
+    "branch": "wt/auth-refactor",
+    "task_id": 12,
+    "status": "active",
+    "last_entered_at": 1710000000.0,
+    "last_command_at": 1710000012.0,
+    "last_command_preview": "pytest tests/auth -q",
+    "closeout": None,
+}
+```
+
+Note in particular:
+
+a worktree record answers not only "where the directory is"; it also starts answering:
+
+- when it was last entered
+- what command last ran in it
+- how it was finally closed out
+
+That is why this chapter is about:
+
+**observable execution lanes**
+
+and not just "open one more directory".
+
+### 3. CloseoutRecord
+
+In the current code for this chapter, a complete closeout record looks roughly like:
+
+```python
+closeout = {
+    "action": "keep",
+    "reason": "Need follow-up review",
+    "at": 1710000100.0,
+}
+```
+
+This record matters because it writes out explicitly "what actually happened at the end", instead of leaving people to guess:
+
+- keep the directory, so it stays available for follow-up
+- or reclaim it, meaning this execution lane has ended
+
+### 4. 
EventRecord
+
+```python
+event = {
+    "event": "worktree.closeout.keep",
+    "task_id": 12,
+    "worktree": "auth-refactor",
+    "reason": "Need follow-up review",
+    "ts": 1710000100.0,
+}
+```
+
+Why event records too?
+
+Because a worktree's lifecycle often spans many steps:
+
+- create
+- enter
+- run commands
+- keep
+- remove
+- removal failure
+
+An explicit event log makes debugging much easier than looking only at current state.
+
+## Minimal implementation
+
+### Step 1: task first, worktree second
+
+Do not open the directory first and backfill the task later.
+
+The clearer order is:
+
+1. create the task first
+2. then allocate a worktree for that task
+
+```python
+task = tasks.create("Refactor auth flow")
+worktrees.create("auth-refactor", task_id=task["id"])
+```
+
+### Step 2: create the worktree and write it into the registry
+
+```python
+def create(self, name: str, task_id: int):
+    path = self.root / ".worktrees" / name
+    branch = f"wt/{name}"
+
+    run_git(["worktree", "add", "-b", branch, str(path), "HEAD"])
+
+    record = {
+        "name": name,
+        "path": str(path),
+        "branch": branch,
+        "task_id": task_id,
+        "status": "active",
+    }
+    self.index["worktrees"].append(record)
+    self._save_index()
+```
+
+### Step 3: also update the task record, not just a single `worktree` field
+
+```python
+def bind_worktree(task_id: int, name: str):
+    task = tasks.load(task_id)
+    task["worktree"] = name
+    task["last_worktree"] = name
+    task["worktree_state"] = "active"
+    if task["status"] == "pending":
+        task["status"] = "in_progress"
+    tasks.save(task)
+```
+
+Why is this step crucial?
+
+Because if only the worktree registry is updated and the task record is not, the system cannot see at a glance from the task board "which isolated directory this task runs in".
+
+### Step 4: enter the lane explicitly, then run commands in its directory
+
+In the current code, entering and running are two separate steps:
+
+```python
+worktree_enter("auth-refactor")
+worktree_run("auth-refactor", "pytest tests/auth -q")
+```
+
+At the lower level this is roughly:
+
+```python
+def enter(self, name: str):
+    self._update_entry(name, last_entered_at=time.time())
+    self.events.emit("worktree.enter", ...)
+
+def run(self, name: str, command: str):
+    subprocess.run(command, cwd=worktree_path, ...)
+```
+
+```python
+subprocess.run(command, cwd=worktree_path, ...)
+```
+
+This line looks plain, but it is exactly the core of the isolation:
+
+**the same command, executed in a different `cwd`, has a different blast radius.**
+
+Why add a separate `worktree_enter` at all? 
+ +因为教学上你要让读者看见: + +- “分配车道”是一回事 +- “真正进入并开始在这条车道里工作”是另一回事 + +这层边界一清楚,后面的观察字段才有意义: + +- `last_entered_at` +- `last_command_at` +- `last_command_preview` + +### 第五步:收尾时显式走 `worktree_closeout` + +不要让收尾是隐式的。 + +当前更清楚的教学接口不是“分散记两个命令”,而是统一成一个 closeout 动作: + +```python +worktree_closeout( + name="auth-refactor", + action="keep", # or "remove" + reason="Need follow-up review", + complete_task=False, +) +``` + +这样读者会更容易理解: + +- 收尾一定要选动作 +- 收尾可以带原因 +- 收尾会同时回写任务记录、车道记录和事件日志 + +当然,底层仍然保留: + +- `worktree_keep(name)` +- `worktree_remove(name, reason=..., complete_task=True)` + +但教学主线最好先把: + +> `keep` 和 `remove` 看成同一个 closeout 决策的两个分支 + +这样读者心智会更顺。 + +## 为什么 `worktree_state` 和 `status` 要分开 + +这也是一个很容易被忽略的细点。 + +很多初学者会想: + +> “任务有 `status` 了,为什么还要 `worktree_state`?” + +因为这两个状态根本不是一层东西: + +- 任务 `status` 回答:这件工作现在是 `pending`、`in_progress` 还是 `completed` +- `worktree_state` 回答:这条执行车道现在是 `active`、`kept`、`removed` 还是 `unbound` + +举个最典型的例子: + +```text +任务已经 completed + 但 worktree 仍然 kept +``` + +这完全可能,而且很常见。 +比如你已经做完了,但还想保留目录给 reviewer 看。 + +所以: + +**任务状态和车道状态不能混成一个字段。** + +## 为什么 worktree 不是“只是一个 git 小技巧” + +很多初学者第一次看到这一章,会觉得: + +> “这不就是多开几个目录吗?” + +这句话只说对了一半。 + +真正关键的不只是“多开目录”,而是: + +**把任务和执行目录做显式绑定,让并行工作有清楚的边界。** + +如果没有这层绑定,系统仍然不知道: + +- 哪个目录属于哪个任务 +- 收尾时该完成哪条任务 +- 崩溃后该恢复哪条关系 + +## 如何接到前面章节里 + +这章和前面几章是强耦合的: + +- `s12` 提供任务 ID +- `s15-s17` 提供队友和认领机制 +- `s18` 则给这些任务提供独立执行车道 + +把三者连起来看,会变成: + +```text +任务被创建 + -> +队友认领任务 + -> +系统为任务分配 worktree + -> +命令在对应目录里执行 + -> +任务完成时决定保留还是删除 worktree +``` + +这条链一旦建立,多 agent 并行工作就会清楚很多。 + +## worktree 不是任务本身,而是任务的执行车道 + +这句话值得单独再说一次。 + +很多读者第一次学到这里时,会把这两个词混着用: + +- task +- worktree + +但它们回答的其实不是同一个问题: + +- task:做什么 +- worktree:在哪做 + +所以更完整、也更不容易混的表达方式是: + +- 工作图任务 +- worktree 执行车道 + +如果你开始分不清: + +- 任务 +- 运行时任务 +- worktree + +建议回看: + +- [`team-task-lane-model.md`](./team-task-lane-model.md) +- [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) +- [`entity-map.md`](./entity-map.md) + +## 初学者最容易犯的错 + +### 1. 
有 worktree 注册表,但任务记录里没有 `worktree` + +这样任务板就丢掉了最重要的一条执行信息。 + +### 2. 有任务 ID,但命令仍然在主目录执行 + +如果 `cwd` 没切过去,worktree 形同虚设。 + +### 3. 只会 `worktree_remove`,不会解释 closeout 的含义 + +这样读者最后只记住“删目录”这个动作,却不知道系统真正想表达的是: + +- 保留 +- 回收 +- 为什么这么做 +- 是否同时完结对应任务 + +### 4. 删除 worktree 前不看未提交改动 + +这是最危险的一类错误。 + +教学版也应该至少先建立一个原则: + +**删除前先检查是否有脏改动。** + +### 5. 没有 `worktree_state` / `closeout` 这类显式收尾状态 + +这样系统就会只剩下“现在目录还在不在”,而没有: + +- 这条车道最后怎么收尾 +- 是主动保留还是主动删除 + +### 6. 把 worktree 当成长期垃圾堆 + +如果从不清理,目录会越来越多,状态越来越乱。 + +### 7. 没有事件日志 + +一旦创建失败、删除失败或任务关系错乱,没有事件日志会很难排查。 + +## 教学边界 + +这章先要讲透的不是所有 worktree 运维细节,而是主干分工: + +- task 记录“做什么” +- worktree 记录“在哪做” +- enter / execute / closeout 串起这条隔离执行车道 + +只要这条主干清楚,教学目标就已经达成。 + +崩溃恢复、删除安全检查、全局缓存区、非 git 回退这些,都应该放在这条主干之后。 + +## 试一试 + +```sh +cd learn-claude-code +python agents/s18_worktree_task_isolation.py +``` + +可以试试这些任务: + +1. 为两个不同任务各建一个 worktree,观察任务板和注册表的对应关系。 +2. 分别在两个 worktree 里运行 `git status`,感受目录隔离。 +3. 删除一个 worktree,并确认对应任务是否被正确收尾。 + +读完这一章,你应该能自己说清楚这句话: + +**任务系统管“做什么”,worktree 系统管“在哪做且互不干扰”。** diff --git a/docs/zh/s19-mcp-plugin.md b/docs/zh/s19-mcp-plugin.md new file mode 100644 index 000000000..af745fc86 --- /dev/null +++ b/docs/zh/s19-mcp-plugin.md @@ -0,0 +1,392 @@ +# s19: MCP & Plugin System (MCP 与插件系统) + +`s00 > s01 > s02 > s03 > s04 > s05 > s06 > s07 > s08 > s09 > s10 > s11 > s12 > s13 > s14 > s15 > s16 > s17 > s18 > [ s19 ]` + +> *工具不必都写死在主程序里。外部进程也可以把能力接进你的 agent。* + +## 这一章到底在讲什么 + +前面所有章节里,工具基本都写在你自己的 Python 代码里。 + +这当然是最适合教学的起点。 + +但真实系统走到一定阶段以后,会很自然地遇到这个需求: + +> “能不能让外部程序也把工具接进来,而不用每次都改主程序?” + +这就是 MCP 要解决的问题。 + +## 先用最简单的话解释 MCP + +你可以先把 MCP 理解成: + +**一套让 agent 和外部工具程序对话的统一协议。** + +在教学版里,不必一开始就背很多协议细节。 +你只要先抓住这条主线: + +1. 启动一个外部工具服务进程 +2. 问它“你有哪些工具” +3. 当模型要用它的工具时,把请求转发给它 +4. 
再把结果带回 agent 主循环 + +这已经够理解 80% 的核心机制了。 + +## 为什么这一章放在最后 + +因为 MCP 不是主循环的起点,而是主循环稳定之后的扩展层。 + +如果你还没真正理解: + +- agent loop +- tool call +- permission +- task +- worktree + +那 MCP 只会看起来像又一套复杂接口。 + +但当你已经有了前面的心智,再看 MCP,你会发现它本质上只是: + +**把“工具来源”从“本地硬编码”升级成“外部可插拔”。** + +## 建议联读 + +- 如果你只把 MCP 理解成“远程 tools”,先看 [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md),把 tools、resources、prompts、plugin 中介层一起放回平台边界里。 +- 如果你想确认外部能力为什么仍然要回到同一条执行面,回看 [`s02b-tool-execution-runtime.md`](./s02b-tool-execution-runtime.md)。 +- 如果你开始把“query 控制平面”和“外部能力路由”完全分开理解,建议配合看 [`s00a-query-control-plane.md`](./s00a-query-control-plane.md)。 + +## 最小心智模型 + +```text +LLM + | + | asks to call a tool + v +Agent tool router + | + +-- native tool -> 本地 Python handler + | + +-- MCP tool -> 外部 MCP server + | + v + return result +``` + +## 最小系统里最重要的三件事 + +### 1. 有一个 MCP client + +它负责: + +- 启动外部进程 +- 发送请求 +- 接收响应 + +### 2. 有一个工具名前缀规则 + +这是为了避免命名冲突。 + +最常见的做法是: + +```text +mcp__{server}__{tool} +``` + +比如: + +```text +mcp__postgres__query +mcp__browser__open_tab +``` + +这样一眼就知道: + +- 这是 MCP 工具 +- 它来自哪个 server +- 它原始工具名是什么 + +### 3. 
有一个统一路由器 + +路由器只做一件事: + +- 如果是本地工具,就交给本地 handler +- 如果是 MCP 工具,就交给 MCP client + +## Plugin 又是什么 + +如果 MCP 解决的是“外部工具怎么通信”, +那 plugin 解决的是“这些外部工具配置怎么被发现”。 + +最小 plugin 可以非常简单: + +```text +.claude-plugin/ + plugin.json +``` + +里面写: + +- 插件名 +- 版本 +- 它提供哪些 MCP server +- 每个 server 的启动命令是什么 + +## 最小配置长什么样 + +```json +{ + "name": "my-db-tools", + "version": "1.0.0", + "mcpServers": { + "postgres": { + "command": "npx", + "args": ["-y", "@modelcontextprotocol/server-postgres"] + } + } +} +``` + +这个配置并不复杂。 + +它本质上只是在告诉主程序: + +> “如果你想接这个 server,就用这条命令把它拉起来。” + +## 最小实现步骤 + +### 第一步:写一个 `MCPClient` + +它至少要有三个能力: + +- `connect()` +- `list_tools()` +- `call_tool()` + +### 第二步:把外部工具标准化成 agent 能看懂的工具定义 + +也就是说,把 MCP server 暴露的工具,转成 agent 工具池里的统一格式。 + +### 第三步:加前缀 + +这样主程序就能区分: + +- 本地工具 +- 外部工具 + +### 第四步:写一个 router + +```python +if tool_name.startswith("mcp__"): + return mcp_router.call(tool_name, arguments) +else: + return native_handler(arguments) +``` + +### 第五步:仍然走同一条权限管道 + +这是非常关键的一点: + +**MCP 工具虽然来自外部,但不能绕开 permission。** + +不然你等于在系统边上开了个安全后门。 + +如果你想把这一层再收得更稳,最好再把结果也标准化回同一条总线: + +```python +{ + "source": "mcp", + "server": "figma", + "tool": "inspect", + "status": "ok", + "preview": "...", +} +``` + +这表示: + +- 路由前要过共享权限闸门 +- 路由后不论本地还是远程,结果都要转成主循环看得懂的统一格式 + +## 如何接到整个系统里 + +如果你读到这里还觉得 MCP 像“外挂”,通常是因为没有把它放回整条主回路里。 + +更完整的接法应该看成: + +```text +启动时 + -> +PluginLoader 找到 manifest + -> +得到 server 配置 + -> +MCP client 连接 server + -> +list_tools 并标准化名字 + -> +和 native tools 一起合并进同一个工具池 + +运行时 + -> +LLM 产出 tool_use + -> +统一权限闸门 + -> +native route 或 mcp route + -> +结果标准化 + -> +tool_result 回到同一个主循环 +``` + +这段流程里最关键的不是“外部”两个字,而是: + +**进入方式不同,但进入后必须回到同一条控制面和执行面。** + +## Plugin、MCP Server、MCP Tool 不要混成一层 + +这是初学者最容易在本章里打结的地方。 + +可以直接按下面三层记: + +| 层级 | 它是什么 | 它负责什么 | +|---|---|---| +| plugin manifest | 一份配置声明 | 告诉系统要发现和启动哪些 server | +| MCP server | 一个外部进程 / 连接对象 | 对外暴露一组能力 | +| MCP tool | server 暴露的一项具体调用能力 | 真正被模型点名调用 | + +换成一句最短的话说: + +- plugin 负责“发现” +- server 负责“连接” +- tool 负责“调用” + 
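+如果想把“发现 -> 连接 -> 调用”这三层落到代码形状上,可以对照下面这个最小示意草图(其中 `make_prefixed_name` 只是教学示意的名字,不是真实接口):
+
+```python
+# plugin manifest:负责“发现”——声明有哪些 server、用什么命令拉起来
+manifest = {
+    "name": "my-db-tools",
+    "mcpServers": {
+        "postgres": {
+            "command": "npx",
+            "args": ["-y", "@modelcontextprotocol/server-postgres"],
+        },
+    },
+}
+
+# server:负责“连接”——manifest 里的每个条目,对应一个外部进程 / client
+# tool:负责“调用”——连接后 list_tools 拿到的每个工具,都加上统一前缀
+def make_prefixed_name(server: str, tool: str) -> str:
+    return f"mcp__{server}__{tool}"
+
+# 比如 postgres server 暴露的 query 工具:
+# make_prefixed_name("postgres", "query") -> "mcp__postgres__query"
+```
+
+这个草图本身不做任何真实连接,它只负责把三层职责放进同一个视野里。
+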
+只要这三层还分得清,MCP 这章的主体心智就不会乱。 + +## 这一章最关键的数据结构 + +### 1. server 配置 + +```python +{ + "command": "npx", + "args": ["-y", "..."], + "env": {} +} +``` + +### 2. 标准化后的工具定义 + +```python +{ + "name": "mcp__postgres__query", + "description": "Run a SQL query", + "input_schema": {...} +} +``` + +### 3. client 注册表 + +```python +clients = { + "postgres": mcp_client_instance +} +``` + +## 初学者最容易被带偏的地方 + +### 1. 一上来讲太多协议细节 + +这章最容易失控。 + +因为一旦开始讲完整协议生态,很快会出现: + +- transports +- auth +- resources +- prompts +- streaming +- connection recovery + +这些都存在,但不该挡住主线。 + +主线只有一句话: + +**外部工具也能像本地工具一样接进 agent。** + +### 2. 把 MCP 当成一套完全不同的工具系统 + +不是。 + +它最终仍然应该汇入你原来的工具体系: + +- 一样要注册 +- 一样要出现在工具池里 +- 一样要过权限 +- 一样要返回 `tool_result` + +### 3. 忽略命名与路由 + +如果没有统一前缀和统一路由,系统会很快乱掉。 + +## 教学边界 + +这一章正文先停在 `tools-first` 是对的。 + +因为教学主线最需要先讲清的是: + +- 外部能力怎样被发现 +- 怎样被统一命名和路由 +- 怎样继续经过同一条权限与 `tool_result` 回流 + +只要这一层已经成立,读者就已经真正理解了: + +**MCP / plugin 不是外挂,而是接回同一控制面的外部能力入口。** + +transport、认证、resources、prompts、插件生命周期这些更大范围的内容,应该放到平台桥接资料里继续展开。 + +## 正文先停在 tools-first,平台层再看桥接文档 + +这一章的正文故意停在“外部工具如何接进 agent”这一层。 +这是教学上的刻意取舍,不是缺失。 + +如果你准备继续补平台边界,再去看: + +- [`s19a-mcp-capability-layers.md`](./s19a-mcp-capability-layers.md) + +那篇会把 MCP 再往上补成一张平台地图,包括: + +- server 配置作用域 +- transport 类型 +- 连接状态:`connected / pending / needs-auth / failed / disabled` +- tools 之外的 `resources / prompts / elicitation` +- auth 该放在哪一层理解 + +这样安排的好处是: + +- 正文不失焦 +- 读者又不会误以为 MCP 只有一个 `list_tools + call_tool` + +## 这一章和全仓库的关系 + +如果说前 18 章都在教你把系统内部搭起来, +那 `s19` 在教你: + +**如何把系统向外打开。** + +从这里开始,工具不再只来自你手写的 Python 文件, +还可以来自别的进程、别的系统、别的服务。 + +这就是为什么它适合作为最后一章。 + +## 学完这章后,你应该能回答 + +- MCP 的核心到底是什么? +- 为什么它应该放在整个学习路径的最后? +- 为什么 MCP 工具也必须走同一条权限与路由逻辑? +- plugin 和 MCP 分别解决什么问题? 
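+
+带着这些问题,还可以把“统一前缀 + 统一路由 + 共享权限闸门 + 结果标准化”这条主线收成一个最小示意草图(`check_permission`、`mcp_router`、`native_handler` 都是教学假设的名字,不是真实实现):
+
+```python
+def dispatch(tool_name, arguments):
+    # 路由前:本地工具和 MCP 工具都先过同一道权限闸门
+    if not check_permission(tool_name, arguments):
+        return {"status": "denied", "tool": tool_name}
+
+    # 路由:按统一前缀区分工具来源
+    if tool_name.startswith("mcp__"):
+        _, server, tool = tool_name.split("__", 2)
+        raw = mcp_router.call(tool_name, arguments)
+        source = "mcp"
+    else:
+        server, tool, source = None, tool_name, "native"
+        raw = native_handler(tool_name, arguments)
+
+    # 路由后:不论来源,结果都标准化回同一条 tool_result 总线
+    return {
+        "source": source,
+        "server": server,
+        "tool": tool,
+        "status": "ok",
+        "preview": str(raw)[:200],
+    }
+```
+
+真实系统当然比这更复杂,但“闸门在路由前、标准化在路由后”这个顺序,是本章最值得先记住的骨架。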
+ +--- + +**一句话记住:MCP 的本质,不是协议名词堆砌,而是把外部工具安全、统一地接进你的 agent。** diff --git a/docs/zh/s19a-mcp-capability-layers.md b/docs/zh/s19a-mcp-capability-layers.md new file mode 100644 index 000000000..cd7736507 --- /dev/null +++ b/docs/zh/s19a-mcp-capability-layers.md @@ -0,0 +1,266 @@ +# s19a: MCP Capability Layers (MCP 能力层地图) + +> `s19` 的主线仍然应该坚持“先做 tools-first”。 +> 这篇桥接文档负责补上另一层心智: +> +> **MCP 不只是外部工具接入,它是一组能力层。** + +## 建议怎么联读 + +如果你希望 MCP 这块既不学偏,也不学浅,推荐这样看: + +- 先看 [`s19-mcp-plugin.md`](./s19-mcp-plugin.md),先把 tools-first 主线走通。 +- 再看 [`s02a-tool-control-plane.md`](./s02a-tool-control-plane.md),确认外部能力最后怎样接回统一工具总线。 +- 如果状态结构开始混,再对照 [`data-structures.md`](./data-structures.md)。 +- 如果概念边界开始混,再回 [`glossary.md`](./glossary.md) 和 [`entity-map.md`](./entity-map.md)。 + +## 为什么要单独补这一篇 + +如果你是为了教学,从 0 到 1 手搓一个类似系统,那么 `s19` 主线先只讲外部工具,这是对的。 + +因为最容易理解的入口就是: + +- 连接一个外部 server +- 拿到工具列表 +- 调用工具 +- 把结果带回 agent + +但如果你想把系统做到接近 95%-99% 的还原度,你迟早会遇到这些问题: + +- server 是用 stdio、http、sse 还是 ws 连接? +- 为什么有些 server 是 connected,有些是 pending,有些是 needs-auth? +- tools 之外,resources 和 prompts 是什么位置? +- elicitation 为什么会变成一类特殊交互? +- OAuth / XAA 这种认证流程该放在哪一层理解? + +这时候如果没有一张“能力层地图”,MCP 就会越学越散。 + +## 先解释几个名词 + +### 什么是能力层 + +能力层,就是把一个复杂系统拆成几层职责清楚的面。 + +这里的意思是: + +> 不要把所有 MCP 细节混成一团,而要知道每一层到底解决什么问题。 + +### 什么是 transport + +`transport` 可以理解成“连接通道”。 + +比如: + +- stdio +- http +- sse +- websocket + +### 什么是 elicitation + +这个词比较生。 + +你可以先把它理解成: + +> 外部 MCP server 反过来向用户请求额外输入的一种交互。 + +也就是说,不再只是 agent 主动调工具,而是 server 也能说: + +“我还需要你给我一点信息,我才能继续。” + +## 最小心智模型 + +先把 MCP 画成 6 层: + +```text +1. Config Layer + server 配置长什么样 + +2. Transport Layer + 用什么通道连 server + +3. Connection State Layer + 现在是 connected / pending / failed / needs-auth + +4. Capability Layer + tools / resources / prompts / elicitation + +5. Auth Layer + 是否需要认证,认证状态如何 + +6. 
Router Integration Layer + 如何接回 tool router / permission / notifications +``` + +最重要的一点是: + +**tools 只是其中一层,不是全部。** + +## 为什么正文仍然应该坚持 tools-first + +这点非常重要。 + +虽然 MCP 平台本身有多层能力,但正文主线仍然应该这样安排: + +### 第一步:先教外部 tools + +因为它和前面的主线最自然衔接: + +- 本地工具 +- 外部工具 +- 同一条 router + +### 第二步:再告诉读者还有其他能力层 + +例如: + +- resources +- prompts +- elicitation +- auth + +### 第三步:再决定是否继续实现 + +这才符合你的教学目标: + +**先做出类似系统,再补平台层高级能力。** + +## 关键数据结构 + +### 1. ScopedMcpServerConfig + +最小教学版建议至少让读者看到这个概念: + +```python +config = { + "name": "postgres", + "type": "stdio", + "command": "npx", + "args": ["-y", "..."], + "scope": "project", +} +``` + +这里的 `scope` 很重要。 + +因为 server 配置不一定都来自同一个地方。 + +### 2. MCP Connection State + +```python +server_state = { + "name": "postgres", + "status": "connected", # pending / failed / needs-auth / disabled + "config": {...}, +} +``` + +### 3. MCPToolSpec + +```python +tool = { + "name": "mcp__postgres__query", + "description": "...", + "input_schema": {...}, +} +``` + +### 4. ElicitationRequest + +```python +request = { + "server_name": "some-server", + "message": "Please provide additional input", + "requested_schema": {...}, +} +``` + +这一步不是要求你主线立刻实现它,而是要让读者知道: + +**MCP 不一定永远只是“模型调工具”。** + +## 一张更完整但仍然清楚的图 + +```text +MCP Config + | + v +Transport + | + v +Connection State + | + +-- connected + +-- pending + +-- needs-auth + +-- failed + | + v +Capabilities + +-- tools + +-- resources + +-- prompts + +-- elicitation + | + v +Router / Permission / Notification Integration +``` + +## Auth 为什么不要在主线里讲太多 + +这也是教学取舍里很重要的一点。 + +认证是真实系统里确实存在的能力层。 +但如果正文一开始就掉进 OAuth/XAA 流程,初学者会立刻丢主线。 + +所以更好的讲法是: + +- 先告诉读者:有 auth layer +- 再告诉读者:connected / needs-auth 是不同连接状态 +- 只有做平台层进阶时,再详细展开认证流程 + +这就既没有幻觉,也没有把人带偏。 + +## 它和 `s19`、`s02a` 的关系 + +- `s19` 正文继续负责 tools-first 教学 +- 这篇负责补清平台层地图 +- `s02a` 的 Tool Control Plane 则解释 MCP 最终怎么接回统一工具总线 + +三者合在一起,读者才会真正知道: + +**MCP 是外部能力平台,而 tools 只是它最先进入主线的那个切面。** + +## 初学者最容易犯的错 + +### 1. 
把 MCP 只理解成“外部工具目录” + +这会让后面遇到 auth / resources / prompts / elicitation 时很困惑。 + +### 2. 一上来就沉迷 transport 和 OAuth 细节 + +这样会直接打断主线。 + +### 3. 让 MCP 工具绕过 permission + +这会在系统边上开一个很危险的后门。 + +### 4. 不区分 server 配置、连接状态、能力暴露 + +这三层一混,平台层就会越学越乱。 + +## 教学边界 + +这篇最重要的,不是把 MCP 所有外设细节都讲完,而是先守住四层边界: + +- server 配置 +- 连接状态 +- capability 暴露 +- permission / routing 接入点 + +只要这四层不混,你就已经能自己手搓一个接近真实系统主脉络的外部能力入口。 +认证状态机、resource/prompt 接入、server 回问和重连策略,都属于后续平台扩展。 + +## 一句话记住 + +**`s19` 主线应该先教“外部工具接入”,而平台层还需要额外理解 MCP 的能力层地图。** diff --git a/docs/zh/teaching-scope.md b/docs/zh/teaching-scope.md new file mode 100644 index 000000000..3f87cd660 --- /dev/null +++ b/docs/zh/teaching-scope.md @@ -0,0 +1,213 @@ +# Teaching Scope (教学范围说明) + +> 这份文档不是讲某一章,而是说明整个教学仓库到底要教什么、不教什么,以及每一章应该怎么写才不会把读者带偏。 + +## 这份仓库的目标 + +这不是一份“逐行对照某份源码”的注释仓库。 + +这份仓库真正的目标是: + +**教开发者从 0 到 1 手搓一个结构完整、高保真的 coding agent harness。** + +这里强调 3 件事: + +1. 读者真的能自己实现出来。 +2. 读者能抓住系统主脉络,而不是淹没在边角细节里。 +3. 读者对关键机制的理解足够高保真,不会学到不存在的机制。 + +## 什么必须讲清楚 + +主线章节必须优先讲清下面这些内容: + +- 整个系统有哪些核心模块 +- 模块之间如何协作 +- 每个模块解决什么问题 +- 关键状态保存在哪里 +- 关键数据结构长什么样 +- 主循环如何把这些机制接进来 + +如果一个章节讲完以后,读者还不知道“这个机制到底放在系统哪一层、保存了哪些状态、什么时候被调用”,那这章就还没讲透。 + +## 什么不要占主线篇幅 + +下面这些内容,不是完全不能提,而是**不应该占用主线正文的大量篇幅**: + +- 打包、编译、发布流程 +- 跨平台兼容胶水 +- 遥测、企业策略、账号体系 +- 与教学主线无关的历史兼容分支 +- 只对特定产品环境有意义的接线细节 +- 某份上游源码里的函数名、文件名、行号级对照 + +这些内容最多作为: + +- 维护者备注 +- 附录 +- 桥接资料里的平台扩展说明 + +而不应该成为初学者第一次学习时的主线。 + +## 真正的“高保真”是什么意思 + +教学仓库追求的高保真,不是所有边角细节都 1:1。 + +这里的高保真,是指这些东西要尽量贴近真实系统主干: + +- 核心运行模式 +- 主要模块边界 +- 关键数据结构 +- 模块之间的协作方式 +- 关键状态转换 + +换句话说: + +**主干尽量高保真,外围细节可以做教学取舍。** + +## 面向谁来写 + +本仓库默认读者不是“已经做过复杂 agent 平台的人”。 + +更合理的默认读者应该是: + +- 会一点编程 +- 能读懂基本 Python +- 但没有系统实现过 agent + +所以写作时要假设: + +- 很多术语是第一次见 +- 很多系统设计名词不能直接甩出来不解释 +- 同一个概念不能分散在五个地方才拼得完整 + +## 每一章的推荐结构 + +主线章节尽量遵守这条顺序: + +1. `这一章要解决什么问题` +2. `先解释几个名词` +3. `最小心智模型` +4. `关键数据结构` +5. `最小实现` +6. `如何接到主循环里` +7. `初学者最容易犯的错` +8. 
`教学边界` + +这条顺序的价值在于: + +- 先让读者知道为什么需要这个机制 +- 再让读者知道这个机制到底是什么 +- 然后马上看到它怎么落地 + +这里把最后一节写成 `教学边界`,而不是“继续补一大串外围复杂度清单”,是因为教学仓库更应该先帮读者守住: + +- 这一章先学到哪里就够了 +- 哪些复杂度现在不要一起拖进来 +- 读者真正该自己手搓出来的最小正确版本是什么 + +## 术语使用规则 + +只要出现这些类型的词,就应该解释: + +- 软件设计模式 +- 数据结构名词 +- 并发与进程相关名词 +- 协议与网络相关名词 +- 初学者不熟悉的工程术语 + +例如: + +- 状态机 +- 调度器 +- 队列 +- worktree +- DAG +- 协议 envelope + +不要只给名字,不给解释。 + +## “最小正确版本”原则 + +很多真实机制都很复杂。 + +但教学版不应该一开始就把所有分支一起讲。 + +更好的顺序是: + +1. 先给出一个最小但正确的版本 +2. 解释它已经解决了哪部分核心问题 +3. 再讲如果继续迭代应该补什么 + +例如: + +- 权限系统先做 `deny -> mode -> allow -> ask` +- 错误恢复先做 3 条主恢复路径 +- 任务系统先做任务记录、依赖、解锁 +- 团队协议先做 request/response + request_id + +## 文档和代码要一起维护,而不是各讲各的 + +如果正文和本地 `agents/*.py` 没有对齐,读者一打开代码就会重新混乱。 + +所以维护者重写章节时,应该同步检查三件事: + +1. 这章正文里的关键状态,代码里是否真有对应结构 +2. 这章正文里的主回路,代码里是否真有对应入口函数 +3. 这章正文里强调的“教学边界”,代码里是否也没有提前塞进过多外层复杂度 + +最稳的做法是让每章都能对应到: + +- 1 个主文件 +- 1 组关键状态结构 +- 1 条最值得先看的执行路径 + +如果维护者需要一份“按章节读本仓库代码”的地图,建议配合看: + +- [`s00f-code-reading-order.md`](./s00f-code-reading-order.md) + +## 维护者重写时的检查清单 + +如果你在重写某一章,可以用下面这份清单自检: + +- 这章第一屏有没有明确说明“为什么需要它” +- 是否先解释了新名词,再使用新名词 +- 是否给出了最小心智模型图或流程 +- 是否明确列出关键数据结构 +- 是否说明了它如何接进主循环 +- 是否区分了“核心机制”和“产品化外围细节” +- 是否列出了初学者最容易混淆的点 +- 是否避免制造源码里并不存在的幻觉机制 + +## 维护者如何使用“逆向源码” + +逆向得到的源码,在这套仓库里应当只扮演一个角色: + +**维护者的校准参考。** + +它的用途是: + +- 校验主干机制有没有讲错 +- 校验关键状态和模块边界有没有遗漏 +- 校验教学实现有没有偏离到错误方向 + +它不应该成为读者理解正文的前提。 + +正文应该做到: + +> 即使读者完全不看那份源码,也能把核心系统自己做出来。 + +## 这份教学仓库应该追求什么分数 + +如果满分是 150 分,一个接近满分的教学仓库应同时做到: + +- 主线清楚 +- 章节顺序合理 +- 新名词解释完整 +- 数据结构清晰 +- 机制边界准确 +- 例子可运行 +- 升级路径自然 + +真正决定分数高低的,不是“提到了多少细节”,而是: + +**提到的关键细节是否真的讲透,没提的非关键细节是否真的可以安全省略。** diff --git a/docs/zh/team-task-lane-model.md b/docs/zh/team-task-lane-model.md new file mode 100644 index 000000000..6385733aa --- /dev/null +++ b/docs/zh/team-task-lane-model.md @@ -0,0 +1,339 @@ +# Team Task Lane Model (队友-任务-车道模型) + +> 到了 `s15-s18`,读者最容易混掉的,不是某个函数名,而是: +> +> **系统里到底是谁在工作、谁在协调、谁在记录目标、谁在提供执行目录。** + +## 这篇桥接文档解决什么问题 + +如果你一路从 `s15` 看到 `s18`,脑子里很容易把下面这些词混在一起: + +- teammate +- protocol request +- task +- 
runtime task +- worktree + +它们都和“工作推进”有关。 +但它们不是同一层。 + +如果这层边界不单独讲清,后面读者会经常出现这些困惑: + +- 队友是不是任务本身? +- `request_id` 和 `task_id` 有什么区别? +- worktree 是不是后台任务的一种? +- 一个任务完成了,为什么 worktree 还能保留? + +这篇就是专门用来把这几层拆开的。 + +## 建议怎么联读 + +最推荐的读法是: + +1. 先看 [`s15-agent-teams.md`](./s15-agent-teams.md),确认长期队友在讲什么。 +2. 再看 [`s16-team-protocols.md`](./s16-team-protocols.md),确认请求-响应协议在讲什么。 +3. 再看 [`s17-autonomous-agents.md`](./s17-autonomous-agents.md),确认自治认领在讲什么。 +4. 最后看 [`s18-worktree-task-isolation.md`](./s18-worktree-task-isolation.md),确认隔离执行车道在讲什么。 + +如果你开始混: + +- 回 [`entity-map.md`](./entity-map.md) 看模块边界。 +- 回 [`data-structures.md`](./data-structures.md) 看记录结构。 +- 回 [`s13a-runtime-task-model.md`](./s13a-runtime-task-model.md) 看“目标任务”和“运行时执行槽位”的差别。 + +## 先给结论 + +先记住这一组最重要的区分: + +```text +teammate + = 谁在长期参与协作 + +protocol request + = 团队内部一次需要被追踪的协调请求 + +task + = 要做什么 + +runtime task / execution slot + = 现在有什么执行单元正在跑 + +worktree + = 在哪做,而且不和别人互相踩目录 +``` + +这五层里,最容易混的是最后三层: + +- `task` +- `runtime task` +- `worktree` + +所以你必须反复问自己: + +- 这是“目标”吗? +- 这是“执行中的东西”吗? +- 这是“执行目录”吗? + +## 一张最小清晰图 + +```text +Team Layer + teammate: alice (frontend) + teammate: bob (backend) + +Protocol Layer + request_id=req_01 + kind=plan_approval + status=pending + +Work Graph Layer + task_id=12 + subject="Implement login page" + owner="alice" + status="in_progress" + +Runtime Layer + runtime_id=rt_01 + type=in_process_teammate + status=running + +Execution Lane Layer + worktree=login-page + path=.worktrees/login-page + status=active +``` + +你可以看到: + +- `alice` 不是任务 +- `request_id` 不是任务 +- `runtime_id` 也不是任务 +- `worktree` 更不是任务 + +真正表达“这件工作本身”的,只有 `task_id=12` 那层。 + +## 1. Teammate:谁在长期协作 + +这是 `s15` 开始建立的层。 + +它回答的是: + +- 这个长期 worker 叫什么 +- 它是什么角色 +- 它当前是 working、idle 还是 shutdown +- 它有没有独立 inbox + +最小例子: + +```python +member = { + "name": "alice", + "role": "frontend", + "status": "idle", +} +``` + +这层的核心不是“又多开一个 agent”。 + +而是: + +> 系统开始有长期存在、可重复接活、可被点名协作的身份。 + +## 2. 
Protocol Request:谁在协调什么 + +这是 `s16` 建立的层。 + +它回答的是: + +- 有谁向谁发起了一个需要追踪的请求 +- 这条请求是什么类型 +- 它现在是 pending、approved 还是 rejected + +最小例子: + +```python +request = { + "request_id": "a1b2c3d4", + "kind": "plan_approval", + "from": "alice", + "to": "lead", + "status": "pending", +} +``` + +这一层不要和普通聊天混。 + +因为它不是“发一条消息就算完”,而是: + +> 一条可以被继续更新、继续审核、继续恢复的协调记录。 + +## 3. Task:要做什么 + +这是 `s12` 的工作图任务,也是 `s17` 自治认领的对象。 + +它回答的是: + +- 目标是什么 +- 谁负责 +- 是否有阻塞 +- 当前进度如何 + +最小例子: + +```python +task = { + "id": 12, + "subject": "Implement login page", + "status": "in_progress", + "owner": "alice", + "blockedBy": [], +} +``` + +这层的关键词是: + +**目标** + +不是目录,不是协议,不是进程。 + +## 4. Runtime Task / Execution Slot:现在有什么执行单元在跑 + +这一层在 `s13` 的桥接文档里已经单独解释过,但到了 `s15-s18` 必须再提醒一次。 + +比如: + +- 一个后台 shell 正在跑 +- 一个长期 teammate 正在工作 +- 一个 monitor 正在观察外部状态 + +这些都更像: + +> 正在运行的执行槽位 + +而不是“任务目标本身”。 + +最小例子: + +```python +runtime = { + "id": "rt_01", + "type": "in_process_teammate", + "status": "running", + "work_graph_task_id": 12, +} +``` + +这里最重要的边界是: + +- 一个任务可以派生多个 runtime task +- 一个 runtime task 通常只是“如何执行”的一个实例 + +## 5. 
Worktree:在哪做 + +这是 `s18` 建立的执行车道层。 + +它回答的是: + +- 这份工作在哪个独立目录里做 +- 这条目录车道对应哪个任务 +- 这条车道现在是 active、kept 还是 removed + +最小例子: + +```python +worktree = { + "name": "login-page", + "path": ".worktrees/login-page", + "task_id": 12, + "status": "active", +} +``` + +这层的关键词是: + +**执行边界** + +它不是工作目标本身,而是: + +> 让这份工作在独立目录里推进的执行车道。 + +## 这五层怎么连起来 + +你可以把后段章节连成下面这条链: + +```text +teammate + 通过 protocol request 协调 + 认领 task + 作为一个 runtime execution slot 持续运行 + 在某条 worktree lane 里改代码 +``` + +如果写得更具体一点,会变成: + +```text +alice (teammate) + -> +收到或发起一个 request_id + -> +认领 task #12 + -> +开始作为执行单元推进工作 + -> +进入 worktree "login-page" + -> +在 .worktrees/login-page 里运行命令和改文件 +``` + +## 一个最典型的混淆例子 + +很多读者会把这句话说成: + +> “alice 就是在做 login-page 这个 worktree 任务。” + +这句话把三层东西混成了一句: + +- `alice`:队友 +- `login-page`:worktree +- “任务”:工作图任务 + +更准确的说法应该是: + +> `alice` 认领了 `task #12`,并在 `login-page` 这条 worktree 车道里推进它。 + +一旦你能稳定地这样表述,后面几章就不容易乱。 + +## 初学者最容易犯的错 + +### 1. 把 teammate 和 task 混成一个对象 + +队友是执行者,任务是目标。 + +### 2. 把 `request_id` 和 `task_id` 混成一个 ID + +一个负责协调,一个负责工作目标,不是同一层。 + +### 3. 把 runtime slot 当成 durable task + +运行时执行单元会结束,但 durable task 还可能继续存在。 + +### 4. 把 worktree 当成任务本身 + +worktree 只是执行目录边界,不是任务目标。 + +### 5. 只会讲“系统能并行”,却说不清每层对象各自负责什么 + +这是最常见也最危险的模糊表达。 + +真正清楚的教学,不是说“这里好多 agent 很厉害”,而是能把下面这句话讲稳: + +> 队友负责长期协作,请求负责协调流程,任务负责表达目标,运行时槽位负责承载执行,worktree 负责隔离执行目录。 + +## 读完这篇你应该能自己说清楚 + +至少能完整说出下面这两句话: + +1. `s17` 的自治认领,认领的是 `s12` 的工作图任务,不是 `s13` 的运行时槽位。 +2. 
`s18` 的 worktree,绑定的是任务的执行车道,而不是把任务本身变成目录。 + +如果这两句你已经能稳定说清,`s15-s18` 这一大段主线就基本不会再拧巴了。 diff --git a/tests/test_background_notifications.py b/tests/test_background_notifications.py new file mode 100644 index 000000000..b321c888c --- /dev/null +++ b/tests/test_background_notifications.py @@ -0,0 +1,156 @@ +import os +import sys +import types +import unittest +from pathlib import Path +from types import SimpleNamespace + + +REPO_ROOT = Path(__file__).resolve().parents[1] +if str(REPO_ROOT) not in sys.path: + sys.path.insert(0, str(REPO_ROOT)) + +os.environ.setdefault("MODEL_ID", "test-model") + +fake_anthropic = types.ModuleType("anthropic") + + +class FakeAnthropic: + def __init__(self, *args, **kwargs): + self.messages = SimpleNamespace(create=None) + + +setattr(fake_anthropic, "Anthropic", FakeAnthropic) +sys.modules.setdefault("anthropic", fake_anthropic) + +fake_dotenv = types.ModuleType("dotenv") +setattr(fake_dotenv, "load_dotenv", lambda *args, **kwargs: None) +sys.modules.setdefault("dotenv", fake_dotenv) + +import agents.s13_background_tasks as s13_background_tasks +import agents.s_full as s_full + + +class FakeMessagesAPI: + def __init__(self, responses): + self._responses = iter(responses) + self.call_count = 0 + + def create(self, **kwargs): + self.call_count += 1 + return next(self._responses) + + +class FakeS13BackgroundManager: + def __init__(self): + self._running = True + self.wait_called = False + + def drain_notifications(self): + return [] + + def has_running_tasks(self): + return self._running + + def wait_for_notifications(self): + self.wait_called = True + self._running = False + return [ + { + "task_id": "bg-1", + "status": "completed", + "preview": "done", + "output_file": ".runtime-tasks/bg-1.log", + } + ] + + +class FakeSFullBackgroundManager: + def __init__(self): + self._running = True + self.wait_called = False + + def drain(self): + return [] + + def has_running_tasks(self): + return self._running + + def wait_for_notifications(self): + 
self.wait_called = True + self._running = False + return [{"task_id": "bg-1", "status": "completed", "result": "done"}] + + +class BackgroundNotificationTests(unittest.TestCase): + def test_s13_agent_loop_waits_for_background_results_after_end_turn(self): + messages = [{"role": "user", "content": "Run tests in the background"}] + fake_bg = FakeS13BackgroundManager() + fake_api = FakeMessagesAPI( + [ + SimpleNamespace( + stop_reason="end_turn", content="Started background work." + ), + SimpleNamespace( + stop_reason="end_turn", content="Background work completed." + ), + ] + ) + original_bg = s13_background_tasks.BG + original_client = s13_background_tasks.client + try: + s13_background_tasks.BG = fake_bg + s13_background_tasks.client = SimpleNamespace(messages=fake_api) + s13_background_tasks.agent_loop(messages) + finally: + s13_background_tasks.BG = original_bg + s13_background_tasks.client = original_client + + self.assertTrue(fake_bg.wait_called) + self.assertEqual(fake_api.call_count, 2) + self.assertTrue( + any( + message["role"] == "user" + and isinstance(message["content"], str) + and "" in message["content"] + for message in messages + ) + ) + + def test_s_full_agent_loop_waits_for_background_results_after_end_turn(self): + messages = [{"role": "user", "content": "Run tests in the background"}] + fake_bg = FakeSFullBackgroundManager() + fake_api = FakeMessagesAPI( + [ + SimpleNamespace( + stop_reason="end_turn", content="Started background work." + ), + SimpleNamespace( + stop_reason="end_turn", content="Background work completed." 
+ ), + ] + ) + original_bg = s_full.BG + original_client = s_full.client + try: + s_full.BG = fake_bg + s_full.client = SimpleNamespace(messages=fake_api) + s_full.agent_loop(messages) + finally: + s_full.BG = original_bg + s_full.client = original_client + + self.assertTrue(fake_bg.wait_called) + self.assertEqual(fake_api.call_count, 2) + self.assertTrue( + any( + message["role"] == "user" + and isinstance(message["content"], str) + and "" in message["content"] + for message in messages + ) + ) + + +if __name__ == "__main__": + unittest.main() diff --git a/web/next.config.ts b/web/next.config.ts index 4dd888c18..b4b7caf57 100644 --- a/web/next.config.ts +++ b/web/next.config.ts @@ -1,9 +1,13 @@ +import path from "node:path"; import type { NextConfig } from "next"; const nextConfig: NextConfig = { output: "export", images: { unoptimized: true }, trailingSlash: true, + turbopack: { + root: path.resolve(__dirname), + }, }; export default nextConfig; diff --git a/web/package.json b/web/package.json index 984b6028a..d1fe6fe36 100644 --- a/web/package.json +++ b/web/package.json @@ -8,7 +8,9 @@ "dev": "next dev", "prebuild": "npm run extract", "build": "next build", - "start": "next start" + "start": "next start", + "test:browser:smoke": "bash scripts/browser-smoke.sh", + "test:browser:flows": "bash scripts/browser-flows.sh" }, "dependencies": { "diff": "^8.0.3", diff --git a/web/scripts/browser-flows.sh b/web/scripts/browser-flows.sh new file mode 100644 index 000000000..c8b0397fc --- /dev/null +++ b/web/scripts/browser-flows.sh @@ -0,0 +1,377 @@ +#!/usr/bin/env bash +set -euo pipefail + +BASE_URL="${BASE_URL:-${1:-http://127.0.0.1:3002}}" +LOCALE="${LOCALE:-zh}" +SESSION_NAME="${SESSION_NAME:-learn-claude-code-flows-${LOCALE}}" + +ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." 
&& pwd)" +source "$ROOT_DIR/scripts/browser-test-lib.sh" + +agent-browser() { + command agent-browser --session-name "$SESSION_NAME" "$@" +} + +trap 'stop_static_server_if_started; agent-browser close >/dev/null 2>&1 || true' EXIT + +locale_text() { + local key="$1" + case "$LOCALE:$key" in + zh:deep_dive) echo '深入探索' ;; + en:deep_dive) echo 'Deep Dive' ;; + ja:deep_dive) echo '深掘り' ;; + + zh:bridge_control_plane) echo '工具控制平面' ;; + en:bridge_control_plane) echo 'Tool Control Plane' ;; + ja:bridge_control_plane) echo 'ツール制御プレーン' ;; + + *) echo "Unknown locale text key: ${LOCALE}:${key}" >&2; return 1 ;; + esac +} + +wait_page() { + agent-browser wait --load networkidle >/dev/null 2>&1 || agent-browser wait 600 >/dev/null 2>&1 || true + agent-browser wait 1200 >/dev/null 2>&1 || true + agent-browser get title >/dev/null 2>&1 || true +} + +open_page() { + local path="$1" + local attempt + + agent-browser close >/dev/null 2>&1 || true + for attempt in 1 2 3; do + agent-browser --json errors --clear >/dev/null 2>&1 || true + if ! 
open_url_with_retry "${BASE_URL}${path}"; then + continue + fi + wait_page + if assert_url_contains "$path" >/dev/null 2>&1; then + return 0 + fi + agent-browser close >/dev/null 2>&1 || true + sleep 0.4 + done + + echo "Navigation failed for ${BASE_URL}${path}" >&2 + return 1 +} + +assert_url_contains() { + local expected="$1" + local url_json + url_json="$(agent-browser --json get url)" + URL_JSON="$url_json" EXPECTED="$expected" python3 - <<'PY' +import json +import os +import sys + +payload = json.loads(os.environ["URL_JSON"]) +url = payload.get("data", {}).get("url", "") +expected = os.environ["EXPECTED"] +if expected not in url: + print(f"Expected URL containing {expected!r}, got {url!r}", file=sys.stderr) + sys.exit(1) +PY +} + +assert_body_contains() { + local pattern="$1" + agent-browser get text body | rg -q "$pattern" +} + +assert_no_overflow() { + local info_json + info_json="$(agent-browser --json eval '({ + overflow: document.documentElement.scrollWidth > window.innerWidth, + width: window.innerWidth, + scrollWidth: document.documentElement.scrollWidth + })')" + INFO_JSON="$info_json" python3 - <<'PY' +import json +import os +import sys + +payload = json.loads(os.environ["INFO_JSON"]) +result = payload.get("data", {}).get("result", {}) +if result.get("overflow"): + print( + f"Overflow detected: width={result.get('width')} scrollWidth={result.get('scrollWidth')}", + file=sys.stderr, + ) + sys.exit(1) +PY +} + +assert_no_page_errors() { + local errors_json + errors_json="$(agent-browser --json errors)" + ERRORS_JSON="$errors_json" python3 - <<'PY' +import json +import os +import sys + +payload = json.loads(os.environ["ERRORS_JSON"]) +errors = payload.get("data", {}).get("errors", []) +if errors: + print(f"Unexpected page errors: {errors}", file=sys.stderr) + sys.exit(1) +PY +} + +click_locale_button() { + local label="$1" + agent-browser --json eval "(() => { + const buttons = Array.from(document.querySelectorAll('button')); + const match = 
buttons.find((button) => button.textContent.trim() === '${label}');
+    if (!match) {
+      throw new Error('Locale button not found: ${label}');
+    }
+    match.click();
+    return true;
+  })() " >/dev/null
+}
+
+click_link_exact() {
+  local label="$1"
+  agent-browser --json eval "(() => {
+    const links = Array.from(document.querySelectorAll('a'));
+    const match = links.find((link) => link.textContent.trim() === '${label}');
+    if (!match) {
+      throw new Error('Link not found: ${label}');
+    }
+    match.click();
+    return true;
+  })() " >/dev/null
+}
+
+click_link_containing() {
+  local label="$1"
+  agent-browser --json eval "(() => {
+    const normalize = (value) => value.replace(/\s+/g, ' ').trim();
+    const links = Array.from(document.querySelectorAll('a'));
+    const match = links.find((link) => normalize(link.textContent).includes('${label}'));
+    if (!match) {
+      throw new Error('Link not found: ${label}');
+    }
+    match.click();
+    return true;
+  })() " >/dev/null
+}
+
+click_link_by_href() {
+  local href_fragment="$1"
+  local label_fragment="${2:-}"
+  agent-browser --json eval "(() => {
+    const normalize = (value) => value.replace(/\s+/g, ' ').trim();
+    const links = Array.from(document.querySelectorAll('a'));
+    const match = links.find((link) => {
+      const hrefMatches = link.href.includes('${href_fragment}');
+      const labelMatches = '${label_fragment}' ? normalize(link.textContent).includes('${label_fragment}') : true;
+      return hrefMatches && labelMatches;
+    });
+    if (!match) {
+      throw new Error('Link not found for href: ${href_fragment}');
+    }
+    match.click();
+    return true;
+  })() " >/dev/null
+}
+
+run_flow() {
+  local name="$1"
+  shift
+  # printf, not echo: plain echo would print the \t literally
+  printf 'FLOW\t%s\n' "${name}"
+  "$@"
+  printf 'PASS\t%s\n' "${name}"
+}
+
+flow_home_to_s01() {
+  open_page "/${LOCALE}/"
+  click_link_by_href "/${LOCALE}/s01/"
+  wait_page
+  assert_url_contains "/${LOCALE}/s01/"
+  assert_body_contains 's01'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_home_to_timeline() {
+  open_page "/${LOCALE}/timeline/"
+  assert_url_contains "/${LOCALE}/timeline/"
+  assert_body_contains 's01'
+  assert_body_contains 's19'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_home_to_layers() {
+  open_page "/${LOCALE}/layers/"
+  assert_url_contains "/${LOCALE}/layers/"
+  assert_body_contains 'P1'
+  assert_body_contains 's19'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_home_to_compare() {
+  open_page "/${LOCALE}/"
+  click_link_by_href "/${LOCALE}/compare/"
+  wait_page
+  assert_url_contains "/${LOCALE}/compare/"
+  assert_body_contains 's14 -> s15'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_compare_default_state() {
+  open_page "/${LOCALE}/compare"
+  assert_body_contains 's01'
+  assert_body_contains 's02'
+  assert_body_contains 's14 -> s15'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_timeline_to_stage_exit() {
+  open_page "/${LOCALE}/timeline"
+  click_link_by_href "/${LOCALE}/s06/"
+  wait_page
+  assert_url_contains "/${LOCALE}/s06/"
+  assert_body_contains 's06'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_layers_to_stage_entry() {
+  open_page "/${LOCALE}/layers"
+  click_link_by_href "/${LOCALE}/s15/"
+  wait_page
+  assert_url_contains "/${LOCALE}/s15/"
+  assert_body_contains 's15'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_chapter_to_bridge_doc() {
+  open_page "/${LOCALE}/s02"
+  agent-browser --json find text "$(locale_text deep_dive)" click >/dev/null
+  wait_page
+  click_link_by_href "/${LOCALE}/docs/s02a-tool-control-plane/" "$(locale_text bridge_control_plane)"
+  wait_page
+  assert_url_contains "/${LOCALE}/docs/s02a-tool-control-plane/"
+  assert_body_contains "$(locale_text bridge_control_plane)"
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_bridge_doc_home_return() {
+  open_page "/${LOCALE}/docs/s00f-code-reading-order"
+  click_link_by_href "/${LOCALE}/"
+  wait_page
+  assert_url_contains "/${LOCALE}/"
+  assert_body_contains 's01'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_bridge_doc_back_to_chapter() {
+  open_page "/${LOCALE}/docs/s02a-tool-control-plane"
+  click_link_by_href "/${LOCALE}/s02/" 's02'
+  wait_page
+  assert_url_contains "/${LOCALE}/s02/"
+  assert_body_contains 's02'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_bridge_doc_locale_switching() {
+  open_page "/${LOCALE}/docs/s00f-code-reading-order"
+  click_locale_button 'EN'
+  wait_page
+  assert_url_contains '/en/docs/s00f-code-reading-order/'
+  click_locale_button '日本語'
+  wait_page
+  assert_url_contains '/ja/docs/s00f-code-reading-order/'
+  click_locale_button '中文'
+  wait_page
+  assert_url_contains '/zh/docs/s00f-code-reading-order/'
+  assert_no_page_errors
+}
+
+flow_compare_preset() {
+  open_page "/${LOCALE}/compare"
+  agent-browser --json find text 's14 -> s15' click >/dev/null
+  agent-browser wait 800 >/dev/null 2>&1 || true
+  assert_body_contains 's14'
+  assert_body_contains 's15'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_chapter_next_navigation() {
+  open_page "/${LOCALE}/s15"
+  click_link_by_href "/${LOCALE}/s16/"
+  wait_page
+  assert_url_contains "/${LOCALE}/s16/"
+  assert_body_contains 's16'
+  assert_no_overflow
+  assert_no_page_errors
+}
+
+flow_locale_switching() {
+  open_page "/${LOCALE}/s01"
+  click_locale_button 'EN'
+  wait_page
+  assert_url_contains '/en/s01/'
+  click_locale_button '日本語'
+  wait_page
+  assert_url_contains '/ja/s01/'
+  click_locale_button '中文'
+  wait_page
+  assert_url_contains '/zh/s01/'
+  assert_no_page_errors
+}
+
+flow_mobile_core_pages() {
+  agent-browser set viewport 390 844 >/dev/null 2>&1
+  for path in \
+    "/${LOCALE}/" \
+    "/${LOCALE}/timeline" \
+    "/${LOCALE}/layers" \
+    "/${LOCALE}/compare" \
+    "/${LOCALE}/s15" \
+    "/${LOCALE}/docs/s00f-code-reading-order"
+  do
+    open_page "$path"
+    assert_no_overflow
+    assert_no_page_errors
+  done
+  agent-browser set viewport 1440 960 >/dev/null 2>&1
+}
+
+main() {
+  start_static_server_if_needed "$BASE_URL"
+  agent-browser close >/dev/null 2>&1 || true
+  agent-browser set viewport 1440 960 >/dev/null 2>&1 || true
+  open_url_with_retry "${BASE_URL}/${LOCALE}/" >/dev/null 2>&1 || open_url_with_retry "${BASE_URL}/" >/dev/null 2>&1 || true
+  agent-browser wait 400 >/dev/null 2>&1 || true
+
+  run_flow home-to-s01 flow_home_to_s01
+  run_flow home-to-timeline flow_home_to_timeline
+  run_flow home-to-layers flow_home_to_layers
+  run_flow home-to-compare flow_home_to_compare
+  run_flow compare-default-state flow_compare_default_state
+  run_flow timeline-to-stage-exit flow_timeline_to_stage_exit
+  run_flow layers-to-stage-entry flow_layers_to_stage_entry
+  run_flow chapter-to-bridge-doc flow_chapter_to_bridge_doc
+  run_flow bridge-doc-home-return flow_bridge_doc_home_return
+  run_flow bridge-doc-back-to-chapter flow_bridge_doc_back_to_chapter
+  run_flow bridge-doc-locale-switching flow_bridge_doc_locale_switching
+  run_flow compare-preset flow_compare_preset
+  run_flow chapter-next-navigation flow_chapter_next_navigation
+  run_flow locale-switching flow_locale_switching
+  run_flow mobile-core-pages flow_mobile_core_pages
+}
+
+main "$@"
diff --git a/web/scripts/browser-smoke.sh b/web/scripts/browser-smoke.sh
new file mode 100644
index 000000000..180698859
--- /dev/null
+++ b/web/scripts/browser-smoke.sh
@@ -0,0 +1,139 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.."
&& pwd)"
+BASE_URL="${BASE_URL:-${1:-http://127.0.0.1:3002}}"
+LOCALES="${LOCALES:-zh}"
+
+TMP_DIR="$(mktemp -d)"
+source "$ROOT_DIR/scripts/browser-test-lib.sh"
+
+trap 'rm -rf "$TMP_DIR"; stop_static_server_if_started; agent-browser close >/dev/null 2>&1 || true' EXIT
+
+discover_routes() {
+  local locale="$1"
+  find "$ROOT_DIR/out/$locale" -type f -name 'index.html' | sort | while read -r file; do
+    local route="${file#"$ROOT_DIR/out"}"
+    route="${route%index.html}"
+    echo "$route"
+  done
+}
+
+check_route() {
+  local route="$1"
+  local safe_name="${route#/}"
+  local snapshot_file="$TMP_DIR/${safe_name//\//_}.png"
+  local info_json
+  local errors_json
+  local check_output=""
+  local attempt
+
+  agent-browser --json errors --clear >/dev/null 2>&1 || true
+  agent-browser close >/dev/null 2>&1 || true
+  if ! open_url_with_retry "${BASE_URL}${route}"; then
+    echo "FAIL ${route} navigation-failed"
+    return 1
+  fi
+  agent-browser wait --load networkidle >/dev/null 2>&1 || agent-browser wait 500 >/dev/null 2>&1 || true
+  agent-browser get title >/dev/null 2>&1 || true
+
+  for attempt in 1 2 3 4 5; do
+    info_json="$(agent-browser --json eval '({
+      title: document.title,
+      h1Count: document.querySelectorAll("h1").length,
+      mainExists: Boolean(document.querySelector("main")),
+      overflow: document.documentElement.scrollWidth > window.innerWidth,
+      notFound: document.body.innerText.includes("This page could not be found."),
+      bodyLength: document.body.innerText.trim().length
+    })')"
+    errors_json="$(agent-browser --json errors)"
+
+    if check_output="$(
+      INFO_JSON="$info_json" ERRORS_JSON="$errors_json" python3 - "$route" <<'PY'
+import json
+import os
+import sys
+
+route = sys.argv[1]
+info = json.loads(os.environ["INFO_JSON"]) or {}
+errors = json.loads(os.environ["ERRORS_JSON"]) or {}
+
+if not isinstance(info, dict):
+    info = {}
+if not isinstance(errors, dict):
+    errors = {}
+
+result = (info.get("data") or {}).get("result") or {}
+page_errors = (errors.get("data") or {}).get("errors") or []
+issues = []
+
+if not result:
+    issues.append("missing-eval-result")
+if not result.get("title"):
+    issues.append("missing-title")
+if result.get("h1Count", 0) < 1:
+    issues.append("missing-h1")
+if not result.get("mainExists"):
+    issues.append("missing-main")
+if result.get("overflow"):
+    issues.append("horizontal-overflow")
+if result.get("notFound"):
+    issues.append("rendered-404")
+if result.get("bodyLength", 0) < 80:
+    issues.append("body-too-short")
+if page_errors:
+    issues.append(f"page-errors:{len(page_errors)}")
+
+if issues:
+    print(f"FAIL\t{route}\t{','.join(issues)}")
+    sys.exit(1)
+
+print(f"OK\t{route}")
+PY
+    )"; then
+      echo "$check_output"
+      return 0
+    fi
+
+    if [[ "$attempt" -lt 5 ]]; then
+      agent-browser wait 900 >/dev/null 2>&1 || true
+    fi
+  done
+
+  echo "${check_output:-FAIL ${route} unknown-check-failure}"
+  agent-browser screenshot "$snapshot_file" >/dev/null 2>&1 || true
+  if [[ -f "$snapshot_file" ]]; then
+    echo "ARTIFACT ${route} ${snapshot_file}" >&2
+  fi
+  return 1
+}
+
+main() {
+  local failed=0
+  local total=0
+  local warm_locale="${LOCALES%%,*}"
+
+  start_static_server_if_needed "$BASE_URL"
+  agent-browser close >/dev/null 2>&1 || true
+  agent-browser set viewport 1440 960 >/dev/null 2>&1 || true
+  open_url_with_retry "${BASE_URL}/${warm_locale}/" >/dev/null 2>&1 || open_url_with_retry "${BASE_URL}/" >/dev/null 2>&1 || true
+  agent-browser wait 400 >/dev/null 2>&1 || true
+
+  for locale in ${LOCALES//,/ }; do
+    while read -r route; do
+      [[ -z "$route" ]] && continue
+      total=$((total + 1))
+      if ! check_route "$route"; then
+        failed=$((failed + 1))
+      fi
+    done < <(discover_routes "$locale")
+  done
+
+  echo
+  echo "Smoke summary: ${total} checked, ${failed} failed"
+  if [[ "$failed" -ne 0 ]]; then
+    exit 1
+  fi
+}
+
+main "$@"
diff --git a/web/scripts/browser-test-lib.sh b/web/scripts/browser-test-lib.sh
new file mode 100644
index 000000000..58a4472d0
--- /dev/null
+++ b/web/scripts/browser-test-lib.sh
@@ -0,0 +1,94 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+OUT_DIR="$ROOT_DIR/out"
+TEST_SERVER_PID=""
+
+base_url_port() {
+  python3 - "$1" <<'PY'
+from urllib.parse import urlparse
+import sys
+
+url = sys.argv[1]
+parsed = urlparse(url)
+if not parsed.scheme or not parsed.hostname or not parsed.port:
+    raise SystemExit(f"Unable to parse host/port from BASE_URL: {url}")
+print(parsed.port)
+PY
+}
+
+base_url_ready() {
+  local base_url="$1"
+  curl -fsS -o /dev/null "${base_url}/" >/dev/null 2>&1
+}
+
+start_static_server_if_needed() {
+  local base_url="$1"
+  local port
+  local log_file
+  local attempt
+
+  if base_url_ready "$base_url"; then
+    return 0
+  fi
+
+  if [[ ! -d "$OUT_DIR" ]]; then
+    echo "Static export not found at $OUT_DIR. Run 'npm run build' first." >&2
+    return 1
+  fi
+
+  port="$(base_url_port "$base_url")"
+  log_file="${TMPDIR:-/tmp}/learn-claude-code-browser-tests-${port}.log"
+
+  python3 -m http.server "$port" -d "$OUT_DIR" >"$log_file" 2>&1 &
+  TEST_SERVER_PID=$!
+
+  for attempt in {1..40}; do
+    if base_url_ready "$base_url"; then
+      return 0
+    fi
+    sleep 0.25
+  done
+
+  echo "Failed to start static server for ${base_url}" >&2
+  if [[ -f "$log_file" ]]; then
+    cat "$log_file" >&2
+  fi
+  return 1
+}
+
+stop_static_server_if_started() {
+  if [[ -n "${TEST_SERVER_PID:-}" ]]; then
+    kill "$TEST_SERVER_PID" >/dev/null 2>&1 || true
+    wait "$TEST_SERVER_PID" >/dev/null 2>&1 || true
+    TEST_SERVER_PID=""
+  fi
+}
+
+open_url_with_retry() {
+  local url="$1"
+  local attempt
+  local current_url=""
+
+  for attempt in 1 2 3; do
+    if agent-browser open "$url" >/dev/null 2>&1; then
+      agent-browser wait --load networkidle >/dev/null 2>&1 || agent-browser wait 800 >/dev/null 2>&1 || true
+      current_url="$(agent-browser get url 2>/dev/null | tr -d '\r' | tail -n 1)"
+      current_url="${current_url%/}"
+      if [[ -n "$current_url" && "$current_url" != "about:blank" ]]; then
+        return 0
+      fi
+      agent-browser wait 600 >/dev/null 2>&1 || true
+      current_url="$(agent-browser get url 2>/dev/null | tr -d '\r' | tail -n 1)"
+      current_url="${current_url%/}"
+      if [[ -n "$current_url" && "$current_url" != "about:blank" ]]; then
+        return 0
+      fi
+    fi
+    agent-browser close >/dev/null 2>&1 || true
+    sleep 0.4
+  done
+
+  return 1
+}
diff --git a/web/scripts/extract-content.ts b/web/scripts/extract-content.ts
index 6e35badd9..3f4fc33de 100644
--- a/web/scripts/extract-content.ts
+++ b/web/scripts/extract-content.ts
@@ -115,6 +115,14 @@ function extractDocVersion(filename: string): string | null {
   return m ? m[1] : null;
 }
 
+function isMainlineChapterVersion(version: string | null): boolean {
+  return version !== null && (LEARNING_PATH as readonly string[]).includes(version);
+}
+
+function slugFromFilename(filename: string): string {
+  return path.basename(filename, ".md");
+}
+
 // Main extraction
 function main() {
   console.log("Extracting content from agents and docs...");
@@ -168,7 +176,7 @@ function main() {
       keyInsight: meta?.keyInsight ?? "",
       classes,
       functions,
-      layer: meta?.layer ?? "tools",
+      layer: meta?.layer ?? "core",
      source,
    });
  }
@@ -234,18 +242,22 @@
 
    for (const filename of docFiles) {
      const version = extractDocVersion(filename);
-      if (!version) {
-        console.warn(`  Skipping doc ${locale}/$(unknown): could not determine version`);
-        continue;
-      }
-
+      const kind = isMainlineChapterVersion(version) ? "chapter" : "bridge";
      const filePath = path.join(localeDir, filename);
      const content = fs.readFileSync(filePath, "utf-8");
      const titleMatch = content.match(/^#\s+(.+)$/m);
      const title = titleMatch ? titleMatch[1] : filename;
-      docs.push({ version, locale: locale as "en" | "zh" | "ja", title, content });
+      docs.push({
+        version: kind === "chapter" ? version : null,
+        slug: slugFromFilename(filename),
+        locale: locale as "en" | "zh" | "ja",
+        title,
+        kind,
+        filename,
+        content,
+      });
    }
  }
diff --git a/web/src/app/[locale]/(learn)/[version]/client.tsx b/web/src/app/[locale]/(learn)/[version]/client.tsx
index 83c7850aa..32adb97f7 100644
--- a/web/src/app/[locale]/(learn)/[version]/client.tsx
+++ b/web/src/app/[locale]/(learn)/[version]/client.tsx
@@ -1,5 +1,6 @@
 "use client";
 
+import Link from "next/link";
 import { ArchDiagram } from "@/components/architecture/arch-diagram";
 import { WhatsNew } from "@/components/diff/whats-new";
 import { DesignDecisions } from "@/components/architecture/design-decisions";
@@ -8,8 +9,23 @@
 import { SourceViewer } from "@/components/code/source-viewer";
 import { AgentLoopSimulator } from "@/components/simulator/agent-loop-simulator";
 import { ExecutionFlow } from "@/components/architecture/execution-flow";
 import { SessionVisualization } from "@/components/visualizations";
+import { Card } from "@/components/ui/card";
 import { Tabs } from "@/components/ui/tabs";
-import { useTranslations } from "@/lib/i18n";
+import { useLocale, useTranslations } from "@/lib/i18n";
+
+interface GuideData {
+  focus: string;
+  confusion: string;
+  goal: string;
+}
+
+interface BridgeDoc { + slug: string; + kind: "map" | "mechanism"; + title: string; + summary: Record<"zh" | "en" | "ja", string>; + fallbackLocale: string | null; +} interface VersionDetailClientProps { version: string; @@ -23,6 +39,9 @@ interface VersionDetailClientProps { } | null; source: string; filename: string; + guideData: GuideData | null; + bridgeDocs: BridgeDoc[]; + locale: string; } export function VersionDetailClient({ @@ -30,53 +49,130 @@ export function VersionDetailClient({ diff, source, filename, + guideData, + bridgeDocs, + locale: serverLocale, }: VersionDetailClientProps) { const t = useTranslations("version"); + const locale = useLocale() || serverLocale; const tabs = [ { id: "learn", label: t("tab_learn") }, - { id: "simulate", label: t("tab_simulate") }, { id: "code", label: t("tab_code") }, { id: "deep-dive", label: t("tab_deep_dive") }, ]; return ( -
- {/* Hero Visualization */} - + + {(activeTab) => ( + <> + {activeTab === "learn" && } - {/* Tabbed content */} - - {(activeTab) => ( - <> - {activeTab === "learn" && } - {activeTab === "simulate" && ( - - )} - {activeTab === "code" && ( - - )} - {activeTab === "deep-dive" && ( -
-
-

+ {activeTab === "code" && ( + + )} + + {activeTab === "deep-dive" && ( +
+ {/* Interactive visualization */} + + + {/* Execution flow + Architecture side by side */} +
+
+

{t("execution_flow")} -

+

-
-

+
+

{t("architecture")} -

+

- {diff && } -
- )} - - )} -
-
+ + {/* Simulator */} + + + {/* Diff / Design decisions */} + {diff && } + + + {/* Guide cards */} + {guideData && ( +
+ +

+ {t("guide_focus_title")} +

+

+ {guideData.focus} +

+
+ +

+ {t("guide_confusion_title")} +

+

+ {guideData.confusion} +

+
+ +

+ {t("guide_goal_title")} +

+

+ {guideData.goal} +

+
+
+ )} + + {/* Bridge doc links */} + {bridgeDocs.length > 0 && ( +
+

+ {t("bridge_docs_title")} +

+

+ {t("bridge_docs_intro")} +

+
+ {bridgeDocs.map((doc) => ( + +
+ + {doc.kind === "map" + ? t("bridge_docs_kind_map") + : t("bridge_docs_kind_mechanism")} + + {doc.fallbackLocale && ( + + {doc.fallbackLocale} + + )} +
+

+ {doc.title} +

+

+ {doc.summary[locale as "zh" | "en" | "ja"] ?? doc.summary.en} +

+ + ))} +
+
+ )} + + )} + + )} + ); } diff --git a/web/src/app/[locale]/(learn)/[version]/diff/diff-content.tsx b/web/src/app/[locale]/(learn)/[version]/diff/diff-content.tsx index d6e21011e..8c3fee0d1 100644 --- a/web/src/app/[locale]/(learn)/[version]/diff/diff-content.tsx +++ b/web/src/app/[locale]/(learn)/[version]/diff/diff-content.tsx @@ -2,8 +2,9 @@ import { useMemo } from "react"; import Link from "next/link"; -import { useLocale } from "@/lib/i18n"; +import { useLocale, useTranslations } from "@/lib/i18n"; import { VERSION_META } from "@/lib/constants"; +import { getVersionContent } from "@/lib/version-content"; import { Card, CardHeader, CardTitle } from "@/components/ui/card"; import { LayerBadge } from "@/components/ui/badge"; import { CodeDiff } from "@/components/diff/code-diff"; @@ -19,7 +20,9 @@ interface DiffPageContentProps { export function DiffPageContent({ version }: DiffPageContentProps) { const locale = useLocale(); + const tSession = useTranslations("sessions"); const meta = VERSION_META[version]; + const content = getVersionContent(version, locale); const { currentVersion, prevVersion, diff } = useMemo(() => { const current = data.versions.find((v) => v.id === version); @@ -48,9 +51,9 @@ export function DiffPageContent({ version }: DiffPageContentProps) { className="mb-6 inline-flex items-center gap-1 text-sm text-zinc-500 hover:text-zinc-700 dark:hover:text-zinc-300" > - Back to {meta.title} + Back to {tSession(version) || meta.title} -

{meta.title}

+

{tSession(version) || meta.title}

This is the first version -- there is no previous version to compare against.

@@ -59,6 +62,9 @@ export function DiffPageContent({ version }: DiffPageContentProps) { } const prevMeta = VERSION_META[prevVersion.id]; + const prevContent = getVersionContent(prevVersion.id, locale); + const currentTitle = tSession(version) || meta.title; + const prevTitle = tSession(prevVersion.id) || prevMeta?.title || prevVersion.id; return (
@@ -67,13 +73,13 @@ export function DiffPageContent({ version }: DiffPageContentProps) { className="mb-6 inline-flex items-center gap-1 text-sm text-zinc-500 hover:text-zinc-700 dark:hover:text-zinc-300" > - Back to {meta.title} + Back to {currentTitle} {/* Header */}

- {prevMeta?.title || prevVersion.id} → {meta.title} + {prevTitle} → {currentTitle}

{prevVersion.id} ({prevVersion.loc} LOC) → {version} ({currentVersion.loc} LOC) @@ -165,8 +171,8 @@ export function DiffPageContent({ version }: DiffPageContentProps) {

- {prevMeta?.title || prevVersion.id} -

{prevMeta?.subtitle}

+ {prevTitle} +

{prevContent.subtitle}

{prevVersion.loc} LOC

@@ -176,8 +182,8 @@ export function DiffPageContent({ version }: DiffPageContentProps) { - {meta.title} -

{meta.subtitle}

+ {currentTitle} +

{content.subtitle}

{currentVersion.loc} LOC

diff --git a/web/src/app/[locale]/(learn)/[version]/page.tsx b/web/src/app/[locale]/(learn)/[version]/page.tsx index 90c35a22b..bbf4a4831 100644 --- a/web/src/app/[locale]/(learn)/[version]/page.tsx +++ b/web/src/app/[locale]/(learn)/[version]/page.tsx @@ -2,8 +2,12 @@ import Link from "next/link"; import { LEARNING_PATH, VERSION_META, LAYERS } from "@/lib/constants"; import { LayerBadge } from "@/components/ui/badge"; import versionsData from "@/data/generated/versions.json"; +import docsData from "@/data/generated/docs.json"; import { VersionDetailClient } from "./client"; import { getTranslations } from "@/lib/i18n-server"; +import { getChapterGuide } from "@/lib/chapter-guides"; +import { getBridgeDocDescriptors } from "@/lib/bridge-docs"; +import { getVersionContent } from "@/lib/version-content"; export function generateStaticParams() { return LEARNING_PATH.map((version) => ({ version })); @@ -18,6 +22,7 @@ export default async function VersionPage({ const versionData = versionsData.versions.find((v) => v.id === version); const meta = VERSION_META[version]; + const content = getVersionContent(version, locale); const diff = versionsData.diffs.find((d) => d.to === version) ?? null; if (!versionData || !meta) { @@ -33,6 +38,66 @@ export default async function VersionPage({ const tSession = getTranslations(locale, "sessions"); const tLayer = getTranslations(locale, "layer_labels"); const layer = LAYERS.find((l) => l.id === meta.layer); + const guide = getChapterGuide(version, locale); + const bridgeDocs = getBridgeDocDescriptors( + version as (typeof LEARNING_PATH)[number] + ) + .map((descriptor) => { + const doc = + (docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; + }>).find( + (item) => + item.slug === descriptor.slug && + item.kind === "bridge" && + item.locale === locale + ) ?? 
+ (docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; + }>).find( + (item) => + item.slug === descriptor.slug && + item.kind === "bridge" && + item.locale === "zh" + ) ?? + (docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; + }>).find( + (item) => + item.slug === descriptor.slug && + item.kind === "bridge" && + item.locale === "en" + ); + + if (!doc?.slug || !doc.title) return null; + + return { + ...descriptor, + title: + descriptor.title[locale as "zh" | "en" | "ja"] ?? descriptor.title.en, + fallbackLocale: doc.locale !== locale ? doc.locale : null, + }; + }) + .filter( + ( + item + ): item is { + slug: string; + kind: "map" | "mechanism"; + title: string; + summary: Record<"zh" | "en" | "ja", string>; + fallbackLocale: string | null; + } => Boolean(item) + ); const pathIndex = LEARNING_PATH.indexOf(version as typeof LEARNING_PATH[number]); const prevVersion = pathIndex > 0 ? LEARNING_PATH[pathIndex - 1] : null; @@ -42,9 +107,9 @@ export default async function VersionPage({ : null; return ( -
- {/* Header */} -
+
+ {/* Compact header: 3 lines */} +
{version} @@ -54,31 +119,29 @@ export default async function VersionPage({ {tLayer(layer.id)} )}
-

- {meta.subtitle} -

-
+

+ {content.subtitle} + | {versionData.loc} LOC + | {versionData.tools.length} {t("tools")} - {meta.coreAddition && ( - - {meta.coreAddition} - - )} -

- {meta.keyInsight && ( +

+ {content.keyInsight && (
- {meta.keyInsight} + {content.keyInsight}
)}
- {/* Client-rendered interactive sections */} + {/* Main content: client-rendered tabs (Learn / Code / Deep Dive) */} {/* Prev / Next navigation */} diff --git a/web/src/app/[locale]/(learn)/compare/page.tsx b/web/src/app/[locale]/(learn)/compare/page.tsx index a38a4204e..b048fe551 100644 --- a/web/src/app/[locale]/(learn)/compare/page.tsx +++ b/web/src/app/[locale]/(learn)/compare/page.tsx @@ -1,166 +1,495 @@ "use client"; -import { useState, useMemo } from "react"; +import Link from "next/link"; +import { useMemo, useState } from "react"; import { useLocale, useTranslations } from "@/lib/i18n"; -import { LEARNING_PATH, VERSION_META } from "@/lib/constants"; +import { LEARNING_PATH } from "@/lib/constants"; import { Card, CardHeader, CardTitle } from "@/components/ui/card"; import { LayerBadge } from "@/components/ui/badge"; import { CodeDiff } from "@/components/diff/code-diff"; import { ArchDiagram } from "@/components/architecture/arch-diagram"; -import { ArrowRight, FileCode, Wrench, Box, FunctionSquare } from "lucide-react"; -import type { VersionIndex } from "@/types/agent-data"; +import { ExecutionFlow } from "@/components/architecture/execution-flow"; +import { ArrowRight, FileCode, Layers3, Lightbulb, Sparkles, Wrench } from "lucide-react"; +import type { DocContent, VersionIndex } from "@/types/agent-data"; import versionData from "@/data/generated/versions.json"; +import docsData from "@/data/generated/docs.json"; +import { getBridgeDocDescriptors } from "@/lib/bridge-docs"; +import { getChapterGuide } from "@/lib/chapter-guides"; const data = versionData as VersionIndex; +const docs = docsData as DocContent[]; +type RecommendedBridgeDoc = { + slug: string; + title: string; + summary: string; + fallbackLocale: DocContent["locale"] | null; +}; + +function extractLead(content?: string) { + if (!content) return ""; + const match = content.match(/> \*([^*]+)\*/); + if (!match) return ""; + return match[1].replace(/^"+|"+$/g, "").trim(); +} + +function 
pickText(
+  locale: string,
+  value: { zh: string; en: string; ja: string }
+) {
+  if (locale === "zh") return value.zh;
+  if (locale === "ja") return value.ja;
+  return value.en;
+}
+
+const COMPARE_EXTRA_TEXT = {
+  goal: {
+    zh: "学完 B 后",
+    en: "After B",
+    ja: "B を読み終えた後の到達点",
+  },
+  emptyGoal: {
+    zh: "该章节的学习目标暂未整理。",
+    en: "The learning goal for this chapter has not been filled in yet.",
+    ja: "この章の学習目標はまだ整理されていません。",
+  },
+  diagnosisLabel: {
+    zh: "跃迁诊断",
+    en: "Jump Diagnosis",
+    ja: "ジャンプ診断",
+  },
+  nextBestLabel: {
+    zh: "更稳的读法",
+    en: "Safer Reading Move",
+    ja: "より安定した読み方",
+  },
+  adjacentTitle: {
+    zh: "这是最稳的一步升级",
+    en: "This is the safest upgrade step",
+    ja: "これは最も安定した1段階の比較です",
+  },
+  adjacentBody: {
+    zh: "A 和 B 相邻,最适合看“系统刚刚多了一条什么分支、一个什么状态容器、为什么现在引入它”。",
+    en: "A and B are adjacent, so this is the cleanest way to see the exact new branch, state container, and reason for introducing it now.",
+    ja: "A と B は隣接しているため、何が新しい分岐で、何が新しい状態容器で、なぜ今入るのかを最も素直に見られます。",
+  },
+  adjacentNext: {
+    zh: "先看执行流,再看架构图,最后再决定要不要往下看源码 diff。",
+    en: "Read the execution flow first, then the architecture view, and only then decide whether you need the source diff.",
+    ja: "まず実行フロー、その後アーキテクチャ図を見て、最後に必要ならソース diff へ進みます。",
+  },
+  sameLayerTitle: {
+    zh: "这是同阶段内的跳读",
+    en: "This is a same-stage skip",
+    ja: "これは同一段階内の飛び読みです",
+  },
+  sameLayerBody: {
+    zh: "你仍然在同一个能力阶段里,但中间被跳过的章节往往刚好承担了“把概念拆开”的工作,所以阅读风险已经明显高于相邻章节对比。",
+    en: "You are still inside one stage, but the skipped chapters often carry the conceptual separation work, so the reading risk is already much higher than an adjacent comparison.",
+    ja: "同じ段階内ではありますが、飛ばした章が概念分離を担っていることが多く、隣接比較より理解リスクはかなり高くなります。",
+  },
+  sameLayerNext: {
+    zh: "如果开始读混,先回看 B 的前一章,再回桥接资料,而不是直接硬啃源码差异。",
+    en: "If things start to blur, revisit the chapter right before B and then the bridge docs before forcing the source diff.",
+    ja: "混ざり始めたら、まず B の直前の章と bridge doc に戻ってからソース diff を見ます。",
+  },
+  crossLayerTitle: {
+    zh: "这是一次跨阶段跃迁",
+    en: "This is a cross-stage jump",
+    ja: "これは段階をまたぐジャンプです",
+  },
+  crossLayerBody: {
+    zh: "跨阶段对比最大的风险,不是“功能更多了”,而是系统边界已经重画了。你需要先确认自己稳住了前一个阶段的目标,再去看 B。",
+    en: "The main risk in a cross-stage jump is not more features. It is that the system boundary has been redrawn. Make sure you actually hold the previous stage before reading B.",
+    ja: "段階またぎの最大リスクは機能量ではなく、システム境界そのものが描き直されていることです。B を読む前に前段階を本当に保持している必要があります。",
+  },
+  crossLayerNext: {
+    zh: "先补桥接文档,再用时间线确认阶段切换理由;如果还虚,就先比较 `B` 的前一章和 `B` 本章。",
+    en: "Start with the bridge docs, then use the timeline to confirm why the stage boundary changes here. If it still feels shaky, compare the chapter right before B with B first.",
+    ja: "先に bridge doc を見て、その後 timeline でなぜここで段階が切り替わるのかを確認します。まだ不安なら、まず B の直前章と B を比較します。",
+  },
+  bridgeNudge: {
+    zh: "这次跳跃前最值得先补的桥接资料",
+    en: "Bridge docs most worth reading before this jump",
+    ja: "このジャンプ前に最も先に補いたい bridge doc",
+  },
+  quickLabel: {
+    zh: "一键对比入口",
+    en: "One-Click Compare",
+    ja: "ワンクリック比較",
+  },
+  quickTitle: {
+    zh: "先用这些最稳的比较入口,不必每次手选两章",
+    en: "Start with these safe comparison moves instead of selecting two chapters every time",
+    ja: "毎回2章を手で選ぶ前に、まず安定した比較入口を使う",
+  },
+  quickBody: {
+    zh: "这些按钮优先覆盖最值得反复看的相邻升级和阶段切换,适合第一次理解章节边界,也适合读到一半开始混时快速重启。",
+    en: "These presets cover the most useful adjacent upgrades and stage boundaries. They work both for a first pass and for resetting when chapter boundaries start to blur.",
+    ja: "ここには最も見返す価値の高い隣接アップグレードと段階切り替えを置いてあります。初回読みにも、途中で境界が混ざった時の立て直しにも向いています。",
+  },
+  quickPrevious: {
+    zh: "直接改成 B 的前一章 -> B",
+    en: "Use B's Previous Chapter -> B",
+    ja: "B の直前章と B を比べる",
+  },
+  quickPreviousBody: {
+    zh: "如果现在这次跳跃太大,先退回 B 的前一章和 B 做相邻对比,会更容易看清这章真正新增了什么。",
+    en: "If the current jump is too large, compare the chapter right before B with B first. That is usually the clearest way to see what B really adds.",
+    ja: "今のジャンプが大きすぎるなら、まず B の直前章と B を比較すると、この章が本当に何を増やしたのかを最も見やすくなります。",
+  },
+} as const;
+
+const QUICK_COMPARE_PRESETS = [
+  { a: "s01", b: "s02" },
+  { a: "s06", b: "s07" },
+  { a: "s11", b: "s12" },
+  { a: "s14", b: "s15" },
+  { a: "s18", b: "s19" },
+] as const;
 
 export default function ComparePage() {
   const t = useTranslations("compare");
+  const tSession = useTranslations("sessions");
+  const tLayer = useTranslations("layer_labels");
   const locale = useLocale();
-  const [versionA, setVersionA] = useState("");
-  const [versionB, setVersionB] = useState("");
+  const [versionA, setVersionA] = useState(QUICK_COMPARE_PRESETS[0].a);
+  const [versionB, setVersionB] = useState(QUICK_COMPARE_PRESETS[0].b);
+
+  const previousOfB = useMemo(() => {
+    if (!versionB) return null;
+    const index = LEARNING_PATH.indexOf(versionB as (typeof LEARNING_PATH)[number]);
+    if (index <= 0) return null;
+    return LEARNING_PATH[index - 1];
+  }, [versionB]);
 
   const infoA = useMemo(() => data.versions.find((v) => v.id === versionA), [versionA]);
   const infoB = useMemo(() => data.versions.find((v) => v.id === versionB), [versionB]);
-  const metaA = versionA ? VERSION_META[versionA] : null;
-  const metaB = versionB ?
VERSION_META[versionB] : null;
+
+  const docA = useMemo(
+    () => docs.find((doc) => doc.version === versionA && doc.locale === locale),
+    [locale, versionA]
+  );
+  const docB = useMemo(
+    () => docs.find((doc) => doc.version === versionB && doc.locale === locale),
+    [locale, versionB]
+  );
+
+  const leadA = useMemo(() => extractLead(docA?.content), [docA]);
+  const leadB = useMemo(() => extractLead(docB?.content), [docB]);
 
   const comparison = useMemo(() => {
     if (!infoA || !infoB) return null;
+
     const toolsA = new Set(infoA.tools);
     const toolsB = new Set(infoB.tools);
-    const onlyA = infoA.tools.filter((t) => !toolsB.has(t));
-    const onlyB = infoB.tools.filter((t) => !toolsA.has(t));
-    const shared = infoA.tools.filter((t) => toolsB.has(t));
-
-    const classesA = new Set(infoA.classes.map((c) => c.name));
-    const classesB = new Set(infoB.classes.map((c) => c.name));
-    const newClasses = infoB.classes.map((c) => c.name).filter((c) => !classesA.has(c));
-
-    const funcsA = new Set(infoA.functions.map((f) => f.name));
-    const funcsB = new Set(infoB.functions.map((f) => f.name));
-    const newFunctions = infoB.functions.map((f) => f.name).filter((f) => !funcsA.has(f));
 
     return {
+      toolsOnlyA: infoA.tools.filter((tool) => !toolsB.has(tool)),
+      toolsOnlyB: infoB.tools.filter((tool) => !toolsA.has(tool)),
+      toolsShared: infoA.tools.filter((tool) => toolsB.has(tool)),
+      newSurface: infoB.classes.filter((cls) => !infoA.classes.some((other) => other.name === cls.name)).length +
+        infoB.functions.filter((fn) => !infoA.functions.some((other) => other.name === fn.name)).length,
       locDelta: infoB.loc - infoA.loc,
-      toolsOnlyA: onlyA,
-      toolsOnlyB: onlyB,
-      toolsShared: shared,
-      newClasses,
-      newFunctions,
     };
   }, [infoA, infoB]);
 
+  const progression = useMemo(() => {
+    if (!infoA || !infoB) return "";
+
+    const indexA = LEARNING_PATH.indexOf(versionA as (typeof LEARNING_PATH)[number]);
+    const indexB = LEARNING_PATH.indexOf(versionB as (typeof LEARNING_PATH)[number]);
+
+    if (indexA === indexB) return t("progression_same_chapter");
+    if (indexB < indexA) return t("progression_reverse");
+    if (indexB === indexA + 1) return t("progression_direct");
+    if (infoA.layer === infoB.layer) return t("progression_same_layer");
+    return t("progression_cross_layer");
+  }, [infoA, infoB, t, versionA, versionB]);
+
+  const chapterDistance = useMemo(() => {
+    const indexA = LEARNING_PATH.indexOf(versionA as (typeof LEARNING_PATH)[number]);
+    const indexB = LEARNING_PATH.indexOf(versionB as (typeof LEARNING_PATH)[number]);
+    if (indexA < 0 || indexB < 0) return 0;
+    return Math.abs(indexB - indexA);
+  }, [versionA, versionB]);
+
+  const recommendedBridgeDocs = useMemo(() => {
+    if (!versionB) return [];
+
+    return getBridgeDocDescriptors(versionB as (typeof LEARNING_PATH)[number])
+      .map((descriptor) => {
+        const doc =
+          docs.find(
+            (item) =>
+              item.slug === descriptor.slug &&
+              item.kind === "bridge" &&
+              item.locale === locale
+          ) ??
+          docs.find(
+            (item) =>
+              item.slug === descriptor.slug &&
+              item.kind === "bridge" &&
+              item.locale === "zh"
+          ) ??
+          docs.find(
+            (item) =>
+              item.slug === descriptor.slug &&
+              item.kind === "bridge" &&
+              item.locale === "en"
+          );
+
+        if (!doc?.slug) return null;
+
+        return {
+          slug: doc.slug,
+          title: pickText(locale, descriptor.title),
+          summary: pickText(locale, descriptor.summary),
+          fallbackLocale: doc.locale !== locale ? doc.locale : null,
+        } satisfies RecommendedBridgeDoc;
+      })
+      .filter(
+        (item): item is RecommendedBridgeDoc => Boolean(item)
+      );
+  }, [locale, versionB]);
+
+  const guideB = useMemo(() => {
+    if (!versionB) return null;
+    return (
+      getChapterGuide(versionB as (typeof LEARNING_PATH)[number], locale) ??
+      getChapterGuide(versionB as (typeof LEARNING_PATH)[number], "en")
+    );
+  }, [locale, versionB]);
+
+  const jumpDiagnosis = useMemo(() => {
+    if (!infoA || !infoB) return null;
+
+    const crossLayer = infoA.layer !== infoB.layer;
+    if (chapterDistance <= 1) {
+      return {
+        title: pickText(locale, COMPARE_EXTRA_TEXT.adjacentTitle),
+        body: pickText(locale, COMPARE_EXTRA_TEXT.adjacentBody),
+        next: pickText(locale, COMPARE_EXTRA_TEXT.adjacentNext),
+      };
+    }
+
+    if (crossLayer) {
+      return {
+        title: pickText(locale, COMPARE_EXTRA_TEXT.crossLayerTitle),
+        body: pickText(locale, COMPARE_EXTRA_TEXT.crossLayerBody),
+        next: pickText(locale, COMPARE_EXTRA_TEXT.crossLayerNext),
+      };
+    }
+
+    return {
+      title: pickText(locale, COMPARE_EXTRA_TEXT.sameLayerTitle),
+      body: pickText(locale, COMPARE_EXTRA_TEXT.sameLayerBody),
+      next: pickText(locale, COMPARE_EXTRA_TEXT.sameLayerNext),
+    };
+  }, [chapterDistance, infoA, infoB, locale]);
+
   return (
-
+

{t("title")}

-

{t("subtitle")}

+

{t("subtitle")}

- {/* Selectors */} -
-
- - + + +

+ {t("learning_jump")} +

+ {t("selector_title")} +

+ {t("selector_note")} +

+
+ +
+
+ + +
+ +
+ +
+ +
+ + +
- - -
- - +
+
+

+ {pickText(locale, COMPARE_EXTRA_TEXT.quickLabel)} +

+

+ {pickText(locale, COMPARE_EXTRA_TEXT.quickTitle)} +

+

+ {pickText(locale, COMPARE_EXTRA_TEXT.quickBody)} +

+ +
+ {QUICK_COMPARE_PRESETS.map((preset) => ( + + ))} +
+
+ + {versionB && previousOfB && previousOfB !== versionA && ( +
+

+ {pickText(locale, COMPARE_EXTRA_TEXT.quickLabel)} +

+

+ {pickText(locale, COMPARE_EXTRA_TEXT.quickPrevious)} +

+

+ {pickText(locale, COMPARE_EXTRA_TEXT.quickPreviousBody)} +

+
+ +
+
+ )}
-
+
- {/* Results */} {infoA && infoB && comparison && (
- {/* Side-by-side version info */} -
- - - {metaA?.title || versionA} -

{metaA?.subtitle}

-
-
-

{infoA.loc} LOC

-

{infoA.tools.length} tools

- {metaA && {metaA.layer}} + + +

+ {t("learning_jump")} +

+ + {tSession(versionA)} + + {tSession(versionB)} + +

+ {progression} +

+
+ +
+
+
+ + {t("carry_from_a")} +
+

+ {leadA || t("empty_lead")} +

- - - - {metaB?.title || versionB} -

{metaB?.subtitle}

-
-
-

{infoB.loc} LOC

-

{infoB.tools.length} tools

- {metaB && {metaB.layer}} + +
+
+ + {t("new_in_b")} +
+

+ {leadB || t("empty_lead")} +

- -
- {/* Side-by-side Architecture Diagrams */} -
-

{t("architecture")}

-
-
-

- {metaA?.title || versionA} -

- +
+
+ + {t("progression")} +
+

+ {progression} +

-
-

- {metaB?.title || versionB} -

- + +
+
+ + {pickText(locale, COMPARE_EXTRA_TEXT.goal)} +
+

+ {guideB?.goal ?? pickText(locale, COMPARE_EXTRA_TEXT.emptyGoal)} +

+ + +
+ {[{ version: versionA, info: infoA, lead: leadA }, { version: versionB, info: infoB, lead: leadB }].map( + ({ version, info, lead }) => ( + + + {tSession(version)} +

+ {lead || t("empty_lead")} +

+
+
+ {info.loc} LOC + {info.tools.length} tools + {tLayer(info.layer)} +
+
+ ) + )}
- {/* Structural diff */} -
+
- - {t("loc_delta")} + + {t("chapter_distance")}
- - <span className={comparison.locDelta >= 0 ? "text-green-600 dark:text-green-400" : "text-red-600 dark:text-red-400"}> - {comparison.locDelta >= 0 ? "+" : ""}{comparison.locDelta} - </span> - {t("lines")} - + {chapterDistance} 
@@ -170,64 +499,195 @@ export default function ComparePage() { {t("new_tools_in_b")}
- - {comparison.toolsOnlyB.length} - - {comparison.toolsOnlyB.length > 0 && ( -
- {comparison.toolsOnlyB.map((tool) => ( - - {tool} - - ))} -
- )} + {comparison.toolsOnlyB.length}
- - {t("new_classes_in_b")} + + {t("shared_tools_count")}
- - {comparison.newClasses.length} - - {comparison.newClasses.length > 0 && ( -
- {comparison.newClasses.map((cls) => ( - - {cls} - - ))} -
- )} + {comparison.toolsShared.length}
- - {t("new_functions_in_b")} + + {t("new_surface")}
- - {comparison.newFunctions.length} - - {comparison.newFunctions.length > 0 && ( -
- {comparison.newFunctions.map((fn) => ( - - {fn} - - ))} + {comparison.newSurface} + +
+ + {jumpDiagnosis && ( + + +

+ {pickText(locale, COMPARE_EXTRA_TEXT.diagnosisLabel)} +

+ {jumpDiagnosis.title} +

+ {jumpDiagnosis.body} +

+
+ +
+
+

+ {pickText(locale, COMPARE_EXTRA_TEXT.nextBestLabel)} +

+

+ {jumpDiagnosis.next} +

+
+ +
+

+ {pickText(locale, COMPARE_EXTRA_TEXT.bridgeNudge)} +

+
+ {recommendedBridgeDocs.slice(0, 3).map((doc) => ( + + {doc.title} + + ))} + {recommendedBridgeDocs.length === 0 && ( +

+ {t("empty_lead")} +

+ )} +
- )} +
+
+ )} + + {recommendedBridgeDocs.length > 0 && ( + + +

+ {pickText(locale, { + zh: "跳读辅助", + en: "Jump Reading Support", + ja: "飛び読み補助", + })} +

+ + {pickText(locale, { + zh: `从 ${tSession(versionA)} 跳到 ${tSession(versionB)} 前,先补这几张图`, + en: `Before jumping from ${tSession(versionA)} to ${tSession(versionB)}, read these bridge docs`, + ja: `${tSession(versionA)} から ${tSession(versionB)} へ飛ぶ前に、この橋渡し資料を読む`, + })} + +

+ {pickText(locale, { + zh: "对比页不只是告诉你“多了什么”,还应该告诉你为了消化这次跃迁,哪些结构地图和机制展开最值得先看。", + en: "A good comparison page should not only show what was added. It should also point you to the best bridge docs for understanding the jump.", + ja: "比較ページは「何が増えたか」だけでなく、そのジャンプを理解する前に何を補うべきかも示すべきです。", + })} +

+
+ +
+ {recommendedBridgeDocs.map((doc) => ( + +
+
+

+ {doc.title} +

+

+ {doc.summary} +

+
+ +
+ {doc.fallbackLocale && ( +

+ {pickText(locale, { + zh: `当前语言缺稿,自动回退到 ${doc.fallbackLocale}`, + en: `Missing in this locale, falling back to ${doc.fallbackLocale}`, + ja: `この言語では未整備のため ${doc.fallbackLocale} へフォールバック`, + })} +

+ )} + + ))} +
+ )} + +
+
+

+ {pickText(locale, { + zh: "主线执行对比", + en: "Mainline Flow Comparison", + ja: "主線実行の比較", + })} +

+

+ {pickText(locale, { + zh: "先看一条请求在两章之间是怎么变的:新的分支出现在哪里,哪些结果会回流到主循环,哪些部分只是侧车或外部车道。", + en: "Compare how one request evolves between the two chapters: where the new branch appears, what writes back into the loop, and what remains a side lane.", + ja: "1つの要求が2つの章の間でどう変わるかを先に見ます。どこで新しい分岐が生まれ、何が主ループへ戻り、何が側車レーンに残るのかを比較します。", + })} +

+
+
+
+

+ {tSession(versionA)} +

+ +
+
+

+ {tSession(versionB)} +

+ +
+
+
+ +
+
+

{t("architecture")}

+

+ {t("architecture_note")} +

+
+
+
+

+ {tSession(versionA)} +

+ +
+
+

+ {tSession(versionB)} +

+ +
+
- {/* Tool comparison */} {t("tool_comparison")} @@ -235,20 +695,21 @@ export default function ComparePage() {

- {t("only_in")} {metaA?.title || versionA} + {t("only_in")} {tSession(versionA)}

{comparison.toolsOnlyA.length === 0 ? (

{t("none")}

) : (
{comparison.toolsOnlyA.map((tool) => ( - + {tool} ))}
)}
+

{t("shared")} @@ -265,16 +726,17 @@ export default function ComparePage() {

)}
+

- {t("only_in")} {metaB?.title || versionB} + {t("only_in")} {tSession(versionB)}

{comparison.toolsOnlyB.length === 0 ? (

{t("none")}

) : (
{comparison.toolsOnlyB.map((tool) => ( - + {tool} ))} @@ -284,9 +746,18 @@ export default function ComparePage() {
- {/* Code Diff */}
-

{t("source_diff")}

+
+

{t("source_diff")}

+

+ {t("source_diff_note")} {t("loc_delta")}:{" "} + <span className={comparison.locDelta >= 0 ? "text-emerald-600 dark:text-emerald-400" : "text-rose-600 dark:text-rose-400"}> + {comparison.locDelta >= 0 ? "+" : ""} + {comparison.locDelta} + </span>{" "} + {t("lines")} + 

+
)} - {/* Empty state */} {(!versionA || !versionB) && ( -
+

{t("empty_hint")}

)} diff --git a/web/src/app/[locale]/(learn)/docs/[slug]/page.tsx b/web/src/app/[locale]/(learn)/docs/[slug]/page.tsx new file mode 100644 index 000000000..0424a2e00 --- /dev/null +++ b/web/src/app/[locale]/(learn)/docs/[slug]/page.tsx @@ -0,0 +1,170 @@ +import Link from "next/link"; +import docsData from "@/data/generated/docs.json"; +import { DocRenderer } from "@/components/docs/doc-renderer"; +import { getTranslations } from "@/lib/i18n-server"; +import { BRIDGE_DOCS, getChaptersForBridgeDoc } from "@/lib/bridge-docs"; + +const SUPPORTED_LOCALES = ["en", "zh", "ja"] as const; + +function findBridgeDoc(locale: string, slug: string) { + return ( + (docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; + }>).find( + (item) => item.kind === "bridge" && item.slug === slug && item.locale === locale + ) ?? + (docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; + }>).find( + (item) => item.kind === "bridge" && item.slug === slug && item.locale === "zh" + ) ?? + (docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; + }>).find( + (item) => item.kind === "bridge" && item.slug === slug && item.locale === "en" + ) + ); +} + +export function generateStaticParams() { + const slugs = Array.from( + new Set( + (docsData as Array<{ kind?: string; slug?: string }>) + .filter((doc) => doc.kind === "bridge" && doc.slug) + .map((doc) => doc.slug as string) + ) + ); + + return SUPPORTED_LOCALES.flatMap((locale) => + slugs.map((slug) => ({ locale, slug })) + ); +} + +export async function generateMetadata({ + params, +}: { + params: Promise<{ locale: string; slug: string }>; +}) { + const { locale, slug } = await params; + const descriptor = BRIDGE_DOCS[slug]; + const doc = findBridgeDoc(locale, slug); + const title = + descriptor?.title?.[locale as "en" | "zh" | "ja"] ?? + descriptor?.title?.en ?? + doc?.title ?? 
+ "Learn Claude Code"; + const description = + descriptor?.summary?.[locale as "en" | "zh" | "ja"] ?? + descriptor?.summary?.en ?? + undefined; + + return { + title, + description, + }; +} + +export default async function BridgeDocPage({ + params, +}: { + params: Promise<{ locale: string; slug: string }>; +}) { + const { locale, slug } = await params; + const t = getTranslations(locale, "version"); + const tSession = getTranslations(locale, "sessions"); + const descriptor = BRIDGE_DOCS[slug]; + const doc = findBridgeDoc(locale, slug); + const relatedVersions = getChaptersForBridgeDoc(slug); + + if (!doc?.title) { + return ( +
+

Document not found

+

{slug}

+
+ ); + } + + return ( +
+
+ + + {t("bridge_docs_back")} + +
+ + {t("bridge_docs_standalone")} + +

+ {descriptor?.title?.[locale as "en" | "zh" | "ja"] ?? + descriptor?.title?.en ?? + doc.title} +

+ {doc.locale !== locale && ( +

+ {t("bridge_docs_fallback_note")} {doc.locale} +

+ )} +
+
+ +
+
+
+

+ {locale === "zh" + ? "这页适合什么时候回看" + : locale === "ja" + ? "このページへ戻るべき場面" + : "When This Page Helps"} +

+

+ {descriptor?.summary?.[locale as "en" | "zh" | "ja"] ?? + descriptor?.summary?.en} +

+
+ + {relatedVersions.length > 0 && ( +
+

+ {locale === "zh" + ? "最适合和这些章节一起读" + : locale === "ja" + ? "いっしょに読むと効く章" + : "Best Read Alongside"} +

+
+ {relatedVersions.map((version) => ( + + {version} · {tSession(version)} + + ))} +
+
+ )} +
+
+ +
+ +
+
+ ); +} diff --git a/web/src/app/[locale]/(learn)/layers/page.tsx b/web/src/app/[locale]/(learn)/layers/page.tsx index ceeee9245..4f4be0874 100644 --- a/web/src/app/[locale]/(learn)/layers/page.tsx +++ b/web/src/app/[locale]/(learn)/layers/page.tsx @@ -3,34 +3,257 @@ import Link from "next/link"; import { useTranslations, useLocale } from "@/lib/i18n"; import { LAYERS, VERSION_META } from "@/lib/constants"; -import { Card, CardHeader, CardTitle } from "@/components/ui/card"; +import { getVersionContent } from "@/lib/version-content"; +import { Card } from "@/components/ui/card"; import { LayerBadge } from "@/components/ui/badge"; import { cn } from "@/lib/utils"; import { ChevronRight } from "lucide-react"; import type { VersionIndex } from "@/types/agent-data"; import versionData from "@/data/generated/versions.json"; +import docsData from "@/data/generated/docs.json"; +import { BRIDGE_DOCS } from "@/lib/bridge-docs"; +import { getStageCheckpoint } from "@/lib/stage-checkpoints"; const data = versionData as VersionIndex; +const docs = docsData as Array<{ + slug?: string; + locale?: string; + kind?: string; + title?: string; +}>; + const LAYER_BORDER_CLASSES: Record = { - tools: "border-l-blue-500", - planning: "border-l-emerald-500", - memory: "border-l-purple-500", - concurrency: "border-l-amber-500", - collaboration: "border-l-red-500", + core: "border-l-blue-500", + hardening: "border-l-emerald-500", + runtime: "border-l-amber-500", + platform: "border-l-red-500", }; const LAYER_HEADER_BG: Record = { - tools: "bg-blue-500", - planning: "bg-emerald-500", - memory: "bg-purple-500", - concurrency: "bg-amber-500", - collaboration: "bg-red-500", + core: "bg-blue-500", + hardening: "bg-emerald-500", + runtime: "bg-amber-500", + platform: "bg-red-500", +}; + +const LAYER_CHECKPOINT_SHELL: Record = { + core: "border-blue-200/80 bg-blue-50/80 dark:border-blue-900/60 dark:bg-blue-950/20", + hardening: + "border-emerald-200/80 bg-emerald-50/80 dark:border-emerald-900/60 
dark:bg-emerald-950/20", + runtime: "border-amber-200/80 bg-amber-50/80 dark:border-amber-900/60 dark:bg-amber-950/20", + platform: "border-red-200/80 bg-red-50/80 dark:border-red-900/60 dark:bg-red-950/20", }; +const RUNTIME_SUPPORT_DOCS = [ + "s13a-runtime-task-model", + "data-structures", + "entity-map", +] as const; + +const CORE_SUPPORT_DOCS = [ + "s00-architecture-overview", + "s00b-one-request-lifecycle", + "s02a-tool-control-plane", + "data-structures", +] as const; + +const HARDENING_SUPPORT_DOCS = [ + "s00a-query-control-plane", + "s02b-tool-execution-runtime", + "s10a-message-prompt-pipeline", + "s00c-query-transition-model", + "data-structures", +] as const; + +const PLATFORM_SUPPORT_DOCS = [ + "team-task-lane-model", + "s13a-runtime-task-model", + "s19a-mcp-capability-layers", + "entity-map", + "data-structures", +] as const; + +type SupportDocCard = { + slug: string; + title: string; + summary: string; + fallbackLocale: typeof docs[number]["locale"] | null; +}; + +type SupportSection = { + id: "core" | "hardening" | "runtime" | "platform"; + eyebrow: string; + title: string; + body: string; + docs: SupportDocCard[]; +}; + +function pickText( + locale: string, + value: { zh: string; en: string; ja: string } +) { + if (locale === "zh") return value.zh; + if (locale === "ja") return value.ja; + return value.en; +} + +const LAYER_CHECKPOINT_TEXT = { + label: { + zh: "阶段收口提醒", + en: "Stage Stop Reminder", + ja: "段階の収束ポイント", + }, + body: { + zh: "这一层不是读完最后一章就立刻往后冲。更稳的顺序是:先从入口重新走一遍,自己手搓到收口,再进入下一层。", + en: "Do not sprint past the last chapter of this layer. 
The steadier order is: reopen the entry point, rebuild the layer by hand, then enter the next one.", + ja: "この層の最後の章を読んだら、そのまま先へ走るのではありません。入口へ戻り、この層を自分で作り直してから次へ進む方が安定します。", + }, + rebuild: { + zh: "这一层现在应该能自己做出的东西", + en: "What You Should Now Be Able To Rebuild", + ja: "この層で今なら自分で作り直せるべきもの", + }, + entry: { + zh: "阶段入口", + en: "Stage Entry", + ja: "段階の入口", + }, + exit: { + zh: "阶段收口", + en: "Stage Exit", + ja: "段階の収束章", + }, +} as const; + export default function LayersPage() { const t = useTranslations("layers"); + const tSession = useTranslations("sessions"); + const tLayer = useTranslations("layer_labels"); const locale = useLocale(); + const resolveSupportDocs = (slugs: readonly string[]) => + slugs + .map((slug) => { + const descriptor = BRIDGE_DOCS[slug]; + if (!descriptor) return null; + + const doc = + docs.find( + (item) => + item.slug === slug && + item.kind === "bridge" && + item.locale === locale + ) ?? + docs.find( + (item) => + item.slug === slug && + item.kind === "bridge" && + item.locale === "zh" + ) ?? + docs.find( + (item) => + item.slug === slug && + item.kind === "bridge" && + item.locale === "en" + ); + + if (!doc?.slug) return null; + + return { + slug: doc.slug, + title: pickText(locale, descriptor.title), + summary: pickText(locale, descriptor.summary), + fallbackLocale: doc.locale !== locale ? 
doc.locale : null, + } satisfies SupportDocCard; + }) + .filter((item): item is SupportDocCard => Boolean(item)); + + const coreSupportDocs = resolveSupportDocs(CORE_SUPPORT_DOCS); + const hardeningSupportDocs = resolveSupportDocs(HARDENING_SUPPORT_DOCS); + const runtimeSupportDocs = resolveSupportDocs(RUNTIME_SUPPORT_DOCS); + const platformSupportDocs = resolveSupportDocs(PLATFORM_SUPPORT_DOCS); + const supportSections = [ + { + id: "core", + eyebrow: pickText(locale, { + zh: "核心闭环补课", + en: "Core Loop Support Docs", + ja: "基礎ループ補助資料", + }), + title: pickText(locale, { + zh: "读 `s01-s06` 时,先把主闭环、工具入口和数据结构边界守住", + en: "Before reading `s01-s06`, hold the main loop, tool entry path, and data-structure boundaries steady", + ja: "`s01-s06` を読む前に、主ループ・tool 入口・データ構造境界を先に安定させる", + }), + body: pickText(locale, { + zh: "前六章最容易被低估的,不是某个功能点,而是这条最小闭环到底怎样成立:用户输入怎么进入、工具结果怎么回写、状态容器到底有哪些。", + en: "The first six chapters are not mainly about isolated features. They are about how the minimal loop truly forms: how user input enters, how tool results write back, and which state containers exist.", + ja: "最初の6章で大事なのは個別機能ではなく、最小ループがどう成立するかです。ユーザー入力がどう入り、ツール結果がどう戻り、どんな状態容器があるかを先に押さえます。", + }), + docs: coreSupportDocs, + }, + { + id: "hardening", + eyebrow: pickText(locale, { + zh: "系统加固补课", + en: "Hardening Support Docs", + ja: "強化段階補助資料", + }), + title: pickText(locale, { + zh: "读 `s07-s11` 时,先把控制面、输入装配和续行原因这几层拆开", + en: "Before reading `s07-s11`, separate the control plane, input assembly, and continuation reasons", + ja: "`s07-s11` を読む前に、制御面・入力組み立て・継続理由を分けておく", + }), + body: pickText(locale, { + zh: "加固阶段最容易混的,不是权限、hook、memory 哪个更复杂,而是这些机制都在“控制系统如何继续推进”这一层相遇了。", + en: "The hardening stage gets confusing not because one feature is harder than another, but because permissions, hooks, memory, prompts, and recovery all meet at the control plane.", + ja: "強化段階で混ざりやすいのは個別機能の難しさではなく、権限・hook・memory・prompt・recovery がすべて制御面で交わる点です。", + }), + docs: hardeningSupportDocs, + }, + { + id: "runtime", 
+ eyebrow: pickText(locale, { + zh: "运行时补课", + en: "Runtime Support Docs", + ja: "実行段階補助資料", + }), + title: pickText(locale, { + zh: "读 `s12-s14` 时,先把目标、执行槽位和定时触发这三层分清", + en: "Before reading `s12-s14`, separate goals, execution slots, and schedule triggers", + ja: "`s12-s14` を読む前に、goal・execution slot・schedule trigger を分けておく", + }), + body: pickText(locale, { + zh: "任务运行时最容易让人混的,不是某个函数,而是 task、runtime task、notification、schedule 这几层对象同时出现时,各自到底管什么。", + en: "The runtime chapters get confusing not because of one function, but because task goals, runtime tasks, notifications, and schedules begin to coexist and need clean boundaries.", + ja: "実行段階で難しくなるのは個別関数ではなく、作業目標・実行タスク・通知・スケジュールが同時に現れ、それぞれの境界を保つ必要がある点です。", + }), + docs: runtimeSupportDocs, + }, + { + id: "platform", + eyebrow: pickText(locale, { + zh: "平台层补课", + en: "Platform Support Docs", + ja: "プラットフォーム補助資料", + }), + title: pickText(locale, { + zh: "读 `s15-s19` 之前,先把这几份桥接资料放在手边", + en: "Keep these bridge docs nearby before reading `s15-s19`", + ja: "`s15-s19` を読む前に、まずこの橋渡し資料を手元に置く", + }), + body: pickText(locale, { + zh: "后五章最容易混的是队友、协议请求、任务、运行时槽位、worktree 车道,以及最后接进来的外部能力层。这几份文档就是专门用来反复校正这段心智模型的。", + en: "The last five chapters are where teammates, protocol requests, tasks, runtime slots, worktree lanes, and finally external capability layers start to blur together. These bridge docs are meant to keep that model clean.", + ja: "最後の5章では、チームメイト・プロトコル要求・タスク・実行スロット・worktree レーン、そして最後に入ってくる外部能力層の境界が混ざりやすくなります。ここに並べた資料は、その学習モデルを何度でも補正するためのものです。", + }), + docs: platformSupportDocs, + }, + ] satisfies SupportSection[]; + + const visibleSupportSections = supportSections.filter( + (section) => section.docs.length > 0 + ); return (
@@ -39,13 +262,90 @@ export default function LayersPage() {

{t("subtitle")}

+
+ +

{t("guide_label")}

+

{t("guide_start_title")}

+

+ {t("guide_start_desc")} +

+
+ +

{t("guide_label")}

+

{t("guide_middle_title")}

+

+ {t("guide_middle_desc")} +

+
+ +

{t("guide_label")}

+

{t("guide_finish_title")}

+

+ {t("guide_finish_desc")} +

+
+
+ + {visibleSupportSections.map((section) => ( +
+
+

+ {section.eyebrow} +

+

+ {section.title} +

+

+ {section.body} +

+
+ +
+ {section.docs.map((doc) => ( + + +
+
+

+ {doc.title} +

+

+ {doc.summary} +

+
+ +
+ {doc.fallbackLocale && ( +

+ {pickText(locale, { + zh: `当前语言缺稿,自动回退到 ${doc.fallbackLocale}`, + en: `Missing in this locale, falling back to ${doc.fallbackLocale}`, + ja: `この言語では未整備のため ${doc.fallbackLocale} へフォールバック`, + })} +

+ )} +
+ + ))} +
+
+ ))} +
{LAYERS.map((layer, index) => { const versionInfos = layer.versions.map((vId) => { const info = data.versions.find((v) => v.id === vId); const meta = VERSION_META[vId]; - return { id: vId, info, meta }; + const content = getVersionContent(vId, locale); + return { id: vId, info, meta, content }; }); + const checkpoint = getStageCheckpoint(layer.id); return (

- L{index + 1} + P{index + 1} {" "} - {layer.label} + {tLayer(layer.id)}

{t(layer.id)}

+

+ {t(`${layer.id}_outcome`)} +

{/* Version cards within this layer */}
+ {checkpoint && ( +
+
+
+

+ {pickText(locale, LAYER_CHECKPOINT_TEXT.label)} +

+

+ {pickText(locale, checkpoint.title)} +

+

+ {pickText(locale, LAYER_CHECKPOINT_TEXT.body)} +

+
+ +
+ + + {pickText(locale, LAYER_CHECKPOINT_TEXT.entry)} + + {checkpoint.entryVersion} + + + + {pickText(locale, LAYER_CHECKPOINT_TEXT.exit)} + + {checkpoint.endVersion} + +
+
+ +
+

+ {pickText(locale, LAYER_CHECKPOINT_TEXT.rebuild)} +

+

+ {pickText(locale, checkpoint.rebuild)} +

+
+
+ )} +
- {versionInfos.map(({ id, info, meta }) => ( - + {versionInfos.map(({ id, info, meta, content }) => ( +
{id} - {layer.id} + {tLayer(layer.id)}

- {meta?.title || id} + {tSession(id) || meta?.title || id}

- {meta?.subtitle && ( + {meta && (

- {meta.subtitle} + {content.subtitle}

)}
@@ -105,9 +457,9 @@ export default function LayersPage() { {info?.loc ?? "?"} LOC {info?.tools.length ?? "?"} tools
- {meta?.keyInsight && ( + {meta && (

- {meta.keyInsight} + {content.keyInsight}

)}
diff --git a/web/src/app/[locale]/(learn)/reference/page.tsx b/web/src/app/[locale]/(learn)/reference/page.tsx new file mode 100644 index 000000000..a7a3ae814 --- /dev/null +++ b/web/src/app/[locale]/(learn)/reference/page.tsx @@ -0,0 +1,79 @@ +"use client"; + +import Link from "next/link"; +import { useTranslations, useLocale } from "@/lib/i18n"; +import { + BRIDGE_DOCS, + FOUNDATION_DOC_SLUGS, + MECHANISM_DOC_SLUGS, +} from "@/lib/bridge-docs"; + +type SupportedLocale = "zh" | "en" | "ja"; + +export default function ReferencePage() { + const t = useTranslations("reference"); + const locale = useLocale() as SupportedLocale; + + const foundationDocs = FOUNDATION_DOC_SLUGS.map( + (slug) => BRIDGE_DOCS[slug] + ).filter(Boolean); + + const mechanismDocs = MECHANISM_DOC_SLUGS.map( + (slug) => BRIDGE_DOCS[slug] + ).filter(Boolean); + + return ( +
+
+

{t("title")}

+

+ {t("subtitle")} +

+
+ +
+

+ {t("foundation_title")} +

+
+ {foundationDocs.map((doc) => ( + +

+ {doc.title[locale] ?? doc.title.en} +

+

+ {doc.summary[locale] ?? doc.summary.en} +

+ + ))} +
+
+ +
+

+ {t("deep_dive_title")} +

+
+ {mechanismDocs.map((doc) => ( + +

+ {doc.title[locale] ?? doc.title.en} +

+

+ {doc.summary[locale] ?? doc.summary.en} +

+ + ))} +
+
+
+ ); +} diff --git a/web/src/app/[locale]/(learn)/timeline/page.tsx b/web/src/app/[locale]/(learn)/timeline/page.tsx index a490002be..a426b8018 100644 --- a/web/src/app/[locale]/(learn)/timeline/page.tsx +++ b/web/src/app/[locale]/(learn)/timeline/page.tsx @@ -1,10 +1,132 @@ "use client"; +import Link from "next/link"; import { useTranslations } from "@/lib/i18n"; +import { useLocale } from "@/lib/i18n"; import { Timeline } from "@/components/timeline/timeline"; +import { Card } from "@/components/ui/card"; +import { LayerBadge } from "@/components/ui/badge"; +import { STAGE_CHECKPOINTS } from "@/lib/stage-checkpoints"; + +const GUIDE_TEXT = { + label: { + zh: "怎么使用这页", + en: "How to Use This Page", + ja: "このページの使い方", + }, + cards: [ + { + title: { + zh: "第一次完整读", + en: "First Full Pass", + ja: "初回の通読", + }, + body: { + zh: "从上往下顺序读,不要急着横跳。前六章是主闭环,后面都建立在它上面。", + en: "Read top to bottom before jumping around. The first six chapters establish the main loop everything else depends on.", + ja: "まずは上から順に読む。最初の6章が主ループで、後半はその上に積まれています。", + }, + }, + { + title: { + zh: "中途开始混", + en: "If Things Start to Blur", + ja: "途中で混ざり始めたら", + }, + body: { + zh: "不要死盯源码。先看这章落在哪个阶段,再回桥接资料校正 task、runtime、teammate、worktree 这些边界。", + en: "Do not stare at code first. 
Identify the stage, then use bridge docs to reset boundaries like task, runtime, teammate, and worktree.", + ja: "先にコードへ潜らず、この章がどの段階に属するかを見て、bridge doc で task・runtime・teammate・worktree の境界を補正します。", + }, + }, + { + title: { + zh: "准备自己实现", + en: "If You Are Rebuilding It", + ja: "自分で実装するなら", + }, + body: { + zh: "每走完一个阶段,就停下来自己手写一版最小实现。不要等到 s19 再一次性回头补。", + en: "After each stage, stop and rebuild the minimal version yourself instead of waiting until s19 to backfill everything at once.", + ja: "各段階が終わるたびに最小版を自分で書き直す。一気に s19 まで進んでからまとめて補わない。", + }, + }, + ], + supportLabel: { + zh: "全程可反复回看的桥接资料", + en: "Bridge Docs Worth Re-reading", + ja: "何度も戻る価値のある橋渡し資料", + }, + supportBody: { + zh: "如果你读到中后段开始打结,先回这些资料,而不是硬闯下一章。", + en: "When the middle and late chapters start to tangle, revisit these before forcing the next chapter.", + ja: "中盤以降で混線し始めたら、次の章へ突っ込む前にまずここへ戻ります。", + }, + checkpointLabel: { + zh: "时间线不仅告诉你顺序,也告诉你哪里该停", + en: "The timeline shows both order and where to pause", + ja: "このタイムラインは順序だけでなく、どこで止まるべきかも示す", + }, + checkpointTitle: { + zh: "每走完一个阶段,先自己重建一版,再进入下一阶段", + en: "After each stage, rebuild one working slice before entering the next stage", + ja: "各段階のあとで 1 回作り直してから次の段階へ入る", + }, + checkpointBody: { + zh: "如果你只是一路往下读,章节边界迟早会糊。最稳的读法是在 `s06 / s11 / s14 / s19` 各停一次,确认自己真的能把该阶段已经成立的系统重新写出来。", + en: "If you only keep scrolling downward, chapter boundaries will eventually blur. 
The safer reading move is to pause at `s06 / s11 / s14 / s19` and confirm that you can rebuild the working system slice for that stage.", + ja: "ただ下へ読み進めるだけだと、章境界はいつか必ずぼやけます。`s06 / s11 / s14 / s19` で止まり、その段階で成立した system slice を作り直せるか確認する方が安定します。", + }, + checkpointRebuild: { + zh: "此时该能手搓出来的东西", + en: "What You Should Be Able To Rebuild Here", + ja: "この時点で作り直せるべきもの", + }, + checkpointOpen: { + zh: "打开阶段收口", + en: "Open Stage Exit", + ja: "段階の収束点を開く", + }, + links: [ + { + slug: "s00a-query-control-plane", + title: { zh: "查询控制平面", en: "Query Control Plane", ja: "クエリ制御プレーン" }, + }, + { + slug: "s02b-tool-execution-runtime", + title: { zh: "工具执行运行时", en: "Tool Execution Runtime", ja: "ツール実行ランタイム" }, + }, + { + slug: "s13a-runtime-task-model", + title: { zh: "运行时任务模型", en: "Runtime Task Model", ja: "ランタイムタスクモデル" }, + }, + { + slug: "team-task-lane-model", + title: { zh: "队友-任务-车道模型", en: "Team Task Lane Model", ja: "チームメイト・タスク・レーンモデル" }, + }, + { + slug: "s19a-mcp-capability-layers", + title: { zh: "MCP 能力层地图", en: "MCP Capability Layers", ja: "MCP 能力層マップ" }, + }, + ], +} as const; + +function pick( + locale: string, + value: { + zh: string; + en: string; + ja: string; + } +) { + if (locale === "zh") return value.zh; + if (locale === "ja") return value.ja; + return value.en; +} export default function TimelinePage() { const t = useTranslations("timeline"); + const locale = useLocale(); return (
@@ -14,6 +136,96 @@ export default function TimelinePage() { {t("subtitle")}

+ +
+
+

+ {pick(locale, GUIDE_TEXT.label)} +

+
+
+ {GUIDE_TEXT.cards.map((card) => ( +
+

+ {pick(locale, card.title)} +

+

+ {pick(locale, card.body)} +

+
+ ))} +
+ +
+

+ {pick(locale, GUIDE_TEXT.supportLabel)} +

+

+ {pick(locale, GUIDE_TEXT.supportBody)} +

+
+ {GUIDE_TEXT.links.map((link) => ( + + {pick(locale, link.title)} + + ))} +
+
+
+ +
+
+

+ {pick(locale, GUIDE_TEXT.checkpointLabel)} +

+

+ {pick(locale, GUIDE_TEXT.checkpointTitle)} +

+

+ {pick(locale, GUIDE_TEXT.checkpointBody)} +

+
+ +
+ {STAGE_CHECKPOINTS.map((checkpoint) => ( + +
+ {checkpoint.entryVersion}-{checkpoint.endVersion} +
+

+ {pick(locale, checkpoint.title)} +

+
+

+ {pick(locale, GUIDE_TEXT.checkpointRebuild)} +

+

+ {pick(locale, checkpoint.rebuild)} +

+
+
+ + {pick(locale, GUIDE_TEXT.checkpointOpen)}: {checkpoint.endVersion} + +
+
+ ))} +
+
+
); diff --git a/web/src/app/[locale]/page.tsx b/web/src/app/[locale]/page.tsx index 686d95615..aaaf0e18b 100644 --- a/web/src/app/[locale]/page.tsx +++ b/web/src/app/[locale]/page.tsx @@ -3,48 +3,25 @@ import Link from "next/link"; import { useTranslations, useLocale } from "@/lib/i18n"; import { LEARNING_PATH, VERSION_META, LAYERS } from "@/lib/constants"; -import { LayerBadge } from "@/components/ui/badge"; -import { Card } from "@/components/ui/card"; -import { cn } from "@/lib/utils"; -import versionsData from "@/data/generated/versions.json"; -import { MessageFlow } from "@/components/architecture/message-flow"; +import { getVersionContent } from "@/lib/version-content"; const LAYER_DOT_COLORS: Record = { - tools: "bg-blue-500", - planning: "bg-emerald-500", - memory: "bg-purple-500", - concurrency: "bg-amber-500", - collaboration: "bg-red-500", + core: "bg-blue-500", + hardening: "bg-emerald-500", + runtime: "bg-amber-500", + platform: "bg-red-500", }; -const LAYER_BORDER_COLORS: Record = { - tools: "border-blue-500/30 hover:border-blue-500/60", - planning: "border-emerald-500/30 hover:border-emerald-500/60", - memory: "border-purple-500/30 hover:border-purple-500/60", - concurrency: "border-amber-500/30 hover:border-amber-500/60", - collaboration: "border-red-500/30 hover:border-red-500/60", -}; - -const LAYER_BAR_COLORS: Record = { - tools: "bg-blue-500", - planning: "bg-emerald-500", - memory: "bg-purple-500", - concurrency: "bg-amber-500", - collaboration: "bg-red-500", -}; - -function getVersionData(id: string) { - return versionsData.versions.find((v) => v.id === id); -} - export default function HomePage() { const t = useTranslations("home"); + const tSession = useTranslations("sessions"); + const tLayer = useTranslations("layer_labels"); const locale = useLocale(); return ( -
- {/* Hero Section */} -
+
+ {/* Hero */} +

{t("hero_title")}

@@ -53,7 +30,7 @@ export default function HomePage() {

{t("start")} @@ -62,172 +39,45 @@ export default function HomePage() {
- {/* Core Pattern Section */} -
-
-

{t("core_pattern")}

-

- {t("core_pattern_desc")} -

-
-
-
- - - - agent_loop.py -
-
-            
-              while
-               
-              True
-              :
-              {"\n"}
-              {"    "}response = client.messages.
-              create
-              (
-              messages=
-              messages
-              ,
-               tools=
-              tools
-              )
-              {"\n"}
-              {"    "}if
-               response.stop_reason != 
-              "tool_use"
-              :
-              {"\n"}
-              {"        "}break
-              {"\n"}
-              {"    "}for
-               tool_call 
-              in
-               response.content
-              :
-              {"\n"}
-              {"        "}result = 
-              execute_tool
-              (
-              tool_call.name
-              ,
-               tool_call.input
-              )
-              {"\n"}
-              {"        "}messages.
-              append
-              (
-              result
-              )
-            
-          
-
-
- - {/* Message Flow Visualization */} -
-
-

{t("message_flow")}

-

- {t("message_flow_desc")} -

-
-
- -
-
- - {/* Learning Path Preview */} -
-
-

{t("learning_path")}

-

- {t("learning_path_desc")} -

-
-
- {LEARNING_PATH.map((versionId) => { - const meta = VERSION_META[versionId]; - const data = getVersionData(versionId); - if (!meta || !data) return null; - return ( - - -
- {versionId} - - {data.loc} {t("loc")} - -
-

- {meta.title} -

-

- {meta.keyInsight} -

-
- - ); - })} -
-
- - {/* Layer Overview */} -
-
-

{t("layers_title")}

-

- {t("layers_desc")} -

-
-
- {LAYERS.map((layer) => ( -
-
-
-
-

{layer.label}

- - {layer.versions.length} {t("versions_in_layer")} - -
-
- {layer.versions.map((vid) => { - const meta = VERSION_META[vid]; - return ( - - - {vid}: {meta?.title} - - - ); - })} -
-
-          ))}
+        {/* Chapter list by stage */}
+        {LAYERS.map((layer) => (
+          {tLayer(layer.id)}
+          {layer.versions.map((vId) => {
+            const meta = VERSION_META[vId];
+            const content = getVersionContent(vId, locale);
+            if (!meta) return null;
+            return (
+              {vId}
+              {tSession(vId) || meta.title}
+              {content.keyInsight}
+            );
+          })}
+        ))}
); diff --git a/web/src/app/globals.css b/web/src/app/globals.css index 7aeef1a62..dfd7ba99c 100644 --- a/web/src/app/globals.css +++ b/web/src/app/globals.css @@ -3,11 +3,10 @@ @custom-variant dark (&:where(.dark, .dark *)); :root { - --color-layer-tools: #3B82F6; - --color-layer-planning: #10B981; - --color-layer-memory: #8B5CF6; - --color-layer-concurrency: #F59E0B; - --color-layer-collaboration: #EF4444; + --color-layer-core: #2563eb; + --color-layer-hardening: #059669; + --color-layer-runtime: #d97706; + --color-layer-platform: #dc2626; --color-bg: #ffffff; --color-bg-secondary: #f4f4f5; --color-text: #09090b; @@ -368,10 +367,19 @@ body { /* -- Tables -- */ -.prose-custom table { +.prose-custom .table-scroll { width: 100%; + overflow-x: auto; margin-top: 1.25rem; margin-bottom: 1.25rem; + -webkit-overflow-scrolling: touch; +} + +.prose-custom table { + width: max-content; + min-width: 100%; + margin-top: 0; + margin-bottom: 0; border-collapse: separate; border-spacing: 0; font-size: 0.8125rem; diff --git a/web/src/components/architecture/arch-diagram.tsx b/web/src/components/architecture/arch-diagram.tsx index 2d8fa9e5e..cd931eb8d 100644 --- a/web/src/components/architecture/arch-diagram.tsx +++ b/web/src/components/architecture/arch-diagram.tsx @@ -1,228 +1,295 @@ "use client"; import { motion } from "framer-motion"; +import { useLocale } from "@/lib/i18n"; +import { VERSION_META } from "@/lib/constants"; +import { + pickDiagramText, + translateArchitectureText, +} from "@/lib/diagram-localization"; +import { getVersionContent } from "@/lib/version-content"; +import { + ARCHITECTURE_BLUEPRINTS, + type ArchitectureSliceId, +} from "@/data/architecture-blueprints"; import { cn } from "@/lib/utils"; -import { LAYERS } from "@/lib/constants"; -import versionsData from "@/data/generated/versions.json"; - -const CLASS_DESCRIPTIONS: Record = { - TodoManager: "Visible task planning with constraints", - SkillLoader: "Dynamic knowledge injection from SKILL.md files", - 
ContextManager: "Three-layer context compression pipeline", - Task: "File-based persistent task with dependencies", - TaskManager: "File-based persistent task CRUD with dependencies", - BackgroundTask: "Single background execution unit", - BackgroundManager: "Non-blocking thread execution + notification queue", - TeammateManager: "Multi-agent team lifecycle and coordination", - Teammate: "Individual agent identity and state tracking", - SharedBoard: "Cross-agent shared state coordination", -}; interface ArchDiagramProps { version: string; } -function getLayerColor(versionId: string): string { - const layer = LAYERS.find((l) => (l.versions as readonly string[]).includes(versionId)); - return layer?.color ?? "#71717a"; -} - -function getLayerColorClasses(versionId: string): { - border: string; - bg: string; -} { - const v = - versionsData.versions.find((v) => v.id === versionId) as { layer?: string } | undefined; - const layer = v?.layer; - switch (layer) { - case "tools": - return { - border: "border-blue-500", - bg: "bg-blue-500/10", - }; - case "planning": - return { - border: "border-emerald-500", - bg: "bg-emerald-500/10", - }; - case "memory": - return { - border: "border-purple-500", - bg: "bg-purple-500/10", - }; - case "concurrency": - return { - border: "border-amber-500", - bg: "bg-amber-500/10", - }; - case "collaboration": - return { - border: "border-red-500", - bg: "bg-red-500/10", - }; - default: - return { - border: "border-zinc-500", - bg: "bg-zinc-500/10", - }; - } -} - -function collectClassesUpTo( - targetId: string -): { name: string; introducedIn: string }[] { - const { versions, diffs } = versionsData; - const order = versions.map((v) => v.id); - const targetIdx = order.indexOf(targetId); - if (targetIdx < 0) return []; - - const result: { name: string; introducedIn: string }[] = []; - const seen = new Set(); - - for (let i = 0; i <= targetIdx; i++) { - const v = versions[i]; - if (!v.classes) continue; - for (const cls of v.classes) { - if 
(!seen.has(cls.name)) { - seen.add(cls.name); - result.push({ name: cls.name, introducedIn: v.id }); - } - } +const SLICE_STYLE: Record< + ArchitectureSliceId, + { + ring: string; + badge: string; + surface: string; + title: { zh: string; en: string; ja?: string }; + note: { zh: string; en: string; ja?: string }; } +> = { + mainline: { + ring: "ring-blue-500/20", + badge: + "border-blue-200 bg-blue-50 text-blue-700 dark:border-blue-900/60 dark:bg-blue-950/30 dark:text-blue-300", + surface: + "from-blue-500/12 via-blue-500/5 to-transparent dark:from-blue-500/10 dark:via-transparent", + title: { zh: "主线执行", en: "Mainline", ja: "主線実行" }, + note: { + zh: "真正把系统往前推的那条执行主线。", + en: "The path that actually pushes the system forward.", + ja: "実際にシステムを前へ進める主線です。", + }, + }, + control: { + ring: "ring-emerald-500/20", + badge: + "border-emerald-200 bg-emerald-50 text-emerald-700 dark:border-emerald-900/60 dark:bg-emerald-950/30 dark:text-emerald-300", + surface: + "from-emerald-500/12 via-emerald-500/5 to-transparent dark:from-emerald-500/10 dark:via-transparent", + title: { zh: "控制面", en: "Control Plane", ja: "制御面" }, + note: { + zh: "决定怎么运行、何时放行、何时转向。", + en: "Decides how execution is controlled, gated, and redirected.", + ja: "どう動かし、いつ通し、いつ向きを変えるかを決めます。", + }, + }, + state: { + ring: "ring-amber-500/20", + badge: + "border-amber-200 bg-amber-50 text-amber-700 dark:border-amber-900/60 dark:bg-amber-950/30 dark:text-amber-300", + surface: + "from-amber-500/12 via-amber-500/5 to-transparent dark:from-amber-500/10 dark:via-transparent", + title: { zh: "状态容器", en: "State Records", ja: "状態レコード" }, + note: { + zh: "真正需要被系统记住和回写的结构。", + en: "The structures the system must remember and write back.", + ja: "システムが記憶し、回写すべき構造です。", + }, + }, + lanes: { + ring: "ring-rose-500/20", + badge: + "border-rose-200 bg-rose-50 text-rose-700 dark:border-rose-900/60 dark:bg-rose-950/30 dark:text-rose-300", + surface: + "from-rose-500/12 via-rose-500/5 to-transparent dark:from-rose-500/10 
dark:via-transparent", + title: { zh: "并行 / 外部车道", en: "Lanes / External", ja: "並行 / 外部レーン" }, + note: { + zh: "长期队友、后台槽位或外部能力的进入面。", + en: "Where long-lived workers, background slots, or external capability enter.", + ja: "長期ワーカー、バックグラウンドスロット、外部能力が入ってくる面です。", + }, + }, +}; - return result; -} - -function getNewClassNames(version: string): Set { - const diff = versionsData.diffs.find((d) => d.to === version); - if (!diff) { - const v = versionsData.versions.find((ver) => ver.id === version); - return new Set(v?.classes?.map((c) => c.name) ?? []); - } - return new Set(diff.newClasses ?? []); -} +const UI_TEXT = { + summaryTitle: { + zh: "这章在系统里真正新增了什么", + en: "What This Chapter Actually Adds", + ja: "この章でシステムに何が増えたか", + }, + recordsTitle: { + zh: "关键记录结构", + en: "Key Records", + ja: "主要レコード", + }, + recordsNote: { + zh: "这些不是实现细枝末节,而是开发者自己重建系统时最应该抓住的状态容器。", + en: "These are the state containers worth holding onto when you rebuild the system yourself.", + ja: "これらは実装の枝葉ではなく、自分で再構築するときに掴むべき状態容器です。", + }, + handoffTitle: { + zh: "主回流路径", + en: "Primary Handoff Path", + ja: "主回流経路", + }, + fresh: { + zh: "新增", + en: "NEW", + ja: "新規", + }, +}; export function ArchDiagram({ version }: ArchDiagramProps) { - const allClasses = collectClassesUpTo(version); - const newClassNames = getNewClassNames(version); - const versionData = versionsData.versions.find((v) => v.id === version); - const tools = versionData?.tools ?? []; + const locale = useLocale(); + const blueprint = + ARCHITECTURE_BLUEPRINTS[version as keyof typeof ARCHITECTURE_BLUEPRINTS]; + const meta = VERSION_META[version]; + const content = getVersionContent(version, locale); - const reversed = [...allClasses].reverse(); + if (!blueprint || !meta) return null; + + const sliceOrder: ArchitectureSliceId[] = [ + "mainline", + "control", + "state", + "lanes", + ]; + const visibleSlices = sliceOrder.filter( + (sliceId) => (blueprint.slices[sliceId] ?? []).length > 0 + ); return ( -
- {reversed.map((cls, i) => { - const isNew = newClassNames.has(cls.name); - const colorClasses = getLayerColorClasses(cls.introducedIn); +
+
+
+
+
+

+ {pickDiagramText(locale, UI_TEXT.summaryTitle)} +

+

+ {content.coreAddition} +

+

+ {translateArchitectureText( + locale, + pickDiagramText(locale, blueprint.summary) + )} +

+
+
+
+ +
+ {visibleSlices.map((sliceId, sliceIndex) => { + const slice = blueprint.slices[sliceId] ?? []; + const style = SLICE_STYLE[sliceId]; - return ( -
- {i > 0 && ( -
- - - - + return ( + +
+
+ + {pickDiagramText(locale, style.title)} + +
+

+ {pickDiagramText(locale, style.note)} +

- )} - -
-
- + {slice.map((item, itemIndex) => ( + +
+

+ {translateArchitectureText( + locale, + pickDiagramText(locale, item.name) + )} +

+ {item.fresh && ( + + {pickDiagramText(locale, UI_TEXT.fresh)} + + )} +
+

+ {translateArchitectureText( + locale, + pickDiagramText(locale, item.detail) + )} +

+
+ ))} +
+ + ); + })} +
+ +
+
+
+

+ {pickDiagramText(locale, UI_TEXT.recordsTitle)} +

+

+ {pickDiagramText(locale, UI_TEXT.recordsNote)} +

+
+
+ {blueprint.records.map((record, index) => ( + +
+ + {translateArchitectureText( + locale, + pickDiagramText(locale, record.name) + )} + + {record.fresh && ( + + {pickDiagramText(locale, UI_TEXT.fresh)} + )} - > - {cls.name} - -

+

+ {translateArchitectureText( + locale, + pickDiagramText(locale, record.detail) )} - > - {CLASS_DESCRIPTIONS[cls.name] || ""}

-
-
- - {cls.introducedIn} - - {isNew && ( - - NEW - - )} -
-
- + + ))}
- ); - })} - - {allClasses.length === 0 && ( -
- No classes in this version (functions only)
- )} +
- {tools.length > 0 && ( - - {tools.map((tool) => ( - +

+ {pickDiagramText(locale, UI_TEXT.handoffTitle)} +

+
+ {blueprint.handoff.map((step, index) => ( + - {tool} - +
+ + {index + 1} + +

+ {translateArchitectureText( + locale, + pickDiagramText(locale, step) + )} +

+
+
))} - - )} +
+
); } diff --git a/web/src/components/architecture/design-decisions.tsx b/web/src/components/architecture/design-decisions.tsx index 5fa47faa4..4a64d04ae 100644 --- a/web/src/components/architecture/design-decisions.tsx +++ b/web/src/components/architecture/design-decisions.tsx @@ -5,6 +5,10 @@ import { motion, AnimatePresence } from "framer-motion"; import { useTranslations, useLocale } from "@/lib/i18n"; import { ChevronDown } from "lucide-react"; import { cn } from "@/lib/utils"; +import { + isGenericAnnotationVersion, + resolveLegacySessionAssetVersion, +} from "@/lib/session-assets"; import s01Annotations from "@/data/annotations/s01.json"; import s02Annotations from "@/data/annotations/s02.json"; @@ -19,13 +23,19 @@ import s10Annotations from "@/data/annotations/s10.json"; import s11Annotations from "@/data/annotations/s11.json"; import s12Annotations from "@/data/annotations/s12.json"; +interface DecisionLocaleCopy { + title?: string; + description?: string; + alternatives?: string; +} + interface Decision { id: string; title: string; description: string; alternatives: string; - zh?: { title: string; description: string }; - ja?: { title: string; description: string }; + zh?: DecisionLocaleCopy; + ja?: DecisionLocaleCopy; } interface AnnotationFile { @@ -48,6 +58,646 @@ const ANNOTATIONS: Record = { s12: s12Annotations as AnnotationFile, }; +const GENERIC_ANNOTATIONS: Record = { + s07: { + version: "s07", + decisions: [ + { + id: "permission-before-execution", + title: "Permission Is a Gate Before Execution", + description: + "The model should not call tools directly as if intent were already trusted execution. Normalize the requested action first, then run it through a shared policy gate that returns allow, deny, or ask. This keeps safety rules consistent across every tool.", + alternatives: + "Tool-local safety checks are simpler at first, but they scatter policy into every handler and make behavior inconsistent. 
A single permission plane adds one more layer, but it is the only place where global execution policy can stay coherent.", + zh: { + title: "权限必须是执行前闸门", + description: + "模型不应该把 tool call 直接当成可信执行。先把请求规范化成统一意图,再送进共享权限层,返回 allow / deny / ask。这样所有工具都遵循同一套安全语义。", + alternatives: + "把安全判断散落到每个工具里实现起来更快,但策略会碎片化。独立权限层虽然多一层,却能让全局执行规则保持一致。", + }, + ja: { + title: "権限は実行前のゲートでなければならない", + description: + "model は tool call をそのまま信頼済みの実行として扱ってはいけません。まず要求を統一された intent に正規化し、共有 permission layer に通して allow / deny / ask を返します。これで全 tool が同じ安全意味論に従います。", + alternatives: + "安全判定を各 tool に分散すると最初は速く作れますが、policy がばらけます。独立した permission layer は一段増えますが、全体の実行方針を一貫して保てます。", + }, + }, + { + id: "structured-permission-result", + title: "Permission Results Must Be Structured and Visible", + description: + "A deny or ask outcome is not an implementation detail. The agent must append that result back into the loop so the model can re-plan from it. Otherwise the system silently blocks execution and the model loses the reason why.", + alternatives: + "Throwing an exception or returning a plain string is easy, but it hides the decision semantics. A structured permission result makes the next model step explainable and recoverable.", + zh: { + title: "权限结果必须结构化且可见", + description: + "deny 或 ask 不是内部细节。它们必须回写到主循环,让模型知道为什么没执行、接下来该怎么重规划。否则系统只是静默阻止执行,模型却看不到原因。", + alternatives: + "直接抛异常或回一段普通字符串最省事,但会把决策语义藏起来。结构化权限结果能让后续一步更可解释、更可恢复。", + }, + ja: { + title: "権限結果は構造化され、見える形で戻るべきだ", + description: + "deny や ask は内部実装の細部ではありません。main loop へ書き戻し、model が「なぜ実行されなかったか」「次にどう再計画するか」を見えるようにする必要があります。そうしないと system は黙って止め、model だけが理由を失います。", + alternatives: + "例外や単なる文字列で返す方が楽ですが、判断の意味が隠れます。構造化された permission result の方が、次の一手を説明可能で回復可能にします。", + }, + }, + ], + }, + s08: { + version: "s08", + decisions: [ + { + id: "hooks-observe-lifecycle", + title: "Hooks Extend Lifecycle, Not Core State Progression", + description: + "Hooks should attach around stable lifecycle boundaries such as pre_tool, post_tool, and on_error. 
The core loop still owns messages, tool execution, and stop conditions. That separation keeps the system teachable and prevents hidden control flow.", + alternatives: + "Letting hooks mutate core loop control directly feels flexible, but it makes execution order harder to reason about. Stable lifecycle boundaries keep extension power without dissolving the mainline.", + zh: { + title: "Hook 扩展生命周期,不接管主状态推进", + description: + "Hook 应该挂在 pre_tool、post_tool、on_error 这类稳定边界上。messages、工具执行和停止条件仍由主循环掌控。这样系统心智才清晰,不会出现隐藏控制流。", + alternatives: + "让 Hook 直接改主循环状态看似灵活,但执行顺序会越来越难推理。稳定生命周期边界能保留扩展力,又不破坏主线。", + }, + ja: { + title: "Hook はライフサイクルを拡張し、主状態の進行は奪わない", + description: + "Hook は pre_tool、post_tool、on_error のような安定境界に付けるべきです。messages、tool 実行、停止条件は main loop が持ち続けます。これで system の心智が崩れず、隠れた制御フローも生まれません。", + alternatives: + "Hook が main loop 制御を直接書き換えると柔軟そうに見えますが、実行順はどんどん読みにくくなります。安定した lifecycle 境界が、拡張力と主線の明瞭さを両立させます。", + }, + }, + { + id: "normalized-hook-event-shape", + title: "Hooks Need a Normalized Event Shape", + description: + "Each hook should receive the same event envelope: tool name, input, result, error, timing, and session identifiers. This lets audit, tracing, metrics, and policy hooks share one mental model instead of inventing custom payloads.", + alternatives: + "Passing ad hoc strings to each hook is fast, but every new hook then needs custom parsing and drifts from the rest of the system. 
A normalized event contract costs a little upfront and pays for itself quickly.", + zh: { + title: "Hook 必须共享统一事件结构", + description: + "每个 Hook 都应该收到同样的事件封包,例如 tool name、input、result、error、耗时、session id。这样审计、追踪、指标和策略 Hook 才共享同一心智模型。", + alternatives: + "给每个 Hook 传临时拼接的字符串最省事,但新 Hook 都得自己解析,系统会越来越散。统一事件结构前期多一点设计,后面会省很多心智成本。", + }, + ja: { + title: "Hook は正規化されたイベント形を共有する必要がある", + description: + "各 Hook は tool name、input、result、error、所要時間、session id のような同じ event envelope を受け取るべきです。これで audit、trace、metrics、policy hook が同じ心智モデルを共有できます。", + alternatives: + "その場しのぎの文字列を各 Hook に渡すのは楽ですが、新しい Hook のたびに独自解析が必要になり、system は散らかります。統一イベント契約は最初に少し設計が必要でも、すぐ元が取れます。", + }, + }, + ], + }, + s09: { + version: "s09", + decisions: [ + { + id: "memory-keeps-only-durable-facts", + title: "Memory Stores Durable Facts, Not Full History", + description: + "Long-term memory should hold cross-session facts such as user preferences, durable project constraints, and other information that cannot be cheaply re-derived. That keeps memory small, legible, and useful.", + alternatives: + "Saving every conversation turn feels safe, but it turns memory into an unbounded log and makes retrieval noisy. Selective durable memory is harder to teach at first, but it is the right system boundary.", + zh: { + title: "Memory 只保存长期有效事实", + description: + "长期记忆应该保存跨会话事实,例如用户偏好、稳定项目约束、无法轻易重新推导的信息。这样 memory 才会小而清晰,真正有用。", + alternatives: + "把整段历史全存进去看起来更稳,但长期会变成无边界日志,检索也会很脏。选择性保存长期事实更符合正确边界。", + }, + ja: { + title: "Memory は長く有効な事実だけを保存する", + description: + "long-term memory には、ユーザー設定、安定した project 制約、簡単には再導出できない情報のような、会話をまたいで有効な事実だけを置くべきです。そうすると memory は小さく、読みやすく、役に立つ状態を保てます。", + alternatives: + "会話履歴を全部保存すると安全そうですが、やがて無制限ログになり、検索も濁ります。長期事実だけを選んで残す方が正しい境界です。", + }, + }, + { + id: "memory-read-write-phases", + title: "Memory Needs Clear Read and Write Phases", + description: + "Load relevant memory before prompt assembly, then extract and persist new durable facts after the work turn completes. 
This keeps memory flow visible and prevents the loop from mutating long-term state at arbitrary moments.", + alternatives: + "Writing memory opportunistically at random tool boundaries is possible, but it makes memory updates hard to explain. Clear read and write phases keep the lifecycle teachable.", + zh: { + title: "Memory 需要明确读写阶段", + description: + "在 prompt 装配前读取相关 memory,在任务轮次结束后提炼并写回新的长期事实。这样读写边界清楚,也避免主循环在任意时刻偷偷修改长期状态。", + alternatives: + "在随机工具边界随手写 memory 虽然也能跑,但很难解释系统到底何时更新长期知识。清晰阶段更适合教学和实现。", + }, + ja: { + title: "Memory には明確な読取段階と書込段階が必要だ", + description: + "prompt 組み立て前に関連 memory を読み込み、作業ターンの後で新しい durable fact を抽出して書き戻します。こうすると読書き境界が見え、main loop が任意の瞬間に長期状態をこっそり変えることも防げます。", + alternatives: + "適当な tool 境界で memory を書くこともできますが、いつ長期知識が更新されたのか説明しにくくなります。明確な read/write phase の方が、学習にも実装にも向いています。", + }, + }, + ], + }, + s10: { + version: "s10", + decisions: [ + { + id: "prompt-is-a-pipeline", + title: "The System Prompt Should Be Built as a Pipeline", + description: + "Role policy, workspace state, tool catalog, memory, and task focus should be assembled as explicit prompt sections in a visible order. This makes model input auditable and keeps the control plane understandable.", + alternatives: + "A single giant string looks simpler in code, but no one can explain which part came from where or why its order matters. 
A pipeline adds structure where the system actually needs it.", + zh: { + title: "系统提示词应被实现成装配流水线", + description: + "角色策略、工作区状态、工具目录、memory、任务焦点都应该作为显式片段按顺序装配。这样模型输入才可审计,控制平面也才讲得清楚。", + alternatives: + "一整段大字符串在代码里看起来更省事,但没人能说清每部分从哪来、顺序为什么这样。Prompt pipeline 才符合真实系统结构。", + }, + ja: { + title: "System prompt は組み立てパイプラインとして作るべきだ", + description: + "role policy、workspace state、tool catalog、memory、task focus は、見える順序を持つ prompt section として明示的に組み立てるべきです。これで model input が監査可能になり、control plane も説明しやすくなります。", + alternatives: + "巨大な 1 本の文字列にすると実装は簡単に見えますが、どこから来た指示なのか、なぜその順番なのかを誰も説明できません。pipeline の方が実際の構造に合っています。", + }, + }, + { + id: "stable-policy-separated-from-runtime-state", + title: "Stable Policy Must Stay Separate from Runtime State", + description: + "Instruction hierarchy becomes clearer when stable rules live separately from volatile runtime data. That separation reduces accidental prompt drift and makes each prompt section easier to test.", + alternatives: + "Mixing durable policy with per-turn runtime details works for tiny demos, but it breaks down quickly once memory, tasks, and recovery hints all need to join the input.", + zh: { + title: "稳定策略与运行时状态必须分开", + description: + "当稳定规则和每轮运行时数据分离后,指令层级会清晰很多,也更不容易出现提示词结构漂移。每一段输入都更容易单独测试。", + alternatives: + "小 demo 里把所有东西揉在一起还能跑,但一旦 memory、任务状态、恢复提示都要加入输入,混写方式很快就失控。", + }, + ja: { + title: "安定した policy と runtime state は分けて保つべきだ", + description: + "変わりにくい規則と毎ターン変わる runtime data を分けると、指示の階層がずっと明確になります。prompt drift も起きにくくなり、各 section を個別にテストしやすくなります。", + alternatives: + "小さな demo では全部混ぜても動きますが、memory、task state、recovery hint まで入れ始めるとすぐ破綻します。分離が必要です。", + }, + }, + ], + }, + s11: { + version: "s11", + decisions: [ + { + id: "explicit-continuation-reasons", + title: "Recovery Needs Explicit Continuation Reasons", + description: + "After a failure, the agent should record whether it is retrying, degrading, requesting confirmation, or stopping. 
That reason becomes part of the visible state and lets the next model step act intentionally.", + alternatives: + "A blind retry loop is easy to implement, but neither the user nor the model can explain what branch the system is on. Explicit continuation reasons make recovery legible.", + zh: { + title: "恢复分支必须显式写出继续原因", + description: + "失败后,系统应该明确记录当前是在 retry、fallback、请求确认还是停止。这个原因本身也是可见状态,让下一步模型推理更有依据。", + alternatives: + "盲重试最容易写,但用户和模型都不知道系统现在处在哪条恢复分支。显式 continuation reason 才能让恢复过程可解释。", + }, + ja: { + title: "回復分岐は継続理由を明示して残すべきだ", + description: + "失敗後、system は retry・fallback・確認要求・停止のどれにいるのかを明示して記録すべきです。この理由自体が visible state になり、次の model step の判断材料になります。", + alternatives: + "盲目的な retry loop は実装しやすいですが、user も model も今どの回復分岐にいるのか説明できません。explicit continuation reason が回復を読めるものにします。", + }, + }, + { + id: "bounded-retry-branches", + title: "Retry Paths Must Be Bounded", + description: + "Recovery branches need caps, stop conditions, and alternative strategies. Otherwise the system only hides failure behind repetition instead of turning it into progress.", + alternatives: + "Infinite retries can appear robust in early demos, but they produce loops with no insight. Bounded branches force the design to define when the system should pivot or stop.", + zh: { + title: "重试分支必须有上限和转向条件", + description: + "恢复分支必须有次数上限、停止条件和降级路径。否则系统只是把失败藏进重复执行,并没有真正把失败转成进展。", + alternatives: + "无限重试在早期 demo 里看起来像“更稳”,但其实只是在制造无洞察的循环。明确边界能逼迫系统定义何时转向或停止。", + }, + ja: { + title: "Retry 分岐には上限と転向条件が必要だ", + description: + "回復分岐には試行回数の上限、停止条件、別戦略への切替経路が必要です。そうしないと system は失敗を繰り返しの中へ隠すだけで、進展に変えられません。", + alternatives: + "無限 retry は初期 demo では頑丈に見えますが、実際は洞察のないループを作るだけです。境界を定めることで、いつ pivot し、いつ止まるかを設計できます。", + }, + }, + ], + }, + s12: { + version: "s12", + decisions: [ + { + id: "task-records-are-durable-work-nodes", + title: "Task Records Should Describe Durable Work Nodes", + description: + "A task record should represent work that can survive across turns, not a temporary note for one model call. 
That means keeping explicit identifiers, states, and dependency edges on disk or in another durable store.", + alternatives: + "Session-local todo text is cheaper to explain at first, but it cannot coordinate larger work once the loop moves on. Durable records add structure where the runtime actually needs it.", + zh: { + title: "任务记录必须是可持久的工作节点", + description: + "Task record 应该表示一项能跨轮次继续推进的工作,而不是某一轮模型调用里的临时备注。这要求它拥有明确 id、status 和依赖边,并被持久化保存。", + alternatives: + "会话级 todo 文本一开始更容易讲,但主循环一旦继续往前,它就无法协调更大的工作。Durable record 才是正确的系统边界。", + }, + ja: { + title: "Task record は持続する作業ノードを表すべきだ", + description: + "task record は、複数ターンにまたがって進む work を表すべきで、1 回の model call のメモではありません。そのために明示的な id、status、dependency edge を持ち、永続化される必要があります。", + alternatives: + "session 内 todo text は最初は説明しやすいですが、loop が先へ進むと大きな仕事を調整できません。durable record の方が正しい境界です。", + }, + }, + { + id: "unlock-logic-belongs-to-the-board", + title: "Dependency Unlock Logic Belongs to the Task Board", + description: + "Completing one task should update the board, check dependency satisfaction, and unlock the next nodes. That logic belongs to the task system, not to whatever worker happened to finish the task.", + alternatives: + "Letting each worker manually decide what becomes available next is flexible, but it scatters dependency semantics across the codebase. 
Central board logic keeps the graph teachable.", + zh: { + title: "依赖解锁逻辑必须属于任务板", + description: + "完成一个任务以后,应该由任务板统一更新状态、检查依赖是否满足,并解锁后续节点。这段逻辑属于 task system,而不该散落到各个执行者手里。", + alternatives: + "让每个执行者自己判断后续任务是否解锁看似灵活,但依赖语义会散落到整个代码库。集中在任务板里才讲得清楚。", + }, + ja: { + title: "依存の解放ロジックは task board が持つべきだ", + description: + "1 つの task が完了したら、board が状態更新、依存充足の確認、次ノードの解放をまとめて行うべきです。このロジックは task system に属し、たまたま作業した worker に散らしてはいけません。", + alternatives: + "各 worker が次に何を解放するかを個別判断すると柔軟そうですが、dependency semantics がコード全体へ散ります。board に集中させる方が教えやすく、壊れにくいです。", + }, + }, + ], + }, + s13: { + version: "s13", + decisions: [ + { + id: "runtime-records-separate-goal-from-execution", + title: "Runtime Records Should Separate Goal from One Execution Attempt", + description: + "Background execution needs a record that describes the current run itself: status, timestamps, preview, and output location. That keeps the durable task goal separate from one live execution slot.", + alternatives: + "Reusing the same task record for both goal state and execution state saves one structure, but it blurs what is planned versus what is actively running right now.", + zh: { + title: "运行记录必须把目标和单次执行分开", + description: + "后台执行需要一份专门描述这次运行本身的记录,例如 status、时间戳、preview、output 位置。这样 durable task goal 和 live execution slot 才不会混在一起。", + alternatives: + "把 goal state 和 execution state 强行塞进同一条 task record 虽然省结构,但会模糊“计划中的工作”和“当前正在跑的这一趟执行”之间的边界。", + }, + ja: { + title: "Runtime record は goal と単発実行を分けて持つべきだ", + description: + "background execution には、その実行自身を表す record が必要です。status、timestamp、preview、output location を持たせることで、durable task goal と live execution slot が混ざらなくなります。", + alternatives: + "goal state と execution state を 1 つの task record へ押し込むと構造は減りますが、「計画された仕事」と「今走っている 1 回の実行」の境界が曖昧になります。", + }, + }, + { + id: "notifications-carry-preview-not-full-output", + title: "Notifications Should Carry a Preview, Not the Full Log", + description: + "Large command output should be written to durable storage, while the 
notification only carries a compact preview. That preserves the return path into the main loop without flooding the active context window.", + alternatives: + "Injecting the full background log back into prompt space looks convenient, but it burns context and hides the difference between alerting the loop and storing the artifact.", + zh: { + title: "通知只带摘要,不直接带全文日志", + description: + "大输出应该写入持久存储,notification 只带一段 compact preview。这样既保住回到主循环的 return path,又不会把活跃上下文塞满。", + alternatives: + "把整份后台日志直接塞回 prompt 看起来省事,但会快速吃掉上下文,还会模糊“提醒主循环”和“保存原始产物”这两层职责。", + }, + ja: { + title: "通知は全文ログではなく preview だけを運ぶべきだ", + description: + "大きな出力は durable storage に書き、notification には compact preview だけを載せるべきです。これで main loop へ戻る経路を保ちつつ、活性 context を膨らませずに済みます。", + alternatives: + "background log 全文を prompt へ戻すのは手軽ですが、context を急速に消費し、「loop への通知」と「artifact の保存」という 2 つの責務も混ざります。", + }, + }, + ], + }, + s14: { + version: "s14", + decisions: [ + { + id: "cron-only-triggers-runtime-work", + title: "Cron Should Trigger Runtime Work, Not Own Execution", + description: + "The scheduler's job is to decide when a rule matches. Once it does, it should create runtime work and hand execution off to the runtime layer. 
This preserves a clean boundary between time and work.", + alternatives: + "Letting cron directly execute task logic is tempting for small systems, but it mixes rule-matching with execution state and makes both harder to teach and debug.", + zh: { + title: "Cron 只负责触发,不直接承担执行", + description: + "调度器的职责是判断时间规则何时命中。命中后应创建 runtime work,再把执行交给运行时层。这样“时间”和“工作”两类职责边界才干净。", + alternatives: + "小系统里让 cron 直接执行业务逻辑很诱人,但会把规则匹配和执行状态搅在一起,教学和调试都会变难。", + }, + ja: { + title: "Cron は発火だけを担当し、実行を抱え込まない", + description: + "scheduler の役割は時間規則がいつ一致するかを判断することです。一致したら runtime work を生成し、実行は runtime layer へ渡すべきです。これで「時間」と「仕事」の境界がきれいに保てます。", + alternatives: + "小さな system では cron がそのまま仕事を実行したくなりますが、rule matching と execution state が混ざり、学習にもデバッグにも不利です。", + }, + }, + { + id: "schedule-records-separate-from-runtime-records", + title: "Schedule Records Must Stay Separate from Runtime Records", + description: + "A schedule says what should trigger and when. A runtime record says what is currently running, queued, retried, or completed. 
Keeping them separate makes both time semantics and execution semantics clearer.", + alternatives: + "A single merged record reduces file count, but it blurs whether the system is reasoning about recurring policy or one concrete execution instance.", + zh: { + title: "调度记录与运行时记录必须分离", + description: + "schedule 记录的是“何时触发什么”,runtime record 记录的是“当前运行、排队、重试或完成到哪一步”。分开后,时间语义和执行语义都更清楚。", + alternatives: + "把两者合成一条记录看似省事,但会混淆系统此刻究竟在描述长期规则,还是某次具体执行实例。", + }, + ja: { + title: "Schedule record と runtime record は分離すべきだ", + description: + "schedule は「いつ何を起動するか」を記録し、runtime record は「今どの実行が走り、待ち、再試行し、完了したか」を記録します。分けることで時間意味論と実行意味論の両方が明確になります。", + alternatives: + "両者を 1 レコードにまとめると楽そうですが、system が長期ルールを語っているのか、単発の実行インスタンスを語っているのかが分からなくなります。", + }, + }, + ], + }, + s15: { + version: "s15", + decisions: [ + { + id: "teammates-need-persistent-identity", + title: "Teammates Need Persistent Identity, Not One-Shot Delegation", + description: + "A teammate should keep a name, role, inbox, and status across multiple rounds of work. 
That persistence is what lets the platform assign responsibility instead of recreating a fresh subagent every time.", + alternatives: + "Disposable delegated workers are easier to implement, but they cannot carry stable responsibility or mailbox-based coordination over time.", + zh: { + title: "队友必须拥有长期身份,而不是一次性委派", + description: + "Teammate 应该在多轮工作之间保留名字、角色、inbox 和状态。只有这样,平台才能分配长期责任,而不是每次都重新创建一个临时 subagent。", + alternatives: + "一次性委派更容易实现,但它承载不了长期职责,也无法自然地进入 mailbox-based 协作。", + }, + ja: { + title: "チームメイトには使い捨てではない継続的な身元が必要だ", + description: + "teammate は複数ラウンドにわたり、名前、役割、inbox、状態を保つべきです。そうして初めて platform は長期責任を割り当てられ、毎回新しい subagent を作り直さずに済みます。", + alternatives: + "使い捨ての委譲 worker は作りやすいですが、安定した責務も mailbox ベースの協調も持ち運べません。", + }, + }, + { + id: "mailboxes-keep-collaboration-bounded", + title: "Independent Mailboxes Keep Collaboration Legible", + description: + "Each teammate should coordinate through an inbox boundary rather than sharing one giant message history. That keeps ownership, message flow, and wake-up conditions easier to explain.", + alternatives: + "A shared message buffer looks simpler, but it erases agent boundaries and makes it harder to see who is responsible for what.", + zh: { + title: "独立邮箱边界让协作保持清晰", + description: + "每个队友都应该通过 inbox 边界协作,而不是共用一段巨大的消息历史。这样 ownership、消息流和唤醒条件才更容易讲清楚。", + alternatives: + "共享消息缓冲区看起来更简单,但会抹平 agent 边界,也更难解释到底谁在负责什么。", + }, + ja: { + title: "独立 mailbox があると協調の境界が読みやすくなる", + description: + "各 teammate は巨大な共有 message history を使うのではなく、inbox 境界を通して協調すべきです。これで ownership、message flow、wake-up condition を説明しやすくなります。", + alternatives: + "共有 message buffer は単純そうですが、agent 境界を消してしまい、誰が何に責任を持つのかが見えにくくなります。", + }, + }, + ], + }, + s16: { + version: "s16", + decisions: [ + { + id: "protocols-need-request-correlation", + title: "Protocol Messages Need Request Correlation", + description: + "Structured workflows such as approvals or shutdowns need request_id correlation so every reply, timeout, or rejection can resolve against the 
right request.", + alternatives: + "Free-form reply text may work in a tiny demo, but it breaks as soon as several protocol flows exist at once.", + zh: { + title: "协议消息必须带请求关联 id", + description: + "审批、关机这类结构化工作流必须带 request_id,这样每条回复、超时或拒绝才能准确对应到正确请求。", + alternatives: + "自由文本回复在极小 demo 里还能凑合,但一旦同时存在多条协议流程,就很快会对不上号。", + }, + ja: { + title: "プロトコルメッセージには request 相関 id が必要だ", + description: + "approval や shutdown のような構造化 workflow では request_id が必要です。そうして初めて各 reply、timeout、reject を正しい request に結び付けられます。", + alternatives: + "自由文の返答は極小 demo では動いても、複数の protocol flow が同時に走るとすぐ対応関係が崩れます。", + }, + }, + { + id: "request-state-should-be-durable", + title: "Request State Should Be Durable and Inspectable", + description: + "Pending, approved, rejected, or expired states belong in a durable request record, not only in memory. That makes protocol state recoverable, inspectable, and teachable.", + alternatives: + "In-memory trackers are quick to write, but they disappear too easily and hide the real object the system is coordinating around.", + zh: { + title: "请求状态必须可持久、可检查", + description: + "pending、approved、rejected、expired 这些状态应该写进 durable request record,而不是只存在内存里。这样协议状态才能恢复、检查,也更适合教学。", + alternatives: + "内存追踪表写起来很快,但太容易消失,也会把系统真正围绕的对象藏起来。", + }, + ja: { + title: "Request state は永続化され、検査できるべきだ", + description: + "pending、approved、rejected、expired のような状態は durable request record に書くべきで、memory の中だけに置いてはいけません。そうすることで protocol state が回復可能・可視化可能になります。", + alternatives: + "in-memory tracker はすぐ書けますが、消えやすく、system が本当に中心にしている object も隠してしまいます。", + }, + }, + ], + }, + s17: { + version: "s17", + decisions: [ + { + id: "autonomy-starts-with-bounded-claim-rules", + title: "Autonomy Starts with Bounded Claim Rules", + description: + "Workers should only self-claim work when clear policies say they may do so. 
That prevents autonomy from turning into race conditions or duplicate execution.", + alternatives: + "Letting every idle worker grab anything looks energetic, but it makes the platform unpredictable. Claim rules keep autonomy controlled.", + zh: { + title: "自治从有边界的认领规则开始", + description: + "只有在明确策略允许的情况下,worker 才应该 self-claim 工作。这样才能避免自治变成撞车或重复执行。", + alternatives: + "让所有空闲 worker 见活就抢看起来很积极,但平台会变得不可预测。Claim rule 才能让自治保持可控。", + }, + ja: { + title: "自律は境界のある claim rule から始まる", + description: + "worker が self-claim してよいのは、明確な policy が許すときだけにすべきです。そうしないと autonomy は race condition や重複実行へ変わります。", + alternatives: + "空いている worker が何でも取りに行く設計は勢いがあるように見えますが、platform は予測不能になります。claim rule があって初めて自律を制御できます。", + }, + }, + { + id: "resume-must-come-from-visible-state", + title: "Resumption Must Come from Visible State", + description: + "A worker should resume from task state, protocol state, mailbox contents, and role state. That keeps autonomy explainable instead of making it look like spontaneous intuition.", + alternatives: + "Implicit resume logic hides too much. Visible state may feel verbose, but it is what makes autonomous behavior debuggable.", + zh: { + title: "恢复执行必须建立在可见状态上", + description: + "Worker 应该根据 task state、protocol state、mailbox 内容和角色状态恢复执行。这样自治才可解释,而不是看起来像神秘直觉。", + alternatives: + "隐式恢复逻辑会把太多关键条件藏起来。可见状态虽然更啰嗦,但能让自治行为真正可调试。", + }, + ja: { + title: "再開は見える state から始まるべきだ", + description: + "worker は task state、protocol state、mailbox 内容、role state をもとに実行を再開すべきです。そうすることで autonomy は説明可能になり、謎の直感のようには見えません。", + alternatives: + "暗黙の resume ロジックは重要条件を隠しすぎます。visible state は少し冗長でも、自律挙動を本当にデバッグ可能にします。", + }, + }, + ], + }, + s18: { + version: "s18", + decisions: [ + { + id: "worktree-is-a-lane-not-the-task", + title: "A Worktree Is an Execution Lane, Not the Task Itself", + description: + "Tasks describe goals and dependency state. Worktrees describe isolated directories where execution happens. 
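The task/worktree split this decision argues for can be sketched as two separate records linked by id. Field and type names here are assumptions for illustration, not the repository's actual schema.

```typescript
// Illustrative sketch: a task records intent, a worktree records where
// execution happens; linking by id keeps the two objects separate.
interface TaskGoal {
  id: string;
  goal: string;
  dependsOn: string[];
}

interface WorktreeLane {
  taskId: string;    // link back to the goal, not a copy of it
  directory: string; // isolated execution directory
  keep: boolean;     // closeout decision: keep for follow-up or reclaim
}

function laneFor(taskGoal: TaskGoal, directory: string): WorktreeLane {
  // A new lane starts in the "reclaim on closeout" state by default.
  return { taskId: taskGoal.id, directory, keep: false };
}

const goal: TaskGoal = { id: "t42", goal: "refactor parser", dependsOn: [] };
const lane = laneFor(goal, "/tmp/worktrees/t42");
```

Because the lane only holds a `taskId` rather than embedding the goal, the runtime model never has to guess whether it is talking about work intent or execution environment.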
Keeping those two objects separate prevents the runtime model from blurring.", + alternatives: + "Collapsing task and worktree into one object removes one layer, but it becomes harder to explain whether the system is talking about work intent or execution environment.", + zh: { + title: "Worktree 是执行车道,不是任务本身", + description: + "Task 描述目标和依赖状态,worktree 描述隔离执行发生在哪个目录里。把两者分开,运行时模型才不会糊成一团。", + alternatives: + "把 task 和 worktree 硬合成一个对象虽然少一层,但会让系统很难解释当前说的是工作意图还是执行环境。", + }, + ja: { + title: "Worktree は task そのものではなく execution lane だ", + description: + "task は goal と dependency state を表し、worktree は隔離された実行ディレクトリを表します。この 2 つを分けることで runtime model が曖昧になりません。", + alternatives: + "task と worktree を 1 つの object に潰すと層は減りますが、system が work intent を語っているのか execution environment を語っているのか分かりにくくなります。", + }, + }, + { + id: "closeout-needs-explicit-keep-remove-semantics", + title: "Closeout Needs Explicit Keep / Remove Semantics", + description: + "After isolated work finishes, the system should explicitly decide whether that lane is kept for follow-up or reclaimed. That makes lifecycle state observable instead of accidental.", + alternatives: + "Implicit cleanup feels automatic, but it hides important execution-lane decisions. 
Explicit closeout semantics teach the lifecycle much more clearly.", + zh: { + title: "收尾阶段必须显式决定保留还是回收", + description: + "隔离工作结束后,系统应该显式决定这个 lane 是继续保留给后续工作,还是立即回收。这样生命周期状态才可见,而不是碰运气。", + alternatives: + "隐式清理看起来很自动,但会把很多关键执行车道决策藏起来。显式 closeout 语义更适合教学,也更利于调试。", + }, + ja: { + title: "Closeout では保持か回収かを明示的に決めるべきだ", + description: + "隔離作業が終わった後、その lane を次の作業のために保持するのか、すぐ回収するのかを system が明示的に決めるべきです。これで lifecycle state が運任せではなく見える状態になります。", + alternatives: + "暗黙 cleanup は自動に見えますが、重要な execution-lane 判断を隠してしまいます。explicit closeout semantics の方が、学習にもデバッグにも向いています。", + }, + }, + ], + }, + s19: { + version: "s19", + decisions: [ + { + id: "external-capabilities-share-one-routing-model", + title: "External Capabilities Should Share the Same Routing Model as Native Tools", + description: + "Plugins and MCP servers should enter through the same capability-routing surface as native tools. That means discovery, routing, permission, execution, and result normalization all stay conceptually aligned.", + alternatives: + "Building a parallel external-capability subsystem may feel cleaner at first, but it doubles the mental model. One routing model keeps the platform understandable.", + zh: { + title: "外部能力必须共享同一套路由模型", + description: + "Plugin 和 MCP server 都应该从与本地工具相同的 capability routing 入口进入系统。这样发现、路由、权限、执行、结果标准化才保持同一心智。", + alternatives: + "单独给外部能力再造一套系统看似整洁,实际会把平台心智翻倍。共享一套 routing model 才更可教、也更可维护。", + }, + ja: { + title: "外部 capability は native tool と同じ routing model を共有すべきだ", + description: + "plugin と MCP server は、native tool と同じ capability routing surface から system へ入るべきです。そうすることで discovery、routing、permission、execution、result normalization が 1 つの心智に揃います。", + alternatives: + "外部 capability 用に並列 subsystem を作ると最初は整って見えますが、学習モデルが二重になります。1 つの routing model の方が platform を理解しやすく保てます。", + }, + }, + { + id: "scope-external-capabilities", + title: "External Capabilities Need Scope and Policy Boundaries", + description: + "Remote capability does not mean unrestricted capability. 
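The scoped-capability rule stated here can be sketched as a grant lookup. `CapabilityGrant` and `mayCall` are hypothetical names introduced for illustration; the scope shape is assumed from the workspace/session wording above.

```typescript
// Illustrative sketch of scoped capability access: an external server's
// tools are only callable from the workspace (or session) they are
// explicitly granted to.
interface CapabilityGrant {
  server: string;
  scope: { kind: "workspace" | "session"; id: string };
}

function mayCall(
  grants: CapabilityGrant[],
  server: string,
  workspace: string
): boolean {
  return grants.some(
    (g) =>
      g.server === server &&
      g.scope.kind === "workspace" &&
      g.scope.id === workspace
  );
}

const grants: CapabilityGrant[] = [
  { server: "docs-mcp", scope: { kind: "workspace", id: "web" } },
];
```

With explicit grants, "who can call what and why" reduces to a lookup over durable records instead of an implicit global capability surface.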
Servers, plugins, and credentials need explicit workspace or session scopes so the platform can explain who can call what and why.", + alternatives: + "Global capability exposure is easier to wire up, but it weakens permission reasoning. Scoped capability access adds a small amount of configuration and a large amount of clarity.", + zh: { + title: "外部能力必须带作用域和策略边界", + description: + "远程能力不代表无限能力。server、plugin、credential 都要有 workspace 或 session 级作用域,平台才解释得清楚“谁能调用什么,为什么能调”。", + alternatives: + "全局暴露所有外部能力接起来最简单,但会削弱权限推理。增加一点 scope 配置,却能换来大量清晰度。", + }, + ja: { + title: "外部 capability には scope と policy の境界が必要だ", + description: + "remote capability だからといって無制限 capability ではありません。server、plugin、credential には workspace あるいは session scope が必要で、誰が何を呼べるのか、なぜ呼べるのかを platform が説明できるようにする必要があります。", + alternatives: + "すべての外部 capability をグローバル公開するのが最も配線は簡単ですが、permission reasoning が弱くなります。少しの scope 設定で、大きな明瞭さが得られます。", + }, + }, + ], + }, +}; + interface DesignDecisionsProps { version: string; } @@ -63,10 +713,13 @@ function DecisionCard({ const t = useTranslations("version"); const localized = - locale !== "en" ? (decision as unknown as Record)[locale] as { title?: string; description?: string } | undefined : undefined; + locale !== "en" + ? ((decision as unknown as Record)[locale] as DecisionLocaleCopy | undefined) + : undefined; const title = localized?.title || decision.title; const description = localized?.description || decision.description; + const alternatives = localized?.alternatives || decision.alternatives; return (
@@ -100,13 +753,13 @@ function DecisionCard({ {description}

- {decision.alternatives && ( + {alternatives && (

{t("alternatives")}

- {decision.alternatives} + {alternatives}

)} @@ -122,7 +775,10 @@ export function DesignDecisions({ version }: DesignDecisionsProps) { const t = useTranslations("version"); const locale = useLocale(); - const annotations = ANNOTATIONS[version]; + const annotations = isGenericAnnotationVersion(version) + ? GENERIC_ANNOTATIONS[version] + : ANNOTATIONS[resolveLegacySessionAssetVersion(version)]; + if (!annotations || annotations.decisions.length === 0) { return null; } diff --git a/web/src/components/architecture/execution-flow.tsx b/web/src/components/architecture/execution-flow.tsx index efeb1b77f..0e7dca873 100644 --- a/web/src/components/architecture/execution-flow.tsx +++ b/web/src/components/architecture/execution-flow.tsx @@ -1,15 +1,17 @@ "use client"; -import { useEffect, useState } from "react"; import { motion } from "framer-motion"; import { getFlowForVersion } from "@/data/execution-flows"; +import { getChapterGuide } from "@/lib/chapter-guides"; +import { useLocale } from "@/lib/i18n"; +import { pickDiagramText, translateFlowText } from "@/lib/diagram-localization"; import type { FlowNode, FlowEdge } from "@/types/agent-data"; const NODE_WIDTH = 140; const NODE_HEIGHT = 40; const DIAMOND_SIZE = 50; -const LAYER_COLORS: Record = { +const NODE_COLORS: Record = { start: "#3B82F6", process: "#10B981", decision: "#F59E0B", @@ -17,6 +19,85 @@ const LAYER_COLORS: Record = { end: "#EF4444", }; +const NODE_GUIDE = { + start: { + title: { zh: "入口", en: "Entry", ja: "入口" }, + note: { + zh: "这轮从哪里开始进入系统。", + en: "Where the current turn enters the system.", + ja: "このターンがどこから入るかを示します。", + }, + }, + process: { + title: { zh: "主处理", en: "Process", ja: "主処理" }, + note: { + zh: "系统内部稳定推进的一步。", + en: "A stable internal processing step.", + ja: "システム内部で安定して進む一段です。", + }, + }, + decision: { + title: { zh: "分叉判断", en: "Decision", ja: "分岐判断" }, + note: { + zh: "系统在这里决定往哪条分支走。", + en: "Where the system chooses a branch.", + ja: "ここでどの分岐へ進むかを決めます。", + }, + }, + subprocess: { + title: { zh: "子流程 / 外部车道", en: "Subprocess 
/ Lane", ja: "子過程 / 外部レーン" }, + note: { + zh: "常见于外部执行、侧车流程或隔离车道。", + en: "Often used for external execution, sidecars, or isolated lanes.", + ja: "外部実行、サイドカー、隔離レーンなどでよく現れます。", + }, + }, + end: { + title: { zh: "回流 / 结束", en: "Write-back / End", ja: "回流 / 終了" }, + note: { + zh: "这轮在这里结束或回到主循环。", + en: "Where the turn ends or writes back into the loop.", + ja: "このターンが終わるか、主ループへ戻る場所です。", + }, + }, +} as const; + +const UI_TEXT = { + readLabel: { zh: "读图方式", en: "How to Read", ja: "読み方" }, + readTitle: { + zh: "先看主线回流,再看左右分支", + en: "Read the mainline first, then inspect the side branches", + ja: "まず主線の回流を見て、その後で左右の分岐を見る", + }, + readNote: { + zh: "从上往下看时间顺序,中间通常是主线,左右是分支、隔离车道或恢复路径。真正重要的不是节点有多少,而是这一章新增的分叉与回流在哪里。", + en: "Read top to bottom for time order. The center usually carries the mainline, while the sides hold branches, isolated lanes, or recovery paths. The key question is not how many nodes exist, but where this chapter introduces a new split and write-back.", + ja: "上から下へ時間順に読みます。中央は主線、左右は分岐・隔離レーン・回復経路です。大事なのはノード数ではなく、この章で新しく増えた分岐と回流がどこかです。", + }, + focusLabel: { zh: "本章先盯住", en: "Focus First", ja: "まず注目" }, + confusionLabel: { zh: "最容易混", en: "Easy to Confuse", ja: "混同しやすい点" }, + goalLabel: { zh: "学完要会", en: "Build Goal", ja: "学習ゴール" }, + legendLabel: { zh: "节点图例", en: "Node Legend", ja: "ノード凡例" }, + laneTitle: { zh: "版面分区", en: "Visual Lanes", ja: "レーン区分" }, + mainline: { zh: "主线", en: "Mainline", ja: "主線" }, + mainlineNote: { + zh: "系统当前回合反复回到的那条路径。", + en: "The path the system keeps returning to during the turn.", + ja: "システムがこのターン中に繰り返し戻る経路です。", + }, + sideLane: { zh: "分支 / 侧车", en: "Branch / Side Lane", ja: "分岐 / サイドレーン" }, + sideLaneNote: { + zh: "权限分支、自治扫描、后台槽位、worktree 车道常在这里展开。", + en: "Permission branches, autonomy scans, background slots, and worktree lanes often expand here.", + ja: "権限分岐、自治スキャン、バックグラウンドスロット、worktree レーンはここで展開されます。", + }, + bottomNote: { + zh: "虚线边框通常表示子流程或外部车道;箭头标签说明当前分叉为什么发生。", + en: "Dashed borders usually indicate a 
subprocess or external lane; arrow labels explain why a branch was taken.", + ja: "破線の枠は子過程や外部レーンを示すことが多く、矢印ラベルはなぜ分岐したかを示します。", + }, +} as const; + function getNodeCenter(node: FlowNode): { cx: number; cy: number } { return { cx: node.x, cy: node.y }; } @@ -41,7 +122,7 @@ function getEdgePath(from: FlowNode, to: FlowNode): string { } function NodeShape({ node }: { node: FlowNode }) { - const color = LAYER_COLORS[node.type]; + const color = NODE_COLORS[node.type]; const lines = node.label.split("\n"); if (node.type === "decision") { @@ -137,10 +218,12 @@ function EdgePath({ edge, nodes, index, + locale, }: { edge: FlowEdge; nodes: FlowNode[]; index: number; + locale: string; }) { const from = nodes.find((n) => n.id === edge.from); const to = nodes.find((n) => n.id === edge.to); @@ -173,7 +256,7 @@ function EdgePath({ animate={{ opacity: 1 }} transition={{ delay: index * 0.12 + 0.3 }} > - {edge.label} + {translateFlowText(locale, edge.label)} )} @@ -185,54 +268,180 @@ interface ExecutionFlowProps { } export function ExecutionFlow({ version }: ExecutionFlowProps) { - const [flow, setFlow] = useState>(null); - - useEffect(() => { - setFlow(getFlowForVersion(version)); - }, [version]); + const locale = useLocale(); + const flow = getFlowForVersion(version); + const guide = getChapterGuide(version, locale) ?? getChapterGuide(version, "en"); if (!flow) return null; const maxY = Math.max(...flow.nodes.map((n) => n.y)) + 50; return ( -
- - - - - - - - {flow.edges.map((edge, i) => ( - - ))} +
+
+
+
+

+ {pickDiagramText(locale, UI_TEXT.readLabel)} +

+

+ {pickDiagramText(locale, UI_TEXT.readTitle)} +

+

+ {pickDiagramText(locale, UI_TEXT.readNote)} +

+
- {flow.nodes.map((node, i) => ( - - - - ))} - + {guide && ( +
+
+

+ {pickDiagramText(locale, UI_TEXT.focusLabel)} +

+

+ {guide.focus} +

+
+
+

+ {pickDiagramText(locale, UI_TEXT.confusionLabel)} +

+

+ {guide.confusion} +

+
+
+

+ {pickDiagramText(locale, UI_TEXT.goalLabel)} +

+

+ {guide.goal} +

+
+
+ )} +
+ +
+

+ {pickDiagramText(locale, UI_TEXT.legendLabel)} +

+
+ {( + Object.keys(NODE_GUIDE) as Array<keyof typeof NODE_GUIDE> + ).map((nodeType) => (
+
+ + + {pickDiagramText(locale, NODE_GUIDE[nodeType].title)} + +
+

+ {pickDiagramText(locale, NODE_GUIDE[nodeType].note)} +

+
+ ))} +
+
+
+ +
+
+
+
+

+ {pickDiagramText(locale, UI_TEXT.sideLane)} +

+

+ {pickDiagramText(locale, UI_TEXT.sideLaneNote)} +

+
+
+

+ {pickDiagramText(locale, UI_TEXT.mainline)} +

+

+ {pickDiagramText(locale, UI_TEXT.mainlineNote)} +

+
+
+

+ {pickDiagramText(locale, UI_TEXT.sideLane)} +

+

+ {pickDiagramText(locale, UI_TEXT.sideLaneNote)} +

+
+
+ +
+
+
+
+
+
+ + + + + + + + + {flow.edges.map((edge, i) => ( + + ))} + + {flow.nodes.map((node, i) => ( + + + + ))} + +
+ +

+ {pickDiagramText(locale, UI_TEXT.bottomNote)} +

+
+
); } diff --git a/web/src/components/architecture/mechanism-lenses.tsx b/web/src/components/architecture/mechanism-lenses.tsx new file mode 100644 index 000000000..20e70fc0a --- /dev/null +++ b/web/src/components/architecture/mechanism-lenses.tsx @@ -0,0 +1,1288 @@ +import type { VersionId } from "@/lib/constants"; + +type LocaleText = { + zh: string; + en: string; + ja: string; +}; + +interface VersionMechanismLensesProps { + version: string; + locale: string; +} + +const SECTION_TEXT = { + label: { + zh: "关键机制镜头", + en: "Mechanism Lens", + ja: "重要メカニズムの見取り図", + }, + title: { + zh: "把本章最容易打结的一层单独拆开", + en: "Pull out the one mechanism most likely to tangle in this chapter", + ja: "この章で最も混線しやすい層を単独でほどく", + }, + body: { + zh: "这不是重复正文,而是把真正关键的运行规则、状态边界和回流方向压成一张能反复回看的教学图。先看这里,再回正文,会更容易守住主线。", + en: "This does not replace the chapter body. It compresses the most important runtime rule, state boundary, and write-back path into one reusable teaching view.", + ja: "本文の繰り返しではなく、重要な runtime rule・state boundary・write-back path を一枚に圧縮した補助図です。ここを先に見ると本文の主線を保ちやすくなります。", + }, +} as const; + +const TOOL_RUNTIME_VERSION_ANGLE: Partial> = { + s02: { + zh: "这一章第一次把 model 的 tool intent 接进统一执行面,所以重点不是“多了几个工具”,而是“调用如何进入稳定 runtime”。", + en: "This is the first chapter where model tool intent enters one execution plane. The point is not just more tools, but a stable runtime entry path.", + ja: "この章では model の tool intent が初めて 1 つの execution plane に入ります。増えた tool よりも、安定した runtime 入口を作ることが主題です。", + }, + s07: { + zh: "权限系统不是独立岛屿,它是插在真正执行之前的一道 runtime 闸门。", + en: "The permission system is not an isolated island. 
It is a runtime gate inserted before real execution.", + ja: "権限層は独立した島ではなく、実行直前に差し込まれる安全ゲートです。", + }, + s13: { + zh: "后台任务会让结果不再总是当前 turn 立即回写,所以你必须开始把执行槽位和回流顺序分开看。", + en: "Background tasks mean results do not always write back in the same turn, so execution slots and write-back order must become separate ideas.", + ja: "バックグラウンド実行が入ると、結果は同じ turn に即時回写されるとは限りません。だから実行スロットと回写順序を分けて見る必要があります。", + }, + s19: { + zh: "到了 MCP 与 Plugin,这一层的重点是:本地工具、插件和外部 server 虽然来源不同,但最终都要回到同一执行面。", + en: "With MCP and plugins, the key is that native tools, plugins, and external servers may come from different places but still return to one execution plane.", + ja: "MCP と plugin の段階では、native tool・plugin・外部 server が出自は違っても最終的には同じ execution plane へ戻ることが重要です。", + }, +}; + +const QUERY_TRANSITION_VERSION_ANGLE: Partial> = { + s06: { + zh: "压缩刚出现时,读者很容易还把 query 想成一个 while loop。这一章开始就该意识到:状态已经会影响下一轮为什么继续。", + en: "When compaction first appears, readers still tend to picture a plain while-loop. This is where state starts changing why the next turn exists.", + ja: "compact が出た直後は query を単なる while loop と見がちです。しかしこの章から、state が次の turn の存在理由を変え始めます。", + }, + s11: { + zh: "错误恢复真正提升系统完成度的地方,不是 try/except,而是系统能明确写出这次继续、重试或结束的原因。", + en: "What really raises completion in recovery is not `try/except`, but the system knowing exactly why it continues, retries, or stops.", + ja: "error recovery で完成度を押し上げるのは try/except そのものではなく、なぜ continue・retry・stop するのかを明示できる点です。", + }, + s17: { + zh: "自治车道会自己认领和恢复任务,所以 transition reason 不再只是单 agent 的内部细节,而是自治行为的稳定器。", + en: "Autonomous lanes claim and resume work on their own, so transition reasons stop being an internal detail and become part of the system stabilizer.", + ja: "自治レーンは自分で task を claim・resume するため、transition reason は単 agent の内部 detail ではなく、自治動作を安定化する要素になります。", + }, +}; + +const TASK_RUNTIME_VERSION_ANGLE: Partial> = { + s12: { + zh: "这一章只建立 durable work graph。现在最重要的护栏是:先把“目标任务”讲干净,不要提前把后台执行槽位塞进来。", + en: "This chapter only establishes the durable work 
graph. The main guardrail is to keep goal tasks clean before you push runtime execution slots into the same model.", + ja: "この章では durable work graph だけを作ります。最大のガードレールは、バックグラウンド実行スロットを混ぜる前に作業目標タスクをきれいに保つことです。", + }, + s13: { + zh: "后台任务真正新增的不是“又一种任务”,而是“任务目标之外,还要单独管理一层活着的执行槽位”。", + en: "Background tasks do not add just another task. They add a second layer of live execution slots outside the task goal itself.", + ja: "バックグラウンド実行が増やすのは task の別名ではなく、作業目標の外にある live execution slot という別層です。", + }, + s14: { + zh: "到了定时调度,读者最容易把 schedule、task、runtime slot 混成一团,所以必须把“谁定义目标、谁负责触发、谁真正执行”拆开看。", + en: "Cron scheduling is where schedule, task, and runtime slot start to blur together. The safe mental model is to separate who defines the goal, who triggers it, and who actually executes.", + ja: "cron に入ると schedule・task・runtime slot が混ざりやすくなります。goal を定義する層、発火させる層、実行する層を分けて見る必要があります。", + }, +}; + +const TEAM_BOUNDARY_VERSION_ANGLE: Partial> = { + s15: { + zh: "这章的重点不是“多开几个 agent”,而是让系统第一次拥有长期存在、可重复协作的 teammate 身份层。", + en: "The point of this chapter is not merely more agents. It is the first time the system gains persistent teammate identities that can collaborate repeatedly.", + ja: "この章の要点は agent を増やすことではなく、反復して協調できる persistent teammate identity を初めて持つことです。", + }, + s16: { + zh: "团队协议真正新增的是“可追踪的协调请求层”,不是普通聊天消息的花样变体。", + en: "Team protocols introduce a traceable coordination-request layer, not just another style of chat message.", + ja: "team protocol が増やすのは追跡可能な協調要求レイヤーであり、普通の chat message の変種ではありません。", + }, + s17: { + zh: "自治行为最容易讲糊的地方,是 teammate、task、runtime slot 三层同时动起来。所以这一章必须盯紧“谁在认领、谁在执行、谁在记录目标”。", + en: "Autonomy becomes confusing when teammate, task, and runtime slot all move at once. 
This chapter must keep clear who is claiming, who is executing, and who records the goal.", + ja: "autonomy で混線しやすいのは teammate・task・runtime slot が同時に動き出す点です。誰が claim し、誰が execute し、誰が goal を記録しているかを保つ必要があります。", + }, + s18: { + zh: "worktree 最容易被误解成另一种任务,其实它只是执行目录车道。任务管目标,runtime slot 管执行,worktree 管在哪做。", + en: "Worktrees are easy to misread as another kind of task, but they are execution-directory lanes. Tasks manage goals, runtime slots manage execution, and worktrees manage where execution happens.", + ja: "worktree は別種の task と誤解されがちですが、実際は実行ディレクトリのレーンです。task は goal、runtime slot は execution、worktree はどこで実行するかを管理します。", + }, +}; + +const CAPABILITY_LAYER_VERSION_ANGLE: Partial> = { + s19: { + zh: "这一章正文仍应坚持 tools-first,但页面必须额外提醒读者:MCP 平台真正长出来后,tools 只是 capability stack 里最先进入主线的那一层。", + en: "The chapter body should still stay tools-first, but the page should also remind readers that once the MCP platform grows up, tools are only the first layer of the capability stack to enter the mainline.", + ja: "本文は引き続き tools-first でよい一方、ページ上では tools が capability stack の最初の層にすぎないことも明示すべきです。", + }, +}; + +const TOOL_RUNTIME_TEXT = { + label: { + zh: "工具执行运行时", + en: "Tool Execution Runtime", + ja: "ツール実行の流れ", + }, + title: { + zh: "不要把工具调用压扁成“handler 一跑就完”", + en: "Do not flatten tool calls into one handler invocation", + ja: "tool call を単なる handler 呼び出しに潰さない", + }, + note: { + zh: "更完整的系统,会先判断这些 tool block 应该怎么分批、怎么执行、怎么稳定回写,而不是一股脑直接跑。", + en: "A more complete system first decides how tool blocks should be batched, executed, and written back instead of running everything immediately.", + ja: "より構造の整った system は、tool block を即座に全部走らせるのではなく、どう batch 化し、どう実行し、どう安定回写するかを先に決めます。", + }, + angleLabel: { + zh: "本章为什么要盯这层", + en: "Why This Lens Matters Here", + ja: "この章でこの層を見る理由", + }, + rulesLabel: { + zh: "运行规则", + en: "Runtime Rules", + ja: "実行ルール", + }, + recordsLabel: { + zh: "核心记录", + en: "Core Records", + ja: "主要レコード", + }, + safeLane: { + title: { + zh: "Safe 批次", + en: "Safe 
Batch", + ja: "安全バッチ", + }, + body: { + zh: "读多写少、共享状态风险低的工具可以并发执行,但 progress 和 context modifier 仍然要被跟踪。", + en: "Read-heavy, low-risk tools can execute concurrently, but progress and context modifiers still need tracking.", + ja: "読み取り中心で共有 state リスクの低い tool は並列実行できますが、progress と context modifier の追跡は必要です。", + }, + }, + exclusiveLane: { + title: { + zh: "Exclusive 批次", + en: "Exclusive Batch", + ja: "直列バッチ", + }, + body: { + zh: "会改文件、会改共享状态、会影响顺序的工具要留在串行车道,避免把 runtime 变成非确定性。", + en: "File writes, shared-state mutation, and order-sensitive tools stay in a serial lane to keep the runtime deterministic.", + ja: "file write・共有 state mutation・順序依存の tool は直列 lane に残し、runtime を非決定化させません。", + }, + }, + stages: [ + { + eyebrow: { + zh: "Step 1", + en: "Step 1", + ja: "ステップ 1", + }, + title: { + zh: "接住 tool blocks", + en: "Capture tool blocks", + ja: "tool blocks を受け止める", + }, + body: { + zh: "先把 model 产出的 tool_use block 视为一批待调度对象,而不是一出现就立刻执行。", + en: "Treat model-emitted tool_use blocks as a schedulable set before executing them immediately.", + ja: "model が出した tool_use block を、即実行する前にまず schedulable set として扱います。", + }, + }, + { + eyebrow: { + zh: "Step 2", + en: "Step 2", + ja: "ステップ 2", + }, + title: { + zh: "按并发安全性分批", + en: "Partition by concurrency safety", + ja: "concurrency safety で分割する", + }, + body: { + zh: "先决定哪些工具能并发,哪些必须串行,这一步本质上是在保护共享状态。", + en: "Decide which tools can run together and which must stay serial. This step protects shared state.", + ja: "どの tool が同時実行でき、どれが直列であるべきかを先に決めます。これは共有 state を守る工程です。", + }, + }, + { + eyebrow: { + zh: "Step 3", + en: "Step 3", + ja: "ステップ 3", + }, + title: { + zh: "稳定回写结果", + en: "Write back in stable order", + ja: "安定順で回写する", + }, + body: { + zh: "并发并不代表回写乱序。更完整的运行时会先排队 progress、结果和 context modifier,再按稳定顺序落地。", + en: "Concurrency does not imply chaotic write-back. 
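The safe/exclusive partition described by these stages can be sketched as a single pass over the pending tool calls. The `readOnly` flag is an assumption standing in for whatever concurrency-safety signal a real harness derives per tool.

```typescript
// Illustrative sketch of Step 2: partition tool calls by concurrency
// safety before executing anything. Read-only calls form the "safe"
// batch that may run concurrently; mutating calls stay serial.
interface ToolCall {
  name: string;
  readOnly: boolean;
}

function partition(calls: ToolCall[]): {
  safe: ToolCall[];
  exclusive: ToolCall[];
} {
  const safe = calls.filter((c) => c.readOnly);
  const exclusive = calls.filter((c) => !c.readOnly);
  return { safe, exclusive };
}

const batch = partition([
  { name: "read_file", readOnly: true },
  { name: "write_file", readOnly: false },
  { name: "grep", readOnly: true },
]);
```

Partitioning first is what protects shared state: the serial lane is chosen before any handler runs, not discovered after a race.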
A more complete runtime queues progress, results, and modifiers before landing them in stable order.", + ja: "並列実行は乱れた回写を意味しません。より整った runtime は progress・result・modifier をいったん整列させてから安定順で反映します。", + }, + }, + ], + rules: [ + { + title: { + zh: "progress 可以先走", + en: "progress can surface early", + ja: "progress は先に出してよい", + }, + body: { + zh: "慢工具不必一直沉默,先让上层知道它在做什么。", + en: "Slow tools do not need to stay silent. Let the upper layer see what they are doing.", + ja: "遅い tool を黙らせ続ける必要はありません。上位層へ今何をしているかを先に知らせます。", + }, + }, + { + title: { + zh: "modifier 先排队再合并", + en: "queue modifiers before merge", + ja: "modifier は queue してから merge する", + }, + body: { + zh: "共享 context 的修改最好不要按完成先后直接落地。", + en: "Shared context changes should not land directly in completion order.", + ja: "共有 context 変更を完了順でそのまま反映しない方が安全です。", + }, + }, + ], + records: [ + { + name: "ToolExecutionBatch", + note: { + zh: "表示一批可一起调度的 tool block。", + en: "Represents one schedulable batch of tool blocks.", + ja: "一緒に調度できる tool block の batch。", + }, + }, + { + name: "TrackedTool", + note: { + zh: "跟踪每个工具的排队、执行、完成、产出进度。", + en: "Tracks queued, executing, completed, and yielded progress states per tool.", + ja: "各 tool の queued・executing・completed・yielded progress を追跡します。", + }, + }, + { + name: "queued_context_modifiers", + note: { + zh: "把并发工具的共享状态修改先存起来,再稳定合并。", + en: "Stores shared-state mutations until they can be merged in stable order.", + ja: "並列 tool の共有 state 変更を一時保存し、後で安定順に merge します。", + }, + }, + ], +} as const; + +const QUERY_TRANSITION_TEXT = { + label: { + zh: "Query 转移模型", + en: "Query Transition Model", + ja: "クエリ継続モデル", + }, + title: { + zh: "不要把所有继续都看成同一个 `continue`", + en: "Do not treat every continuation as the same `continue`", + ja: "すべての継続を同じ `continue` と見なさない", + }, + note: { + zh: "只要系统开始长出恢复、压缩和自治行为,就必须知道:这一轮为什么结束、下一轮为什么存在、继续之前改了哪块状态。只有这样,这几层才不会搅成一团。", + en: "Once a system grows recovery, compaction, and autonomy, it must know why this turn ended, why the next turn exists, and 
what state changed before the jump.", + ja: "system に recovery・compact・autonomy が入り始めたら、この turn がなぜ終わり、次の turn がなぜ存在し、移行前にどの state を変えたかを知る必要があります。", + }, + angleLabel: { + zh: "本章为什么要盯这层", + en: "Why This Lens Matters Here", + ja: "この章でこの層を見る理由", + }, + chainLabel: { + zh: "转移链", + en: "Transition Chain", + ja: "遷移チェーン", + }, + reasonsLabel: { + zh: "常见继续原因", + en: "Common Continuation Reasons", + ja: "よくある継続理由", + }, + guardrailLabel: { + zh: "实现护栏", + en: "Implementation Guardrails", + ja: "実装ガードレール", + }, + chain: [ + { + title: { + zh: "当前轮撞到边界", + en: "The current turn hits a boundary", + ja: "現在の turn が境界に当たる", + }, + body: { + zh: "可能是 tool 结束、输出截断、compact 触发、transport 出错,或者外部 hook 改写了结束条件。", + en: "A tool may have finished, output may be truncated, compaction may have fired, transport may have failed, or a hook may have changed the ending condition.", + ja: "tool 完了、出力切断、compact 発火、transport error、hook による終了条件変更などが起こります。", + }, + }, + { + title: { + zh: "写入 reason + state patch", + en: "Write the reason and the state patch", + ja: "reason と state patch を書く", + }, + body: { + zh: "在真正继续前,把 transition、重试计数、compact 标志或补充消息写进状态。", + en: "Before continuing, record the transition, retry counters, compaction flags, or supplemental messages in state.", + ja: "続行前に transition、retry count、compact flag、補助 message などを state へ書き込みます。", + }, + }, + { + title: { + zh: "下一轮带着原因进入", + en: "The next turn enters with a reason", + ja: "次の turn は理由を持って入る", + }, + body: { + zh: "下一轮不再是盲目出现,它知道自己是正常回流、恢复重试还是预算延续。", + en: "The next turn is no longer blind. 
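The reason-plus-state-patch chain described here can be sketched with the continuation reasons this section lists. The `TurnState` shape and `continueTurn` helper are illustrative assumptions, not the harness's real API.

```typescript
// Illustrative sketch: every continuation carries an explicit reason
// and patches state before the jump, instead of a bare `continue`.
type TransitionReason =
  | "tool_result_continuation"
  | "max_tokens_recovery"
  | "compact_retry"
  | "transport_retry";

interface TurnState {
  retries: number;
  lastReason: TransitionReason | null;
}

function continueTurn(state: TurnState, reason: TransitionReason): TurnState {
  // Write the state patch first so the next turn knows why it exists.
  return {
    retries: reason === "transport_retry" ? state.retries + 1 : state.retries,
    lastReason: reason,
  };
}

const next = continueTurn({ retries: 0, lastReason: null }, "transport_retry");
```

Because the reason is recorded, a retry budget can be enforced at one place (the counter) rather than scattered across every continue site.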
It knows whether it exists because of normal write-back, recovery, or budgeted continuation.", + ja: "次の turn は盲目的に現れるのではなく、通常回流・recovery retry・budget continuation のどれなのかを知っています。", + }, + }, + ], + reasons: [ + { + name: "tool_result_continuation", + note: { + zh: "工具完成后的正常回流。", + en: "Normal write-back after a tool finishes.", + ja: "tool 完了後の通常回流。", + }, + }, + { + name: "max_tokens_recovery", + note: { + zh: "输出被截断后的续写恢复。", + en: "Recovery after truncated model output.", + ja: "出力切断後の継続回復。", + }, + }, + { + name: "compact_retry", + note: { + zh: "上下文重排后的重试。", + en: "Retry after context reshaping.", + ja: "context 再構成後の retry。", + }, + }, + { + name: "transport_retry", + note: { + zh: "基础设施抖动后的再试一次。", + en: "Retry after infrastructure failure.", + ja: "基盤失敗後の再試行。", + }, + }, + ], + guardrails: [ + { + title: { + zh: "每个 continue site 都写 reason", + en: "every continue site writes a reason", + ja: "すべての continue site が reason を書く", + }, + }, + { + title: { + zh: "继续前先写 state patch", + en: "patch state before continuing", + ja: "続行前に state patch を書く", + }, + }, + { + title: { + zh: "重试和续写都要有 budget", + en: "retries and continuations need budgets", + ja: "retry と continuation には budget が必要", + }, + }, + ], +} as const; + +const TASK_RUNTIME_TEXT = { + label: { + zh: "任务运行时边界", + en: "Task Runtime Boundaries", + ja: "タスク実行の境界", + }, + title: { + zh: "把目标任务、执行槽位、调度触发拆成三层", + en: "Separate goal tasks, execution slots, and schedule triggers", + ja: "goal task・execution slot・schedule trigger を三層に分ける", + }, + note: { + zh: "从 `s12` 开始,读者最容易把所有“任务”混成一个词。更完整的系统会把 durable goal、live runtime slot 和 optional schedule trigger 分层管理。", + en: "From `s12` onward, readers start collapsing every kind of work into the word 'task'. 
More complete systems keep durable goals, live runtime slots, and optional schedule triggers on separate layers.", + ja: "`s12` 以降は、あらゆる仕事を task という一語へ潰しがちです。より構造の整った system は durable goal・live runtime slot・optional schedule trigger を分離して管理します。", + }, + angleLabel: { + zh: "本章为什么要盯这层", + en: "Why This Lens Matters Here", + ja: "この章でこの層を見る理由", + }, + layersLabel: { + zh: "三层对象", + en: "Three Layers", + ja: "三層の対象", + }, + flowLabel: { + zh: "真实推进关系", + en: "Actual Progression", + ja: "実際の進み方", + }, + recordsLabel: { + zh: "关键记录", + en: "Key Records", + ja: "主要レコード", + }, + layers: [ + { + title: { + zh: "Work-Graph Task", + en: "Work-Graph Task", + ja: "ワークグラフ・タスク", + }, + body: { + zh: "表示要做什么、谁依赖谁、谁负责。它关心目标和工作关系,不直接代表某个后台进程。", + en: "Represents what should be done, who depends on whom, and who owns the work. It is goal-oriented, not a live background process.", + ja: "何をやるか、誰が依存し、誰が owner かを表します。goal 指向であり、live background process そのものではありません。", + }, + }, + { + title: { + zh: "Runtime Slot", + en: "Runtime Slot", + ja: "ランタイムスロット", + }, + body: { + zh: "表示现在有什么执行单元活着:shell、teammate、monitor、workflow。它关心 status、output 和 notified。", + en: "Represents the live execution unit: shell, teammate, monitor, or workflow. It cares about status, output, and notification state.", + ja: "いま生きている execution unit を表します。shell・teammate・monitor・workflow などがここに入り、status・output・notified を持ちます。", + }, + }, + { + title: { + zh: "Schedule Trigger", + en: "Schedule Trigger", + ja: "スケジュールトリガー", + }, + body: { + zh: "表示什么时候要启动一次工作。它不是任务目标,也不是正在运行的槽位,而是触发规则。", + en: "Represents when work should start. It is neither the durable goal nor the live execution slot. 
It is the trigger rule.", + ja: "いつ仕事を起動するかを表します。durable goal でも live slot でもなく、trigger rule です。", + }, + }, + ], + flow: [ + { + title: { + zh: "目标先存在", + en: "The goal exists first", + ja: "goal が先に存在する", + }, + body: { + zh: "任务板先定义工作目标和依赖,不必立刻对应到某个后台执行体。", + en: "The task board defines goals and dependencies before any specific background execution exists.", + ja: "task board はまず goal と dependency を定義し、まだ特定の background execution を必要としません。", + }, + }, + { + title: { + zh: "执行时生成 runtime slot", + en: "Execution creates runtime slots", + ja: "実行時に runtime slot が生まれる", + }, + body: { + zh: "当系统真的开跑一个 shell、worker 或 monitor 时,再生成独立 runtime record。", + en: "Only when the system actually starts a shell, worker, or monitor does it create a separate runtime record.", + ja: "shell・worker・monitor を本当に起動した時点で、独立した runtime record を作ります。", + }, + }, + { + title: { + zh: "调度只是触发器", + en: "Scheduling is only the trigger", + ja: "schedule は trigger にすぎない", + }, + body: { + zh: "cron 负责到点触发,不负责代替任务目标,也不直接等同于执行槽位。", + en: "Cron decides when to fire. 
It does not replace the task goal and it is not the execution slot itself.", + ja: "cron は発火時刻を決める層であり、task goal を置き換えず、execution slot そのものでもありません。", + }, + }, + ], + records: [ + { + name: "TaskRecord", + note: { + zh: "durable goal 节点。", + en: "The durable goal node.", + ja: "durable goal node。", + }, + }, + { + name: "RuntimeTaskState", + note: { + zh: "活着的执行槽位记录。", + en: "The live execution-slot record.", + ja: "live execution-slot record。", + }, + }, + { + name: "ScheduleRecord", + note: { + zh: "描述何时触发工作的规则。", + en: "Describes when work should be triggered.", + ja: "いつ仕事を発火するかを記述する rule。", + }, + }, + { + name: "Notification", + note: { + zh: "把 runtime 结果重新带回主线。", + en: "Brings runtime results back into the mainline.", + ja: "runtime result を主線へ戻す record。", + }, + }, + ], +} as const; + +const TEAM_BOUNDARY_TEXT = { + label: { + zh: "团队边界模型", + en: "Team Boundary Model", + ja: "チーム境界モデル", + }, + title: { + zh: "把 teammate、协议请求、任务、执行槽位、worktree 车道分开", + en: "Separate teammates, protocol requests, tasks, runtime slots, and worktree lanes", + ja: "teammate・protocol request・task・runtime slot・worktree lane を分ける", + }, + note: { + zh: "到了 `s15-s18`,最容易让读者打结的不是某个函数,而是这五层对象一起动起来时,到底谁表示身份、谁表示目标、谁表示执行、谁表示目录车道。", + en: "From `s15` to `s18`, the hardest thing is not one function. 
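The three-layer separation can be sketched using the record names listed above (`TaskRecord`, `RuntimeTaskState`, `ScheduleRecord`); the fields and the `startSlot` helper are illustrative assumptions, not the repository's actual definitions.

```typescript
// Illustrative sketch of the three layers kept apart: durable goal,
// live runtime slot, and schedule trigger.
interface TaskRecord {
  id: string;
  goal: string; // what should be done
}

interface RuntimeTaskState {
  taskId: string;
  status: "running" | "done"; // what is alive right now
}

interface ScheduleRecord {
  taskId: string;
  cron: string; // when to fire — not the goal, not the slot
}

// Only a real execution creates a runtime slot for an existing goal.
function startSlot(taskRecord: TaskRecord): RuntimeTaskState {
  return { taskId: taskRecord.id, status: "running" };
}

const slot = startSlot({ id: "t7", goal: "nightly report" });
```

The goal exists before any slot does, and a schedule only decides when `startSlot` gets called — which is exactly the progression the lens describes.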
It is keeping identity, coordination, goals, execution, and directory lanes distinct while all five move together.", + ja: "`s15-s18` で難しいのは個別の関数ではなく、identity・coordination・goal・execution・directory lane を同時に分けて保つことです。", + }, + angleLabel: { + zh: "本章为什么要盯这层", + en: "Why This Lens Matters Here", + ja: "この章でこの層を見る理由", + }, + layersLabel: { + zh: "五层对象", + en: "Five Layers", + ja: "五層の対象", + }, + rulesLabel: { + zh: "读的时候先守住", + en: "Read With These Guardrails", + ja: "読むときのガードレール", + }, + layers: [ + { + title: { + zh: "Teammate", + en: "Teammate", + ja: "Teammate", + }, + body: { + zh: "长期存在、可重复协作的身份层。", + en: "The persistent identity layer that can collaborate repeatedly.", + ja: "反復して協調できる persistent identity layer。", + }, + }, + { + title: { + zh: "Protocol Request", + en: "Protocol Request", + ja: "Protocol Request", + }, + body: { + zh: "团队内部一次可追踪的协调请求,带 `request_id`、kind 和状态。", + en: "A trackable coordination request inside the team, carrying a `request_id`, kind, and status.", + ja: "team 内の追跡可能な coordination request。`request_id`・kind・status を持ちます。", + }, + }, + { + title: { + zh: "Task", + en: "Task", + ja: "Task", + }, + body: { + zh: "表示要做什么的目标层。", + en: "The goal layer that records what should be done.", + ja: "何をやるかを表す goal layer。", + }, + }, + { + title: { + zh: "Runtime Slot", + en: "Runtime Slot", + ja: "ランタイムスロット", + }, + body: { + zh: "表示谁正在执行、执行到什么状态。", + en: "Represents who is actively executing and what execution state they are in.", + ja: "誰が実行中で、どの execution state にいるかを表します。", + }, + }, + { + title: { + zh: "Worktree Lane", + en: "Worktree Lane", + ja: "Worktree Lane", + }, + body: { + zh: "表示在哪个隔离目录里推进工作。", + en: "Represents the isolated directory lane where execution happens.", + ja: "どの分離ディレクトリ lane で仕事を進めるかを表します。", + }, + }, + ], + rules: [ + { + title: { + zh: "身份不是目标", + en: "identity is not the goal", + ja: "identity は goal ではない", + }, + body: { + zh: "teammate 表示谁长期存在,不表示这件工作本身。", + en: "A teammate tells you who persists in the system, 
not what the work item itself is.", + ja: "teammate は誰が system に長く存在するかを表し、仕事そのものではありません。", + }, + }, + { + title: { + zh: "`request_id` 不等于 `task_id`", + en: "`request_id` is not `task_id`", + ja: "`request_id` は `task_id` ではない", + }, + body: { + zh: "协议请求记录协调过程,任务记录工作目标,两者都可长期存在但职责不同。", + en: "Protocol requests record coordination, while tasks record work goals. Both can persist, but they serve different purposes.", + ja: "protocol request は coordination を記録し、task は work goal を記録します。どちらも残り得ますが役割は別です。", + }, + }, + { + title: { + zh: "worktree 不是另一种任务", + en: "a worktree is not another kind of task", + ja: "worktree は別種の task ではない", + }, + body: { + zh: "它只负责目录隔离和 closeout,不负责定义目标。", + en: "It manages directory isolation and closeout, not the work goal itself.", + ja: "directory isolation と closeout を管理する層であり、goal を定義する層ではありません。", + }, + }, + ], +} as const; + +const CAPABILITY_LAYER_TEXT = { + label: { + zh: "外部能力层地图", + en: "External Capability Layers", + ja: "外部 capability レイヤー", + }, + title: { + zh: "把 MCP 看成能力层,而不只是外部工具目录", + en: "See MCP as layered capability, not just an external tool catalog", + ja: "MCP を外部 tool catalog ではなく layered capability として見る", + }, + note: { + zh: "如果只把 MCP 当作远程工具列表,读者会在 resources、prompts、elicitation、auth 这些点上突然失去主线。更稳的办法是先守住 tools-first,再补整张能力层地图。", + en: "If MCP is taught only as a remote tool list, readers lose the thread when resources, prompts, elicitation, and auth appear. 
The steadier approach is tools-first in the mainline, then the full capability map.", + ja: "MCP を remote tool list だけで教えると、resources・prompts・elicitation・auth が出た瞬間に主線を失います。tools-first を守りつつ capability map を補う方が安定です。", + }, + angleLabel: { + zh: "本章为什么要盯这层", + en: "Why This Lens Matters Here", + ja: "この章でこの層を見る理由", + }, + layersLabel: { + zh: "六层能力面", + en: "Six Capability Layers", + ja: "六層の capability", + }, + teachLabel: { + zh: "教学顺序", + en: "Teaching Order", + ja: "教える順序", + }, + layers: [ + { title: { zh: "Config", en: "Config", ja: "設定" }, body: { zh: "server 配置来自哪里、长什么样。", en: "Where server configuration comes from and what it looks like.", ja: "server config がどこから来て、どんな形か。" } }, + { title: { zh: "Transport", en: "Transport", ja: "接続方式" }, body: { zh: "stdio / http / sse / ws 这些连接通道。", en: "The connection channel such as stdio, HTTP, SSE, or WebSocket.", ja: "stdio / HTTP / SSE / WS などの接続通路。" } }, + { title: { zh: "Connection State", en: "Connection State", ja: "接続状態" }, body: { zh: "connected / pending / needs-auth / failed。", en: "States such as connected, pending, needs-auth, and failed.", ja: "connected / pending / needs-auth / failed などの状態。" } }, + { title: { zh: "Capabilities", en: "Capabilities", ja: "能力層" }, body: { zh: "tools 只是其中之一,旁边还有 resources、prompts、elicitation。", en: "Tools are only one member of the layer beside resources, prompts, and elicitation.", ja: "tools は一員にすぎず、resources・prompts・elicitation も並びます。" } }, + { title: { zh: "Auth", en: "Auth", ja: "認証" }, body: { zh: "决定 server 能不能真正进入 connected 可用态。", en: "Determines whether a server can actually enter the usable connected state.", ja: "server が実際に使える connected 状態へ入れるかを決めます。" } }, + { title: { zh: "Router Integration", en: "Router Integration", ja: "ルーター統合" }, body: { zh: "最后怎么回到 tool router、permission 和 notification。", en: "How the result finally routes back into tool routing, permissions, and notifications.", ja: "最後に tool router・permission・notification へどう戻るか。" } }, + ], + teach: [ 
+ { + title: { zh: "先讲 tools-first", en: "Teach tools-first first", ja: "まず tools-first を教える" }, + body: { zh: "先让读者能把外部工具接回来,不要一开始就被平台细节拖走。", en: "Let readers wire external tools back into the agent before platform details take over.", ja: "最初から platform detail に引き込まず、まず外部 tool を agent へ戻せるようにします。" }, + }, + { + title: { zh: "再补 capability map", en: "Then add the capability map", ja: "次に capability map を足す" }, + body: { zh: "告诉读者 tools 只是切面之一,平台还有别的面。", en: "Show readers that tools are only one slice of a broader platform.", ja: "tools が broader platform の一断面にすぎないことを見せます。" }, + }, + { + title: { zh: "最后再展开 auth 等重层", en: "Expand auth and heavier layers last", ja: "auth など重い層は最後に展開する" }, + body: { zh: "只有当前两层站稳后,再深入认证和更复杂状态机。", en: "Only after the first two layers are stable should auth and heavier state machines become the focus.", ja: "最初の二層が安定してから、auth や重い state machine を扱います。" }, + }, + ], +} as const; + +function pick(locale: string, value: LocaleText): string { + if (locale === "zh") return value.zh; + if (locale === "ja") return value.ja; + return value.en; +} + +function ToolRuntimeLens({ + locale, + angle, +}: { + locale: string; + angle: string; +}) { + return ( +
+
+

+ {pick(locale, TOOL_RUNTIME_TEXT.label)} +

+

+ {pick(locale, TOOL_RUNTIME_TEXT.title)} +

+

+ {pick(locale, TOOL_RUNTIME_TEXT.note)} +

+
+ +
+
+

+ {pick(locale, TOOL_RUNTIME_TEXT.angleLabel)} +

+

+ {angle} +

+
+ +
+
+
+ {TOOL_RUNTIME_TEXT.stages.map((stage) => ( +
+

+ {pick(locale, stage.eyebrow)} +

+

+ {pick(locale, stage.title)} +

+

+ {pick(locale, stage.body)} +

+
+ ))} +
+ +
+
+

+ {pick(locale, TOOL_RUNTIME_TEXT.safeLane.title)} +

+

+ {pick(locale, TOOL_RUNTIME_TEXT.safeLane.body)} +

+
+
+

+ {pick(locale, TOOL_RUNTIME_TEXT.exclusiveLane.title)} +

+

+ {pick(locale, TOOL_RUNTIME_TEXT.exclusiveLane.body)} +

+
+
+
+ +
+
+

+ {pick(locale, TOOL_RUNTIME_TEXT.rulesLabel)} +

+
+ {TOOL_RUNTIME_TEXT.rules.map((rule) => ( +
+

+ {pick(locale, rule.title)} +

+

+ {pick(locale, rule.body)} +

+
+ ))} +
+
+ +
+

+ {pick(locale, TOOL_RUNTIME_TEXT.recordsLabel)} +

+
+ {TOOL_RUNTIME_TEXT.records.map((record) => ( +
+ + {record.name} + +

+ {pick(locale, record.note)} +

+
+ ))} +
+
+
+
+
+
+ ); +} + +function QueryTransitionLens({ + locale, + angle, +}: { + locale: string; + angle: string; +}) { + return ( +
+
+

+ {pick(locale, QUERY_TRANSITION_TEXT.label)} +

+

+ {pick(locale, QUERY_TRANSITION_TEXT.title)} +

+

+ {pick(locale, QUERY_TRANSITION_TEXT.note)} +

+
+ +
+
+

+ {pick(locale, QUERY_TRANSITION_TEXT.angleLabel)} +

+

+ {angle} +

+
+ +
+
+

+ {pick(locale, QUERY_TRANSITION_TEXT.chainLabel)} +

+
+ {QUERY_TRANSITION_TEXT.chain.map((item, index) => ( +
+
+

+ {pick(locale, item.title)} +

+

+ {pick(locale, item.body)} +

+
+ {index < QUERY_TRANSITION_TEXT.chain.length - 1 && ( +
+
+
+ )} +
+ ))} +
+
+ +
+
+

+ {pick(locale, QUERY_TRANSITION_TEXT.reasonsLabel)} +

+
+ {QUERY_TRANSITION_TEXT.reasons.map((reason) => ( +
+ + {reason.name} + +

+ {pick(locale, reason.note)} +

+
+ ))} +
+
+ +
+

+ {pick(locale, QUERY_TRANSITION_TEXT.guardrailLabel)} +

+
+ {QUERY_TRANSITION_TEXT.guardrails.map((item) => ( +
+

+ {pick(locale, item.title)} +

+
+ ))} +
+
+
+
+
+
+ ); +} + +function TaskRuntimeLens({ + locale, + angle, +}: { + locale: string; + angle: string; +}) { + return ( +
+
+

+ {pick(locale, TASK_RUNTIME_TEXT.label)} +

+

+ {pick(locale, TASK_RUNTIME_TEXT.title)} +

+

+ {pick(locale, TASK_RUNTIME_TEXT.note)} +

+
+ +
+
+

+ {pick(locale, TASK_RUNTIME_TEXT.angleLabel)} +

+

+ {angle} +

+
+ +
+
+
+

+ {pick(locale, TASK_RUNTIME_TEXT.layersLabel)} +

+
+ {TASK_RUNTIME_TEXT.layers.map((layer) => ( +
+

+ {pick(locale, layer.title)} +

+

+ {pick(locale, layer.body)} +

+
+ ))} +
+
+ +
+

+ {pick(locale, TASK_RUNTIME_TEXT.flowLabel)} +

+
+ {TASK_RUNTIME_TEXT.flow.map((item, index) => ( +
+
+

+ {pick(locale, item.title)} +

+

+ {pick(locale, item.body)} +

+
+ {index < TASK_RUNTIME_TEXT.flow.length - 1 && ( +
+
+
+ )} +
+ ))} +
+
+
+ +
+

+ {pick(locale, TASK_RUNTIME_TEXT.recordsLabel)} +

+
+ {TASK_RUNTIME_TEXT.records.map((record) => ( +
+ + {record.name} + +

+ {pick(locale, record.note)} +

+
+ ))} +
+
+
+
+
+ ); +} + +function TeamBoundaryLens({ + locale, + angle, +}: { + locale: string; + angle: string; +}) { + return ( +
+
+

+ {pick(locale, TEAM_BOUNDARY_TEXT.label)} +

+

+ {pick(locale, TEAM_BOUNDARY_TEXT.title)} +

+

+ {pick(locale, TEAM_BOUNDARY_TEXT.note)} +

+
+ +
+
+

+ {pick(locale, TEAM_BOUNDARY_TEXT.angleLabel)} +

+

+ {angle} +

+
+ +
+
+

+ {pick(locale, TEAM_BOUNDARY_TEXT.layersLabel)} +

+
+ {TEAM_BOUNDARY_TEXT.layers.map((layer) => ( +
+

+ {pick(locale, layer.title)} +

+

+ {pick(locale, layer.body)} +

+
+ ))} +
+
+ +
+

+ {pick(locale, TEAM_BOUNDARY_TEXT.rulesLabel)} +

+
+ {TEAM_BOUNDARY_TEXT.rules.map((rule) => ( +
+

+ {pick(locale, rule.title)} +

+

+ {pick(locale, rule.body)} +

+
+ ))} +
+
+
+
+
+ ); +} + +function CapabilityLayerLens({ + locale, + angle, +}: { + locale: string; + angle: string; +}) { + return ( +
+
+

+ {pick(locale, CAPABILITY_LAYER_TEXT.label)} +

+

+ {pick(locale, CAPABILITY_LAYER_TEXT.title)} +

+

+ {pick(locale, CAPABILITY_LAYER_TEXT.note)} +

+
+ +
+
+

+ {pick(locale, CAPABILITY_LAYER_TEXT.angleLabel)} +

+

+ {angle} +

+
+ +
+
+

+ {pick(locale, CAPABILITY_LAYER_TEXT.layersLabel)} +

+
+ {CAPABILITY_LAYER_TEXT.layers.map((layer) => ( +
+

+ {pick(locale, layer.title)} +

+

+ {pick(locale, layer.body)} +

+
+ ))} +
+
+ +
+

+ {pick(locale, CAPABILITY_LAYER_TEXT.teachLabel)} +

+
+ {CAPABILITY_LAYER_TEXT.teach.map((step) => ( +
+

+ {pick(locale, step.title)} +

+

+ {pick(locale, step.body)} +

+
+ ))} +
+
+
+
+
+ ); +} + +export function VersionMechanismLenses({ + version, + locale, +}: VersionMechanismLensesProps) { + const toolAngle = TOOL_RUNTIME_VERSION_ANGLE[version as VersionId]; + const queryAngle = QUERY_TRANSITION_VERSION_ANGLE[version as VersionId]; + const taskAngle = TASK_RUNTIME_VERSION_ANGLE[version as VersionId]; + const teamAngle = TEAM_BOUNDARY_VERSION_ANGLE[version as VersionId]; + const capabilityAngle = CAPABILITY_LAYER_VERSION_ANGLE[version as VersionId]; + const lensCount = + Number(Boolean(toolAngle)) + + Number(Boolean(queryAngle)) + + Number(Boolean(taskAngle)) + + Number(Boolean(teamAngle)) + + Number(Boolean(capabilityAngle)); + + if (!lensCount) return null; + + return ( +
+
+

+ {pick(locale, SECTION_TEXT.label)} +

+

+ {pick(locale, SECTION_TEXT.title)} +

+

+ {pick(locale, SECTION_TEXT.body)} +

+
+ +
+      <div className={`grid ${lensCount > 1 ? "2xl:grid-cols-2" : ""}`}>
+        {toolAngle && <ToolRuntimeLens locale={locale} angle={toolAngle} />}
+        {queryAngle && <QueryTransitionLens locale={locale} angle={queryAngle} />}
+        {taskAngle && <TaskRuntimeLens locale={locale} angle={taskAngle} />}
+        {teamAngle && <TeamBoundaryLens locale={locale} angle={teamAngle} />}
+        {capabilityAngle && <CapabilityLayerLens locale={locale} angle={capabilityAngle} />}
+      </div>
+ ); +} diff --git a/web/src/components/diff/code-diff.tsx b/web/src/components/diff/code-diff.tsx index a62cfd34a..9973cf363 100644 --- a/web/src/components/diff/code-diff.tsx +++ b/web/src/components/diff/code-diff.tsx @@ -2,6 +2,7 @@ import { useState, useMemo } from "react"; import { diffLines, Change } from "diff"; +import { useTranslations } from "@/lib/i18n"; import { cn } from "@/lib/utils"; interface CodeDiffProps { @@ -13,11 +14,12 @@ interface CodeDiffProps { export function CodeDiff({ oldSource, newSource, oldLabel, newLabel }: CodeDiffProps) { const [viewMode, setViewMode] = useState<"unified" | "split">("unified"); + const t = useTranslations("diff"); const changes = useMemo(() => diffLines(oldSource, newSource), [oldSource, newSource]); return ( -
+
{oldLabel} @@ -34,7 +36,7 @@ export function CodeDiff({ oldSource, newSource, oldLabel, newLabel }: CodeDiffP : "text-zinc-500 hover:text-zinc-700 dark:text-zinc-400" )} > - Unified + {t("view_unified")}
@@ -79,8 +81,8 @@ function UnifiedView({ changes }: { changes: Change[] }) { } return ( -
- +
+
{rows.map((row, i) => ( -
+
+
{rows.map((row, i) => ( diff --git a/web/src/components/docs/doc-renderer.tsx b/web/src/components/docs/doc-renderer.tsx index f83f8561e..4bf29b08a 100644 --- a/web/src/components/docs/doc-renderer.tsx +++ b/web/src/components/docs/doc-renderer.tsx @@ -12,7 +12,8 @@ import rehypeHighlight from "rehype-highlight"; import rehypeStringify from "rehype-stringify"; interface DocRendererProps { - version: string; + version?: string; + slug?: string; } function renderMarkdown(md: string): string { @@ -55,23 +56,38 @@ function postProcessHtml(html: string): string { (_, start) => `
    ` ); + // Wrap markdown tables so wide teaching maps scroll locally instead of + // stretching the whole doc page. + html = html.replace(/
<table>/g, '<div style="overflow-x:auto"><table>');
+  html = html.replace(/<\/table>/g, "</table></div>
"); + return html; } -export function DocRenderer({ version }: DocRendererProps) { +export function DocRenderer({ version, slug }: DocRendererProps) { const locale = useLocale(); const doc = useMemo(() => { + if (!version && !slug) return null; + const match = docsData.find( - (d: { version: string; locale: string }) => - d.version === version && d.locale === locale + (d: { version?: string | null; slug?: string; locale: string; kind?: string }) => + (version ? d.version === version && d.kind === "chapter" : d.slug === slug) && + d.locale === locale ); if (match) return match; + const zhFallback = docsData.find( + (d: { version?: string | null; slug?: string; locale: string; kind?: string }) => + (version ? d.version === version && d.kind === "chapter" : d.slug === slug) && + d.locale === "zh" + ); + if (zhFallback) return zhFallback; return docsData.find( - (d: { version: string; locale: string }) => - d.version === version && d.locale === "en" + (d: { version?: string | null; slug?: string; locale: string; kind?: string }) => + (version ? 
d.version === version && d.kind === "chapter" : d.slug === slug) && + d.locale === "en" ); - }, [version, locale]); + }, [version, slug, locale]); if (!doc) return null; diff --git a/web/src/components/layout/header.tsx b/web/src/components/layout/header.tsx index 3749743e5..d49d1ff53 100644 --- a/web/src/components/layout/header.tsx +++ b/web/src/components/layout/header.tsx @@ -8,9 +8,8 @@ import { useState, useEffect } from "react"; import { cn } from "@/lib/utils"; const NAV_ITEMS = [ - { key: "timeline", href: "/timeline" }, + { key: "reference", href: "/reference" }, { key: "compare", href: "/compare" }, - { key: "layers", href: "/layers" }, ] as const; const LOCALES = [ diff --git a/web/src/components/layout/sidebar.tsx b/web/src/components/layout/sidebar.tsx index 7d2f6d90d..fa224b29b 100644 --- a/web/src/components/layout/sidebar.tsx +++ b/web/src/components/layout/sidebar.tsx @@ -6,14 +6,17 @@ import { LAYERS, VERSION_META } from "@/lib/constants"; import { useTranslations } from "@/lib/i18n"; import { cn } from "@/lib/utils"; -const LAYER_DOT_BG: Record = { - tools: "bg-blue-500", - planning: "bg-emerald-500", - memory: "bg-purple-500", - concurrency: "bg-amber-500", - collaboration: "bg-red-500", +const LAYER_DOT_COLORS: Record = { + core: "bg-blue-500", + hardening: "bg-emerald-500", + runtime: "bg-amber-500", + platform: "bg-red-500", }; +function isActiveLink(pathname: string, href: string) { + return pathname === href || pathname === `${href}/`; +} + export function Sidebar() { const pathname = usePathname(); const locale = pathname.split("/")[1] || "en"; @@ -21,12 +24,12 @@ export function Sidebar() { const tLayer = useTranslations("layer_labels"); return ( -