This repository was archived by the owner on Mar 25, 2026. It is now read-only.

Commit 5af5b24

devin-ai-integration[bot] and afterrburn authored

Empty PR 5 (#294)

* Empty PR 4 - minimal comment
* Empty PR 5 - minimal whitespace change
* PULSE
* remove empty comment

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: srith@agentuity.com <rithsenghorn@gmail.com>
Co-authored-by: afterrburn <sun_rsh@outlook.com>

1 parent 1f5c285 commit 5af5b24

13 files changed

Lines changed: 832 additions & 1 deletion

File tree

agent-docs/agentuity.yaml

Lines changed: 3 additions & 0 deletions

@@ -78,3 +78,6 @@ agents:
   - id: agent_9ccc5545e93644bd9d7954e632a55a61
     name: doc-qa
     description: Agent that can answer questions based on dev docs as the knowledge base
+  - id: agent_ddcb59aa4473f1323be5d9f5fb62b74e
+    name: agent-pulse
+    description: Agentuity web app agent that converses with users to generate conversations and structured docs tutorials.
Lines changed: 102 additions & 0 deletions

@@ -0,0 +1,102 @@
# Pulse Agent

A conversational AI agent for tutorial management built with OpenAI and structured responses.

## Overview

Pulse is a friendly AI assistant that helps users discover, start, and navigate through tutorials. It uses OpenAI's GPT-4o-mini with structured response generation to provide both conversational responses and actionable instructions.

## Architecture

### Core Components

- **`index.ts`**: Main agent logic using `generateObject` for structured responses
- **`chat-helpers.ts`**: Conversation history management
- **`tutorial-helpers.ts`**: Tutorial content fetching and formatting
- **`tutorial.ts`**: Tutorial API integration

### Response Structure

The agent uses `generateObject` to return structured responses with two parts:

```typescript
{
  message: string,    // Conversational response for the user
  actionable?: {      // Optional action for the program to execute
    type: 'start_tutorial' | 'next_step' | 'previous_step' | 'get_tutorials' | 'none',
    tutorialId?: string,
    step?: number
  }
}
```
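This contract can also be checked at runtime before acting on a model reply. The sketch below is illustrative only: the repository presumably enforces the shape via the schema passed to `generateObject`, which this diff does not show, so the guard here is a hypothetical stand-in.

```typescript
// Action types the agent may return, mirroring the response structure above.
type ActionType =
  | "start_tutorial"
  | "next_step"
  | "previous_step"
  | "get_tutorials"
  | "none";

interface Actionable {
  type: ActionType;
  tutorialId?: string;
  step?: number;
}

interface AgentReply {
  message: string;
  actionable?: Actionable;
}

const ACTION_TYPES: ActionType[] = [
  "start_tutorial",
  "next_step",
  "previous_step",
  "get_tutorials",
  "none",
];

// Runtime check that an unknown value matches the AgentReply contract.
function isAgentReply(value: unknown): value is AgentReply {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.message !== "string") return false;
  if (v.actionable === undefined) return true; // actionable is optional
  if (typeof v.actionable !== "object" || v.actionable === null) return false;
  const a = v.actionable as Record<string, unknown>;
  if (!ACTION_TYPES.includes(a.type as ActionType)) return false;
  if (a.tutorialId !== undefined && typeof a.tutorialId !== "string") return false;
  if (a.step !== undefined && typeof a.step !== "number") return false;
  return true;
}
```

A guard like this rejects malformed replies (missing `message`, unknown `type`) before the action-execution step runs.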
### How It Works

1. **User Input**: Agent receives user message and conversation history
2. **LLM Processing**: OpenAI generates structured response with message and optional actionable object
3. **Action Execution**: Program intercepts actionable objects and executes them:
   - `get_tutorials`: Fetches available tutorial list
   - `start_tutorial`: Fetches real tutorial content from API
   - `next_step`/`previous_step`: Navigate through tutorial steps (TODO)
4. **Response**: Returns conversational message plus any additional data (tutorial content, tutorial list, etc.)
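The interception step above amounts to a dispatch over the actionable type. The following is a minimal sketch under stated assumptions: the `Fetchers` shape and its function names are hypothetical stand-ins for the real helpers in `tutorial-helpers.ts`, which are not reproduced in this diff.

```typescript
interface Actionable {
  type: "start_tutorial" | "next_step" | "previous_step" | "get_tutorials" | "none";
  tutorialId?: string;
  step?: number;
}

// Hypothetical fetchers standing in for the real tutorial-helpers functions.
type Fetchers = {
  getTutorials: () => Promise<unknown>;
  getTutorialStep: (tutorialId: string, step: number) => Promise<unknown>;
};

// Dispatch an actionable object to the matching handler; returns any extra
// data that should be attached to the final response.
async function executeActionable(
  action: Actionable | undefined,
  fetchers: Fetchers
): Promise<unknown> {
  if (!action || action.type === "none") {
    return undefined; // conversation-only turn, nothing to execute
  }
  switch (action.type) {
    case "get_tutorials":
      return fetchers.getTutorials();
    case "start_tutorial":
      if (!action.tutorialId) throw new Error("start_tutorial requires tutorialId");
      return fetchers.getTutorialStep(action.tutorialId, 1); // new tutorials start at step 1
    case "next_step":
    case "previous_step":
      // Step navigation is marked TODO in this README; no-op here.
      return undefined;
  }
}
```

Because the LLM only emits the small `actionable` object, adding a new action means adding one case here plus one entry in the response schema.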
## Key Features

- **Structured Responses**: Clean separation between conversation and actions
- **Real Tutorial Content**: No hallucinated content - all tutorial data comes from actual APIs
- **Context Awareness**: Maintains conversation history for natural references
- **Extensible Actions**: Easy to add new action types (next step, hints, etc.)
- **Debug Logging**: Comprehensive logging for troubleshooting

## Example Interactions

### Starting a Tutorial

**User**: "I want to learn the JavaScript SDK"

**LLM Response**:
```json
{
  "message": "I'd be happy to help you start the JavaScript SDK tutorial!",
  "actionable": {
    "type": "start_tutorial",
    "tutorialId": "javascript-sdk"
  }
}
```

**Final Response**:
```json
{
  "response": "I'd be happy to help you start the JavaScript SDK tutorial!",
  "tutorialData": {
    "type": "tutorial_step",
    "tutorialId": "javascript-sdk",
    "tutorialTitle": "JavaScript SDK Tutorial",
    "currentStep": 1,
    "stepContent": "Welcome to the JavaScript SDK tutorial...",
    "codeBlock": {...}
  },
  "conversationHistory": [...]
}
```

### General Conversation

**User**: "What's the difference between TypeScript and JavaScript?"

**LLM Response**:
```json
{
  "message": "TypeScript is a superset of JavaScript that adds static type checking...",
  "actionable": {
    "type": "none"
  }
}
```

## Benefits

- **Reliable**: No parsing or tool interception needed
- **Extensible**: Easy to add new action types
- **Clean**: Clear separation between conversation and actions
- **Debuggable**: Can see exactly what the LLM wants to do
- **No Hallucination**: Tutorial content comes from real APIs, not LLM generation
Lines changed: 54 additions & 0 deletions

@@ -0,0 +1,54 @@
```typescript
import type { AgentContext } from "@agentuity/sdk";

export async function buildSystemPrompt(tutorialContext: string, ctx: AgentContext): Promise<string> {
  try {
    const systemPrompt = `=== ROLE ===
You are Pulse, an AI assistant designed to help developers learn and navigate the Agentuity platform through interactive tutorials and clear guidance. Your primary goal is to assist users with understanding and using the Agentuity SDK effectively. When a user's query is vague, unclear, or lacks specific intent, subtly suggest a relevant interactive tutorial to guide them toward learning the platform. For clear, specific questions related to the Agentuity SDK or other topics, provide direct, accurate, and concise answers without mentioning tutorials unless relevant. Always maintain a friendly and approachable tone to encourage engagement.

Your role is to ensure users have a smooth tutorial experience!

When the user asks to move to the next tutorial step, simply increment the step for them.

=== PERSONALITY ===
- Friendly and encouraging with light humour
- Patient with learners at all levels
- Clear and concise in explanations
- Enthusiastic about teaching and problem-solving

=== AVAILABLE TOOLS OR FUNCTIONS ===
You have access to various tools -- use them when appropriate!
1. Tutorial management
   - startTutorialAtStep: Starts the user off at a specific step of a tutorial.
2. General assistance
   - askDocsAgentTool: Retrieves Agentuity documentation snippets.

=== TOOL-USAGE RULES (must follow) ===
- startTutorialAtStep must only be used when the user selects a tutorial. If the user starts a new tutorial, the step number should be set to one. A valid step is between 1 and totalSteps of the specific tutorial.
- Treat askDocsAgentTool as a search helper; ignore results you judge irrelevant.

=== RESPONSE STYLE (format guidelines) ===
- Begin with a short answer, then elaborate if necessary.
- Add brief comments to complex code; skip obvious lines.
- End with a question when further clarification could help the user.

=== SAFETY & BOUNDARIES ===
- If asked for private data or secrets, refuse.
- If the user requests actions outside your capabilities, apologise and explain.
- Keep every response < 400 words.

Generate a response to the user query accordingly and try to be helpful.

=== CONTEXT ===
${tutorialContext}

=== END OF PROMPT ===

Stream your reasoning steps clearly.`;

    ctx.logger.debug("Built system prompt with tutorial context");
    return systemPrompt;
  } catch (error) {
    ctx.logger.error("Failed to build system prompt: %s", error instanceof Error ? error.message : String(error));
    throw error; // Re-throw for centralized handling
  }
}
```
Lines changed: 143 additions & 0 deletions

@@ -0,0 +1,143 @@
```typescript
import type { AgentRequest, AgentResponse, AgentContext } from "@agentuity/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
import { createTools } from "./tools";
import { createAgentState } from "./state";
import { getTutorialList, type Tutorial } from "./tutorial";
import { parseAgentRequest } from "./request/parser";
import { buildSystemPrompt } from "./context/builder";
import { createStreamingProcessor } from "./streaming/processor";
import type { ConversationMessage, TutorialState } from "./request/types";

/**
 * Builds a context string containing available tutorials for the system prompt
 */
async function buildContext(
  ctx: AgentContext,
  tutorialState?: TutorialState
): Promise<string> {
  try {
    const tutorials = await getTutorialList(ctx);

    // Handle API failure early
    if (!tutorials.success || !tutorials.data) {
      ctx.logger.warn("Failed to load tutorial list");
      return defaultFallbackContext();
    }

    const tutorialContent = JSON.stringify(tutorials.data, null, 2);
    const currentTutorialInfo = buildCurrentTutorialInfo(
      tutorials.data,
      tutorialState
    );

    return `===AVAILABLE TUTORIALS====

${tutorialContent}

${currentTutorialInfo}

Note: You should not expose the details of the tutorial IDs to the user.
`;
  } catch (error) {
    ctx.logger.error("Error building tutorial context: %s", error);
    return defaultFallbackContext();
  }
}

/**
 * Builds current tutorial information string if user is in a tutorial
 */
function buildCurrentTutorialInfo(
  tutorials: Tutorial[],
  tutorialState?: TutorialState
): string {
  if (!tutorialState?.tutorialId) {
    return "";
  }

  const currentTutorial = tutorials.find(
    (t) => t.id === tutorialState.tutorialId
  );
  if (!currentTutorial) {
    return "\nWarning: User appears to be in an unknown tutorial.";
  }
  if (tutorialState.currentStep > currentTutorial.totalSteps) {
    return `\nUser has completed the tutorial: ${currentTutorial.title} (${currentTutorial.totalSteps} steps)`;
  }
  return `\nUser is currently on this tutorial: ${currentTutorial.title} (Step ${tutorialState.currentStep} of ${currentTutorial.totalSteps})`;
}

/**
 * Returns fallback context when tutorial list can't be loaded
 */
function defaultFallbackContext(): string {
  return `===AVAILABLE TUTORIALS====
Unable to load tutorial list. Please try again later or contact support.`;
}

export default async function Agent(
  req: AgentRequest,
  resp: AgentResponse,
  ctx: AgentContext
) {
  try {
    const parsedRequest = parseAgentRequest(await req.data.json(), ctx);

    // Create state manager
    const state = createAgentState();

    // Build messages for the conversation
    const messages: ConversationMessage[] = [
      ...parsedRequest.conversationHistory,
      { author: "USER", content: parsedRequest.message },
    ];

    let tools: any;
    let systemPrompt: string = "";
    // Direct LLM access won't require any tools or system prompt
    if (!parsedRequest.useDirectLLM) {
      // Create tools with state context
      tools = await createTools({
        state,
        agentContext: ctx,
      });

      // Build tutorial context and system prompt
      const tutorialContext = await buildContext(
        ctx,
        parsedRequest.tutorialData
      );
      systemPrompt = await buildSystemPrompt(tutorialContext, ctx);
    }

    // Generate streaming response
    const result = await streamText({
      model: openai("gpt-4o"),
      messages: messages.map((msg) => ({
        role: msg.author === "USER" ? "user" : "assistant",
        content: msg.content,
      })),
      tools,
      maxSteps: 3,
      system: systemPrompt,
    });

    // Create and return streaming response
    const stream = createStreamingProcessor(result, state, ctx);
    return resp.stream(stream, "text/event-stream");
  } catch (error) {
    ctx.logger.error(
      "Agent request failed: %s",
      error instanceof Error ? error.message : String(error)
    );
    return resp.json(
      {
        error:
          "Sorry, I encountered an error while processing your request. Please try again.",
        details: error instanceof Error ? error.message : String(error),
      },
      { status: 500 }
    );
  }
}
```
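The handler above streams its reply with the `text/event-stream` content type, so a caller must reassemble Server-Sent Events frames: events are separated by blank lines, and payloads arrive on `data:` lines. The sketch below shows only that generic SSE framing; the exact event payloads produced by `createStreamingProcessor` are not shown in this diff, so this is an assumption-laden illustration, not the project's client code.

```typescript
// Parse a text/event-stream buffer into the data payloads of its events.
// Events are separated by blank lines; each data line starts with "data:".
function parseSseEvents(buffer: string): string[] {
  const events: string[] = [];
  for (const frame of buffer.split(/\r?\n\r?\n/)) {
    const dataLines = frame
      .split(/\r?\n/)
      .filter((line) => line.startsWith("data:"))
      .map((line) => line.slice("data:".length).trimStart());
    if (dataLines.length > 0) {
      events.push(dataLines.join("\n")); // multi-line data joins with newlines per the SSE spec
    }
  }
  return events;
}
```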
Lines changed: 49 additions & 0 deletions

@@ -0,0 +1,49 @@
```typescript
import type { AgentContext } from "@agentuity/sdk";
import type { ParsedAgentRequest } from "./types";

export function parseAgentRequest(
  jsonData: any,
  ctx: AgentContext
): ParsedAgentRequest {
  try {
    let message: string = "";
    let conversationHistory: any[] = [];
    let tutorialData: any = undefined;
    let useDirectLLM = false;

    if (jsonData && typeof jsonData === "object" && !Array.isArray(jsonData)) {
      const body = jsonData as any;
      message = body.message || "";
      useDirectLLM = body.use_direct_llm || false;
      // Process conversation history
      if (Array.isArray(body.conversationHistory)) {
        conversationHistory = body.conversationHistory.map((msg: any) => {
          // Normalize to the ConversationMessage shape (author + content);
          // incoming entries may use either `author` or `role`.
          return {
            author: msg.author || (msg.role ? msg.role.toUpperCase() : "USER"),
            content: msg.content || "",
          };
        });
      }

      tutorialData = body.tutorialData || undefined;
    } else {
      // Fallback for non-object data
      message = String(jsonData || "");
    }

    return {
      message,
      conversationHistory,
      tutorialData,
      useDirectLLM,
    };
  } catch (error) {
    ctx.logger.error(
      "Failed to parse agent request: %s",
      error instanceof Error ? error.message : String(error)
    );
    ctx.logger.debug("Raw request data: %s", JSON.stringify(jsonData));
    throw error; // Re-throw for centralized handling
  }
}
```
Lines changed: 16 additions & 0 deletions

@@ -0,0 +1,16 @@
```typescript
export interface ConversationMessage {
  author: "USER" | "ASSISTANT";
  content: string;
}

export interface TutorialState {
  tutorialId: string;
  currentStep: number;
}

export interface ParsedAgentRequest {
  message: string;
  conversationHistory: ConversationMessage[];
  tutorialData?: TutorialState;
  useDirectLLM?: boolean;
}
```
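For reference, a request body that the parser would map into `ParsedAgentRequest` could look like the following (values are illustrative only; note the wire format uses snake_case `use_direct_llm` while the parsed field is `useDirectLLM`):

```json
{
  "message": "Take me to the next step",
  "conversationHistory": [
    { "author": "USER", "content": "I want to learn the JavaScript SDK" },
    { "author": "ASSISTANT", "content": "Great, starting the tutorial..." }
  ],
  "tutorialData": { "tutorialId": "javascript-sdk", "currentStep": 1 },
  "use_direct_llm": false
}
```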
