Most AI integrations are bolted on. A chat sidebar. A text improver. A copilot that doesn’t know where it is.
AppKask’s agent system is different. Agents are first-class workspace citizens — they understand folder types, read and write files, and create content that the platform’s smart folders deliver directly.
Every agent is a markdown file with YAML frontmatter:
```markdown
---
role: content-writer
model: claude-sonnet-4.5
tools: [workspace_read_file, workspace_write_file, workspace_list_files, workspace_search]
temperature: 0.7
folder_types: [static-site, course]
autonomy: supervised
---
# Content Writer

You create and edit content for websites and training programs.
Read existing files to understand tone and structure before writing...
```
Store agents in an agent-collection folder inside your workspace. They’re versioned alongside your work, shareable, and portable.
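Because an agent is just markdown plus frontmatter, loading one is a small parsing job. A minimal sketch of such a loader (the `parse_agent` function is hypothetical, and a real implementation would use a proper YAML parser rather than this flat key/value handling):

```python
import re

def parse_agent(markdown: str) -> tuple[dict, str]:
    """Split an agent file into (frontmatter dict, system prompt body).

    Sketch only: handles the flat `key: value` and `[a, b]` list forms
    shown in the example above.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", markdown, re.DOTALL)
    if not match:
        raise ValueError("missing YAML frontmatter")
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        value = value.strip()
        if value.startswith("[") and value.endswith("]"):
            meta[key.strip()] = [v.strip() for v in value[1:-1].split(",")]
        else:
            meta[key.strip()] = value
    return meta, match.group(2).strip()
```

The dict carries the routing metadata (role, tools, folder types); the remaining markdown body becomes the agent's system prompt.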
Agents interact with your workspace through a secure tool system. Every tool call is path-validated against the workspace root: no traversal outside the boundary, and write operations require explicit scope.
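The boundary check described above can be sketched in a few lines (the `/srv/workspace` root and `validate_path` helper are illustrative, not the platform's actual code): resolve symlinks and `..` segments first, then require the result to stay under the workspace root.

```python
from pathlib import Path

WORKSPACE_ROOT = Path("/srv/workspace").resolve()  # hypothetical root

def validate_path(requested: str) -> Path:
    """Resolve a tool-call path and reject anything outside the workspace."""
    resolved = (WORKSPACE_ROOT / requested).resolve()
    if not resolved.is_relative_to(WORKSPACE_ROOT):
        raise PermissionError(f"path escapes workspace: {requested}")
    return resolved
```

Resolving before checking is the important order: a prefix check on the raw string would wave through `../../etc/passwd`.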
The magic is in the two-way matching. Folder types declare the agent roles they expect: a static-site folder expects a content-writer and an seo-optimizer. Agents declare the folder types they serve: static-site and course in the frontmatter above. When you open a folder, only compatible agents appear. Context is automatic: the agent receives the folder type description, any ai-instructions.md you've written, and key files from the folder.
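The two-way match amounts to an intersection of the two declarations. A sketch, with a hypothetical in-memory registry (the `data-analyst` agent and `dataset` folder type are made up to show a non-match):

```python
# Folder types declare the roles they expect; agents declare the
# folder types they serve (taken from their frontmatter).
FOLDER_TYPES = {
    "static-site": {"expects": ["content-writer", "seo-optimizer"]},
    "course": {"expects": ["content-writer"]},
}

AGENTS = [
    {"role": "content-writer", "folder_types": ["static-site", "course"]},
    {"role": "seo-optimizer", "folder_types": ["static-site"]},
    {"role": "data-analyst", "folder_types": ["dataset"]},
]

def compatible_agents(folder_type: str) -> list[dict]:
    """Both sides must agree: the folder expects the agent's role AND
    the agent lists the folder type."""
    expected = FOLDER_TYPES.get(folder_type, {}).get("expects", [])
    return [a for a in AGENTS
            if a["role"] in expected and folder_type in a["folder_types"]]
```

Opening a static-site folder surfaces both writers; a course folder surfaces only the content-writer, because the seo-optimizer never declared course in its frontmatter.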
Bring your own API key: Anthropic (Claude), OpenAI, or any OpenAI-compatible endpoint including Ollama for local models. Keys are AES-256-GCM encrypted on your server. No data routed through the platform. No platform-level AI fee.
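At-rest key encryption of the kind mentioned could look like the following sketch, using the third-party `cryptography` package's AESGCM primitive (the function names, the 32-byte server master key, and the nonce-prefix framing are assumptions for illustration, not the platform's actual scheme):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(master_key: bytes, api_key: str) -> bytes:
    """Encrypt a user's provider key with AES-256-GCM.

    master_key is a 32-byte server secret; a fresh 96-bit nonce is
    generated per encryption and prepended to the ciphertext.
    """
    nonce = os.urandom(12)
    return nonce + AESGCM(master_key).encrypt(nonce, api_key.encode(), None)

def decrypt_api_key(master_key: bytes, blob: bytes) -> str:
    """Split off the nonce and decrypt; GCM authentication fails loudly
    if the blob or key is wrong."""
    return AESGCM(master_key).decrypt(blob[:12], blob[12:], None).decode()
```

GCM's built-in authentication means a tampered ciphertext raises rather than silently decrypting to garbage.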
For air-gapped deployments, run Ollama locally. The agent system works identically — same tools, same streaming, same autonomy controls.
The pattern:
AI creates the content. The platform delivers it. Your audience owns the environment.
This is the cascade — applied, delivered, and now agent-powered.