Shiny LLM Tools

Published

2026-05-11

The applications in Chapters 23 (golem), 24 (leprechaun), and 25 (rhino) can be accessed with the launch() or get() functions from the shinypak R package:

# install.packages('pak')
pak::pak('mjfrigaard/shinypak')
library(shinypak)

List each chapter's applications by matching the chapter number:

list_apps(regex = '^23')
list_apps(regex = '^24')
list_apps(regex = '^25')

The rhino chapter also includes three examples of CI/CD workflows.

Since I began writing this book[1], the number of AI tools for building Shiny apps has grown significantly. The chapters in this section introduce a few popular tools I’ve personally used to develop applications in Positron and RStudio.

Given the rapidly evolving landscape and nature of these tools, I expect these chapters to change frequently. Please open a GitHub issue if there is anything outdated, incorrect, or missing.

LLM concepts

Below I’ll cover some LLM concepts that might be good to know before diving into the following chapters. These descriptions come from a variety of resources (which I’ve also included).

Context Windows

A context window is the amount of information that an LLM can “see” at one time.[2] This includes the conversation history, any documents we’ve shared, tool outputs, and the model’s own responses.

%%{init: {'theme': 'neutral', 'themeVariables': { 'fontFamily': 'monospace', "fontSize":"14px"}}}%%

flowchart LR
    ContextWindow(["📏 Context Window Size"])
    
    Cost["💰 Cost"]
    Latency["⏱️ Latency"]
    Reasoning["🧠 Reasoning Quality"]
    Recall["🔍 Recall Accuracy"]
    BestPractices["✅ Best Practices"]
    
    CostDetail["↑ More tokens = more $<br/>Billed per token"]
    LatencyDetail["↑ Slower to respond<br/>More to process"]
    ReasoningDetail["↓ Lost in the middle<br/>Harder to attend to details"]
    RecallDetail["↓ Misses buried details<br/>Degrades with length"]
    PracticesDetail["Keep context lean<br/>Use skills and subagents<br/>Retrieve only what is needed"]
    
    ContextWindow --> Cost --> CostDetail
    ContextWindow --> Latency --> LatencyDetail
    ContextWindow --> Reasoning --> ReasoningDetail
    ContextWindow --> Recall --> RecallDetail
    ContextWindow --> BestPractices --> PracticesDetail
    
    classDef root fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
    classDef negative fill:#ffebee,stroke:#c62828,stroke-width:2px,color:#000
    classDef positive fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,color:#000
    classDef detail fill:#fafafa,stroke:#9e9e9e,stroke-width:1px,color:#000
    
    class ContextWindow root
    class Cost,Latency,Reasoning,Recall negative
    class BestPractices positive
    class CostDetail,LatencyDetail,ReasoningDetail,RecallDetail,PracticesDetail detail
    

Context windows

Larger context windows let the model reason over more material at once, but not for free: larger windows cost more, tend to run slower, and the model often struggles to attend carefully to information buried in the middle of the window.[3]

Resources:
* Anthropic: Long context tips
* “Lost in the Middle” — how models struggle with long contexts
* OpenAI: Tokens explained
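A rough rule of thumb for English text is about four characters per token. This is only an approximation, not a real tokenizer, but it can help you sanity-check how much of a context window a prompt will consume:

```r
# Approximate token count: ~4 characters per token is a common rule of
# thumb for English text (an estimate only -- real tokenizers vary).
approx_tokens <- function(text) {
  ceiling(nchar(text) / 4)
}

approx_tokens("What were average sales this quarter?")
#> 10
```

For exact counts, use the tokenizer that matches your specific model and provider.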

Agents

An agent is an LLM that has been given the ability to take actions on its own behalf rather than just responding with text. A chatbot’s workflow, by contrast, is a straight line: question in, answer out.

%%{init: {'theme': 'neutral', 'themeVariables': { 'fontFamily': 'monospace', "fontSize":"14px"}}}%%

flowchart TD
    subgraph Chatbot[" Chatbot"]
        direction LR
        UserCB(["User asks<br>question"]) --> LlmCB("LLM generates<br>response")
        LlmCB --> ReplyCB[Text reply]
        ReplyCB --> UserCB
    end

    style Chatbot stroke-width:3px,stroke:#1976d2
    style UserCB fill:#e3f2fd,stroke:#1976d2,stroke-width:1px,color:#000
    style ReplyCB fill:#e8f5e9,stroke:#2e7d32,stroke-width:1px,color:#000
    

Chatbots

An agentic workflow, on the other hand, contains a loop: take actions, see what happened, and adjust course before responding. This loop is what lets an agent accomplish multi-step tasks autonomously.

%%{init: {'theme': 'neutral', 'themeVariables': { 'fontFamily': 'monospace', "fontSize":"14px"}}}%%

flowchart TD
    subgraph Agent["Agent"]
        direction LR
        UserAg(["User assigns<br>task"]) --> LlmAg("LLM plans<br>next step")
        LlmAg --> Decide{Task<br/>complete?}
        Decide -.->|No| ToolCall("<em>Call tool <br>(run code,<br> query DB,<br> read file)</em>")
        ToolCall -.-> Observe(Observe<br>result)
        Observe -.-> LlmAg
        Decide -->|Yes| FinalAg[Final<br>answer]
        FinalAg --> UserAg
    end

    style Agent stroke-width:3px,stroke:#7b1fa2
    
    style UserAg fill:#e3f2fd,stroke:#1976d2,stroke-width:1px,color:#000
    style FinalAg fill:#e8f5e9,stroke:#2e7d32,stroke-width:1px,color:#000
    
    style ToolCall fill:#fff3e0,stroke:#f57c00
    style Decide fill:#fff9c4,stroke:#f9a825

Agents

Instead of simply answering the question, the agent can run commands, read the outputs, decide what to do next, and keep iterating until a task is complete.

Resources:
* Anthropic: Building effective agents
* ellmer: Streaming and async
* Hugging Face: Agents course
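The loop in the diagram above can be sketched in plain R with a stand-in for the model’s decision step. No real LLM is involved here; the decide() function and its hard-coded task are invented purely for illustration:

```r
# Toy agent loop: the "model" (decide) either requests a tool call or
# declares the task complete; the host executes tools and feeds results back.
decide <- function(history) {
  if (is.null(history$mean)) {
    list(done = FALSE, tool = "mean", args = list(x = c(10, 20, 30)))
  } else {
    list(done = TRUE, answer = paste("Average sales:", history$mean))
  }
}

run_agent <- function(max_steps = 5) {
  history <- list()
  for (step in seq_len(max_steps)) {
    action <- decide(history)
    if (action$done) return(action$answer)
    # the host executes the tool; the model only requested it
    history[[action$tool]] <- do.call(action$tool, action$args)
  }
  "Stopped: step limit reached"
}

run_agent()
#> "Average sales: 20"
```

The max_steps guard matters in real agents too: without a step limit, a confused model can loop indefinitely.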

Tools

Tools are functions that an LLM can call to interact with the outside world: running code, querying databases, reading files, or hitting web APIs. Models don’t execute tools themselves; they request tool calls, the host application executes the tool, and the results are fed back into the conversation.

%%{init: {'theme': 'neutral', 'themeVariables': { 'fontFamily': 'monospace', "fontSize":"14px"}}}%%

sequenceDiagram
    actor Usr as User
    participant LLM as LLM
    participant Host as Host App
    participant Tool as Tool

    Usr->>LLM: "What were average sales this quarter?"

    LLM->>LLM: Reason — tool needed

    loop Until task is complete
        LLM->>Host: Tool call request + parameters
        Note right of LLM: LLM can't run<br/>tools directly
        Host->>Tool: Execute tool
        Note right of Host: e.g. run code,<br/>query a DB,<br/>call an API
        Tool-->>Host: Raw result
        Host-->>LLM: Tool result injected into context
        LLM->>LLM: Reason about result —<br/>another tool needed?
    end

    LLM->>Usr: Final response
    

Tools

Tools are what transform chatbots into something that can actually do things, and they’re the foundation that makes agents possible.

Resources:
* ellmer: Tool/function calling
* btw: Give LLMs context about your R session
* Anthropic: Tool use
* Model Context Protocol (MCP)
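The request/execute split in the sequence diagram can be simulated in a few lines of R. No real LLM is involved, and the tool name and request shape are invented; the point is that the model’s tool call is just data, which the host dispatches from a whitelist:

```r
# The host's tool whitelist: plain R functions the model may request.
tools <- list(
  avg_sales = function(amounts) mean(amounts)
)

# What a model emits instead of running code itself: a structured request.
request <- list(tool = "avg_sales", args = list(amounts = c(120, 95, 143)))

handle_tool_call <- function(request, tools) {
  fn <- tools[[request$tool]]
  if (is.null(fn)) stop("Unknown tool: ", request$tool)
  do.call(fn, request$args)  # the host executes; the result returns to context
}

handle_tool_call(request, tools)
```

Because the host only runs functions it registered, the model can never invoke arbitrary code, which is the main safety property of tool calling.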

Skills

Skills are reusable, packaged instructions that teach an LLM how to perform a specific task well. Rather than re-explaining your preferences and workflows every time, you bundle them into a skill that the model can load on demand.

%%{init: {'theme': 'neutral', 'themeVariables': { 'fontFamily': 'monospace', "fontSize":"14px"}}}%%

sequenceDiagram
    actor Usr as User
    participant LLM as LLM
    participant SkillLib as Skill Library
    participant Ctx as Context Window

    Usr->>LLM: "Review my R code"

    LLM->>SkillLib: Identify relevant skill
    Note right of SkillLib: Skills stored<br/>as reusable<br/>instruction<br/>sets

    SkillLib-->>Ctx: Load skill instructions on demand
    Note right of Ctx: Keeps context<br/>lean — only<br/>loaded when<br/>needed

    Ctx-->>LLM: Skill instructions now available
    Note right of LLM: No external<br/>execution — <br/>shapes<br/>reasoning only

    LLM->>LLM: Apply skill guidance to task

    LLM->>Usr: Response shaped by skill

Skills

Skills keep the main context window clean while still letting the model draw on detailed, task-specific guidance when it’s relevant.

Resources:
* Anthropic: Agent Skills
* chores: A library of LLM helpers
* Simon Couch: On the chores package
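At its core, on-demand loading just means keeping instructions out of the prompt until a task needs them. A minimal sketch, where the skill name and its instruction text are invented for illustration:

```r
# A "library" of skills: named instruction sets kept outside the prompt.
skills <- list(
  review_r = "When reviewing R code, check style, naming, and test coverage."
)

build_prompt <- function(task, skill_name, skills) {
  # Prepend only the relevant skill, keeping the base context lean.
  paste(skills[[skill_name]], task, sep = "\n\n")
}

cat(build_prompt("Review my R code", "review_r", skills))
```

Real skill systems add a matching step (deciding which skill fits the task), but the payoff is the same: the other skills never consume context.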

Subagents

A subagent is a separate agent spawned by a parent agent to handle a focused task, with its own fresh context window. The parent agent delegates the task, and the subagent does the heavy lifting in isolation, returning only the summarized result.

%%{init: {'theme': 'neutral', 'themeVariables': { 'fontFamily': 'monospace', "fontSize":"14px"}}}%%

sequenceDiagram
    actor Usr as User
    participant PAgent as Parent Agent
    participant SAgent as Subagent
    participant Tool as Tool

    Usr->>PAgent: "Analyze Q1–Q4 sales reports"
    PAgent->>PAgent: Identify delegatable subtask

    PAgent->>SAgent: "Summarize all four reports"
    Note right of SAgent: Fresh isolated<br/>context window
    SAgent->>Tool: Call tool — read files
    Tool-->>SAgent: File contents
    SAgent->>SAgent: Analyze & summarize
    SAgent-->>PAgent: Combined summary
    Note left of PAgent: Only summary<br/>returns —<br/>parent context<br/>stays lean

    PAgent->>PAgent: Reason over summary
    PAgent->>Usr: Final response
    

Subagents

This pattern keeps the parent’s context from getting cluttered with intermediate details and lets complex jobs be broken into parallel, manageable chunks.

Resources:
* Anthropic: How we built our multi-agent research system
* Claude Code: Subagents documentation
* Subagents: When and How to Use Them
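The key property is that only the summary crosses back into the parent’s context. A toy sketch, with invented reports and a stand-in summarizer (a real subagent would run its own tool-calling loop):

```r
# Stand-in for a subagent: does the detailed work in isolation and
# returns only a short summary to the parent.
summarize_report <- function(report) {
  sprintf("%s: total %d", report$name, sum(report$sales))
}

reports <- list(
  list(name = "Q1", sales = c(10, 20)),
  list(name = "Q2", sales = c(30, 40))
)

# The parent keeps only these one-line summaries, not the raw data.
vapply(reports, summarize_report, character(1))
#> "Q1: total 30" "Q2: total 70"
```

Because each summarize_report() call is independent, the subtasks could also run in parallel, which is how multi-agent systems speed up large jobs.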

More Resources

I also highly recommend checking out the resources in the callout boxes below:

Shiny Assistant

The Shiny Assistant is a browser-based LLM chat tool you can use to help build a Shiny app. The UI lets you submit prompts (questions or instructions), view the generated code, and launch the application.

ellmer

The ellmer package allows users to,

Chat with large language models from a range of providers including Claude (https://claude.ai), OpenAI (https://chatgpt.com), and more. Supports streaming, asynchronous calls, tool calling, and structured data extraction.

This chapter starts with setting up the ellmer package.

To demonstrate using ellmer chats during development, this chapter uses an application from the shiny-examples repository.
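A minimal ellmer chat looks something like the following. This is a sketch, not output from the chapter: it assumes you have a provider API key set (e.g. ANTHROPIC_API_KEY), and default providers and models may differ across ellmer versions.

```r
library(ellmer)

# Assumes ANTHROPIC_API_KEY is set in the environment; other supported
# providers (e.g. chat_openai()) work the same way.
chat <- chat_anthropic()
chat$chat("In two sentences, what does reactivity mean in Shiny?")
```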

chores

The chores package was designed to,

help you complete repetitive, hard-to-automate tasks quickly.

The first portion of this chapter covers the updates to the movie review application. I also cover how to write extension packages with custom helpers (prompts) that can be used with the addin.

gander

The gander package is,

a higher-performance and lower-friction chat experience for data scientists in RStudio and Positron–sort of like completions with Copilot, but it knows how to talk to the objects in your R environment.

This chapter starts with configuring gander and a simple example.

The following sections cover the structure of gander prompts and chats.

I extend the use of the gander addin to help make adjustments to the plotly and ggplot2 visualizations, and to create a downloadable R Markdown report module.

btw

The btw package,

helps you describe your computational environment to LLMs.


  1. I put the first ‘complete’ edition online in late 2023.

  2. Measured in tokens, which are roughly equivalent to word fragments.

  3. Once the window fills up, content in the middle gets pushed out and effectively forgotten, similar to the serial-position effect.