# BEAM Live Introspection for AI Coding Agents

**TL;DR** — Give your AI coding agent (GitHub Copilot, Claude Code, OpenAI Codex, Gemini CLI) a reusable "skill" that lets it start, connect to, and introspect a running Elixir/BEAM node. Instead of guessing at runtime behavior or writing throwaway test scripts, the agent can query GenServer state, inspect supervision trees, poke ETS tables, and hot-reload code — all through a single shell script.

This article is self-contained: point your coding agent at it and say *"Adopt this pattern for my project."*

***

## Table of Contents

1. [The Problem](#the-problem)
2. [The Pattern](#the-pattern)
3. [Step 1: Enable Your Project for Introspection](#step-1-enable-your-project-for-introspection)
4. [Step 2: Add the `dev_node.sh` Script](#step-2-add-the-dev_nodesh-script)
5. [Step 3: Create the Skill Definition](#step-3-create-the-skill-definition)
6. [Step 4: Register the Skill with Your Agent](#step-4-register-the-skill-with-your-agent)
7. [Usage Examples](#usage-examples)
8. [Reference: Agent Configuration Paths](#reference-agent-configuration-paths)
9. [How It Works Under the Hood](#how-it-works-under-the-hood)

***

## The Problem

When an AI coding agent works on an Elixir project, it typically has two options for validating its changes: run the tests (`mix test`) or reason about the code statically. Neither lets it *observe a live system* — check whether a GenServer has the right state, whether a supervision tree recovered from a crash, or whether a message actually arrived.

The BEAM VM has world-class introspection built in. Every Erlang/Elixir node can be connected to from another node using distributed Erlang. The trick is teaching the coding agent how to use this.

### Why this matters: "The Soul of Erlang and Elixir"

In his talk [*"The Soul of Erlang and Elixir"*](https://www.youtube.com/watch?v=JvBT4XBdoUE), Saša Jurić demonstrates exactly this capability against a live system. He SSHs into a running server, opens a remote console, and without restarting anything, drills into the problem:

> *"BEAM is a runtime which is highly debuggable, introspectable, observable if you will. BEAM allows us to hook into the running system and peek and poke inside it and get a lot of useful information — and I don't need to set some special flags, restart the system and whatnot. I can do this by default."* — Saša Jurić, [20:43](https://www.youtube.com/watch?v=JvBT4XBdoUE&t=1243)

From the remote shell, he lists all processes, identifies the CPU-hogging one by its reduction count, gets its stack trace, traces its function calls, kills it with `Process.exit(pid, :kill)` — and the rest of the system keeps running at 10K requests/second, undisturbed. Then he hot-deploys a fix into the running production node without a restart.

> *"I was able to approach the system and look from inside it to figure out what the problems are, quickly fix those problems, and deploy into production without disturbing anything in the system itself. This is what I want from my tool."* — Saša Jurić, [29:27](https://www.youtube.com/watch?v=JvBT4XBdoUE&t=1767)

This is exactly what we're giving to AI coding agents: the same remote-shell-into-a-live-system capability that Saša demonstrates manually, but wrapped in a scriptable interface (`dev_node.sh rpc`) that a coding agent can invoke without needing an interactive TTY. The agent becomes the operator, SSHing into the running BEAM.

## The Pattern

Three pieces work together:

1. **The project launches with a known node name and cookie** — so the agent's helper script can connect.
2. **A `dev_node.sh` script** in the project provides `start`, `stop`, `status`, `await`, `rpc`, and `eval_file` commands.
3. **A skill definition** tells the coding agent *when* and *how* to use live introspection.

![BEAM Live Introspection Pattern](https://439978545-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FDiEVTiIb6z0zL45wfNrM%2Fuploads%2Fgit-blob-a28c53c6aa8e90e446119687966ca13b04ab07e7%2Fbeam-introspection.svg?alt=media)

The RPC node is started as a **hidden node** (`--hidden` flag). In distributed Erlang, hidden nodes do not participate in the global cluster mesh — they don't trigger transitive connections, don't appear in `nodes()`, and are invisible to `:global` process registration. This is exactly what we want: the introspection node should observe the system without joining it as a peer or causing the cluster to attempt scheduling work on it.

***

## Step 1: Enable Your Project for Introspection

Your application must start with a **short name** (`--sname`) and a **cookie** (`--cookie`). The simplest approach is a `run` script in the project root:

### Create `run`

```bash
#!/bin/bash
cd "$(dirname "$0")" || exit 1
SNAME="$(basename "$(pwd)")"
export ELIXIR_ERL_OPTIONS="-sname $SNAME -setcookie devcookie"
exec mix phx.server > run.log 2>&1
```

```bash
chmod +x run
```

> **How `--sname` works**: `--sname my_app` registers the node as `my_app@<hostname>`. The (secret) cookie must match on both sides for distributed Erlang to connect. Using the project directory name as the `sname` is a convention that the `dev_node.sh` script mirrors — so everything just works without configuration.
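
As a sanity check, the naming convention both scripts rely on can be traced in plain shell (the `/home/dev/my_app` path below is a made-up example):

```bash
# Both `run` and `dev_node.sh` derive the sname from the project directory name.
PROJECT_DIR="/home/dev/my_app"      # hypothetical project path
SNAME="$(basename "$PROJECT_DIR")"
echo "$SNAME"                       # prints: my_app
# The node then registers as my_app@<hostname> with epmd.
```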

### For non-Phoenix projects

If you don't use Phoenix, replace `mix phx.server` with `mix run --no-halt`:

```bash
export ELIXIR_ERL_OPTIONS="-sname $SNAME -setcookie devcookie"
exec mix run --no-halt > run.log 2>&1
```

### For production / releases

When running a Mix release, set the node name and cookie via the release environment variables in `rel/env.sh.eex`:

```bash
export RELEASE_NODE="my_app"
export RELEASE_COOKIE="devcookie"
export RELEASE_DISTRIBUTION="sname"
```

> ⚠️ Use a stronger cookie in production. `devcookie` is for local development only. The Erlang cookie is security-critical: it is often the only thing standing between an attacker and full access to your cluster.
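
A quick way to mint a strong cookie (a sketch; it assumes `openssl` is installed, but any high-entropy source works):

```bash
# Generate a 64-character hex cookie suitable for RELEASE_COOKIE.
COOKIE="$(openssl rand -hex 32)"
echo "RELEASE_COOKIE=$COOKIE"
```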

***

## Step 2: Add the `dev_node.sh` Script

Create `scripts/dev_node.sh` in your project. This is the single entry point for all BEAM introspection:

```bash
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
PROJECT_DIR="$(dirname "$SCRIPT_DIR")"
APP_NAME="${DEV_NODE_NAME:-$(basename "$PROJECT_DIR")}"
COOKIE="${DEV_NODE_COOKIE:-devcookie}"
HOSTNAME="$(hostname -s)"
FQDN="${APP_NAME}@${HOSTNAME}"
PIDFILE=".dev_node.pid"

# Operate from the project root so the pidfile, log, and relative paths resolve predictably.
cd "$PROJECT_DIR" || exit 1

case "${1:-help}" in
  start)
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
      echo "Node already running (pid $(cat "$PIDFILE"))"
      exit 0
    fi
    echo "Starting node ${FQDN} ..."
    export ELIXIR_ERL_OPTIONS="-sname $APP_NAME -setcookie $COOKIE"
    mix run --no-halt > .dev_node.log 2>&1 &
    echo $! > "$PIDFILE"
    for i in $(seq 1 30); do
      if elixir --sname "probe_$$" --cookie "$COOKIE" --hidden -e "
        if Node.connect(:\"${FQDN}\") == true, do: System.halt(0), else: System.halt(1)
      " 2>/dev/null; then
        echo "Node ${FQDN} is up (pid $(cat "$PIDFILE"))"
        exit 0
      fi
      sleep 1
    done
    echo "ERROR: Node did not become reachable within 30s. Check .dev_node.log"
    exit 1
    ;;

  stop)
    if [ -f "$PIDFILE" ]; then
      kill "$(cat "$PIDFILE")" 2>/dev/null && echo "Node stopped" || echo "Node was not running"
      rm -f "$PIDFILE"
    else
      echo "No pidfile found"
    fi
    ;;

  status)
    if epmd -names 2>/dev/null | grep -q "name ${APP_NAME} "; then
      echo "Node ${FQDN} is running"
      exit 0
    else
      echo "Node ${FQDN} is not running"
      exit 1
    fi
    ;;

  await)
    TIMEOUT="${2:-30}"
    echo "Waiting for node ${FQDN} ..."
    for i in $(seq 1 "$TIMEOUT"); do
      if elixir --sname "probe_$$" --cookie "$COOKIE" --hidden -e "
        if Node.connect(:\"${FQDN}\") == true, do: System.halt(0), else: System.halt(1)
      " 2>/dev/null; then
        echo "Node ${FQDN} is reachable"
        exit 0
      fi
      sleep 1
    done
    echo "ERROR: Node ${FQDN} did not become reachable within ${TIMEOUT}s"
    exit 1
    ;;

  rpc)
    shift
    EXPR="$*"
    elixir --sname "rpc_$$" --cookie "$COOKIE" --hidden --no-halt -e "
      target = :\"${FQDN}\"
      true = Node.connect(target)
      {result, _binding} = :rpc.call(target, Code, :eval_string, [\"\"\"
        ${EXPR}
      \"\"\"])
      IO.inspect(result, pretty: true, limit: 200, printable_limit: 4096)
      System.stop(0)
    "
    ;;

  eval_file)
    shift
    FILE="$1"
    elixir --sname "rpc_$$" --cookie "$COOKIE" --hidden --no-halt -e "
      target = :\"${FQDN}\"
      true = Node.connect(target)
      code = File.read!(\"${FILE}\")
      {result, _binding} = :rpc.call(target, Code, :eval_string, [code])
      IO.inspect(result, pretty: true, limit: 200, printable_limit: 4096)
      System.stop(0)
    "
    ;;

  help|*)
    echo "Usage: scripts/dev_node.sh {start|stop|status|await [timeout]|rpc <expr>|eval_file <path>}"
    echo ""
    echo "Commands:"
    echo "  start          - Start a standalone BEAM node"
    echo "  stop           - Kill the node process"
    echo "  status         - Check if node is registered with epmd (exit 0/1)"
    echo "  await [secs]   - Wait for node to be connectable via distributed Erlang (default: 30s)"
    echo "  rpc <expr>     - Execute an Elixir expression on the remote node"
    echo "  eval_file <f>  - Evaluate a file on the remote node"
    echo ""
    echo "Environment variables:"
    echo "  DEV_NODE_NAME  - sname for the node (default: project directory name)"
    echo "  DEV_NODE_COOKIE - cluster cookie (default: devcookie)"
    ;;
esac
```

```bash
mkdir -p scripts
chmod +x scripts/dev_node.sh
```

### How `dev_node.sh` works

| Command            | What it does                                                                                                                                                              |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `start`            | Launches `mix run --no-halt` as a background BEAM node, waits until it's connectable                                                                                      |
| `stop`             | Kills the background node via its PID file                                                                                                                                |
| `status`           | Checks if the node is registered with `epmd` — works regardless of how the node was started. Exits 0 (running) or 1 (not running)                                         |
| `await [secs]`     | Waits for the node to become connectable via distributed Erlang (RPC probe). Useful when the node was started externally (e.g. via the `run` script). Default timeout: 30s |
| `rpc <expr>`       | Spawns a *short-lived hidden* BEAM node, connects to the app node, evaluates `<expr>` via `:rpc.call`, prints the result, and exits                                       |
| `eval_file <path>` | Same as `rpc`, but reads the expression from a `.exs` file — useful for complex multi-line introspection                                                                  |
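
The `status` check leans on `epmd`'s plain-text name listing. The sketch below hardcodes output in the real `epmd -names` format, so it shows what the grep matches without needing a live node:

```bash
# epmd -names prints one "name <sname> at port <n>" line per registered node.
EPMD_OUTPUT='epmd: up and running on port 4369 with data:
name my_app at port 50123'
APP_NAME="my_app"
if echo "$EPMD_OUTPUT" | grep -q "name ${APP_NAME} "; then
  echo "running"     # prints: running
fi
```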

> **Key design decision**: Each `rpc` call is stateless. A fresh hidden BEAM node connects, runs one expression, and exits. This avoids stale connections but means bindings don't carry across calls. The `--hidden` flag ensures the RPC node doesn't join the cluster as a peer — it won't appear in `nodes()`, won't trigger transitive connections to other cluster members, and the BEAM scheduler won't try to distribute work to it.
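
Because bindings don't carry across calls, multi-step checks belong in a single expression or a single file. A sketch (`MyApp.Worker` is a hypothetical module name):

```bash
# Bundle a whereis + get_state sequence into one .exs file for eval_file.
cat > /tmp/check_worker.exs <<'EOF'
pid = GenServer.whereis(MyApp.Worker)
if pid, do: {:running, pid, :sys.get_state(pid)}, else: :not_running
EOF
# Then: scripts/dev_node.sh eval_file /tmp/check_worker.exs
```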

### Add a `shutdown.sh` for graceful stop

```bash
#!/usr/bin/env bash
"$(cd "$(dirname "$0")" && pwd)/dev_node.sh" rpc "System.stop()"
```

This tells the running node to shut down through the BEAM's own `System.stop()`, which triggers application shutdown callbacks.
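
To confirm the node is actually gone afterwards, a complementary sketch polls `epmd` until the name disappears (assumes `epmd` is on `PATH`; it reports "down" immediately if no such node is registered):

```bash
APP_NAME="my_app"   # hypothetical; dev_node.sh derives this automatically
DOWN=""
for i in 1 2 3 4 5; do
  # Stop polling once the name is no longer registered with epmd.
  if ! epmd -names 2>/dev/null | grep -q "name ${APP_NAME} "; then
    DOWN="yes"
    break
  fi
  sleep 1
done
echo "${DOWN:-timeout}"
```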

***

## Step 3: Create the Skill Definition

A "skill" is a markdown file (`SKILL.md`) with a YAML front-matter header and instructions for the coding agent. It lives alongside its helper scripts.

Create a directory structure:

```
beam-introspection/
├── SKILL.md
└── scripts/
    └── dev_node.sh    (symlink or copy)
```
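
Creating the skeleton takes a couple of commands. Sketched against a temp directory here; in practice the parent path comes from Step 4:

```bash
# Lay out the skill directory; TMPDIR stands in for the agent's skills dir.
SKILL_DIR="${TMPDIR:-/tmp}/skills-demo/beam-introspection"
mkdir -p "$SKILL_DIR/scripts"
touch "$SKILL_DIR/SKILL.md"
ls "$SKILL_DIR"
```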

### `SKILL.md`

````markdown
---
name: beam-introspection
description: >
  Start, connect to, and introspect a running BEAM/Elixir node.
  Use when asked to test, debug, validate, or observe runtime behavior
  of an Elixir application, inspect GenServer state, supervision trees,
  ETS tables, process mailboxes, or hot-reload code into a live system.
  Use instead of writing one-off test scripts.
---

# BEAM Live Introspection Skill

## Purpose

This skill enables you to start, connect to, introspect, and control a running BEAM/Elixir node instead of writing one-off scripts. Use it whenever you need to validate behavior, debug state, or test functionality against a live system.

## When to Use This Skill

Use live introspection instead of writing standalone scripts when:

- You need to observe GenServer state, supervision trees, or process behavior
- You want to test a sequence of interactions against a running system
- You need to debug why something isn't working by inspecting the live process tree
- You want to validate that a code change works by hot-reloading into a running node

Do NOT use this skill when:

- You just need to run unit or integration tests (`mix test`)
- You need to compile-check code (`mix compile --warnings-as-errors`)
- The task is pure code generation with no runtime validation needed

## Setup

Before first use, ensure the project has `scripts/dev_node.sh`. If it does not exist, create it from the template in this skill's `scripts/` directory, then `chmod +x scripts/dev_node.sh`.

## Configuration

The script auto-detects the project name from the directory. Override with environment variables:

```bash
export DEV_NODE_NAME=my_app        # sname for the node
export DEV_NODE_COOKIE=devcookie   # cluster cookie
```

## Workflow

### Step 1: Start the node

```bash
scripts/dev_node.sh start
```

Wait for the "is up" confirmation before proceeding.

### Step 2: Introspect via RPC

```bash
# Check supervision tree
scripts/dev_node.sh rpc "Supervisor.which_children(MyApp.Supervisor)"

# Get GenServer state
scripts/dev_node.sh rpc ":sys.get_state(GenServer.whereis(MyApp.SomeServer))"

# Count processes
scripts/dev_node.sh rpc "length(Process.list())"

# Inspect an ETS table
scripts/dev_node.sh rpc ":ets.tab2list(:my_table) |> Enum.take(5)"

# Call application functions directly
scripts/dev_node.sh rpc "MyApp.some_function(\"arg\")"
```

### Step 3: For complex introspection, use eval_file

Write a `.exs` file and evaluate it on the live node:

```bash
scripts/dev_node.sh eval_file scripts/check_state.exs
```

### Step 4: Hot-reload code changes

After modifying source code:

```bash
mix compile
scripts/dev_node.sh rpc "IEx.Helpers.recompile()"
```

### Step 5: Stop the node

```bash
scripts/dev_node.sh stop
```

## Common Recipes

### Process tree overview

```bash
scripts/dev_node.sh rpc "
  Process.list()
  |> Enum.map(fn pid -> Process.info(pid, [:registered_name, :message_queue_len, :memory]) end)
  # Process.info/2 returns nil for processes that exit mid-scan
  |> Enum.reject(&is_nil/1)
  |> Enum.map(fn info ->
    {Keyword.get(info, :registered_name), Keyword.get(info, :message_queue_len), Keyword.get(info, :memory)}
  end)
  |> Enum.filter(fn {name, _, _} -> name != [] end)
  |> Enum.sort_by(fn {_, _, mem} -> mem end, :desc)
  |> Enum.take(15)
"
```

### Find processes with full mailboxes

```bash
scripts/dev_node.sh rpc "
  Process.list()
  |> Enum.map(fn pid -> {pid, Process.info(pid, :message_queue_len)} end)
  |> Enum.filter(&match?({_, {:message_queue_len, n}} when n > 0, &1))
  |> Enum.sort_by(fn {_, {:message_queue_len, n}} -> n end, :desc)
  |> Enum.take(10)
  |> Enum.map(fn {pid, {:message_queue_len, n}} ->
    info = Process.info(pid, [:registered_name, :current_function])
    {pid, n, info}
  end)
"
```

### Memory overview

```bash
scripts/dev_node.sh rpc ":erlang.memory() |> Enum.map(fn {k, v} -> {k, Float.round(v / 1_048_576, 2)} end)"
```

### Application environment

```bash
scripts/dev_node.sh rpc "Application.get_all_env(:my_app)"
```

## Important Notes

- Each `rpc` invocation is stateless. A fresh hidden BEAM node connects, runs, and exits.
- The RPC node uses `--hidden` so it doesn't join the cluster mesh or appear in `nodes()`.
- Expressions must be valid Elixir. Use `eval_file` for complex expressions with quotes or multi-line logic.
- The `limit: 200` in `IO.inspect` truncates large data structures.
- If a call hangs, the target node may be stuck. Check `.dev_node.log`.
````

***

## Step 4: Register the Skill with Your Agent

Each coding agent looks for skills in a different location. The skill directory structure is the same everywhere — only the parent path changes.

### GitHub Copilot (Copilot CLI / Copilot Coding Agent)

**User-level skills** (available to all projects):

```
~/.github/skills/beam-introspection/
├── SKILL.md
└── scripts/
    └── dev_node.sh
```

**Project-level instructions** — add to `.github/copilot-instructions.md` or `AGENTS.md` at the project root:

```markdown
### BEAM introspection

Use the `beam-introspection` skill to connect to the running BEAM node for runtime
validation. The node runs with:

- sname: `<your_app_name>` (derived from directory name)
- cookie: `devcookie`

Use `scripts/dev_node.sh rpc '<expression>'` for one-shot introspection and
`scripts/dev_node.sh eval_file <path>` for multi-line scripts.
```

### Claude Code

**User-level skills** (available to all projects):

```
~/.claude-<profile>/skills/beam-introspection/
├── SKILL.md
└── scripts/
    └── dev_node.sh
```

Where `<profile>` is your Claude profile identifier (e.g., `claude-user@example.com`).

**Project-level instructions** — add a `CLAUDE.md` file at the project root referencing the skill, or add the instructions to your existing `CLAUDE.md`:

```markdown
## BEAM introspection

When asked to test, debug, validate, or observe runtime behavior, use the
`beam-introspection` skill to connect to the live BEAM node.

Node configuration:
- sname: derived from project directory name
- cookie: devcookie

Key commands:
- `scripts/dev_node.sh start` — start a background node
- `scripts/dev_node.sh rpc '<expression>'` — evaluate on the live node
- `scripts/dev_node.sh eval_file <path>` — evaluate a script file on the live node
- `scripts/dev_node.sh stop` — stop the background node
```

### OpenAI Codex

**User-level skills**:

```
~/.codex/skills/beam-introspection/
├── SKILL.md
└── scripts/
    └── dev_node.sh
```

**Project-level instructions** — Codex reads `AGENTS.md` at the project root. Add the same introspection section as shown for Copilot above.

### Gemini CLI

**User-level skills**:

```
~/.gemini/skills/beam-introspection/
├── SKILL.md
└── scripts/
    └── dev_node.sh
```

**Project-level instructions** — Gemini reads `GEMINI.md` or `AGENTS.md` at the project root. Add the introspection instructions there.

### Generic / Multi-Agent (`~/.agents/`)

Some agent frameworks check `~/.agents/` as a shared skills directory:

```
~/.agents/skills/beam-introspection/
├── SKILL.md
└── scripts/
    └── dev_node.sh
```

### Quick setup script

To install the skill for **all** agents at once, run this from your project root:

```bash
#!/usr/bin/env bash
set -euo pipefail

SKILL_NAME="beam-introspection"
SKILL_SOURCE="$(cd "$(dirname "$0")" && pwd)/scripts/dev_node.sh"

# All known agent skill directories
AGENT_DIRS=(
  "$HOME/.github/skills"
  "$HOME/.agents/skills"
  "$HOME/.codex/skills"
  "$HOME/.gemini/skills"
)

# Claude uses a profile-specific directory — find it
for d in "$HOME"/.claude-*/; do
  [ -d "$d" ] && AGENT_DIRS+=("${d}skills")
done

for dir in "${AGENT_DIRS[@]}"; do
  target="$dir/$SKILL_NAME"
  mkdir -p "$target/scripts"
  
  # Copy SKILL.md (from this article or from an existing install)
  if [ -f "$target/SKILL.md" ]; then
    echo "SKILL.md already exists at $target, skipping"
  else
    echo "Creating $target/SKILL.md — paste the SKILL.md content from this article"
  fi
  
  # Copy dev_node.sh
  if [ -f "$SKILL_SOURCE" ]; then
    cp "$SKILL_SOURCE" "$target/scripts/dev_node.sh"
    chmod +x "$target/scripts/dev_node.sh"
    echo "Installed dev_node.sh to $target/scripts/"
  fi
done

echo "Done. Skill '$SKILL_NAME' registered for all agents."
```

***

## Usage Examples

Once the skill is installed, here's what a typical agent interaction looks like:

### "Is my GenServer running?"

You ask: *"Check if the OrderProcessor GenServer is alive and what its state looks like."*

The agent runs:

```bash
scripts/dev_node.sh rpc "
  case GenServer.whereis(MyApp.OrderProcessor) do
    nil -> :not_running
    pid -> {:running, pid, :sys.get_state(pid)}
  end
"
```

### "Why is the queue backed up?"

You ask: *"Something's wrong with message processing — debug it."*

The agent runs:

```bash
scripts/dev_node.sh rpc "
  Process.list()
  |> Enum.map(fn pid -> {pid, Process.info(pid, [:registered_name, :message_queue_len])} end)
  |> Enum.reject(fn {_, info} -> info == nil or Keyword.get(info, :message_queue_len) == 0 end)
  |> Enum.sort_by(fn {_, info} -> Keyword.get(info, :message_queue_len) end, :desc)
  |> Enum.take(5)
"
```

### "Hot-reload my fix and test it"

You ask: *"I changed the retry logic — reload it into the running node and test."*

The agent runs:

```bash
mix compile
scripts/dev_node.sh rpc "IEx.Helpers.recompile()"
scripts/dev_node.sh rpc "MyApp.OrderProcessor.retry_pending()"
```

***

## Reference: Agent Configuration Paths

| Agent              | User-level skill path                | Project instructions file                        |
| ------------------ | ------------------------------------ | ------------------------------------------------ |
| **GitHub Copilot** | `~/.github/skills/<name>/`           | `.github/copilot-instructions.md` or `AGENTS.md` |
| **Claude Code**    | `~/.claude-<profile>/skills/<name>/` | `CLAUDE.md`                                      |
| **OpenAI Codex**   | `~/.codex/skills/<name>/`            | `AGENTS.md`                                      |
| **Gemini CLI**     | `~/.gemini/skills/<name>/`           | `GEMINI.md` or `AGENTS.md`                       |
| **Generic**        | `~/.agents/skills/<name>/`           | `AGENTS.md`                                      |

### What goes where

* **`SKILL.md`** — The skill definition with YAML front-matter (`name`, `description`) and instructions. This is what the agent reads to understand *when* and *how* to use the skill.
* **`scripts/dev_node.sh`** — The helper script that handles node lifecycle and RPC. Can be a copy in each agent's skills dir, or a symlink to the project's `scripts/dev_node.sh`.
* **Project instructions file** (`AGENTS.md`, `CLAUDE.md`, etc.) — Tells the agent that introspection is available for *this specific project*, including the node name and cookie.

### Minimum viable setup

If you want the simplest possible setup for a single agent (e.g., GitHub Copilot):

1. Add `scripts/dev_node.sh` to your project (chmod +x)
2. Ensure your app starts with `--sname` and `--cookie devcookie`
3. Add this to `AGENTS.md`:

```markdown
## BEAM introspection

Use `scripts/dev_node.sh` to introspect the running BEAM node:

- `scripts/dev_node.sh start` — start the node
- `scripts/dev_node.sh rpc '<elixir expression>'` — evaluate on the live node
- `scripts/dev_node.sh eval_file <script.exs>` — evaluate a script file
- `scripts/dev_node.sh stop` — stop the node

Node name: derived from project directory name. Cookie: `devcookie`.

When asked to debug, validate, or observe runtime behavior, prefer live
introspection over writing throwaway scripts.
```

That's it. No skill registration needed — the agent reads `AGENTS.md` and knows how to use the script.

***

## How It Works Under the Hood

When `dev_node.sh rpc` runs, it:

1. Starts a **new, short-lived hidden BEAM node** with a unique sname (`rpc_<pid>`), the same cookie, and the `--hidden` flag.
2. Calls `Node.connect/1` to connect to the target app node via distributed Erlang. Because the RPC node is hidden, this connection is **not transitive** — the app node won't try to mesh with it, and it won't appear in `nodes()` on the app side (only in `nodes(:hidden)`).
3. Uses `:rpc.call/4` to execute `Code.eval_string/1` on the target node — so the expression runs in the app's process context with access to all its modules and state.
4. Prints the result with `IO.inspect/2` and exits.

This is the same mechanism that `iex --remsh` uses, but wrapped in a scriptable interface that coding agents can invoke without interactive TTY support.

### Security considerations

* The cookie `devcookie` is well-known. Anyone on the same machine (or network, if using `--name` instead of `--sname`) can connect. Use only for local development.
* `Code.eval_string/1` can execute arbitrary code. This is by design — the agent needs full access — but be aware of it in shared environments.
* `--sname` restricts connections to the same hostname. `--name` would allow cross-host connections (not recommended without TLS distribution).
