feat(l4d2): add l4d2ctl host command boundary

mwiegand 2026-05-06 16:35:20 +02:00
parent a347829608
commit de86139323
17 changed files with 538 additions and 39 deletions

View file

@ -13,9 +13,8 @@ Do not invent architecture outside these plans unless explicitly requested.
## Current Project State
- Repo is newly initialized.
- Only planning docs exist right now.
- Implementation directories are planned, not yet created.
- `l4d2host/` and `l4d2web/` implementation directories exist.
- Implementation plans remain the source of truth for contract changes and task sequencing.
## Non-Negotiable Constraints
@ -31,12 +30,15 @@ Do not invent architecture outside these plans unless explicitly requested.
### Host library (`l4d2host` / `l4d2ctl`)
- Exposed CLI command set is fixed:
- Exposed CLI write command set is fixed:
- `install`
- `initialize <name> -f <spec.yaml>`
- `start <name>`
- `stop <name>`
- `delete <name>`
- CLI read commands are allowed for web/host boundary consistency:
- `status <name> --json`
- `logs <name> --lines <n> --follow/--no-follow`
- Hard-coded paths under `/opt/l4d2`.
- Overlays are external directories (no overlay content management here).
- Fail-fast subprocess behavior; pass raw stderr; propagate return code.
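The fail-fast contract above can be sketched in a few lines. This is a hypothetical helper mirroring the described behavior, not the actual `l4d2ctl` code:

```python
# Hypothetical sketch of the fail-fast CLI contract: pass raw stderr through
# unchanged and exit with the subprocess return code.
import subprocess
import sys

def run_failfast(cmd: list[str]) -> None:
    try:
        subprocess.run(cmd, check=True, capture_output=True, text=True)
    except subprocess.CalledProcessError as exc:
        sys.stderr.write(exc.stderr)       # raw stderr, no rewriting
        raise SystemExit(exc.returncode)   # propagate the exact return code
```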
@ -55,6 +57,7 @@ Do not invent architecture outside these plans unless explicitly requested.
- Persist command logs in `job_logs` table (retain indefinitely).
- Desired vs actual server state model.
- Live logs in UI for both jobs and servers.
- Web app host operations go through `l4d2ctl` via a host command client, not direct `l4d2host` imports.
- Blueprint semantics (locked):
- private per user in v1
- live-linked to servers

View file

@ -17,12 +17,16 @@ Implementation plans are the source of truth:
- Naming is strictly `l4d2` (not `l4d`).
- Host library and web app are separate components.
- Host CLI commands are fixed to:
- Host CLI write commands are fixed to:
- `install`
- `initialize <name> -f <spec.yaml>`
- `start <name>`
- `stop <name>`
- `delete <name>`
- Host CLI read commands are available for the web/host boundary:
- `status <name> --json`
- `logs <name> --lines <n> --follow/--no-follow`
- The web app calls host operations through `l4d2ctl`, not direct `l4d2host` imports.
- Runtime paths are hard-coded under `/opt/l4d2`.
- Overlay handling is directory-based and externally populated.
- No lock manager, no rollback, no preflight checks in host library.

View file

@ -2,9 +2,9 @@
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Build a Python `l4d2` host library and `l4d2ctl` CLI with exactly five commands (`install`, `initialize`, `start`, `stop`, `delete`) plus read APIs needed by the local web app.
**Goal:** Build a Python `l4d2` host library and `l4d2ctl` CLI with fixed write commands (`install`, `initialize`, `start`, `stop`, `delete`) plus read commands/APIs needed by the web app.
**Architecture:** Runtime paths are hard-coded under `/opt/l4d2`. Write operations are imperative and fail-fast with no lock manager, no rollback, and no preflight checks. CLI behavior remains raw/stderr-first, while library internals additionally expose callback-based streaming and read APIs (`get_instance_status`, `stream_instance_logs`) for the web app.
**Architecture:** Runtime paths are hard-coded under `/opt/l4d2`. Write operations are imperative and fail-fast with no lock manager, no rollback, and no preflight checks. CLI write behavior remains raw/stderr-first, and CLI read commands expose status/log output for the web/host boundary while library internals keep callback-based streaming and read APIs.
**Tech Stack:** Python 3.12+, Typer, PyYAML, pytest, subprocess, systemd user units, fuse-overlayfs.
@ -12,7 +12,7 @@
## Scope and Contracts
- Command surface is fixed in v1:
- Write command surface is fixed in v1:
- `l4d2ctl install`
- `l4d2ctl initialize <name> -f <spec.yaml>`
- `l4d2ctl start <name>`
@ -32,7 +32,10 @@
- `initialize` always writes `server.cfg`; if `config` is empty/missing, `server.cfg` is empty.
- `delete` is no-op success when instance/runtime directories are already missing.
- CLI errors: print raw subprocess stderr and exit with subprocess return code.
- Additional read APIs for web app (no extra CLI commands):
- Read commands are allowed for web/host boundary consistency:
- `l4d2ctl status <name> --json`
- `l4d2ctl logs <name> --lines <n> --follow/--no-follow`
- Additional host-local read APIs:
- `get_instance_status(name)`
- `stream_instance_logs(name, lines=200, follow=True)`
- Blueprints are intentionally out of scope for this library; callers must resolve any blueprint linkage to a concrete YAML spec before calling `initialize`.
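A caller-side sketch of that resolution step (hedged: `run_initialize` is a hypothetical stand-in for invoking `initialize`, and JSON serialization is used only because JSON is a valid subset of YAML):

```python
import json
import os
import tempfile
from pathlib import Path

# Materialize the resolved blueprint as a temporary spec file, hand its path
# to initialize, and always clean it up afterwards.
def initialize_with_spec(name: str, spec_payload: dict, run_initialize) -> None:
    fd, tmp = tempfile.mkstemp(suffix=".yaml")
    os.close(fd)
    spec_path = Path(tmp)
    try:
        spec_path.write_text(json.dumps(spec_payload))  # JSON is valid YAML
        run_initialize(name, spec_path)
    finally:
        spec_path.unlink(missing_ok=True)
```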
@ -734,7 +737,7 @@ git commit -m "docs(l4d2): finalize v1 CLI contracts and web-facing read APIs"
## Self-Review
- [ ] Spec coverage: command surface fixed, hard-coded paths, config semantics, delete no-op, callback streaming, read APIs.
- [ ] Spec coverage: write command surface fixed, read commands allowed, hard-coded paths, config semantics, delete no-op, callback streaming, read APIs.
- [ ] Placeholder scan: no TODO/TBD placeholders.
- [ ] Consistency: argument names (`on_stdout`, `on_stderr`, `passthrough`) are consistent across tasks.
- [ ] Verification: each task contains exact test commands and expected outcomes.

View file

@ -4,7 +4,7 @@
**Goal:** Build a local Flask web app where users create blueprints and manage L4D2 servers derived from those blueprints, with async lifecycle jobs and live logs.
**Architecture:** Run a single Flask process with Jinja templates, vendored HTMX, custom CSS, and in-process worker threads. Persist app state in SQLite with Rails-style foreign-key naming (`user_id`, `server_id`, `blueprint_id`, `overlay_id`, `job_id`). Integrate directly with `l4d2host` write/read APIs: jobs call `install/initialize/start/stop/delete` with output callbacks, while status/logs use `get_instance_status` and `stream_instance_logs`.
**Architecture:** Run a single Flask process with Jinja templates, vendored HTMX, custom CSS, and in-process worker threads. Persist app state in SQLite with Rails-style foreign-key naming (`user_id`, `server_id`, `blueprint_id`, `overlay_id`, `job_id`). Integrate with host operations through `l4d2ctl` via a host command client: jobs call `install/initialize/start/stop/delete` with output callbacks, while status/logs use `status --json` and `logs` CLI read commands.
**Tech Stack:** Python 3.12+, Flask, SQLAlchemy, Alembic, pytest, vendored HTMX, custom CSS, vanilla JS (SSE).
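The job wiring described above can be sketched roughly as follows (a simplified, assumption-laden illustration: the real worker streams lines as they arrive, while this toy version collects them after exit, and `append_log` is an illustrative name):

```python
import subprocess
import sys
from typing import Callable

# Run a host command and fan its output lines out to a log sink, the way
# jobs persist command output into `job_logs`.
def run_job(cmd: list[str], append_log: Callable[[str, str], None]) -> int:
    proc = subprocess.run(cmd, capture_output=True, text=True)
    for line in proc.stdout.splitlines():
        append_log("stdout", line)
    for line in proc.stderr.splitlines():
        append_log("stderr", line)
    return proc.returncode
```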
@ -505,7 +505,7 @@ git add l4d2web/routes/server_routes.py l4d2web/tests/test_servers.py
git commit -m "feat(l4d2-web): add server creation and blueprint reassignment routes"
```
### Task 7: Add direct `l4d2host` facade and blueprint-to-spec generation
### Task 7: Add `l4d2ctl` facade and blueprint-to-spec generation
**Files:**
- Create: `l4d2web/services/spec_yaml.py`

View file

@ -297,7 +297,7 @@ Report:
Task 3 evidence:
- archive creation/copy: report local archive path and remote unpack path
- venv/pip install: report virtualenv path and pip install status
- l4d2ctl command surface: report the five commands found in help output
- l4d2ctl command surface: report write commands plus status/log read commands found in help output
Approve Task 4: run l4d2ctl install on ckn@10.0.4.128?
```

View file

@ -0,0 +1,66 @@
# L4D2 CLI Host Client Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Make `l4d2web` manage the local host through `l4d2ctl` instead of importing `l4d2host` internals, so the same execution boundary can later be transported over SSH.
**Architecture:** `l4d2host` remains the host-local implementation behind `l4d2ctl`. `l4d2web` gains a small local command runner that streams CLI stdout/stderr into jobs, supports cancellation, and parses status JSON. Hosts and overlay sync remain out of this change; the current machine is the implicit local host.
**Tech Stack:** Python 3.12+, Typer, subprocess, Flask, SQLAlchemy, pytest.
---
## File Map
- `l4d2host/cli.py`: add read commands for status and logs.
- `l4d2host/tests/test_cli.py`: cover the expanded CLI contract.
- `l4d2web/services/host_commands.py`: new subprocess-based host command runner and cancellation exception.
- `l4d2web/services/l4d2_facade.py`: call `l4d2ctl` through `host_commands` instead of importing `l4d2host` internals.
- `l4d2web/services/job_worker.py`: catch the web-side cancellation exception.
- `l4d2web/tests/test_host_commands.py`: cover callback streaming, failures, and cancellation.
- `l4d2web/tests/test_l4d2_facade.py`: verify facade emits CLI commands and parses status.
- `l4d2web/tests/test_job_worker.py`: update cancellation imports.
- `l4d2host/README.md`, `l4d2web/README.md`, existing implementation plans: document the relaxed CLI boundary.
## Tasks
### Task 1: Add host CLI read commands
- [x] Write failing tests for `l4d2ctl status <name> --json` and `l4d2ctl logs <name> --no-follow`.
- [x] Run `pytest l4d2host/tests/test_cli.py -q` and confirm the new tests fail because commands do not exist.
- [x] Add the `status` and `logs` commands to `l4d2host/cli.py` using existing `get_instance_status` and `stream_instance_logs` APIs.
- [x] Run `pytest l4d2host/tests/test_cli.py -q` and confirm it passes.
### Task 2: Add web host command runner
- [x] Write failing tests for streaming stdout/stderr callbacks, non-zero exit propagation, and cancellation.
- [x] Run `pytest l4d2web/tests/test_host_commands.py -q` and confirm failures are for the missing module.
- [x] Implement `l4d2web/services/host_commands.py` with `run_command`, `HostCommandError`, and `CommandCancelledError`.
- [x] Run `pytest l4d2web/tests/test_host_commands.py -q` and confirm it passes.
### Task 3: Switch web facade to CLI calls
- [x] Update facade tests so they monkeypatch `host_commands.run_command` and assert emitted `l4d2ctl` commands.
- [x] Run `pytest l4d2web/tests/test_l4d2_facade.py -q` and confirm failures show the facade still imports/calls `l4d2host` internals.
- [x] Replace direct `l4d2host` imports in `l4d2web/services/l4d2_facade.py` with CLI command calls.
- [x] Run `pytest l4d2web/tests/test_l4d2_facade.py -q` and confirm it passes.
### Task 4: Update worker cancellation boundary
- [x] Update job worker tests to import `CommandCancelledError` from `l4d2web.services.host_commands`.
- [x] Run `pytest l4d2web/tests/test_job_worker.py -q` and confirm failures identify the old boundary.
- [x] Update `l4d2web/services/job_worker.py` to catch the web-side cancellation exception.
- [x] Run `pytest l4d2web/tests/test_job_worker.py -q` and confirm it passes.
### Task 5: Update docs and verify
- [x] Update README/plan language from “fixed write commands only” to “fixed write commands plus read commands”.
- [x] Run `pytest l4d2host/tests -q` and confirm pass.
- [x] Run `pytest l4d2web/tests -q` and confirm pass.
- [x] Run `ccc index` if available so the code index reflects the boundary change.
## Self-Review
- Spec coverage: covers CLI read commands, web-side CLI execution, status/log parsing, cancellation, docs, and verification.
- Scope: hosts table, SSH transport, and overlay sync are explicitly excluded from this change.
- Type consistency: the web-side cancellation type is `l4d2web.services.host_commands.CommandCancelledError`; the host-side process type remains internal to `l4d2host`.

View file

@ -2,7 +2,7 @@
> **Approval gate:** This plan may be written and refined without further approval. Do not implement code changes from this plan until the user explicitly approves implementation.
**Goal:** Complete the `l4d2web` async lifecycle queue so queued jobs are claimed, executed through the direct `l4d2host` Python APIs, logged to `job_logs`, reflected in server state, and streamed live to the UI.
**Goal:** Complete the `l4d2web` async lifecycle queue so queued jobs are claimed, executed through the `l4d2ctl` host command boundary, logged to `job_logs`, reflected in server state, and streamed live to the UI.
**Architecture:** Keep the v1 single-process Flask architecture. Use DB-backed queued jobs as the durable source of truth, worker threads inside the Flask process, SQLite-safe process-local locks, and calls routed through `l4d2web.services.l4d2_facade`, which now shells out to `l4d2ctl` via the host command runner instead of importing `l4d2host` internals.

View file

@ -4,7 +4,7 @@ Python host library and CLI for managing L4D2 instances.
## CLI
`l4d2ctl` exposes exactly these commands in v1:
`l4d2ctl` exposes these write commands in v1:
- `install`
- `initialize <name> -f <spec.yaml>`
@ -12,6 +12,11 @@ Python host library and CLI for managing L4D2 instances.
- `stop <name>`
- `delete <name>`
It also exposes read commands used by the web app host boundary:
- `status <name> --json`
- `logs <name> --lines <n> --follow/--no-follow`
Subprocess failures are fail-fast. Raw stderr is written to stderr and the command exits with the same subprocess return code.
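Given the JSON payload shape used by this commit (`state`, `raw_active_state`, `raw_sub_state`), a client can parse `status --json` output defensively. This is a hedged sketch, not the shipped parser:

```python
import json

# Unknown or missing fields fall back to "unknown" rather than raising.
def parse_status(stdout: str) -> dict[str, str]:
    payload = json.loads(stdout or "{}")
    return {
        "state": str(payload.get("state", "unknown")),
        "raw_active_state": str(payload.get("raw_active_state", "unknown")),
        "raw_sub_state": str(payload.get("raw_sub_state", "unknown")),
    }
```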
## Runtime Paths
@ -68,9 +73,9 @@ steamcmd +quit
`uv` is optional deployment tooling. Debian 13 did not provide an `uv` package during the smoke test, so install it explicitly if you want to use it for faster virtualenv/dependency setup. `l4d2ctl` does not require `uv` at runtime.
## Web App Read APIs
## Host-Local Read APIs
These read APIs are provided for web app integration:
These Python read APIs back the CLI read commands and remain available for host-local callers:
- `get_instance_status(name)`
- `stream_instance_logs(name, lines=200, follow=True)`

View file

@ -1,9 +1,12 @@
from pathlib import Path
import json
import subprocess
import typer
from l4d2host.instances import delete_instance, initialize_instance, start_instance, stop_instance
from l4d2host.logs import stream_instance_logs
from l4d2host.status import get_instance_status
from l4d2host.steam_install import SteamInstaller
@ -54,3 +57,30 @@ def delete(name: str) -> None:
        delete_instance(name, passthrough=True)
    except subprocess.CalledProcessError as exc:
        _exit_from_subprocess_error(exc)
@app.command()
def status(name: str, json_output: bool = typer.Option(False, "--json")) -> None:
    instance_status = get_instance_status(name)
    if json_output:
        typer.echo(
            json.dumps(
                {
                    "state": instance_status.state,
                    "raw_active_state": instance_status.raw_active_state,
                    "raw_sub_state": instance_status.raw_sub_state,
                }
            )
        )
        return
    typer.echo(instance_status.state)
@app.command()
def logs(
    name: str,
    lines: int = typer.Option(200, "--lines"),
    follow: bool = typer.Option(True, "--follow/--no-follow"),
) -> None:
    for line in stream_instance_logs(name, lines=lines, follow=follow):
        typer.echo(line)

View file

@ -1,4 +1,6 @@
import subprocess
from types import SimpleNamespace
import json
from typer.testing import CliRunner
@ -24,3 +26,33 @@ def test_cli_propagates_subprocess_return_code(monkeypatch) -> None:
    assert result.exit_code == 9
    assert "boom" in result.stderr
def test_status_command_outputs_json(monkeypatch) -> None:
    monkeypatch.setattr(
        "l4d2host.cli.get_instance_status",
        lambda name: SimpleNamespace(state="running", raw_active_state="active", raw_sub_state="running"),
        raising=False,
    )
    result = CliRunner().invoke(app, ["status", "alpha", "--json"])
    assert result.exit_code == 0
    assert json.loads(result.output) == {
        "state": "running",
        "raw_active_state": "active",
        "raw_sub_state": "running",
    }
def test_logs_command_streams_lines(monkeypatch) -> None:
    monkeypatch.setattr(
        "l4d2host.cli.stream_instance_logs",
        lambda name, *, lines, follow: iter([f"{name}:{lines}:{follow}", "ready"]),
        raising=False,
    )
    result = CliRunner().invoke(app, ["logs", "alpha", "--lines", "25", "--no-follow"])
    assert result.exit_code == 0
    assert result.output.splitlines() == ["alpha:25:False", "ready"]

View file

@ -11,6 +11,7 @@ Flask web app for managing L4D2 servers through user-private blueprints.
- Async job model with persisted command logs in `job_logs`
- Desired vs actual state model
- Live logs for jobs and servers via SSE endpoints
- Host operations go through `l4d2ctl` via a local host command runner, not direct `l4d2host` imports
## Frontend constraints

View file

@ -0,0 +1,166 @@
from dataclasses import dataclass
import os
import signal
import subprocess
import sys
import threading
import time
from typing import Callable, Iterator, Sequence
@dataclass(slots=True)
class CommandResult:
    returncode: int
    stdout: str
    stderr: str
class HostCommandError(subprocess.CalledProcessError):
    pass
class CommandCancelledError(HostCommandError):
    pass
def run_command(
    cmd: Sequence[str],
    *,
    on_stdout: Callable[[str], None] | None = None,
    on_stderr: Callable[[str], None] | None = None,
    passthrough: bool = False,
    should_cancel: Callable[[], bool] | None = None,
    cancel_poll_seconds: float = 0.2,
    cancel_terminate_timeout: float = 2.0,
) -> CommandResult:
    stdout_lines: list[str] = []
    stderr_lines: list[str] = []
    proc = subprocess.Popen(
        list(cmd),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        bufsize=1,
        start_new_session=should_cancel is not None,
    )
    def emit_stderr_message(line: str) -> None:
        stderr_lines.append(line)
        if on_stderr is not None:
            on_stderr(line)
        if passthrough:
            print(line, file=sys.stderr)
    def terminate_process() -> None:
        emit_stderr_message("cancellation requested; terminating subprocess")
        if should_cancel is not None:
            try:
                os.killpg(proc.pid, signal.SIGTERM)
            except ProcessLookupError:
                pass
        else:
            proc.terminate()
    def kill_process() -> None:
        emit_stderr_message("subprocess did not exit after cancellation; killing subprocess")
        if should_cancel is not None:
            try:
                os.killpg(proc.pid, signal.SIGKILL)
            except ProcessLookupError:
                pass
        else:
            proc.kill()
    def pump(
        stream,
        sink: list[str],
        callback: Callable[[str], None] | None,
        output_stream,
    ) -> None:
        if stream is None:
            return
        for raw in iter(stream.readline, ""):
            line = raw.rstrip("\n")
            sink.append(line)
            if callback is not None:
                callback(line)
            if passthrough:
                print(line, file=output_stream)
        stream.close()
    stdout_thread = threading.Thread(
        target=pump,
        args=(proc.stdout, stdout_lines, on_stdout, sys.stdout),
        daemon=True,
    )
    stderr_thread = threading.Thread(
        target=pump,
        args=(proc.stderr, stderr_lines, on_stderr, sys.stderr),
        daemon=True,
    )
    stdout_thread.start()
    stderr_thread.start()
    cancelled = False
    while True:
        returncode = proc.poll()
        if returncode is not None:
            break
        if should_cancel is not None and should_cancel():
            cancelled = True
            terminate_process()
            try:
                returncode = proc.wait(timeout=cancel_terminate_timeout)
            except subprocess.TimeoutExpired:
                kill_process()
                returncode = proc.wait()
            break
        time.sleep(cancel_poll_seconds)
    stdout_thread.join()
    stderr_thread.join()
    result = CommandResult(
        returncode=returncode,
        stdout="\n".join(stdout_lines),
        stderr="\n".join(stderr_lines),
    )
    if cancelled:
        raise CommandCancelledError(
            returncode=returncode,
            cmd=list(cmd),
            output=result.stdout,
            stderr=result.stderr,
        )
    if returncode != 0:
        raise HostCommandError(
            returncode=returncode,
            cmd=list(cmd),
            output=result.stdout,
            stderr=result.stderr,
        )
    return result
def stream_command(cmd: Sequence[str]) -> Iterator[str]:
    proc = subprocess.Popen(
        list(cmd),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        bufsize=1,
    )
    try:
        if proc.stdout is None:
            return
        for raw in iter(proc.stdout.readline, ""):
            yield raw.rstrip("\n")
    finally:
        if proc.poll() is None:
            proc.terminate()
            try:
                proc.wait(timeout=2)
            except subprocess.TimeoutExpired:
                proc.kill()
                proc.wait()

View file

@ -4,12 +4,12 @@ import subprocess
import threading
import time
from l4d2host.process import CommandCancelledError
from sqlalchemy import func, select
from sqlalchemy.orm import Session
from l4d2web.db import session_scope
from l4d2web.models import Job, JobLog, Server
from l4d2web.services.host_commands import CommandCancelledError
TERMINAL_JOB_STATES = {"succeeded", "failed", "cancelled"}

View file

@ -1,17 +1,22 @@
from dataclasses import dataclass
import json
from pathlib import Path
from sqlalchemy import select
from l4d2host.instances import delete_instance, initialize_instance, start_instance, stop_instance
from l4d2host.logs import stream_instance_logs
from l4d2host.status import get_instance_status
from l4d2host.steam_install import SteamInstaller
from l4d2web.db import session_scope
from l4d2web.models import Blueprint, BlueprintOverlay, Overlay, Server
from l4d2web.services import host_commands
from l4d2web.services.spec_yaml import write_temp_spec
@dataclass(slots=True)
class ServerStatus:
    state: str
    raw_active_state: str
    raw_sub_state: str
def build_server_spec_payload(server: Server, blueprint: Blueprint, overlay_names: list[str]) -> dict:
    return {
        "port": server.port,
@ -42,36 +47,69 @@ def load_server_blueprint_bundle(server_id: int) -> tuple[Server, Blueprint, lis
def install_runtime(on_stdout=None, on_stderr=None, should_cancel=None) -> None:
    SteamInstaller().install_or_update(on_stdout=on_stdout, on_stderr=on_stderr, should_cancel=should_cancel)
    host_commands.run_command(
        ["l4d2ctl", "install"],
        on_stdout=on_stdout,
        on_stderr=on_stderr,
        should_cancel=should_cancel,
    )
def initialize_server(server_id: int, on_stdout=None, on_stderr=None, should_cancel=None) -> None:
    server, blueprint, overlay_names = load_server_blueprint_bundle(server_id)
    spec_path = write_temp_spec(build_server_spec_payload(server, blueprint, overlay_names))
    try:
        initialize_instance(server.name, spec_path, on_stdout=on_stdout, on_stderr=on_stderr, should_cancel=should_cancel)
        host_commands.run_command(
            ["l4d2ctl", "initialize", server.name, "-f", str(spec_path)],
            on_stdout=on_stdout,
            on_stderr=on_stderr,
            should_cancel=should_cancel,
        )
    finally:
        spec_path.unlink(missing_ok=True)
def start_server(server_id: int, on_stdout=None, on_stderr=None, should_cancel=None) -> None:
    server, _, _ = load_server_blueprint_bundle(server_id)
    start_instance(server.name, on_stdout=on_stdout, on_stderr=on_stderr, should_cancel=should_cancel)
    host_commands.run_command(
        ["l4d2ctl", "start", server.name],
        on_stdout=on_stdout,
        on_stderr=on_stderr,
        should_cancel=should_cancel,
    )
def stop_server(server_id: int, on_stdout=None, on_stderr=None, should_cancel=None) -> None:
    server, _, _ = load_server_blueprint_bundle(server_id)
    stop_instance(server.name, on_stdout=on_stdout, on_stderr=on_stderr, should_cancel=should_cancel)
    host_commands.run_command(
        ["l4d2ctl", "stop", server.name],
        on_stdout=on_stdout,
        on_stderr=on_stderr,
        should_cancel=should_cancel,
    )
def delete_server(server_id: int, on_stdout=None, on_stderr=None, should_cancel=None) -> None:
    server, _, _ = load_server_blueprint_bundle(server_id)
    delete_instance(server.name, on_stdout=on_stdout, on_stderr=on_stderr, should_cancel=should_cancel)
    host_commands.run_command(
        ["l4d2ctl", "delete", server.name],
        on_stdout=on_stdout,
        on_stderr=on_stderr,
        should_cancel=should_cancel,
    )
def server_status(server_name: str):
    return get_instance_status(server_name)
def server_status(server_name: str) -> ServerStatus:
    result = host_commands.run_command(["l4d2ctl", "status", server_name, "--json"])
    payload = json.loads(result.stdout or "{}")
    return ServerStatus(
        state=str(payload.get("state", "unknown")),
        raw_active_state=str(payload.get("raw_active_state", "unknown")),
        raw_sub_state=str(payload.get("raw_sub_state", "unknown")),
    )
def stream_server_logs(server_name: str, *, lines: int = 200, follow: bool = True):
    return stream_instance_logs(server_name, lines=lines, follow=follow)
    command = ["l4d2ctl", "logs", server_name, "--lines", str(lines)]
    command.append("--follow" if follow else "--no-follow")
    return host_commands.stream_command(command)

View file

@ -0,0 +1,55 @@
import pytest
def test_run_command_streams_stdout_and_stderr_callbacks() -> None:
    from l4d2web.services.host_commands import run_command
    stdout: list[str] = []
    stderr: list[str] = []
    result = run_command(
        ["python3", "-c", "import sys; print('ok'); print('warn', file=sys.stderr)"],
        on_stdout=stdout.append,
        on_stderr=stderr.append,
    )
    assert stdout == ["ok"]
    assert stderr == ["warn"]
    assert result.returncode == 0
    assert result.stdout == "ok"
    assert result.stderr == "warn"
def test_run_command_raises_host_error_on_nonzero_exit() -> None:
    from l4d2web.services.host_commands import HostCommandError, run_command
    with pytest.raises(HostCommandError) as exc_info:
        run_command(["python3", "-c", "import sys; print('bad', file=sys.stderr); sys.exit(7)"])
    assert exc_info.value.returncode == 7
    assert exc_info.value.stderr == "bad"
def test_run_command_raises_cancelled_error_when_cancel_requested() -> None:
    from l4d2web.services.host_commands import CommandCancelledError, run_command
    stdout: list[str] = []
    with pytest.raises(CommandCancelledError):
        run_command(
            ["python3", "-c", "import time; print('ready', flush=True); time.sleep(5)"],
            on_stdout=stdout.append,
            should_cancel=lambda: bool(stdout),
            cancel_poll_seconds=0.01,
            cancel_terminate_timeout=0.2,
        )
    assert stdout == ["ready"]
def test_stream_command_yields_stdout_lines() -> None:
    from l4d2web.services.host_commands import stream_command
    lines = list(stream_command(["python3", "-c", "print('one'); print('two')"]))
    assert lines == ["one", "two"]

View file

@ -6,11 +6,11 @@ import subprocess
import pytest
from sqlalchemy import select
from l4d2host.process import CommandCancelledError
from l4d2web.auth import hash_password
from l4d2web.db import init_db, session_scope
from l4d2web.models import Blueprint, Job, Server, User
from l4d2web.services import l4d2_facade
from l4d2web.services.host_commands import CommandCancelledError
from l4d2web.services.job_worker import SchedulerState, can_start, recover_stale_jobs, run_worker_once

View file

@ -6,6 +6,7 @@ from l4d2web.app import create_app
from l4d2web.auth import hash_password
from l4d2web.db import init_db, session_scope
from l4d2web.models import Blueprint, BlueprintOverlay, Overlay, Server, User
from l4d2web.services.host_commands import CommandResult
@pytest.fixture
@ -44,19 +45,114 @@ def server_with_blueprint(tmp_path, monkeypatch):
    return server_id
def test_initialize_uses_latest_blueprint_data(monkeypatch: pytest.MonkeyPatch, server_with_blueprint) -> None:
    called: dict[str, str] = {}
def test_initialize_uses_l4d2ctl_with_latest_blueprint_data(
    monkeypatch: pytest.MonkeyPatch,
    server_with_blueprint,
) -> None:
    calls: list[list[str]] = []
    def fake_initialize(name, spec_path, **kwargs):
    def fake_run_command(cmd, **kwargs):
        del kwargs
        called["name"] = name
        called["spec"] = Path(spec_path).read_text()
        calls.append(list(cmd))
        spec_path = Path(cmd[cmd.index("-f") + 1])
        assert "sv_consistency 1" in spec_path.read_text()
        return CommandResult(returncode=0, stdout="", stderr="")
    monkeypatch.setattr("l4d2web.services.l4d2_facade.initialize_instance", fake_initialize)
    monkeypatch.setattr("l4d2web.services.host_commands.run_command", fake_run_command)
    monkeypatch.setattr(
        "l4d2web.services.l4d2_facade.initialize_instance",
        lambda *args, **kwargs: pytest.fail("facade must not call l4d2host.initialize_instance directly"),
        raising=False,
    )
    from l4d2web.services.l4d2_facade import initialize_server
    initialize_server(server_with_blueprint)
    assert called["name"] == "alpha"
    assert "sv_consistency 1" in called["spec"]
    assert calls[0][:3] == ["l4d2ctl", "initialize", "alpha"]
    assert calls[0][3] == "-f"
def test_install_and_lifecycle_commands_use_l4d2ctl(
    monkeypatch: pytest.MonkeyPatch,
    server_with_blueprint,
) -> None:
    calls: list[list[str]] = []
    def fake_run_command(cmd, **kwargs):
        del kwargs
        calls.append(list(cmd))
        return CommandResult(returncode=0, stdout="", stderr="")
    monkeypatch.setattr("l4d2web.services.host_commands.run_command", fake_run_command)
    for name in ["SteamInstaller", "start_instance", "stop_instance", "delete_instance"]:
        monkeypatch.setattr(
            f"l4d2web.services.l4d2_facade.{name}",
            lambda *args, **kwargs: pytest.fail(f"facade must not call l4d2host {name} directly"),
            raising=False,
        )
    from l4d2web.services.l4d2_facade import delete_server, install_runtime, start_server, stop_server
    install_runtime()
    start_server(server_with_blueprint)
    stop_server(server_with_blueprint)
    delete_server(server_with_blueprint)
    assert calls == [
        ["l4d2ctl", "install"],
        ["l4d2ctl", "start", "alpha"],
        ["l4d2ctl", "stop", "alpha"],
        ["l4d2ctl", "delete", "alpha"],
    ]
def test_server_status_parses_l4d2ctl_json(monkeypatch: pytest.MonkeyPatch) -> None:
    calls: list[list[str]] = []
    def fake_run_command(cmd, **kwargs):
        del kwargs
        calls.append(list(cmd))
        return CommandResult(
            returncode=0,
            stdout='{"state":"running","raw_active_state":"active","raw_sub_state":"running"}',
            stderr="",
        )
    monkeypatch.setattr("l4d2web.services.host_commands.run_command", fake_run_command)
    monkeypatch.setattr(
        "l4d2web.services.l4d2_facade.get_instance_status",
        lambda *args, **kwargs: pytest.fail("facade must not call l4d2host.get_instance_status directly"),
        raising=False,
    )
    from l4d2web.services.l4d2_facade import server_status
    status = server_status("alpha")
    assert calls == [["l4d2ctl", "status", "alpha", "--json"]]
    assert status.state == "running"
    assert status.raw_active_state == "active"
    assert status.raw_sub_state == "running"
def test_server_logs_stream_l4d2ctl_logs(monkeypatch: pytest.MonkeyPatch) -> None:
    calls: list[list[str]] = []
    def fake_stream_command(cmd):
        calls.append(list(cmd))
        return iter(["one", "two"])
    monkeypatch.setattr("l4d2web.services.host_commands.stream_command", fake_stream_command)
    monkeypatch.setattr(
        "l4d2web.services.l4d2_facade.stream_instance_logs",
        lambda *args, **kwargs: pytest.fail("facade must not call l4d2host.stream_instance_logs directly"),
        raising=False,
    )
    from l4d2web.services.l4d2_facade import stream_server_logs
    lines = list(stream_server_logs("alpha", lines=10, follow=False))
    assert calls == [["l4d2ctl", "logs", "alpha", "--lines", "10", "--no-follow"]]
    assert lines == ["one", "two"]