Compare commits
No commits in common. "master" and "router" have entirely different histories.
273 changed files with 1618 additions and 15489 deletions
.gitignore (vendored, 5 changes)
@@ -2,8 +2,3 @@
.venv
.cache
*.pyc
.bw_debug_history
# CocoIndex Code (ccc)
/.cocoindex_code/
# bundlewrap git_deploy local-mirror map (operator-specific paths)
git_deploy_repos
AGENTS.md (106 changes)
@@ -1,106 +0,0 @@
# ckn-bw — agent & contributor guide

## What this repo is

A [BundleWrap](https://bundlewrap.org/) configuration-management repo
for ~22 personal/family-infra nodes. Nodes, groups, and bundles are
defined in plain Python; `bw apply` deploys the resulting state to
real machines.

Note: the root `README.md` is the maintainer's personal scratchpad,
not project documentation. Onboarding lives **here**, in `AGENTS.md`.

## Quickstart for agents

Five rules; follow these and you won't break things:

1. **Read-only by default.** Never run `bw apply`, `bw run`, or
   `bw lock` without explicit user request — even with `-i`. Stick
   to `bw test`, `bw nodes`, `bw groups`, `bw items`,
   `bw metadata`, `bw hash`, `bw verify`, `bw debug`. See
   [`docs/agents/commands.md`](docs/agents/commands.md) and the
   fork's [safety envelope](https://github.com/CroneKorkN/bundlewrap/blob/main/AGENTS.md).
2. **Never echo decrypted secrets.** Don't print, paste, or log the
   value behind a `!password_for:`, `!decrypt:`, or
   `!32_random_bytes_as_base64_for:` magic string — not even from
   `bw debug` exploration. See
   [`conventions.md#secrets`](docs/agents/conventions.md#secrets).
3. **Don't touch the do-not-modify list.** `.secrets.cfg*`, `.venv`,
   `.cache`, `.bw_debug_history`, `.envrc`, root `README.md`. Treat
   `hooks/` and `items/` (custom item types) with extra care: a
   broken hook or item type breaks every `bw` command repo-wide.
4. **Use the fork.** The venv runs editable from
   [`github.com/CroneKorkN/bundlewrap`](https://github.com/CroneKorkN/bundlewrap)
   (branch `main`). Behavior tracks upstream `main`; the fork's
   [`AGENTS.md`](https://github.com/CroneKorkN/bundlewrap/blob/main/AGENTS.md)
   is the canonical bundlewrap-language reference. See
   [`conventions.md#bundlewrap-version`](docs/agents/conventions.md#bundlewrap-version).
5. **Prefer adding helpers to `libs/`** over duplicating logic across
   bundles. Repo-wide helpers go in
   [`libs/`](libs/AGENTS.md), reachable as `repo.libs.<x>`.

## Layout

| Dir | What's there |
|---|---|
| [`bundles/`](bundles/AGENTS.md) | 103 bundles. One subdir per bundle (`items.py`, `metadata.py`, `files/`). |
| [`nodes/`](nodes/AGENTS.md) | One file per node (~22). `eval()`-loaded; demagified through `repo.vault`. |
| [`groups/`](groups/AGENTS.md) | Group definitions, organized by axis (`applications/`, `locations/`, `machine/`, `os/`). |
| [`libs/`](libs/AGENTS.md) | Shared Python helpers reachable as `repo.libs.<modulename>`. |
| [`hooks/`](hooks/AGENTS.md) | bw lifecycle hooks (`apply_start`, `test`, `node_apply_start`, …). |
| [`data/`](data/AGENTS.md) | Out-of-bundle data assets (apt keys, grafana dashboards, …). |
| [`items/`](items/AGENTS.md) | Custom item types (currently `download:`). |
| [`bin/`](bin/AGENTS.md) | Operator scripts; not invoked by bundlewrap. |
| [`docs/agents/`](docs/agents/conventions.md) | Repo conventions and command deltas. |
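
Per the `nodes/` row above, a node file is one `eval()`-able Python expression. A minimal hypothetical node file, with the loader step simulated; all names, addresses, and the magic-string argument format here are illustrative, not taken from the repo:

```python
# hypothetical nodes/home.example.py content; real node files are bare
# dict expressions that the repo-root nodes.py loader eval()s
node_file_source = """
{
    'hostname': '10.0.0.5',
    'groups': {'os/debian-13', 'locations/home'},
    'bundles': {'bind'},
    'metadata': {
        # demagified through repo.vault by the loader (argument format illustrative)
        'users': {'root': {'password': '!password_for:home.example root'}},
    },
}
"""

node = eval(node_file_source)  # mirrors what the loader does, before demagify
print(sorted(node.keys()))
```

The real loader additionally resolves the `!password_for:` magic string through `repo.vault` before the node object is built.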

## How nodes, groups, and bundles fit together

- A **node** (`nodes/<location>.<role>.py`) declares the groups it
  belongs to and any node-local bundles + metadata overrides.
- A **group** (`groups/<axis>/<x>.py`) attaches bundles and shared
  metadata to its members. Groups inherit via `supergroups`.
- A **bundle** (`bundles/<x>/`) is one chunk of configuration:
  `items.py` produces the items (files, services, packages),
  `metadata.py` declares `defaults` and `@metadata_reactor` functions
  that derive metadata from other metadata.
- The repo-root loaders (`nodes.py`, `groups.py`) walk these dirs and
  `eval()` each file. `nodes.py` additionally **demagifies** the
  result, resolving `!password_for:` etc. through `repo.vault`. See
  [`conventions.md#eval-loaded-node-and-group-files`](docs/agents/conventions.md#eval-loaded-node-and-group-files)
  for the constraints this places on editors.
- Metadata merges along: `all → location → os → machine →
  applications → node`.
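
The `defaults` plus `@metadata_reactor` split described above can be sketched in isolation. This is a simulation for illustration only: the decorator and the `Metadata` class below are simplified stand-ins for what bundlewrap injects into a bundle's `metadata.py`, and the `myapp` keys are hypothetical.

```python
def metadata_reactor(func):
    # stand-in: the real decorator is provided by bundlewrap at load time
    return func

class Metadata(dict):
    # stand-in for bundlewrap's metadata proxy with path-style get()
    def get(self, path, default=None):
        value = self
        for part in path.split('/'):
            if not isinstance(value, dict) or part not in value:
                return default
            value = value[part]
        return value

# a bundle's static metadata layer
defaults = {'myapp': {'port': 8080}}

@metadata_reactor
def listen_address(metadata):
    # derive one metadata key from another; reactors return partial
    # dicts that bundlewrap merges into the node's metadata
    return {'myapp': {'listen': f"0.0.0.0:{metadata.get('myapp/port')}"}}

print(listen_address(Metadata(defaults)))
```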

## Conventions you must know

| Topic | Where |
|---|---|
| Bundlewrap-language reference (item types, dep keywords, reactors) | Fork's [`AGENTS.md`](https://github.com/CroneKorkN/bundlewrap/blob/main/AGENTS.md) — read first if new to bundlewrap |
| Vault / demagify magic strings | [`conventions.md#secrets`](docs/agents/conventions.md#secrets) |
| Bundlewrap install (editable from the fork) | [`conventions.md#bundlewrap-version`](docs/agents/conventions.md#bundlewrap-version) |
| Group inheritance order, naming patterns | [`conventions.md#group-inheritance-order`](docs/agents/conventions.md#group-inheritance-order), [`#naming-conventions`](docs/agents/conventions.md#naming-conventions) |
| Repo-specific bw command deltas (apt keys, suspended nodes, vault echo) | [`commands.md`](docs/agents/commands.md) |
| Lib helpers | top-of-file docstrings in `libs/*.py` (`head -1 libs/*.py`) |
| Suspension idioms (`*.py_`, `_old/`, "for now") | [`conventions.md#suspension-and-soft-delete-idioms`](docs/agents/conventions.md#suspension-and-soft-delete-idioms) |

## Where to look for examples

When writing a new bundle, copy patterns from one that already does
the thing you need:

| Pattern | Look at |
|---|---|
| Vault calls inside metadata reactors | `bundles/dm-crypt/metadata.py` (compact, focused) |
| Mako-templated files | `bundles/bind/items.py` (DNS zonefile rendering) |
| Cross-bundle reactor writing | `bundles/nextcloud/metadata.py` (writes into `apt.packages`, `archive.paths`) |
| Custom `download:` items | `bundles/minecraft/items.py` |
| Node file (single-purpose) | `nodes/home.server.py` |
| Group with `supergroups` chain | `groups/os/debian-13.py` |

## Where this doc lives

- This file: `AGENTS.md` at the repo root.
- `CLAUDE.md` is a symlink to this file — both names point to the same
  content so different tools can find it.
- The personal TODO scratchpad (`README.md`) is **separate** and not
  project documentation.
@@ -1 +0,0 @@
AGENTS.md
README.md (13 changes)
@@ -13,6 +13,10 @@ Raspberry pi as soundcard
- OTG g_audio
- https://audiosciencereview.com/forum/index.php?threads/raspberry-pi-as-usb-to-i2s-adapter.8567/post-215824

# install bw fork

pip3 install --editable git+file:///Users/mwiegand/Projekte/bundlewrap-fork@main#egg=bundlewrap

# monitor timers

```sh
@@ -33,12 +37,3 @@ fi
telegraf: execd for daemons

TEST

# git signing

git config --global gpg.format ssh
git config --global commit.gpgsign true

git config user.name CroneKorkN
git config user.email i@ckn.li
git config user.signingkey "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILMVroYmswD4tLk6iH+2tvQiyaMe42yfONDsPDIdFv6I"
@@ -1,62 +0,0 @@
# bin/

## What's here

Operator scripts — invoked manually by the maintainer, **not** by
bundlewrap itself. Each is a standalone Python (or shell) script that
opens the repo via `Repository(dirname(dirname(realpath(__file__))))`.

Discovery is by `ls bin/` plus the `# purpose:` header line at the top
of each script:

```sh
head -2 bin/*
```

## Conventions

- **`# purpose:` header.** Every script under `bin/` starts with
  `#!/usr/bin/env python3` (or appropriate shebang), then a
  `# purpose: <one-line description>` comment. Baseline enforced by
  `grep -L '^# purpose' bin/*`.
- **Self-contained.** A script must work when run from anywhere — it
  resolves the repo via the script's own path, not `cwd`.
- **Read-only by default.** Most operator scripts query/print state
  (`passwords-for`, `wireguard-client-config`). Mutating scripts
  (`upgrade_and_restart_all`, `mikrotik-firmware-updater`,
  `sync_1password`) are the exception, not the rule, and prompt for
  confirmation.
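
The "self-contained" convention above boils down to one path expression. A sketch of that resolution; the `repo_root` helper name and example path are mine, and real scripts inline the expression and hand the result to bundlewrap's `Repository`:

```python
from os.path import dirname, realpath

def repo_root(script_path: str) -> str:
    # bin/<script> -> bin/ -> repo root, independent of the caller's cwd
    return dirname(dirname(realpath(script_path)))

# e.g. for a script living at <repo>/bin/wake:
print(repo_root('/srv/ckn-bw/bin/wake'))
```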

## How to add a script

1. Start from [`bin/script_template`](script_template) — it carries
   the canonical shebang + `# purpose:` header + `Repository(...)`
   bootstrap.
2. Add the `# purpose:` line; lowercase, terse, include a `usage:`
   example if the script takes arguments.
3. `chmod +x bin/<name>`.
4. The script can reach helpers via `bw.libs.<x>` exactly like a
   bundle does.

## Pitfalls

- **`bin/` is not on `$PATH` by default.** Invoke as `bin/<name>` from
  the repo root, or via `direnv` if `.envrc` exposes it.
- **Mutating scripts can hit Tier-3 territory** (per the fork's
  safety envelope). Don't run `upgrade_and_restart_all`,
  `mikrotik-firmware-updater`, or anything that does `node.run(...)`
  without explicit user instruction. See the fork's
  [`AGENTS.md`](https://github.com/CroneKorkN/bundlewrap/blob/main/AGENTS.md)
  for the three-tier model.
- **Vault echo.** Scripts like `passwords-for` print decrypted values
  by design; that's allowed for the human at the terminal but *not*
  for the agent — never paste output into chat, ticket, or PR
  description.

## See also

- [`script_template`](script_template) — canonical starter.
- [`docs/agents/conventions.md`](../docs/agents/conventions.md) —
  vault rules.
- [`docs/agents/commands.md`](../docs/agents/commands.md) — read-only
  bw-command guidance.
@@ -1,149 +0,0 @@
#!/usr/bin/env python3
# purpose: upgrade RouterOS and routerboard firmware on `bundle:routeros` (or any selector) — usage: mikrotik-firmware-updater [<selector>...] [--yes].

from argparse import ArgumentParser
from time import sleep

from bundlewrap.exceptions import RemoteException
from bundlewrap.utils.cmdline import get_target_nodes
from bundlewrap.utils.ui import io
from bundlewrap.repo import Repository
from os.path import realpath, dirname


# parse args
parser = ArgumentParser()
parser.add_argument("targets", nargs="*", default=['bundle:routeros'], help="bw nodes selector")
parser.add_argument("--yes", action="store_true", default=False, help="skip confirmation prompts")
args = parser.parse_args()


def wait_up(node):
    sleep(5)
    while True:
        try:
            node.run_routeros('/system/resource/print')
        except RemoteException:
            sleep(2)
            continue
        else:
            io.debug(f"{node.name}: is up")
            sleep(10)
            return


def upgrade_switch_os(node):
    # get versions for comparison
    with io.job(f"{node.name}: checking OS version"):
        response = node.run_routeros('/system/package/update/check-for-updates').raw[-1]
        installed_os = bw.libs.version.Version(response['installed-version'])
        latest_os = bw.libs.version.Version(response['latest-version'])
        io.debug(f"{node.name}: installed: {installed_os} >= latest: {latest_os}")

    # compare versions
    if installed_os >= latest_os:
        # os is up to date
        io.stdout(f"{node.name}: os up to date ({installed_os})")
    else:
        # confirm os upgrade
        if not args.yes and not io.ask(
            f"{node.name}: upgrade os from {installed_os} to {latest_os}?", default=True
        ):
            io.stdout(f"{node.name}: skipped by user")
            return

        # download os
        with io.job(f"{node.name}: downloading OS"):
            response = node.run_routeros('/system/package/update/download').raw[-1]
            io.debug(f"{node.name}: OS upgrade download response: {response['status']}")

        # install and wait for reboot
        with io.job(f"{node.name}: upgrading OS"):
            try:
                response = node.run_routeros('/system/package/update/install').raw[-1]
            except RemoteException:
                pass
            wait_up(node)

        # verify new os version
        with io.job(f"{node.name}: checking new OS version"):
            new_os = bw.libs.version.Version(node.run_routeros('/system/package/update/check-for-updates').raw[-1]['installed-version'])
            if new_os == latest_os:
                io.stdout(f"{node.name}: OS successfully upgraded from {installed_os} to {new_os}")
            else:
                raise Exception(f"{node.name}: OS upgrade failed, expected {latest_os}, got {new_os}")


def upgrade_switch_firmware(node):
    # get versions for comparison
    with io.job(f"{node.name}: checking Firmware version"):
        response = node.run_routeros('/system/routerboard/print').raw[-1]
        current_firmware = bw.libs.version.Version(response['current-firmware'])
        upgrade_firmware = bw.libs.version.Version(response['upgrade-firmware'])
        io.debug(f"{node.name}: firmware installed: {current_firmware}, upgrade: {upgrade_firmware}")

    # compare versions
    if current_firmware >= upgrade_firmware:
        # firmware is up to date
        io.stdout(f"{node.name}: firmware is up to date ({current_firmware})")
    else:
        # confirm firmware upgrade
        if not args.yes and not io.ask(
            f"{node.name}: upgrade firmware from {current_firmware} to {upgrade_firmware}?", default=True
        ):
            io.stdout(f"{node.name}: skipped by user")
            return

        # upgrade firmware
        with io.job(f"{node.name}: upgrading Firmware"):
            node.run_routeros('/system/routerboard/upgrade')

        # reboot and wait
        with io.job(f"{node.name}: rebooting"):
            try:
                node.run_routeros('/system/reboot')
            except RemoteException:
                pass
            wait_up(node)

        # verify firmware version
        new_firmware = bw.libs.version.Version(node.run_routeros('/system/routerboard/print').raw[-1]['current-firmware'])
        if new_firmware == upgrade_firmware:
            io.stdout(f"{node.name}: firmware successfully upgraded from {current_firmware} to {new_firmware}")
        else:
            raise Exception(f"firmware upgrade failed, expected {upgrade_firmware}, got {new_firmware}")


def upgrade_switch(node):
    with io.job(f"{node.name}: checking"):
        # check if routeros
        if node.os != 'routeros':
            io.progress_advance(2)
            io.stdout(f"{node.name}: skipped, unsupported os {node.os}")
            return

        # check switch reachability
        try:
            node.run_routeros('/system/resource/print')
        except RemoteException as error:
            io.progress_advance(2)
            io.stdout(f"{node.name}: skipped, error {error}")
            return

    upgrade_switch_os(node)
    io.progress_advance(1)

    upgrade_switch_firmware(node)
    io.progress_advance(1)


with io:
    bw = Repository(dirname(dirname(realpath(__file__))))

    nodes = get_target_nodes(bw, args.targets)

    io.progress_set_total(len(nodes) * 2)
    io.stdout(f"upgrading {len(nodes)} switches: {', '.join([node.name for node in sorted(nodes)])}")

    for node in sorted(nodes):
        upgrade_switch(node)
@@ -1,23 +0,0 @@
#!/usr/bin/env python3
# purpose: print node.password and selected metadata-key passwords for one node — usage: passwords-for <node>.

from bundlewrap.repo import Repository
from os.path import realpath, dirname

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('node', help='Node to generate passwords for')
args = parser.parse_args()

bw = Repository(dirname(dirname(realpath(__file__))))
node = bw.get_node(args.node)

if node.password:
    print(f"password: {node.password}")

for metadata_key in sorted([
    'users/root/password',
]):
    if value := node.metadata.get(metadata_key, None):
        print(f"{metadata_key}: {value}")
bin/rcon (1 change)
@@ -1,5 +1,4 @@
#!/usr/bin/env python3
# purpose: send an RCON command to a left4dead2 server defined in node metadata — usage: rcon (list) | rcon <server> <command>.

from sys import argv
from os.path import realpath, dirname
@@ -1,7 +1,6 @@
#!/usr/bin/env python3
# purpose: starter template for new operator scripts under bin/.

from bundlewrap.repo import Repository
from os.path import realpath, dirname

bw = Repository(dirname(dirname(realpath(__file__))))
repo = Repository(dirname(dirname(realpath(__file__))))
@@ -1,133 +0,0 @@
#!/usr/bin/env python3
# purpose: upsert one 1Password login per `bundle:routeros` node, keyed on the bw node id.

from bundlewrap.repo import Repository
from os.path import realpath, dirname

import json
import os
import subprocess
from dataclasses import dataclass
from typing import Optional, List

bw = Repository(dirname(dirname(realpath(__file__))))

VAULT = bw.vault.decrypt('encrypt$gAAAAABpLgX_xxb5NmNCl3cgHM0JL65GT6PHVXO5gwly7IkmWoEgkCDSuAcSAkNFB8Tb4RdnTdpzVQEUL1XppTKVto_O7_b11GjATiyQYiSfiQ8KZkTKLvk=').value
BW_TAG = "bw"
BUNDLEWRAP_FIELD_LABEL = "bundlewrap node id"


@dataclass
class OpResult:
    stdout: str
    stderr: str
    returncode: int


def main():
    for node in bw.nodes_in_group('routeros'):
        upsert_node_item(
            node_name=node.name,
            node_uuid=node.metadata.get('id'),
            username=node.username,
            password=node.password,
            url=f'http://{node.hostname}',
        )


def run_op(args):
    proc = subprocess.run(
        ["op", "--vault", VAULT] + args,
        env=os.environ.copy(),
        capture_output=True,
        text=True,
    )

    if proc.returncode != 0:
        raise RuntimeError(
            f"op {' '.join(args)} failed with code {proc.returncode}:\n"
            f"STDOUT:\n{proc.stdout}\n\nSTDERR:\n{proc.stderr}"
        )

    return OpResult(stdout=proc.stdout, stderr=proc.stderr, returncode=proc.returncode)


def op_item_list_bw():
    out = run_op([
        "item", "list",
        "--tags", BW_TAG,
        "--format", "json",
    ])
    stdout = out.stdout.strip()
    return json.loads(stdout) if stdout else []


def op_item_get(item_id):
    args = ["item", "get", item_id, "--format", "json"]
    return json.loads(run_op(args).stdout)


def op_item_create(title, node_uuid, username, password, url):
    print(f"creating {title}")
    return json.loads(run_op([
        "item", "create",
        "--category", "LOGIN",
        "--title", title,
        "--tags", BW_TAG,
        "--url", url,
        "--format", "json",
        f"username={username}",
        f"password={password}",
        f"{BUNDLEWRAP_FIELD_LABEL}[text]={node_uuid}",
    ]).stdout)


def op_item_edit(item_id, title, username, password, url):
    print(f"updating {title}")
    return json.loads(run_op([
        "item", "edit",
        item_id,
        "--title", title,
        "--url", url,
        "--format", "json",
        f"username={username}",
        f"password={password}",
    ]).stdout)


def find_node_item_id(node_uuid):
    for summary in op_item_list_bw():
        item_id = summary.get("id")
        if not item_id:
            continue

        item = op_item_get(item_id)
        for field in item.get("fields") or []:
            label = field.get("label")
            value = field.get("value")
            if label == BUNDLEWRAP_FIELD_LABEL and value == node_uuid:
                return item_id
    return None


def upsert_node_item(node_name, node_uuid, username, password, url):
    if item_id := find_node_item_id(node_uuid):
        return op_item_edit(
            item_id=item_id,
            title=node_name,
            username=username,
            password=password,
            url=url,
        )
    else:
        return op_item_create(
            title=node_name,
            node_uuid=node_uuid,
            username=username,
            password=password,
            url=url,
        )


if __name__ == "__main__":
    main()
@@ -1,217 +0,0 @@
#!/usr/bin/env python3
# purpose: add missing EXIF/QuickTime timestamps to photos in a directory using mdls + exiftool — usage: timestamp_icloud_photos_for_nextcloud -d <dir>.

from subprocess import check_output, CalledProcessError
from datetime import datetime, timedelta
from pathlib import Path
import json
from argparse import ArgumentParser
from concurrent.futures import ThreadPoolExecutor, as_completed
from os import cpu_count
from time import sleep

EXT_GROUPS = {
    "quicktime": {".mp4", ".mov", ".heic", ".cr3"},
    "exif": {".jpg", ".jpeg", ".cr2"},
}
DATETIME_KEYS = [
    ("Composite", "SubSecDateTimeOriginal"),
    ("Composite", "SubSecCreateDate"),
    ('ExifIFD', 'DateTimeOriginal'),
    ('ExifIFD', 'CreateDate'),
    ('XMP-xmp', 'CreateDate'),
    ('Keys', 'CreationDate'),
    ('QuickTime', 'CreateDate'),
    ('XMP-photoshop', 'DateCreated'),
]


def run(command):
    return check_output(command, text=True).strip()


def mdls_timestamp(file):
    for i in range(5):  # retry a few times in case of transient mdls failures
        try:
            output = run(('mdls', '-raw', '-name', 'kMDItemContentCreationDate', file))
        except CalledProcessError as e:
            print(f"{file}: Error running mdls (attempt {i+1}/5): {e}")
        else:
            try:
                return datetime.strptime(output, "%Y-%m-%d %H:%M:%S %z")
            except ValueError as e:
                print(f"{file}: Error parsing mdls output (attempt {i+1}/5): {e}")

        sleep(1)  # brief pause before the next attempt

    raise RuntimeError(f"Failed to get mdls timestamp for {file} after 5 attempts")


def exiftool_data(file):
    try:
        output = run((
            'exiftool',
            '-j',         # JSON output
            '-a',         # allow duplicate tags
            '-u',         # extract unknown tags
            '-g1',        # group output by family-1 group
            '-time:all',  # all time tags
            '-api', 'QuickTimeUTC=1',  # use UTC for QuickTime timestamps
            '-d', '%Y-%m-%dT%H:%M:%S%z',
            file,
        ))
    except CalledProcessError as e:
        print(f"Error running exiftool: {e}")
        return None
    else:
        return json.loads(output)[0]


def exiftool_timestamp(file):
    data = exiftool_data(file)
    for category, key in DATETIME_KEYS:
        try:
            value = data[category][key]
            return category, key, datetime.strptime(value, '%Y-%m-%dT%H:%M:%S%z')
        except (TypeError, KeyError, ValueError):
            continue
    print(f"⚠️ {file}: No timestamp found in exiftool: " + json.dumps(data, indent=2))
    return None, None, None


def photo_has_embedded_timestamp(file):
    mdls_ts = mdls_timestamp(file)
    category, key, exiftool_ts = exiftool_timestamp(file)

    if not exiftool_ts:
        print(f"⚠️ {file}: No timestamp found in exiftool")
        return False

    # normalize timezone for comparison
    exiftool_ts = exiftool_ts.astimezone(mdls_ts.tzinfo)
    delta = abs(mdls_ts - exiftool_ts)

    if delta < timedelta(hours=1):  # allow for small differences
        print(f"✅ {file}: {mdls_ts.isoformat()} (#{category}:{key})")
        return True
    else:
        print(f"⚠️ {file}: {mdls_ts.isoformat()} != {exiftool_ts} (Δ={delta})")
        return False


def photos_without_embedded_timestamps(directory):
    executor = ThreadPoolExecutor(max_workers=cpu_count()//2)
    try:
        futures = {
            executor.submit(photo_has_embedded_timestamp, file): file
            for file in directory.iterdir()
            if file.is_file()
            if file.suffix.lower() not in {".aae"}
            if not file.name.startswith('.')
        }

        for future in as_completed(futures):
            file = futures[future]
            has_ts = future.result()  # raises immediately on first failed future

            if has_ts:
                file.rename(file.parent / 'ok' / file.name)
            else:
                yield file

    except Exception:
        executor.shutdown(wait=False, cancel_futures=True)
        raise
    else:
        executor.shutdown(wait=True)


def exiftool_write(file, assignments):
    print(f"🔵 {file}: Writing -- {assignments}")
    return run((
        "exiftool", "-overwrite_original",
        "-api", "QuickTimeUTC=1",
        *[
            f"-{group}:{tag}={value}"
            for group, tag, value in assignments
        ],
        str(file),
    ))


def add_missing_timestamp(file):
    data = exiftool_data(file)
    mdls_ts = mdls_timestamp(file)

    offset = mdls_ts.strftime("%z")
    offset = f"{offset[:3]}:{offset[3:]}" if len(offset) == 5 else offset

    exif_ts = mdls_ts.strftime("%Y:%m:%d %H:%M:%S")
    qt_ts = mdls_ts.strftime("%Y:%m:%d %H:%M:%S")
    qt_ts_tz = f"{qt_ts}{offset}"
    ext = file.suffix.lower()

    try:
        if ext in {".heic"}:
            exiftool_write(file, [
                ("ExifIFD", "DateTimeOriginal", qt_ts),
                ("ExifIFD", "CreateDate", qt_ts),
                ("ExifIFD", "OffsetTime", offset),
                ("ExifIFD", "OffsetTimeOriginal", offset),
                ("ExifIFD", "OffsetTimeDigitized", offset),
                ("QuickTime", "CreateDate", qt_ts_tz),
                ("Keys", "CreationDate", qt_ts_tz),
                ("XMP-xmp", "CreateDate", qt_ts_tz),
            ])
        elif "QuickTime" in data or ext in {".mp4", ".mov", ".heic", ".cr3"}:
            exiftool_write(file, [
                ("QuickTime", "CreateDate", qt_ts_tz),
                ("Keys", "CreationDate", qt_ts_tz),
            ])
        elif "ExifIFD" in data or ext in {".jpg", ".jpeg", ".cr2", ".webp"}:
            exiftool_write(file, [
                ("ExifIFD", "DateTimeOriginal", exif_ts),
                ("ExifIFD", "CreateDate", exif_ts),
                ("IFD0", "ModifyDate", exif_ts),
                ("ExifIFD", "OffsetTime", offset),
                ("ExifIFD", "OffsetTimeOriginal", offset),
                ("ExifIFD", "OffsetTimeDigitized", offset),
            ])
        elif ext in {".png", ".gif", ".avif"}:
            exiftool_write(file, [
                ("XMP-xmp", "CreateDate", qt_ts_tz),
                ("XMP-photoshop", "DateCreated", exif_ts),
            ])
        else:
            print(f"❌ {file}: unsupported type, skipped")
            return

        if photo_has_embedded_timestamp(file):
            print(f"✅ {file}: Timestamp successfully added: {mdls_ts.isoformat()}")
            file.rename(file.parent / 'processed' / file.name)
            return
        else:
            category, key, exiftool_ts = exiftool_timestamp(file)
            print(f"❌ {file}: Timestamp still wrong/missing after write '{category}:{key}:{exiftool_ts}': #{json.dumps(data, indent=4)}")
            return
    except CalledProcessError as e:
        print(f"❌ {file}: Failed to write timestamp: {e}")
        return


if __name__ == "__main__":
    parser = ArgumentParser(description="Print timestamps of photos in the current directory.")
    parser.add_argument("-d", "--directory", help="Directory to scan for photos")
    args = parser.parse_args()

    directory = Path(args.directory)
    (directory/'ok').mkdir(exist_ok=True)
    (directory/'processed').mkdir(exist_ok=True)

    _photos_without_embedded_timestamps = list(photos_without_embedded_timestamps(directory))
    print(f"{len(_photos_without_embedded_timestamps)} photos without embedded timestamps found.")
    print("Press Enter to add missing timestamps...")
    input()

    for file in _photos_without_embedded_timestamps:
        add_missing_timestamp(file)
@@ -1,5 +1,4 @@
#!/usr/bin/env python3
# purpose: apt-update and full-upgrade every non-dummy debian node, then reboot in WireGuard-aware order.

from bundlewrap.repo import Repository
from os.path import realpath, dirname
@@ -24,7 +23,7 @@ for node in nodes:
    print(node.run('DEBIAN_FRONTEND=noninteractive apt update').stdout.decode())
    print(node.run('DEBIAN_FRONTEND=noninteractive apt list --upgradable').stdout.decode())
    if int(node.run('DEBIAN_FRONTEND=noninteractive apt list --upgradable 2> /dev/null | grep upgradable | wc -l').stdout.decode()):
        print(node.run('DEBIAN_FRONTEND=noninteractive apt -qy full-upgrade').stdout.decode())
        print(node.run('DEBIAN_FRONTEND=noninteractive apt -y dist-upgrade').stdout.decode())

# REBOOT IN ORDER
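
The `grep upgradable | wc -l` pipeline in the hunk above gates the upgrade step on a line count. The same decision can be sketched in plain Python; the function name and sample text are mine, for illustration only:

```python
def needs_upgrade(apt_list_output: str) -> bool:
    # mirror `grep upgradable | wc -l`: count matching lines and treat
    # a nonzero count as "there is something to upgrade"
    count = sum(1 for line in apt_list_output.splitlines() if 'upgradable' in line)
    return bool(count)

sample = "bash/stable 5.2-1 amd64 [upgradable from: 5.1-2]\n"
print(needs_upgrade(sample))
```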
bin/wake (1 change)
@@ -1,5 +1,4 @@
#!/usr/bin/env python3
# purpose: wake one node via WoL by name — usage: wake <node>.

from bundlewrap.repo import Repository
from os.path import realpath, dirname
@@ -1,25 +1,23 @@
#!/usr/bin/env python3
# purpose: print or QR-render a WireGuard client config from htz.mails metadata — usage: wireguard-client-config <client>.

from bundlewrap.repo import Repository
from os.path import realpath, dirname
from sys import argv
from ipaddress import ip_network, ip_interface
import argparse

if len(argv) != 3:
    print(f'usage: {argv[0]} <node> <client>')
    exit(1)

# get info from repo
repo = Repository(dirname(dirname(realpath(__file__))))
server_node = repo.get_node('htz.mails')
available_clients = server_node.metadata.get('wireguard/clients').keys()
server_node = repo.get_node(argv[1])

# parse args
parser = argparse.ArgumentParser(description='Generate WireGuard client configuration.')
parser.add_argument('client', choices=available_clients, help='The client name to generate the configuration for.')
args = parser.parse_args()
if argv[2] not in server_node.metadata.get('wireguard/clients'):
    print(f'client {argv[2]} not found in: {server_node.metadata.get("wireguard/clients").keys()}')
    exit(1)

data = server_node.metadata.get(f'wireguard/clients/{argv[2]}')

# get cert
data = server_node.metadata.get(f'wireguard/clients/{args.client}')
vpn_network = ip_interface(server_node.metadata.get('wireguard/my_ip')).network
allowed_ips = [
    vpn_network,
@@ -45,15 +43,10 @@ Endpoint = {ip_interface(server_node.metadata.get('network/external/ipv4')).ip}:
PersistentKeepalive = 10
'''

answer = input("print config or qrcode? [Cq]: ").strip().upper()
match answer:
    case '' | 'C':
        print('>>>>>>>>>>>>>>>')
        print(conf)
        print('<<<<<<<<<<<<<<<')
    case 'Q':
        import pyqrcode
        print(pyqrcode.create(conf).terminal(quiet_zone=1))
    case _:
        print(f'Invalid option "{answer}".')
        exit(1)
print('>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
print(conf)
print('<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<')

if input("print qrcode? [Yn]: ").upper() in ['', 'Y']:
    import pyqrcode
    print(pyqrcode.create(conf).terminal(quiet_zone=1))
@@ -1,204 +0,0 @@
# bundles/

## Before you start

Read [`docs/agents/conventions.md`](../docs/agents/conventions.md) first
— it covers vault calls, demagify, the `repo.libs.<x>` helpers, and the
files agents must not modify. Skipping it leads to subtly broken bundles
(vault calls in the wrong place, dict-in-set `TypeError` because of
unhashable nesting, etc.).

For bundlewrap-language reference (item types, dep keywords,
`metadata_reactor`, `defaults`, item-file template syntax) see the fork's
[`AGENTS.md`](https://github.com/CroneKorkN/bundlewrap/blob/main/AGENTS.md)
and its [`docs/content/guide/item_file_templates.md`](https://github.com/CroneKorkN/bundlewrap/blob/main/docs/content/guide/item_file_templates.md).

## What's here

103 bundles. Each is a directory `bundles/<name>/` containing some of:

```
bundles/<name>/
├── items.py      # the items this bundle creates (files, services, packages, …)
├── metadata.py   # `defaults` + `@metadata_reactor` functions
├── files/        # static or templated file payloads referenced from items.py
└── README.md     # one doc per bundle, for humans and agents (see "Per-bundle README" below)
```

## Conventions

- **Bundle names** are lowercase, hyphen-separated: `backup-server`,
  `bind-acme`, `dm-crypt`. No underscores in new bundle names — see
  [`conventions.md#naming-conventions`](../docs/agents/conventions.md#naming-conventions).
- **`items.py`** is plain Python; it produces `files = {...}`,
  `pkg_apt = {...}`, `svc_systemd = {...}`, etc. dicts at module scope.
  Cross-item dependencies use `needs` / `triggers` / `triggered_by` —
  see the fork's `AGENTS.md` for the full keyword cheat sheet.
- **`metadata.py`** uses `defaults = {...}` for static seed values and
  `@metadata_reactor.provides(...)` for derived values. Reactors are
  pure functions of `(metadata,)` — no side effects, no I/O.
- **Helpers go in [`libs/`](../libs/AGENTS.md)** when they're useful to
  more than one bundle. Don't duplicate logic across bundles.
- **Custom item types** (e.g. `download:`) live in
  [`items/`](../items/AGENTS.md), not per-bundle.
- **Bundles own application-wide knowledge; nodes carry only the few
  per-host knobs the bundle actually needs.** When designing a bundle,
  identify the per-node knobs (e.g. domain, uplink interface, a
  vault-id suffix) and put everything else in `defaults`, or in a
  reactor that derives from those knobs. Per-node random secrets
  belong in `defaults` via `repo.vault.random_bytes_as_base64_for(...)`
  keyed on the node — not in the node file. See
  `bundles/left4me/metadata.py:10` (`secret_key` derived in defaults)
  and `bundles/postgresql/metadata.py:4` (vault-derived `password_for`
  at module scope).
## How to add a new bundle

1. `mkdir bundles/<name>/` (lowercase, hyphenated).
2. Write `items.py` and (if anything is configurable) `metadata.py`.
   Use `repo.libs.hashable.hashable(...)` when you need to nest a dict
   or set inside a metadata set; raw dicts/sets aren't hashable.
3. Drop static payloads into `bundles/<name>/files/`. For Mako-templated
   files, declare `'content_type': 'mako'` on the `file:` item — see
   the fork's
   [item-file-templates guide](https://github.com/CroneKorkN/bundlewrap/blob/main/docs/content/guide/item_file_templates.md).
4. **Wire to nodes.** Either add an entry to the relevant
   [`groups/<axis>/<x>.py`](../groups/AGENTS.md) (preferred for shared
   bundles) or to the node's `bundles` list directly
   ([`nodes/AGENTS.md`](../nodes/AGENTS.md)).
5. **Verify, in this order:**
   - `bw test` — repo-wide parse + cross-cutting hooks. Loads every
     bundle, but reactors don't fire for nodes that haven't opted into
     the bundle yet — bugs in new reactors stay hidden here.
   - **Attach the bundle to a node** (via the node's `bundles` list, or
     a group it belongs to). Until you do, the next steps don't actually
     exercise the bundle.
   - `bw test <node>` — exercises every reactor and item-graph edge for
     that node. This is where most new-bundle bugs surface.
   - `bw items <node> --blame` — confirm items materialise with the
     right paths, authored by the expected bundle.
   - `bw metadata <node> -k <a/b>` — spot-check derived metadata.
   - `bw hash <node>` — preview vs current host state.

   See [`docs/agents/commands.md#bundle-validation-workflow`](../docs/agents/commands.md#bundle-validation-workflow)
   for the rationale.
6. Add a `bundles/<name>/README.md`. See "Per-bundle README" below
   for what to cover.
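Step 2's hashability caveat in a runnable sketch. `repo.libs.hashable.hashable` is only available inside the repo, so a simplified frozenset-based stand-in is used here for illustration (the real helper keeps dict semantics):

```python
contribution = {'interval': '30s'}  # a dict you want inside a metadata set

try:
    configs = {contribution}  # raw dict in a set
except TypeError as error:
    print(error)  # unhashable type: 'dict'

# Simplified stand-in for repo.libs.hashable.hashable: recursively
# freeze dicts/sets/lists into hashable, order-independent values.
def hashable(value):
    if isinstance(value, dict):
        return frozenset((key, hashable(val)) for key, val in value.items())
    if isinstance(value, (set, list, tuple)):
        return frozenset(hashable(val) for val in value)
    return value

configs = {hashable(contribution)}  # now valid
```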
## How to remove a bundle

1. `git grep '<name>'` in `nodes/`, `groups/`, and other `bundles/` to
   find references.
2. Remove those references.
3. `rm -rf bundles/<name>/`.
4. `bw test` and `bw nodes` to confirm clean.

## Pitfalls

- **`metadata.py` is evaluated at load time** for *every* node, every
  invocation of `bw`. Heavy work or I/O slows the whole repo. Keep
  reactors pure and fast; pre-compute in `libs/` if you must.
- **Static files vs templates.** `bundles/<x>/files/<f>` is static
  unless the matching `file:` item declares `content_type='mako'`
  (or a templating extension triggers it). To check, read the matching
  `file:` entry in `items.py`.
- **`file:` `source` defaults to the destination basename.** For a
  destination of `/etc/foo/bar.conf` with no `source` key, bw looks
  for `bundles/<bundle>/files/bar.conf`. Only declare `source`
  explicitly when the basename you want differs (e.g. shipping a Mako
  template named `bar.conf.mako` to a destination of
  `/etc/foo/bar.conf`).
- **Reactors writing across namespaces.** Some bundles' reactors write
  into other bundles' metadata namespaces (e.g. `nextcloud` writes
  into `apt.packages`, `archive.paths`). When you change such a bundle,
  every consumer's metadata changes too. The bundle's `README.md`
  often calls these out — but the authoritative source is `metadata.py`
  itself; grep `'<other-bundle>':` in the reactors when in doubt.
- **`bw hash` doesn't accept selectors.** Use `bw hash <node>` per
  literal name; see the fork's runbook.
- **Reactors must read metadata.** If a reactor body returns a static
  dict without calling `metadata.get(...)`, bw raises
  `ValueError: <reactor> on <node> did not request any metadata, you
  might want to use defaults instead` once a node consumes the bundle.
  Fix: fold the contribution into `defaults`. The rule applies even
  when the reactor writes into another bundle's namespace — a static
  contribution to e.g. `nftables/output` belongs in `defaults`, where
  bw merges it with other bundles' contributions.
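To make that failure mode concrete, here is a sketch; the `FakeMetadata` class is a minimal stand-in for bundlewrap's metadata proxy, for illustration only, and the `@metadata_reactor.provides(...)` decorator is omitted because bundlewrap injects it:

```python
# BAD: a reactor that never calls metadata.get() -- bundlewrap rejects
# this with "did not request any metadata, you might want to use
# defaults instead". The static contribution belongs in `defaults`.
def static_reactor(metadata):
    return {'nftables': {'output': {'tcp dport 443 accept'}}}

# GOOD: the contribution is derived from metadata it actually reads.
def derived_reactor(metadata):
    domain = metadata.get('example/domain')  # hypothetical key
    return {'letsencrypt': {'domains': {domain: {}}}}

# Minimal stand-in for the metadata proxy (illustration only):
# resolves slash-separated paths against a nested dict.
class FakeMetadata:
    def __init__(self, data):
        self.data = data

    def get(self, path):
        value = self.data
        for key in path.split('/'):
            value = value[key]
        return value

result = derived_reactor(FakeMetadata({'example': {'domain': 'example.com'}}))
```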
- **`triggers` ↔ `triggered: True` invariant.** Any item listed in
  another's `triggers` list must declare `triggered: True`. bw
  enforces this at `bw test` time: *"…triggered by …, but missing
  'triggered' attribute"*. Corollary: an action can't be both in an
  upstream `triggers` list AND self-healing every apply — pick one.
- **Triggered actions don't recover from partial failure.** When an
  upstream item's apply succeeds but its triggered downstream action
  fails, subsequent applies can't recover via the trigger chain —
  upstream is "already in desired state" and never re-triggers. For
  actions that must self-heal (pip installs, chowns, migrations),
  drop `triggered: True` and gate the command with `unless: <fast-check>`.
  `unless` is a shell command on the target host whose exit status
  decides whether the main command runs (exit 0 = skip); it's checked
  at fire time, after `triggered:` filtering.
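A self-healing action of that shape might look like this; the command, the check, and the item names are illustrative:

```python
# actions item: re-runs the pip install on every apply *unless* the
# fast check exits 0. No 'triggered': True, so a one-off failure
# does not leave the action permanently un-fired.
actions = {
    'app_pip_install': {
        'command': '/opt/app/venv/bin/pip install -r /opt/app/requirements.txt',
        'unless': '/opt/app/venv/bin/pip check',  # exit 0 = satisfied = skip
        'needs': [
            'file:/opt/app/requirements.txt',  # hypothetical upstream item
        ],
    },
}
```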
## Per-bundle README

Each bundle has (or should have) a `README.md`. One doc per bundle,
written for humans and agents both. There's no fixed structure —
match the bundle's actual surface, write what helps a future reader
(or future you) avoid trial-and-error.

The existing READMEs vary in quality and shape. For orientation,
look at the bigger ones, not the two-line ones:

- [`bundles/flask/README.md`](flask/README.md) — title + one-sentence
  purpose, a metadata example as a Python dict, then the contract
  the consuming git repo has to satisfy + a logging pitfall. The
  closest thing to a "balanced doc" in tree.
- [`bundles/dm-crypt/README.md`](dm-crypt/README.md) — same shape,
  shorter: purpose + metadata example + one sentence on effect.
- [`bundles/apt/README.md`](apt/README.md) — relevant upstream URLs
  at the top, then a Python metadata example with rich inline
  comments (type / optionality / where keys come from).
- [`bundles/nextcloud/README.md`](nextcloud/README.md) — operational
  scratchpad: iPhone-import recipe, preview-generator commands,
  reset queries. Captures muscle-memory the maintainer would
  otherwise re-learn each time.

Useful things to include, when relevant:

- A sentence or two on what the bundle does and when you'd attach it.
- A metadata example as a Python dict literal, with `#` comments
  on each key (type, required vs default, units, where it comes
  from). This is the cleanest way to communicate the schema and
  matches how `metadata.py` actually looks.
- Anything non-obvious about wiring it up — required keys without
  defaults, group-membership expectations, manual one-time steps.
- Cross-namespace metadata writes, when this bundle's reactors
  populate another bundle's namespace. Easy to miss, cheap to flag.
- Gotchas, debug recipes, failure modes you've actually hit.
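The metadata-example convention might look like this in a README; the keys and values here are invented for illustration:

```python
# What a hypothetical bundles/example-daemon expects under its key:
example_metadata = {
    'example-daemon': {
        'domain': 'example.com',  # str, REQUIRED: no default, set per node
        'port': 8080,             # int, optional (defaults to 8080)
        'extra_flags': set(),     # set[str], optional; merged across groups
    },
}
```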
What to skip:

- An exhaustive item list — `items.py` is shorter and more accurate.
- Anything that would just rot — version numbers, "TODO" lists,
  change notes. Use git history.

If a single paragraph is enough to say what's worth saying, write a
single paragraph. Verbosity isn't a goal.

Convention going forward is leave-as-you-go: any time you materially
edit a bundle, top up its README (or write one if it's missing).
Don't burn a session bulk-reformatting the existing ones — uneven
quality is part of what we accept in exchange for not blocking other
work.

## See also

- [`docs/agents/conventions.md`](../docs/agents/conventions.md) — repo
  idioms (vault, demagify, naming, do-not-touch list).
- [`docs/agents/commands.md`](../docs/agents/commands.md) — repo-specific
  command deltas.
- [`items/AGENTS.md`](../items/AGENTS.md) — custom item types
  (`download:`); when to write a new one vs use `file:`.
- [`libs/AGENTS.md`](../libs/AGENTS.md) — shared helpers.
- Fork's [`AGENTS.md`](https://github.com/CroneKorkN/bundlewrap/blob/main/AGENTS.md)
  — bundlewrap-language reference + safety envelope.
@@ -13,14 +13,16 @@ defaults = {
        },
    },
    'telegraf': {
        'inputs': {
            'exec': {
                'apcupsd': {
                    'commands': ["sudo /usr/local/share/telegraf/apcupsd"],
                    'name_override': "apcupsd",
                    'data_format': "influx",
                    'interval': '30s',
                    'flush_interval': '30s',
        'config': {
            'inputs': {
                'exec': {
                    repo.libs.hashable.hashable({
                        'commands': ["sudo /usr/local/share/telegraf/apcupsd"],
                        'name_override': "apcupsd",
                        'data_format': "influx",
                        'interval': '30s',
                        'flush_interval': '30s',
                    }),
                },
            },
        },
@@ -13,9 +13,6 @@
        'deb',
        'deb-src',
    },
    'options': {  # optional
        'aarch': 'amd64',
    },
    'urls': {
        'https://deb.debian.org/debian',
    },
@@ -62,7 +62,6 @@ files = {
    '/usr/lib/nagios/plugins/check_apt_upgradable': {
        'mode': '0755',
    },
    # /etc/kernel/postinst.d/apt-auto-removal
}

actions = {
@@ -4,8 +4,6 @@ defaults = {
        'apt-listchanges': {
            'installed': False,
        },
        'ca-certificates': {},
        'unattended-upgrades': {},
    },
    'config': {
        'DPkg': {

@@ -23,10 +21,6 @@ defaults = {
            },
        },
        'APT': {
            'Periodic': {
                'Update-Package-Lists': '1',
                'Unattended-Upgrade': '1',
            },
            'NeverAutoRemove': {
                '^firmware-linux.*',
                '^linux-firmware$',

@@ -54,11 +48,6 @@ defaults = {
                'Error-Mode': 'any',
            },
        },
        'Unattended-Upgrade': {
            'Origins-Pattern': {
                "origin=*",
            },
        },
    },
    'sources': {},
},

@@ -117,6 +106,33 @@ def signed_by(metadata):
    }


@metadata_reactor.provides(
    'apt/config',
    'apt/packages',
)
def unattended_upgrades(metadata):
    return {
        'apt': {
            'config': {
                'APT': {
                    'Periodic': {
                        'Update-Package-Lists': '1',
                        'Unattended-Upgrade': '1',
                    },
                },
                'Unattended-Upgrade': {
                    'Origins-Pattern': {
                        "origin=*",
                    },
                },
            },
            'packages': {
                'unattended-upgrades': {},
            },
        },
    }


# @metadata_reactor.provides(
#     'apt/config',
#     'apt/list_changes',
@@ -1,31 +1,13 @@
#!/bin/bash

set -u
set -exu

# FIXME: inelegant
% if wol_command:
${wol_command}
% endif

exit=0
failed_paths=""

for path in $(jq -r '.paths | .[]' < /etc/backup/config.json)
do
    echo backing up $path
    /opt/backup/backup_path "$path"
    # set exit to 1 if any backup fails
    if [ $? -ne 0 ]
    then
        echo ERROR: backing up $path failed >&2
        exit=5
        failed_paths="$failed_paths $path"
    fi
done

if [ $exit -ne 0 ]
then
    echo "ERROR: failed to backup paths: $failed_paths" >&2
fi

exit $exit
@@ -1,6 +1,6 @@
#!/bin/bash

set -eu
set -exu

path=$1
uuid=$(jq -r .client_uuid < /etc/backup/config.json)
@@ -33,7 +33,6 @@ def acme_zone(metadata):
        str(ip_interface(other_node.metadata.get('network/internal/ipv4')).ip)
        for other_node in repo.nodes
        if other_node.metadata.get('letsencrypt/domains', {})
        and other_node.metadata.get('network/internal/ipv4', None)
    },
    *{
        str(ip_interface(other_node.metadata.get('wireguard/my_ip')).ip)
@@ -1,30 +0,0 @@
# bind

Authoritative DNS — primary plus optional `bind/master_node` slaves.

## Applying changes needs both nodes

The slave's bw-managed zone files are rendered from the master's
metadata at slave-apply time (see `bundles/bind/items.py:100`). When
you change a record on the master (adding a `letsencrypt/domains`
entry, a new vhost, etc.), the change is only published once you
apply BOTH:

```sh
bw apply htz.mails      # primary (where the source records live)
bw apply ovh.secondary  # secondary (renders its own zone files)
```

Until both have been applied, `bw verify ovh.secondary` will show
stale zones and consumers that hit the secondary (Let's Encrypt's
secondary-region validators in particular) will see NXDOMAIN. Even
though the slave's named.conf.local declares `type slave;`, don't
rely on bind's own AXFR catching up — the bw-rendered file on disk
is what `bw verify` measures.

## See also

- `bundles/bind-acme/` — the in-house ACME-update receiver.
- `bundles/letsencrypt/README.md` — DNS-01 prerequisites and the
  negative-cache penalty (the most common operational consequence
  of forgetting to apply the secondary).
@@ -1,8 +0,0 @@
$TTL    86400
@       IN      SOA     localhost. root.localhost. (
                        1        ; Serial
                        604800   ; Refresh
                        86400    ; Retry
                        2419200  ; Expire
                        86400 )  ; Negative Cache TTL
        IN      NS      localhost.
@@ -29,7 +29,6 @@ view "${view_name}" {

% if view_conf['is_internal']:
    recursion yes;
    include "/etc/bind/zones.rfc1918";
% else:
    recursion no;
    rate-limit {

@@ -63,6 +62,9 @@ view "${view_name}" {
        file "/var/lib/bind/${view_name}/${zone_name}";
    };
% endfor

    include "/etc/bind/named.conf.default-zones";
    include "/etc/bind/zones.rfc1918";
};

% endfor
@@ -10,7 +10,7 @@ options {

% if type == 'master':
    notify yes;
    also-notify { ${' '.join(sorted(f'{ip};' for ip in slave_ips))} };
    allow-transfer { ${' '.join(sorted(f'{ip};' for ip in slave_ips))} };
    also-notify { ${' '.join([f'{ip};' for ip in slave_ips])} };
    allow-transfer { ${' '.join([f'{ip};' for ip in slave_ips])} };
% endif
};
@@ -1,19 +0,0 @@
zone "10.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "16.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "17.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "18.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "19.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "20.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "21.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "22.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "23.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "24.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "25.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "26.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "27.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "28.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "29.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "30.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "31.172.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "168.192.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
zone "254.169.in-addr.arpa" { type master; file "/etc/bind/db.empty"; };
@@ -142,21 +142,3 @@ actions['named-checkconf'] = {
        'svc_systemd:bind9:reload',
    ]
}

# answers queries for private IP addresses with NXDOMAIN instead of forwarding them to the internet
files['/etc/bind/zones.rfc1918'] = {
    'needed_by': [
        'svc_systemd:bind9',
    ],
    'triggers': [
        'svc_systemd:bind9:reload',
    ],
}
files['/etc/bind/db.empty'] = {
    'needed_by': [
        'svc_systemd:bind9',
    ],
    'triggers': [
        'svc_systemd:bind9:reload',
    ],
}
@@ -3,7 +3,6 @@ from json import dumps
h = repo.libs.hashable.hashable
repo.libs.bind.repo = repo


defaults = {
    'apt': {
        'packages': {

@@ -49,13 +48,13 @@ defaults = {
        },
    },
    'telegraf': {
        'inputs': {
            'bind': {
                'default': {
        'config': {
            'inputs': {
                'bind': [{
                    'urls': ['http://localhost:8053/xml/v3'],
                    'gather_memory_contexts': False,
                    'gather_views': True,
                },
                }],
            },
        },
    },

@@ -212,7 +211,7 @@ def generate_keys(metadata):
            'token': repo.libs.hmac.hmac_sha512(
                key,
                str(repo.vault.random_bytes_as_base64_for(
                    f"{metadata.get('id')} bind key {key} 20250713",
                    f"{metadata.get('id')} bind key {key}",
                    length=32,
                )),
            )
@@ -1,165 +0,0 @@
#!/usr/bin/env python3

import os
import datetime
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf
from scipy.fft import rfft, rfftfreq
import shutil
import traceback


RECORDINGS_DIR = "recordings"
PROCESSED_RECORDINGS_DIR = "recordings/processed"
DETECTIONS_DIR = "events"

DETECT_FREQUENCY = 211  # Hz
DETECT_FREQUENCY_TOLERANCE = 2  # Hz
ADJACENCY_FACTOR = 2  # area to look for the frequency (e.g. 2 means 100Hz to 400Hz for 200Hz detection)
BLOCK_SECONDS = 3  # seconds (longer means more frequency resolution, but less time resolution)
DETECTION_DISTANCE_SECONDS = 30  # seconds (minimum time between detections)
BLOCK_OVERLAP_FACTOR = 0.9  # overlap between blocks (0.2 means 20% overlap)
MIN_SIGNAL_QUALITY = 1000.0  # maximum noise level (relative DB) to consider a detection valid
PLOT_PADDING_START_SECONDS = 2  # seconds (padding before and after the event in the plot)
PLOT_PADDING_END_SECONDS = 3  # seconds (padding before and after the event in the plot)

DETECTION_DISTANCE_BLOCKS = DETECTION_DISTANCE_SECONDS // BLOCK_SECONDS  # number of blocks to skip after a detection
DETECT_FREQUENCY_FROM = DETECT_FREQUENCY - DETECT_FREQUENCY_TOLERANCE  # Hz
DETECT_FREQUENCY_TO = DETECT_FREQUENCY + DETECT_FREQUENCY_TOLERANCE  # Hz


def process_recording(filename):
    print('processing', filename)

    # get ISO 8601 nanosecond recording date from filename
    date_string_from_filename = os.path.splitext(filename)[0]
    recording_date = datetime.datetime.strptime(date_string_from_filename, "%Y-%m-%d_%H-%M-%S.%f%z")

    # get data and metadata from recording
    path = os.path.join(RECORDINGS_DIR, filename)
    soundfile = sf.SoundFile(path)
    samplerate = soundfile.samplerate
    samples_per_block = int(BLOCK_SECONDS * samplerate)
    overlapping_samples = int(samples_per_block * BLOCK_OVERLAP_FACTOR)

    sample_num = 0
    current_event = None

    while sample_num < len(soundfile):
        soundfile.seek(sample_num)
        block = soundfile.read(frames=samples_per_block, dtype='float32', always_2d=False)

        if len(block) == 0:
            break

        # calculate FFT
        labels = rfftfreq(len(block), d=1/samplerate)
        complex_amplitudes = rfft(block)
        amplitudes = np.abs(complex_amplitudes)

        # get the frequency with the highest amplitude within the search range
        search_amplitudes = amplitudes[(labels >= DETECT_FREQUENCY_FROM/ADJACENCY_FACTOR) & (labels <= DETECT_FREQUENCY_TO*ADJACENCY_FACTOR)]
        search_labels = labels[(labels >= DETECT_FREQUENCY_FROM/ADJACENCY_FACTOR) & (labels <= DETECT_FREQUENCY_TO*ADJACENCY_FACTOR)]
        max_amplitude = max(search_amplitudes)
        max_amplitude_index = np.argmax(search_amplitudes)
        max_freq = search_labels[max_amplitude_index]
        max_freq_detected = DETECT_FREQUENCY_FROM <= max_freq <= DETECT_FREQUENCY_TO

        # calculate signal quality
        adjacent_amplitudes = amplitudes[(labels < DETECT_FREQUENCY_FROM) | (labels > DETECT_FREQUENCY_TO)]
        signal_quality = max_amplitude/np.mean(adjacent_amplitudes)
        good_signal_quality = signal_quality > MIN_SIGNAL_QUALITY

        # conclude detection
        if (
            max_freq_detected and
            good_signal_quality
        ):
            block_date = recording_date + datetime.timedelta(seconds=sample_num / samplerate)

            # detecting an event
            if not current_event:
                current_event = {
                    'start_at': block_date,
                    'end_at': block_date,
                    'start_sample': sample_num,
                    'end_sample': sample_num + samples_per_block,
                    'start_freq': max_freq,
                    'end_freq': max_freq,
                    'max_amplitude': max_amplitude,
                }
            else:
                current_event.update({
                    'end_at': block_date,
                    'end_freq': max_freq,
                    'end_sample': sample_num + samples_per_block,
                    'max_amplitude': max(max_amplitude, current_event['max_amplitude']),
                })
            print(f"- {block_date.strftime('%Y-%m-%d %H:%M:%S')}: {max_amplitude:.1f}rDB @ {max_freq:.1f}Hz (signal {signal_quality:.3f}x)")
        else:
            # not detecting an event
            if current_event:
                duration = (current_event['end_at'] - current_event['start_at']).total_seconds()
                current_event['duration'] = duration
                print(f"🔊 {current_event['start_at'].strftime('%Y-%m-%d %H:%M:%S')} ({duration:.1f}s): {current_event['start_freq']:.1f}Hz->{current_event['end_freq']:.1f}Hz @{current_event['max_amplitude']:.0f}rDB")

                # read full audio clip again for writing
                write_event(current_event=current_event, soundfile=soundfile, samplerate=samplerate)

                current_event = None
                sample_num += DETECTION_DISTANCE_BLOCKS * samples_per_block

        sample_num += samples_per_block - overlapping_samples

    # move to PROCESSED_RECORDINGS_DIR
    os.makedirs(PROCESSED_RECORDINGS_DIR, exist_ok=True)
    shutil.move(os.path.join(RECORDINGS_DIR, filename), os.path.join(PROCESSED_RECORDINGS_DIR, filename))


# write a spectrogram using the sound from start to end of the event
def write_event(current_event, soundfile, samplerate):
    # date and filename
    event_date = current_event['start_at'] - datetime.timedelta(seconds=PLOT_PADDING_START_SECONDS)
    filename_prefix = event_date.strftime('%Y-%m-%d_%H-%M-%S.%f%z')

    # event clip
    event_start_sample = current_event['start_sample'] - samplerate * PLOT_PADDING_START_SECONDS
    event_end_sample = current_event['end_sample'] + samplerate * PLOT_PADDING_END_SECONDS
    total_samples = event_end_sample - event_start_sample
    soundfile.seek(event_start_sample)
    event_clip = soundfile.read(frames=total_samples, dtype='float32', always_2d=False)

    # write flac
    flac_path = os.path.join(DETECTIONS_DIR, f"{filename_prefix}.flac")
    sf.write(flac_path, event_clip, samplerate, format='FLAC')

    # write spectrogram
    plt.figure(figsize=(8, 6))
    plt.specgram(event_clip, Fs=samplerate, NFFT=samplerate, noverlap=samplerate//2, cmap='inferno', vmin=-100, vmax=-10)
    plt.title(f"Bootshorn @{event_date.strftime('%Y-%m-%d %H:%M:%S%z')}")
    plt.xlabel(f"Time {current_event['duration']:.1f}s")
    plt.ylabel(f"Frequency {current_event['start_freq']:.1f}Hz -> {current_event['end_freq']:.1f}Hz")
    plt.colorbar(label="Intensity (rDB)")
    plt.ylim(50, 1000)
    plt.savefig(os.path.join(DETECTIONS_DIR, f"{filename_prefix}.png"))
    plt.close()


def main():
    os.makedirs(RECORDINGS_DIR, exist_ok=True)
    os.makedirs(PROCESSED_RECORDINGS_DIR, exist_ok=True)

    for filename in sorted(os.listdir(RECORDINGS_DIR)):
        if filename.endswith(".flac"):
            try:
                process_recording(filename)
            except Exception as e:
                print(f"Error processing (unknown): {e}")
                # print stacktrace
                traceback.print_exc()


if __name__ == "__main__":
    main()
@@ -1,25 +0,0 @@
#!/bin/sh

mkdir -p recordings

while true
do
    # get date in ISO 8601 format with nanoseconds
    PROGRAMM=$(test $(uname) = "Darwin" && echo "gdate" || echo "date")
    DATE=$($PROGRAMM "+%Y-%m-%d_%H-%M-%S.%6N%z")

    # record audio using ffmpeg
    ffmpeg \
        -y \
        -f pulse \
        -i "alsa_input.usb-HANMUS_USB_AUDIO_24BIT_2I2O_1612310-00.analog-stereo" \
        -ac 1 \
        -ar 96000 \
        -sample_fmt s32 \
        -t "3600" \
        -c:a flac \
        -compression_level 12 \
        "recordings/current/$DATE.flac"

    mv "recordings/current/$DATE.flac" "recordings/$DATE.flac"
done
@@ -1,43 +0,0 @@
#!/usr/bin/env python3

import requests
import urllib3
import datetime
import csv
urllib3.disable_warnings()
import os


HUE_IP = "${hue_ip}"  # replace with your bridge IP
HUE_APP_KEY = "${hue_app_key}"  # local only
HUE_DEVICE_ID = "31f58786-3242-4e88-b9ce-23f44ba27bbe"
TEMPERATURE_LOG_DIR = "/opt/bootshorn/temperatures"

response = requests.get(
    f"https://{HUE_IP}/clip/v2/resource/temperature",
    headers={"hue-application-key": HUE_APP_KEY},
    verify=False,
)
response.raise_for_status()
data = response.json()

for item in data["data"]:
    if item["id"] == HUE_DEVICE_ID:
        temperature = item["temperature"]["temperature"]
        temperature_date_string = item["temperature"]["temperature_report"]["changed"]
        temperature_date = datetime.datetime.fromisoformat(temperature_date_string).astimezone(datetime.timezone.utc)
        break

print(f"@{temperature_date}: {temperature}°C")

filename = temperature_date.strftime("%Y-%m-%d_00-00-00.000000%z") + ".log"
logpath = os.path.join(TEMPERATURE_LOG_DIR, filename)
now_utc = datetime.datetime.now(datetime.timezone.utc)

with open(logpath, "a+", newline="") as logfile:
    writer = csv.writer(logfile)
    writer.writerow([
        now_utc.strftime('%Y-%m-%d_%H-%M-%S.%f%z'),  # current UTC time
        temperature_date.strftime('%Y-%m-%d_%H-%M-%S.%f%z'),  # date of temperature reading
        temperature,
    ])
@@ -1,61 +0,0 @@
# nano /etc/selinux/config
#   SELINUX=disabled
# reboot

directories = {
    '/opt/bootshorn': {
        'owner': 'ckn',
        'group': 'ckn',
    },
    '/opt/bootshorn/temperatures': {
        'owner': 'ckn',
        'group': 'ckn',
    },
    '/opt/bootshorn/recordings': {
        'owner': 'ckn',
        'group': 'ckn',
    },
    '/opt/bootshorn/recordings/current': {
        'owner': 'ckn',
        'group': 'ckn',
    },
    '/opt/bootshorn/recordings/processed': {
        'owner': 'ckn',
        'group': 'ckn',
    },
    '/opt/bootshorn/events': {
        'owner': 'ckn',
        'group': 'ckn',
    },
}

files = {
    '/opt/bootshorn/record': {
        'owner': 'ckn',
        'group': 'ckn',
        'mode': '755',
    },
    '/opt/bootshorn/temperature': {
        'content_type': 'mako',
        'context': {
            'hue_ip': repo.get_node('home.hue').hostname,
            'hue_app_key': repo.vault.decrypt('encrypt$gAAAAABoc2WxZCLbxl-Z4IrSC97CdOeFgBplr9Fp5ujpd0WCCCPNBUY_WquHN86z8hKLq5Y04dwq8TdJW0PMSOSgTFbGgdp_P1q0jOBLEKaW9IIT1YM88h-JYwLf9QGDV_5oEfvnBCtO'),
        },
        'owner': 'ckn',
        'group': 'ckn',
        'mode': '755',
    },
    '/opt/bootshorn/process': {
        'owner': 'ckn',
        'group': 'ckn',
        'mode': '755',
    },
}

svc_systemd = {
    'bootshorn-record.service': {
        'needs': {
            'file:/opt/bootshorn/record',
        },
    },
}
@@ -1,44 +0,0 @@
defaults = {
    'systemd': {
        'units': {
            'bootshorn-record.service': {
                'Unit': {
                    'Description': 'Bootshorn Recorder',
                    'After': 'network.target',
                },
                'Service': {
                    'User': 'ckn',
                    'Group': 'ckn',
                    'Type': 'simple',
                    'WorkingDirectory': '/opt/bootshorn',
                    'ExecStart': '/opt/bootshorn/record',
                    'Restart': 'always',
                    'RestartSec': 5,
                    'Environment': {
                        "XDG_RUNTIME_DIR": "/run/user/1000",
                        "PULSE_SERVER": "unix:/run/user/1000/pulse/native",
                    },
                },
            },
        },
    },
    'systemd-timers': {
        'bootshorn-temperature': {
            'command': '/opt/bootshorn/temperature',
            'when': '*:0/10',
            'working_dir': '/opt/bootshorn',
            'user': 'ckn',
            'group': 'ckn',
        },
        # 'bootshorn-process': {
        #     'command': '/opt/bootshorn/process',
        #     'when': 'hourly',
        #     'working_dir': '/opt/bootshorn',
        #     'user': 'ckn',
        #     'group': 'ckn',
        #     'after': {
        #         'bootshorn-process.service',
        #     },
        # },
    },
}
@@ -27,7 +27,7 @@ def ssh_keys(metadata):
        'users': {
            'build-agent': {
                'authorized_users': {
                    f'build-server@{other_node.name}': {}
                    f'build-server@{other_node.name}'
                    for other_node in repo.nodes
                    if other_node.has_bundle('build-server')
                    for architecture in other_node.metadata.get('build-server/architectures').values()
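The two hunks above turn a dict comprehension (each user name mapped to an empty options dict) into a set comprehension (just the names); with multiple `for`/`if` clauses both forms read the same, only the contents of the braces differ. A standalone sketch with made-up node data (`nodes`, `bundles`, and `architectures` here are hypothetical stand-ins for `repo.nodes` and `has_bundle()`):

```python
# hypothetical stand-ins for repo.nodes and node.has_bundle()
nodes = [
    {"name": "build1", "bundles": {"build-server"}, "architectures": ["amd64", "arm64"]},
    {"name": "web1", "bundles": {"nginx"}, "architectures": []},
]

# old style: dict mapping each user to an (empty) options dict
as_dict = {
    f"build-server@{n['name']}": {}
    for n in nodes
    if "build-server" in n["bundles"]
    for _arch in n["architectures"]
}

# new style: plain set of user names
as_set = {
    f"build-server@{n['name']}"
    for n in nodes
    if "build-server" in n["bundles"]
    for _arch in n["architectures"]
}

print(sorted(as_dict) == sorted(as_set))  # → True
```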
@@ -14,7 +14,7 @@ def ssh_keys(metadata):
        'users': {
            'build-ci': {
                'authorized_users': {
                    f'build-server@{other_node.name}': {}
                    f'build-server@{other_node.name}'
                    for other_node in repo.nodes
                    if other_node.has_bundle('build-server')
                },
@@ -8,7 +8,6 @@ defaults = {
    'sources': {
        'crystal': {
            # https://software.opensuse.org/download.html?project=devel%3Alanguages%3Acrystal&package=crystal
            # curl -fsSL https://download.opensuse.org/repositories/devel:/languages:/crystal/Debian_Testing/Release.key
            'urls': {
                'http://download.opensuse.org/repositories/devel:/languages:/crystal/Debian_Testing/',
            },
17 bundles/dovecot/files/dovecot-sql.conf Normal file

@@ -0,0 +1,17 @@
connect = host=${host} dbname=${name} user=${user} password=${password}
driver = pgsql
default_pass_scheme = ARGON2ID

user_query = SELECT '/var/vmail/%u' AS home, 'vmail' AS uid, 'vmail' AS gid

iterate_query = SELECT CONCAT(users.name, '@', domains.name) AS user \
  FROM users \
  LEFT JOIN domains ON users.domain_id = domains.id \
  WHERE redirect IS NULL

password_query = SELECT CONCAT(users.name, '@', domains.name) AS user, password \
  FROM users \
  LEFT JOIN domains ON users.domain_id = domains.id \
  WHERE redirect IS NULL \
  AND users.name = SPLIT_PART('%u', '@', 1) \
  AND domains.name = SPLIT_PART('%u', '@', 2)
@@ -1,17 +1,13 @@
dovecot_config_version = ${config_version}
dovecot_storage_version = ${storage_version}

protocols = imap lmtp sieve
auth_mechanisms = plain login
mail_privileged_group = mail
ssl = required
ssl_server_cert_file = /var/lib/dehydrated/certs/${hostname}/fullchain.pem
ssl_server_key_file = /var/lib/dehydrated/certs/${hostname}/privkey.pem
ssl_server_dh_file = /etc/dovecot/dhparam.pem
ssl_cert = </var/lib/dehydrated/certs/${node.metadata.get('mailserver/hostname')}/fullchain.pem
ssl_key = </var/lib/dehydrated/certs/${node.metadata.get('mailserver/hostname')}/privkey.pem
ssl_dh = </etc/dovecot/dhparam.pem
ssl_client_ca_dir = /etc/ssl/certs
mail_driver = maildir
mail_path = ${maildir}/%{user}
mail_index_path = ${maildir}/index/%{user}
mail_plugins = fts fts_flatcurve
mail_location = maildir:${node.metadata.get('mailserver/maildir')}/%u:INDEX=${node.metadata.get('mailserver/maildir')}/index/%u
mail_plugins = fts fts_xapian

namespace inbox {
  inbox = yes
@@ -34,46 +30,14 @@ namespace inbox {
  }
}

# postgres passdb userdb

sql_driver = pgsql

pgsql main {
  parameters {
    host = ${db_host}
    dbname = ${db_name}
    user = ${db_user}
    password = ${db_password}
  }
passdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf
}

passdb sql {
  passdb_default_password_scheme = ARGON2ID

  query = SELECT \
    CONCAT(users.name, '@', domains.name) AS "user", \
    password \
    FROM users \
    LEFT JOIN domains ON users.domain_id = domains.id \
    WHERE redirect IS NULL \
    AND users.name = SPLIT_PART('%{user}', '@', 1) \
    AND domains.name = SPLIT_PART('%{user}', '@', 2)
}

mail_uid = vmail
mail_gid = vmail

userdb sql {
  query = SELECT \
    '/var/vmail/%{user}' AS home, \
    'vmail' AS uid, \
    'vmail' AS gid

  iterate_query = SELECT \
    CONCAT(users.name, '@', domains.name) AS username \
    FROM users \
    LEFT JOIN domains ON users.domain_id = domains.id \
    WHERE redirect IS NULL
# use sql for userdb too, to enable iterate_query
userdb {
  driver = sql
  args = /etc/dovecot/dovecot-sql.conf
}

service auth {
@@ -103,9 +67,10 @@ service stats {
  }
}
service managesieve-login {
  #inet_listener sieve {}
  process_min_avail = 1
  process_limit = 1
  inet_listener sieve {
  }
  process_min_avail = 0
  service_count = 1
  vsz_limit = 64 M
}
service managesieve {
@@ -113,53 +78,31 @@ service managesieve {
}

protocol imap {
  mail_plugins = fts fts_flatcurve imap_sieve
  mail_max_userip_connections = 50
  imap_idle_notify_interval = 29 mins
  mail_plugins = $mail_plugins imap_sieve
  mail_max_userip_connections = 50
  imap_idle_notify_interval = 29 mins
}
protocol lmtp {
  mail_plugins = fts fts_flatcurve sieve
  mail_plugins = $mail_plugins sieve
}

# personal script (the old file /var/vmail/sieve/%u.sieve)
sieve_script personal {
  driver = file
  # directory with the user's (possibly multiple) sieve scripts
  path = /var/vmail/sieve/%{user}/
  # active script (replaces the former "sieve = /var/vmail/sieve/%u.sieve")
  active_path = /var/vmail/sieve/%{user}.sieve
}

# global after script (the former "sieve_after = …")
sieve_script after {
  type = after
  driver = file
  path = /var/vmail/sieve/global/spam-to-folder.sieve
protocol sieve {
  plugin {
    sieve = /var/vmail/sieve/%u.sieve
    sieve_storage = /var/vmail/sieve/%u/
  }
}

# fulltext search
language en {
plugin {
  fts = xapian
  fts_xapian = partial=3 full=20 verbose=0
  fts_autoindex = yes
  fts_enforced = yes
  # index attachments
  fts_decoder = decode2text
}
language de {
  default = yes
}
language_tokenizers = generic email-address

fts flatcurve {
  substring_search = yes
  # rotate_count = 5000  # rotate the DB after X mails
  # rotate_time = 5s     # or rotate time-based
  # optimize_limit = 10
  # min_term_size = 3
}

fts_autoindex = yes
fts_decoder_driver = script
fts_decoder_script_socket_path = decode2text

service indexer-worker {
  process_limit = ${indexer_cores}
  vsz_limit = ${indexer_ram}M
  vsz_limit = ${indexer_ram}
}
service decode2text {
  executable = script /usr/local/libexec/dovecot/decode2text.sh

@@ -169,39 +112,24 @@ service decode2text {
  }
}

mailbox Junk {
  sieve_script learn_spam {
    driver = file
    type = before
    cause = copy
    path = /var/vmail/sieve/global/learn-spam.sieve
  }
}
# spam filter
plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms
  sieve_dir = /var/vmail/sieve/%u/
  sieve = /var/vmail/sieve/%u.sieve
  sieve_pipe_bin_dir = /var/vmail/sieve/bin
  sieve_extensions = +vnd.dovecot.pipe

imapsieve_from Junk {
  sieve_script learn_ham {
    driver = file
    type = before
    cause = copy
    path = /var/vmail/sieve/global/learn-ham.sieve
  }
}
  sieve_after = /var/vmail/sieve/global/spam-to-folder.sieve

# enable the extprograms plugin
sieve_plugins {
  sieve_extprograms = yes
}
  # From elsewhere to Spam folder
  imapsieve_mailbox1_name = Junk
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/var/vmail/sieve/global/learn-spam.sieve

# which sieve extensions may be used?
# recommendation: allow them only globally (not in user scripts):
sieve_global_extensions {
  vnd.dovecot.pipe = yes
  # vnd.dovecot.filter = yes   # only if needed
  # vnd.dovecot.execute = yes  # only if needed
  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Junk
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/var/vmail/sieve/global/learn-ham.sieve
}

# directory with your scripts/binaries for :pipe
sieve_pipe_bin_dir = /var/vmail/sieve/bin
# (optional, analogous for :filter / :execute)
# sieve_filter_bin_dir = /var/vmail/sieve/filter
# sieve_execute_bin_dir = /var/vmail/sieve/execute
@@ -44,16 +44,6 @@ files = {
        'context': {
            'admin_email': node.metadata.get('mailserver/admin_email'),
            'indexer_ram': node.metadata.get('dovecot/indexer_ram'),
            'config_version': node.metadata.get('dovecot/config_version'),
            'storage_version': node.metadata.get('dovecot/storage_version'),
            'maildir': node.metadata.get('mailserver/maildir'),
            'hostname': node.metadata.get('mailserver/hostname'),
            'db_host': node.metadata.get('mailserver/database/host'),
            'db_name': node.metadata.get('mailserver/database/name'),
            'db_user': node.metadata.get('mailserver/database/user'),
            'db_password': node.metadata.get('mailserver/database/password'),
            'indexer_cores': node.metadata.get('vm/cores'),
            'indexer_ram': node.metadata.get('vm/ram')//2,
        },
        'needs': {
            'pkg_apt:'
@@ -62,9 +52,29 @@ files = {
            'svc_systemd:dovecot:restart',
        },
    },
    '/etc/dovecot/dovecot-sql.conf': {
        'content_type': 'mako',
        'context': node.metadata.get('mailserver/database'),
        'needs': {
            'pkg_apt:'
        },
        'triggers': {
            'svc_systemd:dovecot:restart',
        },
    },
    '/etc/dovecot/dhparam.pem': {
        'content_type': 'any',
    },
    '/etc/dovecot/dovecot-sql.conf': {
        'content_type': 'mako',
        'context': node.metadata.get('mailserver/database'),
        'needs': {
            'pkg_apt:'
        },
        'triggers': {
            'svc_systemd:dovecot:restart',
        },
    },
    '/var/vmail/sieve/global/spam-to-folder.sieve': {
        'owner': 'vmail',
        'group': 'vmail',
@@ -121,6 +131,7 @@ svc_systemd = {
            'action:letsencrypt_update_certificates',
            'action:dovecot_generate_dhparam',
            'file:/etc/dovecot/dovecot.conf',
            'file:/etc/dovecot/dovecot-sql.conf',
        },
    },
}
@@ -8,7 +8,7 @@ defaults = {
        'dovecot-sieve': {},
        'dovecot-managesieved': {},
        # fulltext search
        'dovecot-flatcurve': {},  # buster-backports
        'dovecot-fts-xapian': {},  # buster-backports
        'poppler-utils': {},  # pdftotext
        'catdoc': {},  # catdoc, catppt, xls2csv
    },
@@ -5,11 +5,6 @@ defaults = {
            'needs': {
                'zfs_dataset:tank/downloads'
            },
            'authorized_users': {
                f'build-server@{other_node.name}': {}
                for other_node in repo.nodes
                if other_node.has_bundle('build-server')
            },
        },
    },
    'zfs': {
@@ -19,15 +14,23 @@ defaults = {
            },
        },
    },
    'systemd-mount': {
        '/var/lib/downloads_nginx': {
            'source': '/var/lib/downloads',
            'user': 'www-data',
        },
    },
}


@metadata_reactor.provides(
    'systemd-mount'
)
def mount_certs(metadata):
    return {
        'systemd-mount': {
            '/var/lib/downloads_nginx': {
                'source': '/var/lib/downloads',
                'user': 'www-data',
            },
        },
    }


@metadata_reactor.provides(
    'nginx/vhosts',
)
@@ -44,3 +47,20 @@ def nginx(metadata):
            },
        },
    }


@metadata_reactor.provides(
    'users/downloads/authorized_users',
)
def ssh_keys(metadata):
    return {
        'users': {
            'downloads': {
                'authorized_users': {
                    f'build-server@{other_node.name}'
                    for other_node in repo.nodes
                    if other_node.has_bundle('build-server')
                },
            },
        },
    }
@@ -43,11 +43,11 @@ def units(metadata):
                'Service': {
                    'Environment': {
                        f'{k}={v}'
                        for k, v in metadata.get(f'flask/{name}/env', {}).items()
                        for k, v in conf.get('env', {}).items()
                    },
                    'User': metadata.get(f'flask/{name}/user'),
                    'Group': metadata.get(f'flask/{name}/group'),
                    'ExecStart': f"/opt/{name}/venv/bin/gunicorn -w {metadata.get(f'flask/{name}/workers')} -b 127.0.0.1:{metadata.get(f'flask/{name}/port')} --timeout {metadata.get(f'flask/{name}/timeout')} {metadata.get(f'flask/{name}/app_module')}:app"
                    'User': conf['user'],
                    'Group': conf['group'],
                    'ExecStart': f"/opt/{name}/venv/bin/gunicorn -w {conf['workers']} -b 127.0.0.1:{conf['port']} --timeout {conf['timeout']} {conf['app_module']}:app"
                },
                'Install': {
                    'WantedBy': {

@@ -55,7 +55,7 @@ def units(metadata):
                }
            },
        }
        for name in metadata.get('flask')
        for name, conf in metadata.get('flask').items()
    }
}
}
@@ -11,13 +11,3 @@ Enter it again:
freescout=#
\q
```


# problems

# check if /opt/freescout/.env has been reset
# check `psql -h localhost -d freescout -U freescout -W` with the password from .env
# chown -R www-data:www-data /opt/freescout
# sudo su - www-data -c 'php /opt/freescout/artisan freescout:clear-cache' -s /bin/bash
# javascript acting up? `sudo su - www-data -c 'php /opt/freescout/artisan storage:link' -s /bin/bash`
# user images gone? restore them from the backup: `/opt/freescout/.zfs/snapshot/zfs-auto-snap_hourly-2024-11-22-1700/storage/app/public/users` `./customers`
@@ -12,33 +12,33 @@ directories = {
}

actions = {
    # 'clone_freescout': {
    #     'command': run_as('www-data', 'git clone https://github.com/freescout-helpdesk/freescout.git /opt/freescout'),
    #     'unless': 'test -e /opt/freescout/.git',
    #     'needs': [
    #         'pkg_apt:git',
    #         'directory:/opt/freescout',
    #     ],
    # },
    # 'pull_freescout': {
    #     'command': run_as('www-data', 'git -C /opt/freescout fetch origin dist && git -C /opt/freescout reset --hard origin/dist && git -C /opt/freescout clean -f'),
    #     'unless': run_as('www-data', 'git -C /opt/freescout fetch origin && git -C /opt/freescout status -uno | grep -q "Your branch is up to date"'),
    #     'needs': [
    #         'action:clone_freescout',
    #     ],
    #     'triggers': [
    #         'action:freescout_artisan_update',
    #         f'svc_systemd:php{php_version}-fpm.service:restart',
    #     ],
    # },
    # 'freescout_artisan_update': {
    #     'command': run_as('www-data', 'php /opt/freescout/artisan freescout:after-app-update'),
    #     'triggered': True,
    #     'needs': [
    #         f'svc_systemd:php{php_version}-fpm.service:restart',
    #         'action:pull_freescout',
    #     ],
    # },
    'clone_freescout': {
        'command': run_as('www-data', 'git clone https://github.com/freescout-helpdesk/freescout.git /opt/freescout'),
        'unless': 'test -e /opt/freescout/.git',
        'needs': [
            'pkg_apt:git',
            'directory:/opt/freescout',
        ],
    },
    'pull_freescout': {
        'command': run_as('www-data', 'git -C /opt/freescout fetch origin dist && git -C /opt/freescout reset --hard origin/dist && git -C /opt/freescout clean -f'),
        'unless': run_as('www-data', 'git -C /opt/freescout fetch origin && git -C /opt/freescout status -uno | grep -q "Your branch is up to date"'),
        'needs': [
            'action:clone_freescout',
        ],
        'triggers': [
            'action:freescout_artisan_update',
            f'svc_systemd:php{php_version}-fpm.service:restart',
        ],
    },
    'freescout_artisan_update': {
        'command': run_as('www-data', 'php /opt/freescout/artisan freescout:after-app-update'),
        'triggered': True,
        'needs': [
            f'svc_systemd:php{php_version}-fpm.service:restart',
            'action:pull_freescout',
        ],
    },
}

# svc_systemd = {
@@ -40,7 +40,7 @@ ENABLE_OPENID_SIGNUP = false
[service]
REGISTER_EMAIL_CONFIRM = true
ENABLE_NOTIFY_MAIL = true
DISABLE_REGISTRATION = true
DISABLE_REGISTRATION = false
ALLOW_ONLY_EXTERNAL_REGISTRATION = false
ENABLE_CAPTCHA = false
REQUIRE_SIGNIN_VIEW = false
@@ -49,7 +49,7 @@ files['/etc/gitea/app.ini'] = {
    ),
    'owner': 'git',
    'mode': '0600',
    'context': node.metadata.get('gitea'),
    'context': node.metadata['gitea'],
    'triggers': {
        'svc_systemd:gitea:restart',
    },
@@ -127,7 +127,7 @@ for dashboard_id, monitored_node in enumerate(monitored_nodes, start=1):
            panel['gridPos']['y'] = (row_id - 1) * panel['gridPos']['h']

            if 'display_name' in panel_config:
                panel['fieldConfig']['defaults']['displayName'] = panel_config['display_name']
                panel['fieldConfig']['defaults']['displayName'] = '${'+panel_config['display_name']+'}'

            if panel_config.get('stacked'):
                panel['fieldConfig']['defaults']['custom']['stacking']['mode'] = 'normal'
@@ -158,14 +158,13 @@ for dashboard_id, monitored_node in enumerate(monitored_nodes, start=1):
                    host=monitored_node.name,
                    negative=query_config.get('negative', False),
                    boolean_to_int=query_config.get('boolean_to_int', False),
                    over=query_config.get('over', None),
                    minimum=query_config.get('minimum', None),
                    filters={
                        'host': monitored_node.name,
                        **query_config['filters'],
                    },
                    exists=query_config.get('exists', []),
                    function=query_config.get('function', None),
                    multiply=query_config.get('multiply', None),
                ).strip()
            })
@@ -179,3 +178,4 @@ for dashboard_id, monitored_node in enumerate(monitored_nodes, start=1):
        'svc_systemd:grafana-server:restart',
    ]
}
@@ -26,15 +26,9 @@ defaults = {
        'config': {
            'server': {
                'http_port': 8300,
                'http_addr': '127.0.0.1',
                'enable_gzip': True,
            },
            'database': {
                'type': 'postgres',
                'host': '127.0.0.1:5432',
                'name': 'grafana',
                'user': 'grafana',
                'password': postgres_password,
                'url': f'postgres://grafana:{postgres_password}@localhost:5432/grafana',
            },
            'remote_cache': {
                'type': 'redis',
@@ -139,13 +133,11 @@ def dns(metadata):


@metadata_reactor.provides(
    'nginx/has_websockets',
    'nginx/vhosts',
)
def nginx(metadata):
    return {
        'nginx': {
            'has_websockets': True,
            'vhosts': {
                metadata.get('grafana/hostname'): {
                    'content': 'grafana/vhost.conf',
@@ -2,7 +2,7 @@ files = {
    '/usr/local/share/telegraf/cpu_frequency': {
        'mode': '0755',
        'triggers': {
            'svc_systemd:telegraf.service:restart',
            'svc_systemd:telegraf:restart',
        },
    },
}
@@ -14,25 +14,25 @@ defaults = {
        },
    },
    'telegraf': {
        'inputs': {
            'sensors': {
                'default': {
        'config': {
            'inputs': {
                'sensors': {repo.libs.hashable.hashable({
                    'timeout': '2s',
                })},
                'exec': {
                    repo.libs.hashable.hashable({
                        'commands': ["sudo /usr/local/share/telegraf/cpu_frequency"],
                        'name_override': "cpu_frequency",
                        'data_format': "influx",
                    }),
                    # repo.libs.hashable.hashable({
                    #     'commands': ["/bin/bash -c 'expr $(cat /sys/class/thermal/thermal_zone0/temp) / 1000'"],
                    #     'name_override': "cpu_temperature",
                    #     'data_format': "value",
                    #     'data_type': "integer",
                    # }),
                },
            },
            'exec': {
                'cpu_frequency': {
                    'commands': ["sudo /usr/local/share/telegraf/cpu_frequency"],
                    'name_override': "cpu_frequency",
                    'data_format': "influx",
                },
                # repo.libs.hashable.hashable({
                #     'commands': ["/bin/bash -c 'expr $(cat /sys/class/thermal/thermal_zone0/temp) / 1000'"],
                #     'name_override': "cpu_temperature",
                #     'data_format': "value",
                #     'data_type': "integer",
                # }),
            },
        },
    },
}
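The `repo.libs.hashable.hashable(...)` wrapper used above exists because plain dicts are unhashable and cannot live in a metadata set. A minimal sketch of the idea, not the repo's actual implementation (`hashable_dict` is a made-up name):

```python
class hashable_dict(dict):
    """Dict subclass with a __hash__, so instances can be set members."""
    def __hash__(self):
        # frozenset of items is order-independent and hashable
        return hash(frozenset(self.items()))

configs = {
    hashable_dict({"timeout": "2s"}),
    hashable_dict({"timeout": "2s"}),   # duplicate collapses via dict equality
    hashable_dict({"timeout": "5s"}),
}
print(len(configs))  # → 2
```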
23 bundles/homeassistant-supervised/README.md Normal file

@@ -0,0 +1,23 @@
https://github.com/home-assistant/supervised-installer?tab=readme-ov-file
https://github.com/home-assistant/os-agent/tree/main?tab=readme-ov-file#using-home-assistant-supervised-on-debian
https://docs.docker.com/engine/install/debian/


https://www.home-assistant.io/installation/linux#install-home-assistant-supervised
https://github.com/home-assistant/supervised-installer
https://github.com/home-assistant/architecture/blob/master/adr/0014-home-assistant-supervised.md

DATA_SHARE=/usr/share/hassio dpkg --force-confdef --force-confold -i homeassistant-supervised.deb

fresh debian
install ha
check that it works
then apply bw on top


https://www.home-assistant.io/integrations/http/#ssl_certificate

`wget "$(curl -L https://api.github.com/repos/home-assistant/supervised-installer/releases/latest | jq -r '.assets[0].browser_download_url')" -O homeassistant-supervised.deb && dpkg -i homeassistant-supervised.deb`
30 bundles/homeassistant-supervised/items.py Normal file

@@ -0,0 +1,30 @@
from shlex import quote


version = node.metadata.get('homeassistant/os_agent_version')

directories = {
    '/usr/share/hassio': {},
}

actions = {
    'install_os_agent': {
        'command': ' && '.join([
            f'wget -O /tmp/os-agent.deb https://github.com/home-assistant/os-agent/releases/download/{quote(version)}/os-agent_{quote(version)}_linux_aarch64.deb',
            'DEBIAN_FRONTEND=noninteractive dpkg -i /tmp/os-agent.deb',
        ]),
        'unless': f'test "$(apt -qq list os-agent | cut -d" " -f2)" = "{quote(version)}"',
        'needs': {
            'pkg_apt:',
            'zfs_dataset:tank/homeassistant',
        },
    },
    'install_homeassistant_supervised': {
        'command': 'wget -O /tmp/homeassistant-supervised.deb https://github.com/home-assistant/supervised-installer/releases/latest/download/homeassistant-supervised.deb && apt install /tmp/homeassistant-supervised.deb',
        'unless': 'apt -qq list homeassistant-supervised | grep -q "installed"',
        'needs': {
            'action:install_os_agent',
        },
    },
}
65 bundles/homeassistant-supervised/metadata.py Normal file

@@ -0,0 +1,65 @@
defaults = {
    'apt': {
        'packages': {
            # homeassistant-supervised
            'apparmor': {},
            'bluez': {},
            'cifs-utils': {},
            'curl': {},
            'dbus': {},
            'jq': {},
            'libglib2.0-bin': {},
            'lsb-release': {},
            'network-manager': {},
            'nfs-common': {},
            'systemd-journal-remote': {},
            'systemd-resolved': {},
            'udisks2': {},
            'wget': {},
            # docker
            'docker-ce': {},
            'docker-ce-cli': {},
            'containerd.io': {},
            'docker-buildx-plugin': {},
            'docker-compose-plugin': {},
        },
        'sources': {
            # docker: https://docs.docker.com/engine/install/debian/#install-using-the-repository
            'docker': {
                'urls': {
                    'https://download.docker.com/linux/debian',
                },
                'suites': {
                    '{codename}',
                },
                'components': {
                    'stable',
                },
            },
        },
    },
    'zfs': {
        'datasets': {
            'tank/homeassistant': {
                'mountpoint': '/usr/share/hassio',
                'needed_by': {
                    'directory:/usr/share/hassio',
                },
            },
        },
    },
}

@metadata_reactor.provides(
    'nginx/vhosts',
)
def nginx(metadata):
    return {
        'nginx': {
            'vhosts': {
                metadata.get('homeassistant/domain'): {
                    'content': 'homeassistant/vhost.conf',
                },
            },
        },
    }
@@ -179,7 +179,6 @@ def nginx(metadata):
                'context': {
                    'php_version': metadata.get('php/version'),
                },
                'check_path': '/icingaweb2/index.php',
            },
        },
    },
@@ -1,3 +0,0 @@
# svc_systemd = {
#     'ifupdown.service': {},
# }
@@ -39,17 +39,6 @@ defaults = {
    },
}

if node.has_bundle('zfs'):
    defaults['zfs'] = {
        'datasets': {
            'tank/influxdb': {
                'mountpoint': '/var/lib/influxdb',
                'recordsize': '8192',
                'atime': 'off',
            },
        },
    }

@metadata_reactor.provides(
    'influxdb/password',
    'influxdb/admin_token',
@@ -63,6 +52,26 @@ def admin_password(metadata):
    }


@metadata_reactor.provides(
    'zfs/datasets',
)
def zfs(metadata):
    if not node.has_bundle('zfs'):
        return {}

    return {
        'zfs': {
            'datasets': {
                'tank/influxdb': {
                    'mountpoint': '/var/lib/influxdb',
                    'recordsize': '8192',
                    'atime': 'off',
                },
            },
        },
    }


@metadata_reactor.provides(
    'dns',
)
@@ -15,7 +15,6 @@ svc_systemd = {
        'needs': [
            'pkg_apt:kea-dhcp4-server',
            'file:/etc/kea/kea-dhcp4.conf',
            'svc_systemd:systemd-networkd.service:restart',
        ],
    },
}
@@ -52,14 +52,13 @@ def subnets(metadata):
        if 'mac' in network_conf
    )

    for id, (network_name, network_conf) in enumerate(sorted(metadata.get('network').items())):
    for network_name, network_conf in metadata.get('network').items():
        dhcp_server_config = network_conf.get('dhcp_server_config', None)

        if dhcp_server_config:
            _network = ip_network(dhcp_server_config['subnet'])

            subnet4.add(hashable({
                'id': id + 1,
                'subnet': dhcp_server_config['subnet'],
                'pools': [
                    {
@@ -73,7 +72,7 @@ def subnets(metadata):
                },
                {
                    'name': 'domain-name-servers',
                    'data': '10.0.0.1',
                    'data': '10.0.10.2',
                },
            ],
            'reservations': set(
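The kea subnets hunk above switches to `enumerate(sorted(...))` so each subnet gets an explicit `id` that stays stable across runs as long as the network names don't change. The pattern in isolation (the network names here are made up):

```python
# hypothetical network names; sorting makes the numbering deterministic
networks = {"mgmt": {}, "internal": {}, "iot": {}}

ids = {
    name: id + 1
    for id, (name, _conf) in enumerate(sorted(networks.items()))
}
print(ids)  # → {'internal': 1, 'iot': 2, 'mgmt': 3}
```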
|
|||
|
|
@ -1,22 +1,58 @@
|
|||
https://github.com/SirPlease/L4D2-Competitive-Rework/blob/master/Dedicated%20Server%20Install%20Guide/README.md
|
||||
https://developer.valvesoftware.com/wiki/List_of_L4D2_Cvars
|
||||
|
||||
```python
|
||||
'tick60_maps': {
|
||||
'port': 27030,
|
||||
# add command line arguments
|
||||
'arguments': ['-tickrate 60'],
|
||||
# stack overlays, first is uppermost
|
||||
'overlays': ['tickrate', 'standard'],
|
||||
# server.cfg contents
|
||||
'config': [
|
||||
# configs from overlays are accessible via server_${overlay}.cfg
|
||||
'exec server_tickrate.cfg',
|
||||
# add more options
|
||||
'sv_minupdaterate 101',
|
||||
'sv_maxupdaterate 101',
|
||||
'sv_mincmdrate 101',
|
||||
'sv_maxcmdrate 101',
|
||||
'sv_consistency 0',
|
||||
],
|
||||
},
|
||||
```
|
||||
Dead Center c1m1_hotel
|
||||
Dead Center c1m2_streets
|
||||
Dead Center c1m3_mall
|
||||
Dead Center c1m4_atrium
|
||||
Dark Carnival c2m1_highway
|
||||
Dark Carnival c2m2_fairgrounds
|
||||
Dark Carnival c2m3_coaster
|
||||
Dark Carnival c2m4_barns
|
||||
Dark Carnival c2m5_concert
|
||||
Swamp Fever c3m1_plankcountry
|
||||
Swamp Fever c3m2_swamp
|
||||
Swamp Fever c3m3_shantytown
|
||||
Swamp Fever c3m4_plantation
|
||||
Hard Rain c4m1_milltown_a
|
||||
Hard Rain c4m2_sugarmill_a
|
||||
Hard Rain c4m3_sugarmill_b
|
||||
Hard Rain c4m4_milltown_b
|
||||
Hard Rain c4m5_milltown_escape
|
||||
The Parish c5m1_waterfront_sndscape
|
||||
The Parish c5m1_waterfront
|
||||
The Parish c5m2_park
|
||||
The Parish c5m3_cemetery
|
||||
The Parish c5m4_quarter
|
||||
The Parish c5m5_bridge
|
||||
The Passing c6m1_riverbank
|
||||
The Passing c6m2_bedlam
|
||||
The Passing c6m3_port
|
||||
The Sacrifice c7m1_docks
|
||||
The Sacrifice c7m2_barge
|
||||
The Sacrifice c7m3_port
|
||||
No Mercy c8m1_apartment
|
||||
No Mercy c8m2_subway
|
||||
No Mercy c8m3_sewers
|
||||
No Mercy c8m4_interior
|
||||
No Mercy c8m5_rooftop
|
||||
Crash Course c9m1_alleys
|
||||
Crash Course c9m2_lots
|
||||
Death Toll c10m1_caves
|
||||
Death Toll c10m2_drainage
|
||||
Death Toll c10m3_ranchhouse
|
||||
Death Toll c10m4_mainstreet
|
||||
Death Toll c10m5_houseboat
|
||||
Dead Air c11m1_greenhouse
|
||||
Dead Air c11m2_offices
|
||||
Dead Air c11m3_garage
|
||||
Dead Air c11m4_terminal
|
||||
Dead Air c11m5_runway
|
||||
Blood Harvest c12m1_hilltop
|
||||
Blood Harvest c12m2_traintunnel
|
||||
Blood Harvest c12m3_bridge
|
||||
Blood Harvest c12m4_barn
|
||||
Blood Harvest c12m5_cornfield
|
||||
Cold Stream c13m1_alpinecreek
|
||||
Cold Stream c13m2_southpinestream
|
||||
Cold Stream c13m3_memorialbridge
|
||||
Cold Stream c13m4_cutthroatcreek
|
||||
|
|
|
|||
|
|
@ -1,13 +0,0 @@
#!/bin/bash
set -xeuo pipefail

function steam() {
  # run via setpriv instead of sudo, so systemd can terminate the process
  # (export HOME before invoking setpriv so the very first call already sees it)
  export HOME=/opt/l4d2/steam
  setpriv --reuid=steam --regid=steam --init-groups "$@" <&0
}

function workshop() {
  steam mkdir -p "/opt/l4d2/overlays/${overlay}/left4dead2/addons"
  steam /opt/l4d2/scripts/steam-workshop-download --out "/opt/l4d2/overlays/${overlay}/left4dead2/addons" "$@"
}
@ -1,10 +0,0 @@
#!/bin/bash
set -xeuo pipefail
source /opt/l4d2/scripts/helpers
overlay=$(basename "$0")

# https://github.com/SirPlease/L4D2-Competitive-Rework

steam mkdir -p /opt/l4d2/overlays/$overlay/left4dead2
test -d /opt/l4d2/overlays/$overlay/left4dead2/cfg/cfgogl || \
  curl -L https://github.com/SirPlease/L4D2-Competitive-Rework/archive/refs/heads/master.tar.gz | steam tar -xz --strip-components=1 -C /opt/l4d2/overlays/$overlay/left4dead2
@ -1,128 +0,0 @@
#!/bin/bash
set -xeuo pipefail
source /opt/l4d2/scripts/helpers
overlay=$(basename "$0")

steam mkdir -p /opt/l4d2/overlays/$overlay/left4dead2/addons
cd /opt/l4d2/overlays/$overlay/left4dead2/addons

# https://l4d2center.com/maps/servers/l4d2center_maps_sync.sh.txt ->

# Exit immediately if a command exits with a non-zero status.
set -e

# Function to print error messages
error_exit() {
    echo "Error: $1" >&2
    exit 1
}

# Check if the current directory ends with /left4dead2/addons
current_dir=$(pwd)
expected_dir="/left4dead2/addons"

if [[ ! "$current_dir" == *"$expected_dir" ]]; then
    error_exit "Script must be run from your L4D2 \"addons\" folder. Current directory: $current_dir"
fi

# Check for required commands
for cmd in curl md5sum 7z; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
        error_exit "Required command '$cmd' is not installed. Please install it and retry."
    fi
done

# URL of the CSV file
CSV_URL="https://l4d2center.com/maps/servers/index.csv"

# Temporary file to store CSV
TEMP_CSV=$(mktemp)

# Ensure temporary file is removed on exit
trap 'rm -f "$TEMP_CSV"' EXIT

echo "Downloading CSV from $CSV_URL..."
curl -sSL -o "$TEMP_CSV" "$CSV_URL" || error_exit "Failed to download CSV."

declare -A map_md5
declare -A map_links

# Read CSV and populate associative arrays
{
    # Skip the first line (header)
    IFS= read -r header

    while IFS=';' read -r Name Size MD5 DownloadLink || [[ $Name ]]; do
        # Trim whitespace
        Name=$(echo "$Name" | xargs)
        MD5=$(echo "$MD5" | xargs)
        DownloadLink=$(echo "$DownloadLink" | xargs)

        # Populate associative arrays
        map_md5["$Name"]="$MD5"
        map_links["$Name"]="$DownloadLink"
    done
} < "$TEMP_CSV"

# Get list of expected VPK files
expected_vpk=("${!map_md5[@]}")

# Remove VPK files not in expected list or with mismatched MD5
echo "Cleaning up existing VPK files..."
for file in *.vpk; do
    # Check if it's a regular file
    if [[ -f "$file" ]]; then
        # ":-" guard: the key may be absent and the wrapper above runs with set -u
        if [[ -z "${map_md5["$file"]:-}" ]]; then
            echo "Removing unexpected file: $file"
            rm -f "$file"
        else
            # Calculate MD5
            echo "Calculating MD5 for existing file: $file..."
            current_md5=$(md5sum "$file" | awk '{print $1}')
            expected_md5="${map_md5["$file"]}"

            if [[ "$current_md5" != "$expected_md5" ]]; then
                echo "MD5 mismatch for $file. Removing."
                rm -f "$file"
            fi
        fi
    fi
done

# Download and extract missing or updated VPK files
echo "Processing required VPK files..."
for vpk in "${expected_vpk[@]}"; do
    if [[ ! -f "$vpk" ]]; then
        echo "Downloading and extracting $vpk..."
        download_url="${map_links["$vpk"]}"

        if [[ -z "$download_url" ]]; then
            echo "No download link found for $vpk. Skipping."
            continue
        fi

        encoded_url=$(echo "$download_url" | sed 's/ /%20/g')

        # Download the .7z file to a temporary location
        # (checked via `if !` — a separate $? test would never run under set -e)
        TEMP_7Z=$(mktemp --suffix=.7z)
        if ! curl -# -L -o "$TEMP_7Z" "$encoded_url"; then
            echo "Failed to download $download_url. Skipping."
            rm -f "$TEMP_7Z"
            continue
        fi

        # Extract the .7z file
        7z x -y "$TEMP_7Z" || { echo "Failed to extract $TEMP_7Z. Skipping."; rm -f "$TEMP_7Z"; continue; }

        # Remove the temporary .7z file
        rm -f "$TEMP_7Z"

    else
        echo "$vpk is already up to date."
    fi
done

echo "Synchronization complete."
@ -1,12 +0,0 @@
#!/bin/bash
set -xeuo pipefail
source /opt/l4d2/scripts/helpers
overlay=$(basename "$0")

# Ions Vocalizer
workshop -i 698857882

# admin system
workshop --item 2524204971
steam mkdir -p "/opt/l4d2/overlays/${overlay}/left4dead2/ems/admin system"
# write via tee as the steam user; a plain `>` redirect would be performed by the calling root shell
echo "STEAM_1:0:12376499" | steam tee "/opt/l4d2/overlays/${overlay}/left4dead2/ems/admin system/admins.txt" >/dev/null
@ -1,25 +0,0 @@
#!/bin/bash
set -xeuo pipefail
source /opt/l4d2/scripts/helpers
overlay=$(basename "$0")

# server config
# https://github.com/SirPlease/L4D2-Competitive-Rework/blob/7ecc3a32a5e2180d6607a40119ff2f3c072502a9/cfg/server.cfg#L58-L69
# https://www.programmersought.com/article/513810199514/
steam mkdir -p /opt/l4d2/overlays/$overlay/left4dead2/cfg
# write via tee as the steam user; `steam cat <<EOF > file` would redirect in the calling root shell
steam tee /opt/l4d2/overlays/$overlay/left4dead2/cfg/server.cfg >/dev/null <<'EOF'
// https://github.com/SirPlease/L4D2-Competitive-Rework/blob/7ecc3a32a5e2180d6607a40119ff2f3c072502a9/cfg/server.cfg#L58-L69
sv_minrate 100000
sv_maxrate 100000
nb_update_frequency 0.014
net_splitpacket_maxrate 50000
net_maxcleartime 0.0001
fps_max 0
EOF

# install tickrate enabler
steam mkdir -p "/opt/l4d2/overlays/${overlay}/left4dead2/addons"
for file in tickrate_enabler.dll tickrate_enabler.so tickrate_enabler.vdf
do
  curl -L "https://github.com/SirPlease/L4D2-Competitive-Rework/raw/refs/heads/master/addons/${file}" -o "/opt/l4d2/overlays/${overlay}/left4dead2/addons/${file}"
done
@ -1,13 +0,0 @@
#!/bin/bash
set -xeuo pipefail
source /opt/l4d2/scripts/helpers
overlay=$(basename "$0")

# workshop --collection 121115793 # Back To School

# workshop --item 2957035482 # hehe30-part1
# workshop --item 2973628334 # hehe30-part2
# workshop --item 3013844371 # hehe30-part3

# workshop --item 3478461158 # 虚伪黎明 (Dawn's Deception)
# workshop --item 3478934394 # 虚伪黎明 (Dawn's Deception) PART2
@ -1,13 +1,40 @@
// defaults
hostname ${server_name}
hostname "CroneKorkN : ${name}"
sv_contact "admin@sublimity.de"

sv_steamgroup "${','.join(steamgroups)}"

rcon_password "${rcon_password}"

motd_enabled 0
rcon_password ${rcon_password}
sv_steamgroup "38347879"

mp_autoteambalance 0
sv_forcepreload 1

// server specific
% for line in config:
${line}
% endfor
sv_cheats 1

sv_consistency 0

sv_lan 0

sv_allow_lobby_connect_only 0

sv_gametypes "coop,realism,survival,versus,teamversus,scavenge,teamscavenge"

sv_minrate 30000
sv_maxrate 60000
sv_mincmdrate 66
sv_maxcmdrate 101

sv_logsdir "logs-${name}" // Folder in the game directory where server logs will be stored.
log on // Creates a logfile (on | off)
sv_logecho 0 // default 0; Echo log information to the console.
sv_logfile 1 // default 1; Log server information in the log file.
sv_log_onefile 0 // default 0; Log server information to only one file.
sv_logbans 1 // default 0; Log server bans in the server logs.
sv_logflush 0 // default 0; Flush the log files to disk on each write (slow).
@ -1,72 +0,0 @@
#!/bin/bash

set -xeuo pipefail

# -- DEFINE FUNCTIONS AND VARIABLES -- #

function steam() {
  # run via setpriv instead of sudo, so systemd can terminate the process
  # (export HOME before invoking setpriv so the very first call already sees it)
  export HOME=/opt/l4d2/steam
  setpriv --reuid=steam --regid=steam --init-groups "$@" <&0
}

# -- PREPARE SYSTEM -- #

getent passwd steam >/dev/null || useradd -M -d /opt/l4d2 -s /bin/bash steam
mkdir -p /opt/l4d2 /tmp/dumps
chown steam:steam /opt/l4d2 /tmp/dumps
dpkg --add-architecture i386
apt update
DEBIAN_FRONTEND=noninteractive apt install -y libc6:i386 lib32z1

# workshop downloader
test -f /opt/l4d2/scripts/steam-workshop-download || \
  steam wget -4 https://git.sublimity.de/cronekorkn/steam-workshop-downloader/raw/branch/master/steam-workshop-download -P /opt/l4d2/scripts
steam chmod +x /opt/l4d2/scripts/steam-workshop-download

# -- STEAM -- #

steam mkdir -p /opt/l4d2/steam
test -f /opt/l4d2/steam/steamcmd_linux.tar.gz || \
  steam wget http://media.steampowered.com/installer/steamcmd_linux.tar.gz -P /opt/l4d2/steam
test -f /opt/l4d2/steam/steamcmd.sh || \
  steam tar -xvzf /opt/l4d2/steam/steamcmd_linux.tar.gz -C /opt/l4d2/steam

# fix for: /opt/l4d2/.steam/sdk32/steamclient.so: cannot open shared object file: No such file or directory
steam mkdir -p /opt/l4d2/steam/.steam # needs to be in the steam user's home dir
readlink /opt/l4d2/steam/.steam/sdk32 | grep -q ^/opt/l4d2/steam/linux32$ || \
  steam ln -sf /opt/l4d2/steam/linux32 /opt/l4d2/steam/.steam/sdk32
readlink /opt/l4d2/steam/.steam/sdk64 | grep -q ^/opt/l4d2/steam/linux64$ || \
  steam ln -sf /opt/l4d2/steam/linux64 /opt/l4d2/steam/.steam/sdk64

# -- INSTALL -- #

# installing the windows deps first appears to be a workaround for x64?
steam mkdir -p /opt/l4d2/installation
steam /opt/l4d2/steam/steamcmd.sh \
  +force_install_dir /opt/l4d2/installation \
  +login anonymous \
  +@sSteamCmdForcePlatformType windows \
  +app_update 222860 validate \
  +quit
steam /opt/l4d2/steam/steamcmd.sh \
  +force_install_dir /opt/l4d2/installation \
  +login anonymous \
  +@sSteamCmdForcePlatformType linux \
  +app_update 222860 validate \
  +quit

# -- OVERLAYS -- #

for overlay_path in /opt/l4d2/scripts/overlays/*; do
  overlay=$(basename "$overlay_path")
  steam mkdir -p /opt/l4d2/overlays/$overlay
  bash -xeuo pipefail "$overlay_path"
  test -f /opt/l4d2/overlays/$overlay/left4dead2/cfg/server.cfg && \
    steam cp /opt/l4d2/overlays/$overlay/left4dead2/cfg/server.cfg /opt/l4d2/overlays/$overlay/left4dead2/cfg/server_$overlay.cfg
done

# -- SERVERS -- #

#steam rm -rf /opt/l4d2/servers
steam mkdir -p /opt/l4d2/servers
@ -1,75 +0,0 @@
#!/bin/bash

set -xeuo pipefail

name=""
port=""
configfile=""
overlays=""
arguments=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    -n|--name)
      name="$2"; shift 2
      ;;
    -p|--port)
      port="$2"; shift 2
      ;;
    -c|--config)
      configfile="$2"; shift 2
      ;;
    -o|--overlay)
      overlays="/opt/l4d2/overlays/$2:$overlays"; shift 2
      ;;
    --)
      shift
      arguments+="$@"
      break
      ;;
    *)
      echo "ERROR: unknown argument $1"; exit 1
      ;;
  esac
done

[[ -n "${name}" ]] || { echo "ERROR: -n/--name missing"; exit 1; }
[[ -n "${port}" ]] || { echo "ERROR: -p/--port missing"; exit 1; }

# -- HELPER FUNCTIONS -- #

function steam() {
  # for systemd, so it can terminate the process
  # (export HOME before invoking setpriv so the very first call already sees it)
  export HOME=/opt/l4d2/steam
  setpriv --reuid=steam --regid=steam --init-groups "$@"
}

# -- TIDY UP -- #

mountpoint -q "/opt/l4d2/servers/$name/merged" && umount "/opt/l4d2/servers/$name/merged"
steam rm -rf "/opt/l4d2/servers/$name"

# -- CREATE DIRECTORIES -- #

steam mkdir -p \
  "/opt/l4d2/servers/$name" \
  "/opt/l4d2/servers/$name/work" \
  "/opt/l4d2/servers/$name/upper" \
  "/opt/l4d2/servers/$name/merged"

# -- MOUNT OVERLAYFS -- #

mount -t overlay overlay \
  -o "lowerdir=$overlays/opt/l4d2/installation,upperdir=/opt/l4d2/servers/$name/upper,workdir=/opt/l4d2/servers/$name/work" \
  "/opt/l4d2/servers/$name/merged"

# -- REPLACE SERVER.CFG -- #

if [[ -n "$configfile" ]]; then
  cp "$configfile" "/opt/l4d2/servers/$name/merged/left4dead2/cfg/server.cfg"
  chown steam:steam "/opt/l4d2/servers/$name/merged/left4dead2/cfg/server.cfg"
fi

# -- RUN L4D2 -- #

steam "/opt/l4d2/servers/$name/merged/srcds_run" -norestart -pidfile "/opt/l4d2/servers/$name/pid" -game left4dead2 -ip 0.0.0.0 -port "$port" +hostname "Crone_$name" +map c1m1_hotel $arguments
@ -1,19 +0,0 @@
#!/bin/bash

set -xeuo pipefail

# provides the steam() wrapper used below; without it `steam rm` is undefined
source /opt/l4d2/scripts/helpers

name=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    -n|--name)
      name="$2"; shift 2
      ;;
    *)
      echo "ERROR: unknown argument $1"; exit 1
      ;;
  esac
done

mountpoint -q "/opt/l4d2/servers/$name/merged" && umount "/opt/l4d2/servers/$name/merged"
steam rm -rf "/opt/l4d2/servers/$name"
@ -1,105 +1,122 @@
users = {
'steam': {
'home': '/opt/l4d2/steam',
'shell': '/bin/bash',
},
}
assert node.has_bundle('steam') and node.has_bundle('steam-workshop-download')

directories = {
'/opt/l4d2': {
'owner': 'steam', 'group': 'steam',
},
'/opt/l4d2/steam': {
'owner': 'steam', 'group': 'steam',
},
'/opt/l4d2/configs': {
'owner': 'steam', 'group': 'steam',
'/opt/steam/left4dead2-servers': {
'owner': 'steam',
'group': 'steam',
'mode': '0755',
'purge': True,
},
'/opt/l4d2/scripts': {
'owner': 'steam', 'group': 'steam',
},
'/opt/l4d2/scripts/overlays': {
'owner': 'steam', 'group': 'steam',
# Current zfs doesn't support overlayfs upperdir. The support was added in October 2022. Move upperdir - unused anyway -
# to another dir. Also move workdir alongside it, as it has to be on the same fs.
'/opt/steam-zfs-overlay-workarounds': {
'owner': 'steam',
'group': 'steam',
'mode': '0755',
'purge': True,
},
}

files = {
'/opt/l4d2/setup': {
'mode': '755',
'triggers': {
'svc_systemd:left4dead2-initialize.service:restart',
},
},
'/opt/l4d2/start': {
'mode': '755',
'triggers': {
f'svc_systemd:left4dead2-{server_name}.service:restart'
for server_name in node.metadata.get('left4dead2/servers').keys()
},
},
'/opt/l4d2/stop': {
'mode': '755',
'triggers': {
f'svc_systemd:left4dead2-{server_name}.service:restart'
for server_name in node.metadata.get('left4dead2/servers').keys()
},
},
'/opt/l4d2/scripts/helpers': {
'source': 'scripts/helpers',
'mode': '755',
'triggers': {
'svc_systemd:left4dead2-initialize.service:restart',
},
},
# /opt/steam/steam/.steam/sdk32/steamclient.so: cannot open shared object file: No such file or directory
symlinks = {
'/opt/steam/steam/.steam/sdk32': {
'target': '/opt/steam/steam/linux32',
'owner': 'steam',
'group': 'steam',
}
}

for overlay in node.metadata.get('left4dead2/overlays'):
files[f'/opt/l4d2/scripts/overlays/{overlay}'] = {
'source': f'scripts/overlays/{overlay}',
'mode': '755',
'triggers': {
'svc_systemd:left4dead2-initialize.service:restart',
},
#
# SERVERS
#

for name, config in node.metadata.get('left4dead2/servers').items():

#overlay
directories[f'/opt/steam/left4dead2-servers/{name}'] = {
'owner': 'steam',
'group': 'steam',
}
directories[f'/opt/steam-zfs-overlay-workarounds/{name}/upper'] = {
'owner': 'steam',
'group': 'steam',
}
directories[f'/opt/steam-zfs-overlay-workarounds/{name}/workdir'] = {
'owner': 'steam',
'group': 'steam',
}

svc_systemd = {
'left4dead2-initialize.service': {
'enabled': True,
'running': None,
'needs': {
'tag:left4dead2-packages',
'file:/opt/l4d2/setup',
'file:/usr/local/lib/systemd/system/left4dead2-initialize.service',
},
},
}

for server_name, config in node.metadata.get('left4dead2/servers').items():
files[f'/opt/l4d2/configs/{server_name}.cfg'] = {
'source': 'server.cfg',
# conf
files[f'/opt/steam/left4dead2-servers/{name}/left4dead2/cfg/server.cfg'] = {
'content_type': 'mako',
'source': 'server.cfg',
'context': {
'server_name': server_name,
'rcon_password': repo.vault.decrypt('encrypt$gAAAAABpAdZhxwJ47I1AXotuZmBvyZP1ecVTt9IXFkLI28JiVS74LKs9QdgIBz-FC-iXtIHHh_GVGxxKQZprn4UrXZcvZ57kCKxfHBs3cE2JiGnbWE8_mfs=').value,
'config': config.get('config', []),
'name': name,
'steamgroups': node.metadata.get('left4dead2/steamgroups'),
'rcon_password': config['rcon_password'],
},
'owner': 'steam',
'mode': '644',
'triggers': {
f'svc_systemd:left4dead2-{server_name}.service:restart',
},
'group': 'steam',
'triggers': [
f'svc_systemd:left4dead2-{name}.service:restart',
],
}

svc_systemd[f'left4dead2-{server_name}.service'] = {
'enabled': True,
'running': True,
'tags': {
'left4dead2-servers',
},
'needs': {
'svc_systemd:left4dead2-initialize.service',
f'file:/usr/local/lib/systemd/system/left4dead2-{server_name}.service',
},
# service
svc_systemd[f'left4dead2-{name}.service'] = {
'needs': [
f'file:/opt/steam/left4dead2-servers/{name}/left4dead2/cfg/server.cfg',
f'file:/usr/local/lib/systemd/system/left4dead2-{name}.service',
],
}

#
# ADDONS
#

# base
files[f'/opt/steam/left4dead2-servers/{name}/left4dead2/addons/readme.txt'] = {
'content_type': 'any',
'owner': 'steam',
'group': 'steam',
}
directories[f'/opt/steam/left4dead2-servers/{name}/left4dead2/addons'] = {
'owner': 'steam',
'group': 'steam',
'purge': True,
'triggers': [
f'svc_systemd:left4dead2-{name}.service:restart',
],
}
for id in [
*config.get('workshop', []),
*node.metadata.get('left4dead2/workshop'),
]:
files[f'/opt/steam/left4dead2-servers/{name}/left4dead2/addons/{id}.vpk'] = {
'content_type': 'any',
'owner': 'steam',
'group': 'steam',
'triggers': [
f'svc_systemd:left4dead2-{name}.service:restart',
],
}

# admin system

directories[f'/opt/steam/left4dead2-servers/{name}/left4dead2/ems/admin system'] = {
'owner': 'steam',
'group': 'steam',
'mode': '0755',
'triggers': [
f'svc_systemd:left4dead2-{name}.service:restart',
],
}
files[f'/opt/steam/left4dead2-servers/{name}/left4dead2/ems/admin system/admins.txt'] = {
'owner': 'steam',
'group': 'steam',
'mode': '0755',
'content': '\n'.join(sorted(node.metadata.get('left4dead2/admins'))),
'triggers': [
f'svc_systemd:left4dead2-{name}.service:restart',
],
}
@ -1,112 +1,110 @@
from re import match
from os import path, listdir
assert node.has_bundle('steam')

from shlex import quote

defaults = {
'apt': {
'packages': {
'libc6_i386': { # installs libc6:i386
'tags': {'left4dead2-packages'},
},
'lib32z1': {
'tags': {'left4dead2-packages'},
},
'unzip': {
'tags': {'left4dead2-packages'},
},
'p7zip-full': { # l4d2center_maps_sync.sh
'tags': {'left4dead2-packages'},
},
'steam': {
'games': {
'left4dead2': 222860,
},
},
'left4dead2': {
'overlays': set(listdir(path.join(repo.path, 'bundles/left4dead2/files/scripts/overlays'))),
'servers': {
# 'port': 27017,
# 'overlays': ['competitive_rework'],
# 'arguments': ['-tickrate 60'],
# 'config': [
#     'exec server_original.cfg',
#     'sm_forcematch zonemod',
# ],
},
},
'nftables': {
'input': {
'udp dport { 27005, 27020 } accept',
},
},
'systemd': {
'units': {
'left4dead2-initialize.service': {
'Unit': {
'Description': 'initialize left4dead2',
'After': 'network-online.target',
},
'Service': {
'Type': 'oneshot',
'RemainAfterExit': 'yes',
'ExecStart': '/opt/l4d2/setup',
'StandardOutput': 'journal',
'StandardError': 'journal',
},
'Install': {
'WantedBy': {'multi-user.target'},
},
},
},
'servers': {},
'admins': set(),
'workshop': set(),
},
}


@metadata_reactor.provides(
'left4dead2/servers',
)
def rcon_password(metadata):
# only works from localhost!
return {
'left4dead2': {
'servers': {
server: {
'rcon_password': repo.vault.password_for(f'{node.name} left4dead2 {server} rcon', length=24),
}
for server in metadata.get('left4dead2/servers')
},
},
}


@metadata_reactor.provides(
'steam-workshop-download',
'systemd/units',
)
def server_units(metadata):
units = {}
workshop = {}

for name, config in metadata.get('left4dead2/servers').items():
assert match(r'^[A-Za-z0-9_-]+$', name)
assert 27000 <= config["port"] <= 27100
for overlay in config.get('overlays', []):
assert overlay in metadata.get('left4dead2/overlays'), f"unknown overlay {overlay}, known: {metadata.get('left4dead2/overlays')}"
# mount overlay
mountpoint = f'/opt/steam/left4dead2-servers/{name}'
mount_unit_name = mountpoint[1:].replace('-', '\\x2d').replace('/', '-') + '.mount'
units[mount_unit_name] = {
'Unit': {
'Description': f"Mount left4dead2 server {name} overlay",
'Conflicts': {'umount.target'},
'Before': {'umount.target'},
},
'Mount': {
'What': 'overlay',
'Where': mountpoint,
'Type': 'overlay',
'Options': ','.join([
'auto',
'lowerdir=/opt/steam/left4dead2',
f'upperdir=/opt/steam-zfs-overlay-workarounds/{name}/upper',
f'workdir=/opt/steam-zfs-overlay-workarounds/{name}/workdir',
]),
},
'Install': {
'RequiredBy': {
f'left4dead2-{name}.service',
},
},
}

cmd = f'/opt/l4d2/start -n {name} -p {config["port"]}'

if 'config' in config:
cmd += f' -c /opt/l4d2/configs/{name}.cfg'

for overlay in config.get('overlays', []):
cmd += f' -o {overlay}'

if 'arguments' in config:
cmd += ' -- ' + ' '.join(config['arguments'])
# individual workshop
workshop_ids = config.get('workshop', set()) | metadata.get('left4dead2/workshop', set())
if workshop_ids:
workshop[f'left4dead2-{name}'] = {
'ids': workshop_ids,
'path': f'/opt/steam/left4dead2-servers/{name}/left4dead2/addons',
'user': 'steam',
'requires': {
mount_unit_name,
},
'required_by': {
f'left4dead2-{name}.service',
},
}

# left4dead2 server unit
units[f'left4dead2-{name}.service'] = {
'Unit': {
'Description': f'left4dead2 server {name}',
'After': {'left4dead2-initialize.service'},
'Requires': {'left4dead2-initialize.service'},
'After': {'steam-update.service'},
'Requires': {'steam-update.service'},
},
'Service': {
'Type': 'simple',
'ExecStart': cmd,
'ExecStopPost': f'/opt/l4d2/stop -n {name}',
'User': 'steam',
'Group': 'steam',
'WorkingDirectory': f'/opt/steam/left4dead2-servers/{name}',
'ExecStart': f'/opt/steam/left4dead2-servers/{name}/srcds_run -port {config["port"]} +exec server.cfg',
'Restart': 'on-failure',
'Nice': -10,
'CPUWeight': 200,
'IOSchedulingClass': 'best-effort',
'IOSchedulingPriority': 0,
},
'Install': {
'WantedBy': {'multi-user.target'},
},
'triggers': {
f'svc_systemd:left4dead2-{name}.service:restart',
},
}

return {
'steam-workshop-download': workshop,
'systemd': {
'units': units,
},
@ -116,13 +114,14 @@ def server_units(metadata):
@metadata_reactor.provides(
'nftables/input',
)
def nftables(metadata):
ports = sorted(str(config["port"]) for config in metadata.get('left4dead2/servers').values())
def firewall(metadata):
ports = set(str(server['port']) for server in metadata.get('left4dead2/servers').values())

return {
'nftables': {
'input': {
f'ip protocol {{ tcp, udp }} th dport {{ {", ".join(ports)} }} accept'
f"tcp dport {{ {', '.join(sorted(ports))} }} accept",
f"udp dport {{ {', '.join(sorted(ports))} }} accept",
},
},
}
@ -1,114 +0,0 @@
# left4me

L4D2 game-server management platform: a Flask web UI on gunicorn that
provisions per-instance srcds servers via templated systemd units, with
kernel-overlayfs layering for shared installations + per-overlay maps,
and uid-based DSCP/priority marking on the egress path so CAKE on the
external interface prioritizes srcds UDP over bulk traffic.

## Metadata

```python
'metadata': {
    'left4me': {
        'domain': 'whatever.tld',  # required — the only per-node knob
        # Everything below is optional and has a sensible default in the
        # bundle. Override per-node only if the default is wrong:
        # 'git_url': 'git@git.sublimity.de:cronekorkn/left4me',
        # 'git_branch': 'master',
        # 'gunicorn_workers': 1,
        # 'gunicorn_threads': 32,
        # 'job_worker_threads': 4,
        # 'port_range_start': 27015,
        # 'port_range_end': 27115,
        # secret_key is auto-derived per node
        # (repo.vault.random_bytes_as_base64_for f'{node.name} left4me secret_key').
    },
},
```

The bundle's `derived_from_domain` reactor reads `left4me/domain` and
emits the corresponding `nginx/vhosts`, `letsencrypt/domains`,
`monitoring/services/left4me-web` (HTTPS health check), and the game-port
`nftables/input` accept rules. Backup paths
(`/var/lib/left4me`, `/etc/left4me`) are set-merged into `backup/paths`
from defaults. None of these need to be declared per-node.
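A derive-everything-from-one-knob reactor has roughly the following shape. This is an illustrative sketch, not the bundle's actual code: the emitted key layout is simplified, and the `metadata_reactor` registration plumbing is left out.

```python
# Sketch of a reactor that derives several metadata subtrees from the single
# required 'left4me/domain' knob. Key names mirror the ones listed above;
# the real reactor's output structure may differ.
def derived_from_domain(metadata):
    domain = metadata['left4me']['domain']  # the one required per-node knob
    return {
        'nginx': {'vhosts': {domain: {'proxy': 'http://127.0.0.1:8000'}}},
        'letsencrypt': {'domains': {domain}},
        'monitoring': {'services': {'left4me-web': {'url': f'https://{domain}/'}}},
    }

derived = derived_from_domain({'left4me': {'domain': 'whatever.tld'}})
print(sorted(derived))  # → ['letsencrypt', 'monitoring', 'nginx']
```

The point of the pattern: node files stay one line long, and everything that must agree with the domain (vhost, cert, health check) can never drift out of sync.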
## What this bundle does

- Creates system users `left4me` (uid/gid 980, home `/var/lib/left4me`,
  mode 0711) and `l4d2-sandbox` (uid/gid 981, no home, used by bwrap
  script-overlay builds).
- Drops privileged helpers under `/usr/local/libexec/left4me/`
  (`left4me-systemctl`, `left4me-journalctl`, `left4me-overlay`,
  `left4me-script-sandbox`) plus a tight sudoers file (validated with
  `visudo -cf` before install).
- `git_deploy`s the left4me repo to `/opt/left4me/src`, builds a venv at
  `/opt/left4me/.venv`, `pip install -e`s both `l4d2host` and `l4d2web`,
  runs `alembic upgrade head` and `flask seed-script-overlays`, then
  enables `left4me-web.service`.
- Emits four systemd units via `systemd/units` metadata (consumed by
  `bundles/systemd/`):
  - `left4me-web.service` — gunicorn on `127.0.0.1:8000` (TLS terminates upstream).
  - `left4me-server@.service` — per-instance srcds template, started on
    demand by the web app via the `left4me-systemctl` helper.
  - `l4d2-game.slice` / `l4d2-build.slice` — cgroup slices for the
    perf-baseline (CPU/IO weights, memory caps).
- Contributes uid-based DSCP/priority marks for srcds UDP egress to
  `nftables/output` (via `defaults`).
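For orientation, a per-instance template unit of this kind is instantiated as `left4me-server@<instance>.service`; `%i` expands to the instance name. The following is a hedged sketch only: the paths, slice assignment, and srcds flags are assumptions, not the bundle's actual unit file.

```ini
# Sketch of a templated srcds unit (hypothetical paths and options).
[Unit]
Description=left4me srcds instance %i
After=network-online.target

[Service]
User=left4me
Slice=l4d2-game.slice
# per-instance units do get NoNewPrivileges (unlike left4me-web.service, see Gotchas)
NoNewPrivileges=true
ExecStart=/var/lib/left4me/instances/%i/srcds_run -game left4dead2 +exec server.cfg
Restart=on-failure
```

Instantiation then looks like `systemctl start left4me-server@lobby1.service`, which is what the web app drives through the `left4me-systemctl` helper.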
## Gotchas

- **Requires `bundles/nftables` and `bundles/systemd` on the node.** The
  bundle asserts membership at `bw test` time. On Debian-13 these ride
  in via the `debian-13` group, so attaching the bundle to a Debian-13
  node is enough.
- **`left4me-web.service` does not have `NoNewPrivileges=true`.** This is
  intentional — workers `sudo` the privileged helpers; `NoNewPrivileges`
  would block setuid escalation. Per-instance `server@.service` units
  *do* have it.
- **CAKE shaping is configured separately**, via
  `network/<iface>/cake` on the node (consumed by `bundles/network/`),
  not by this bundle.
- **First-run admin user is manual.** After `bw apply`, ssh to the host and
  bootstrap the admin via the `left4me` wrapper (it sources the env files,
  drops to the `left4me` user, and runs the flask CLI):
  `sudo left4me create-user <username> --admin` (prompts for password via
  the flask CLI, or set `LEFT4ME_ADMIN_PASSWORD` first). The bundle
  deliberately doesn't seed an admin to keep credentials out of the
  metadata pipeline. The same `left4me` wrapper accepts any other flask
  subcommand: `sudo left4me seed-script-overlays <dir>`,
  `sudo left4me routes`, `sudo left4me shell`, etc.
- **CPU isolation is managed by this bundle**, driven by one required
  per-node knob: `left4me/system_cpus` — a set of int CPU ids that
  pins `system.slice` / `user.slice` / `l4d2-build.slice`. The
  complement (`set(range(vm/threads)) - system_cpus`) pins
  `l4d2-game.slice`. On HT hosts, list both SMT siblings of every
  physical core you want to reserve for system, otherwise games end
  up sharing L1/L2 with system. Find pairings via
  `/sys/devices/system/cpu/cpu<n>/topology/thread_siblings_list`. On
  the prod node (`ovh.left4me`, 4 physical / 8 threads, pairings
  (0,4) (1,5) (2,6) (3,7)) the node sets `'system_cpus': {0, 4}` to
  reserve physical core 0 entirely. `l4d2-game.slice` and
  `l4d2-build.slice` carry `AllowedCPUs=` inline on their unit
  definitions; `system.slice` and `user.slice` get drop-ins registered
  under `systemd/units` with the `'<parent>.d/<basename>.conf'` key
  convention (same shape nginx and autologin use), landing at
  `/usr/local/lib/systemd/system/<slice>.d/99-left4me-cpuset.conf`.
  The reactor raises if `system_cpus` includes CPUs outside
  `[0, vm/threads)` or leaves no cores for games.
|
||||
- **Kernel feature requirement:** kernel-overlayfs (`CONFIG_OVERLAY_FS`).
  Standard on Debian 13.
- **Game ports** are opened by the web app on demand in the range
  27015–27115 (UDP+TCP). Add corresponding accept rules to
  `nftables/input` per node if the host's policy is default-drop on
  input.
- **Pinned UIDs/GIDs (980/981).** Chosen for deterministic ownership
  across rebuilds and backup restores. If you add another bundle that
  pins UIDs in this repo, make sure it doesn't collide.

## Slice support requires `bundles/systemd` ≥ commit cc1c6a5

This bundle's `l4d2-game.slice` and `l4d2-build.slice` units rely on
`bundles/systemd/items.py` accepting the `.slice` extension. Older
revisions raised `Exception(f'unknown type slice')` at apply time.
The repo-wide `bw test` will catch this if it regresses.
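For the game-ports gotcha above, the accept rules on a default-drop host would look roughly like this nft fragment (table and chain names here are assumptions — adapt to whatever the repo's `bundles/nftables` actually generates from `nftables/input`; only the port range comes from the bundle):

```
table inet filter {
    chain input {
        # Hypothetical shape; range matches the bundle's 27015-27115 default.
        udp dport 27015-27115 accept
        tcp dport 27015-27115 accept
    }
}
```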
@@ -1,6 +0,0 @@
# Managed by ckn-bw bundles/left4me. Local edits will be reverted.
# Deployment units use fixed /var/lib/left4me paths; regenerate units if this changes.
LEFT4ME_ROOT=/var/lib/left4me
# l4d2host invokes steamcmd by absolute path — bypasses PATH lookup so the
# script's `cd "$(dirname "$0")"` resolves next to the real install dir.
LEFT4ME_STEAMCMD=/opt/left4me/steam/steamcmd.sh
@@ -1,6 +0,0 @@
# Sandbox-only resolver config — bind-mounted into script-overlay sandboxes
# at /etc/resolv.conf. The host's resolver (often a private/LAN DNS server)
# is unreachable from inside the sandbox because IPAddressDeny= blocks
# egress to RFC1918 / loopback. Public resolvers keep DNS working.
nameserver 1.1.1.1
nameserver 8.8.8.8
@@ -1,8 +0,0 @@
# Managed by ckn-bw bundles/left4me. Local edits will be reverted.
DATABASE_URL=sqlite:////var/lib/left4me/left4me.db
SECRET_KEY=${node.metadata.get('left4me/secret_key')}
JOB_WORKER_THREADS=${node.metadata.get('left4me/job_worker_threads')}
SESSION_COOKIE_SECURE=true
LEFT4ME_PORT_RANGE_START=${node.metadata.get('left4me/port_range_start')}
LEFT4ME_PORT_RANGE_END=${node.metadata.get('left4me/port_range_end')}
STEAM_WEB_API_KEY=${node.metadata.get('left4me/steam_web_api_key')}
@@ -1,5 +0,0 @@
Defaults:left4me !requiretty
left4me ALL=(root) NOPASSWD: /usr/local/libexec/left4me/left4me-systemctl *
left4me ALL=(root) NOPASSWD: /usr/local/libexec/left4me/left4me-journalctl *
left4me ALL=(root) NOPASSWD: /usr/local/libexec/left4me/left4me-overlay mount *, /usr/local/libexec/left4me/left4me-overlay umount *
left4me ALL=(root) NOPASSWD: /usr/local/libexec/left4me/left4me-script-sandbox
@@ -1,36 +0,0 @@
# Host-side perf baseline for left4me — see
# docs/superpowers/specs/2026-05-09-l4d2-server-host-perf-baseline-design.md
#
# UDP socket buffers: distro defaults of ~128 KiB are too small for sustained
# Source-engine UDP across multiple instances. 8 MiB matches the standard
# 1 Gbit recommendation; rmem_default/wmem_default protect sockets that don't
# explicitly enlarge their buffers.
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.core.rmem_default = 524288
net.core.wmem_default = 524288

# Kernel softirq UDP path: the per-CPU backlog queue starts dropping packets
# at the default 1000 under multi-instance burst; 5000 absorbs realistic peaks.
# netdev_budget = 600 gives softirq more drain headroom per pass.
net.core.netdev_max_backlog = 5000
net.core.netdev_budget = 600

# Latency-sensitive default: avoid swap unless the box is really under
# pressure. Harmless on swapless hosts.
vm.swappiness = 10

# Per-socket UDP buffer floors: protect game-server sockets that don't bump
# their own SO_RCVBUF/SO_SNDBUF when softirq drains lag briefly.
net.ipv4.udp_rmem_min = 16384
net.ipv4.udp_wmem_min = 16384

# Default qdisc for ifaces we don't explicitly shape with CAKE. Debian Trixie
# already defaults to fq_codel; setting it explicitly is belt-and-suspenders
# and survives kernel-default churn.
net.core.default_qdisc = fq_codel

# TCP congestion control: BBR for any bulk TCP egress on the host (admin SSH,
# backups, package fetches, web-app responses) so a long flow does not push
# the bottleneck queue ahead of game UDP. UDP srcds is unaffected.
net.ipv4.tcp_congestion_control = bbr
@@ -1,53 +0,0 @@
#!/bin/sh
set -eu

usage() {
    printf '%s\n' "usage: left4me-journalctl <server-name> --lines <n> --follow|--no-follow" >&2
    exit 2
}

validate_name() {
    name=$1
    [ -n "$name" ] || usage
    case "$name" in
        .*|*..*|*/*|*\\*) usage ;;
    esac
    case "$name" in
        *[!A-Za-z0-9_.-]*) usage ;;
    esac
}

[ "$#" -eq 4 ] || usage
name=$1
lines_flag=$2
lines=$3
follow_flag=$4

validate_name "$name"
[ "$lines_flag" = "--lines" ] || usage
case "$lines" in
    ''|*[!0-9]*) usage ;;
esac

follow_arg=
case "$follow_flag" in
    --follow) follow_arg=-f ;;
    --no-follow) ;;
    *) usage ;;
esac

unit="left4me-server@${name}.service"
if [ -x /bin/journalctl ]; then
    journalctl=/bin/journalctl
elif [ -x /usr/bin/journalctl ]; then
    journalctl=/usr/bin/journalctl
else
    printf '%s\n' 'journalctl not found at /bin/journalctl or /usr/bin/journalctl' >&2
    exit 69
fi

if [ -n "$follow_arg" ]; then
    exec "$journalctl" -u "$unit" -n "$lines" -o cat "$follow_arg"
fi

exec "$journalctl" -u "$unit" -n "$lines" -o cat
@@ -1,242 +0,0 @@
#!/usr/bin/python3
"""Privileged overlay mount helper for left4me.

Invoked from the systemd unit's ExecStartPre / ExecStopPost via
`+/usr/bin/nsenter --mount=/proc/1/ns/mnt -- …`. The unit-level
nsenter is what makes this work: it runs the helper Python interpreter
inside PID 1's mount namespace. Without it, the `+` Exec prefix
removes the sandbox/credentials but does NOT detach from the unit's
per-service mount namespace, and the helper process itself would pin
that namespace alive — turning every umount into a multi-second EBUSY
race with the kernel's deferred namespace cleanup. With the unit-level
nsenter the helper has no such reference and umount succeeds first try.

Validates inputs strictly, then performs `mount -t overlay` /
`umount` directly — no internal nsenter, since the helper is already
running where the syscalls need to take effect.

Verbs:
  mount <name>   Reads ${LEFT4ME_ROOT}/instances/<name>/instance.env
                 for L4D2_LOWERDIRS, validates every lowerdir is
                 under one of installation/overlays/workshop_cache/
                 global_overlay_cache, then mounts the kernel
                 overlay at runtime/<name>/merged.
  umount <name>  Unmounts runtime/<name>/merged and cleans up the
                 kernel-overlayfs `work/work` orphan.

Set LEFT4ME_OVERLAY_PRINT_ONLY=1 to print the would-be argv (one line,
shell-quoted) and exit 0 instead of execv. Used by tests.
"""

import os
import re
import shlex
import shutil
import subprocess
import sys
from pathlib import Path

NAME_RE = re.compile(r"^[a-z0-9][a-z0-9_-]{0,63}$")
DEFAULT_ROOT = "/var/lib/left4me"
LOWERDIR_ALLOWLIST = (
    "installation",
    "overlays",
    "global_overlay_cache",
    "workshop_cache",
)
MAX_LOWERDIRS = 500
MOUNT_BIN = "/bin/mount"
UMOUNT_BIN = "/bin/umount"


def die(msg: str) -> None:
    sys.stderr.write(f"left4me-overlay: {msg}\n")
    sys.exit(1)


def root() -> Path:
    return Path(os.environ.get("LEFT4ME_ROOT") or DEFAULT_ROOT)


def validate_name(name: str) -> str:
    if not NAME_RE.fullmatch(name):
        die(f"invalid instance name: {name!r}")
    return name


def parse_lowerdirs(env_path: Path) -> list[str]:
    if not env_path.is_file():
        die(f"instance.env not found: {env_path}")
    raw = None
    for line in env_path.read_text().splitlines():
        if "=" not in line:
            continue
        key, value = line.split("=", 1)
        if key.strip() == "L4D2_LOWERDIRS":
            raw = value
            break
    if raw is None:
        die(f"L4D2_LOWERDIRS not set in {env_path}")
    if raw == "":
        die(f"L4D2_LOWERDIRS is empty in {env_path}")
    parts = raw.split(":")
    if any(p == "" for p in parts):
        die(f"L4D2_LOWERDIRS contains an empty entry: {raw!r}")
    if len(parts) > MAX_LOWERDIRS:
        die(f"L4D2_LOWERDIRS has {len(parts)} entries (cap {MAX_LOWERDIRS})")
    return parts


def canonical_under(allowed_roots: list[Path], path: Path) -> Path:
    try:
        canonical = path.resolve(strict=True)
    except (FileNotFoundError, RuntimeError):
        die(f"path does not exist or has a symlink loop: {path}")
    for r in allowed_roots:
        if canonical == r or r in canonical.parents:
            return canonical
    die(f"path is outside the permitted roots: {path} (resolved: {canonical})")


_LISTXATTR = getattr(os, "listxattr", None)


def _entry_has_fuse_xattr(path: str) -> str | None:
    if _LISTXATTR is None:
        return None
    try:
        attrs = _LISTXATTR(path, follow_symlinks=False)
    except OSError:
        return None
    for a in attrs:
        if a.startswith("user.fuseoverlayfs."):
            return a
    return None


def assert_no_fuse_xattrs(upper: Path) -> None:
    if not upper.exists() or _LISTXATTR is None:
        return
    for dirpath, dirnames, filenames in os.walk(upper):
        for entry in (dirpath, *(os.path.join(dirpath, n) for n in dirnames),
                      *(os.path.join(dirpath, n) for n in filenames)):
            tainted = _entry_has_fuse_xattr(entry)
            if tainted:
                die(
                    f"upperdir contains fuse-overlayfs xattr {tainted!r} on {entry}; "
                    "wipe upper/ and work/ before mounting"
                )


def exec_or_print(argv: list[str]) -> None:
    if os.environ.get("LEFT4ME_OVERLAY_PRINT_ONLY") == "1":
        print(" ".join(shlex.quote(a) for a in argv))
        sys.exit(0)
    os.execv(argv[0], argv)


def cmd_mount(name: str) -> None:
    name = validate_name(name)
    r = root()
    runtime_name_dir = (r / "runtime" / name).resolve(strict=True)
    merged_for_check = (runtime_name_dir / "merged").resolve(strict=True)

    # Idempotency for unit restart cycles: if a previous start mounted
    # successfully but ExecStart failed afterwards (and Restart=on-failure
    # fires another cycle), the second ExecStartPre would otherwise refuse
    # to mount-on-top. Short-circuit here so the second cycle just gets
    # straight to ExecStart. PRINT_ONLY (test mode) bypasses this so the
    # tests can exercise the full nsenter argv regardless of mount state.
    if (
        os.environ.get("LEFT4ME_OVERLAY_PRINT_ONLY") != "1"
        and os.path.ismount(merged_for_check)
    ):
        return

    instance_env = r / "instances" / name / "instance.env"
    raw_lowerdirs = parse_lowerdirs(instance_env)

    allowed_roots = [(r / sub).resolve() for sub in LOWERDIR_ALLOWLIST]
    canonical_lowerdirs = [str(canonical_under(allowed_roots, Path(p))) for p in raw_lowerdirs]

    upper = (runtime_name_dir / "upper").resolve(strict=True)
    work = (runtime_name_dir / "work").resolve(strict=True)
    merged = merged_for_check
    for label, path in (("upper", upper), ("work", work), ("merged", merged)):
        if path.parent != runtime_name_dir:
            die(f"{label} resolved outside runtime/{name}: {path}")

    assert_no_fuse_xattrs(upper)

    options = f"lowerdir={':'.join(canonical_lowerdirs)},upperdir={upper},workdir={work}"
    argv = [
        MOUNT_BIN,
        "-t", "overlay",
        "overlay",
        "-o", options,
        str(merged),
    ]
    exec_or_print(argv)


def cmd_umount(name: str) -> None:
    name = validate_name(name)
    r = root()
    runtime_name_dir = (r / "runtime" / name).resolve(strict=True)
    merged_path = runtime_name_dir / "merged"
    work_inner = runtime_name_dir / "work" / "work"

    argv = [
        UMOUNT_BIN,
        # Resolve only if it exists; PRINT_ONLY tests always pre-create it.
        str(merged_path.resolve(strict=True) if merged_path.exists() else merged_path),
    ]

    # PRINT_ONLY: emit the umount argv and exit. Tests assert exact shape
    # of this dry-run; the post-umount cleanup of work_inner is a runtime
    # behaviour exercised on the host, not in unit tests.
    if os.environ.get("LEFT4ME_OVERLAY_PRINT_ONLY") == "1":
        print(" ".join(shlex.quote(a) for a in argv))
        sys.exit(0)

    if merged_path.exists():
        merged = merged_path.resolve(strict=True)
        if merged.parent != runtime_name_dir:
            die(f"merged resolved outside runtime/{name}: {merged}")
        # Idempotency: only umount if currently a mount point. Mirrors
        # cmd_mount's symmetric check; a redundant cleanup pass — or a
        # call after a partial _purge_instance — must be a no-op.
        #
        # No retry loop here: with the helper running in PID 1's mount
        # namespace (via the unit-level `nsenter --mount=/proc/1/ns/mnt`
        # in ExecStopPost), it holds no reference to the unit's
        # per-service mount namespace, so the cgroup-empty → namespace
        # reaped → umount-clears sequence happens without any race
        # window for us to ride out. EBUSY here is a real error.
        if os.path.ismount(merged):
            subprocess.run(argv, check=True)

    # Kernel-overlayfs creates work_inner during mount with root:root mode
    # 0/0. After unmount it's an orphan that the unit's User= (left4me)
    # cannot traverse via shutil.rmtree, so reset/delete in instances.py
    # blows up with EACCES on `runtime/<name>/work/work`. The helper is
    # the only code path with root that knows about this directory, so
    # the cleanup belongs here. Safe to nuke — the kernel re-creates it
    # on the next mount. Run unconditionally — covers both "we just
    # unmounted" and "previous teardown didn't finish" cases.
    if work_inner.exists():
        shutil.rmtree(work_inner)


def main(argv: list[str]) -> None:
    if len(argv) != 3 or argv[1] not in ("mount", "umount"):
        sys.stderr.write("usage: left4me-overlay mount|umount <name>\n")
        sys.exit(2)
    if argv[1] == "mount":
        cmd_mount(argv[2])
    else:
        cmd_umount(argv[2])


if __name__ == "__main__":
    main(sys.argv)
@@ -1,82 +0,0 @@
#!/bin/bash
# Privileged sandbox launcher for left4me script overlays.
#
# Invoked via sudo by the web user with two arguments:
#   <overlay_id>   numeric overlay id; bind-mounts /var/lib/left4me/overlays/<id>
#                  read-write at /overlay inside the sandbox.
#   <script_path>  absolute path to a bash file already written by the web app;
#                  bind-mounted read-only at /script.sh inside the sandbox.
#
# The script runs as a transient systemd .service with the full hardening
# surface: cgroup limits + walltime kill, NoNewPrivileges, ProtectSystem,
# ProtectHome, kernel-tunable / -module / -log protection, namespace
# restriction, address-family restriction, capability bounding (empty),
# seccomp filter (@system-service @network-io), MemoryDenyWriteExecute,
# LockPersonality, RestrictSUIDSGID. Network namespace is *not* restricted —
# scripts must reach the public internet to download workshop / l4d2center
# / cedapug content. PID namespace is shared with the host (no
# PrivatePID= directive in systemd); host PIDs are visible via /proc but
# not signal-able due to UID mismatch.
set -euo pipefail

[[ $# -eq 2 ]] || { echo "usage: $0 <overlay_id> <script>" >&2; exit 64; }

OVERLAY_ID=$1
SCRIPT=$2

[[ "$OVERLAY_ID" =~ ^[0-9]+$ ]] || { echo "bad overlay id" >&2; exit 64; }
OVERLAY_DIR=/var/lib/left4me/overlays/$OVERLAY_ID
[[ -d $OVERLAY_DIR ]] || { echo "no overlay dir at $OVERLAY_DIR" >&2; exit 65; }
[[ -f $SCRIPT ]] || { echo "no script at $SCRIPT" >&2; exit 65; }

if [[ "${LEFT4ME_SCRIPT_SANDBOX_DRY_RUN:-}" == "1" ]]; then
    echo "DRY RUN: overlay_id=$OVERLAY_ID script=$SCRIPT overlay_dir=$OVERLAY_DIR"
    exit 0
fi

# Make sure the sandbox UID owns the overlay dir so the script can write there.
# Idempotent: a no-op when the dir is already l4d2-sandbox-owned (re-run case),
# and corrects the ownership the first time the dir was created by the web app
# under the left4me UID. World-readable so the gameserver process (left4me)
# can read the overlay contents via the kernel-overlayfs lowerdir at runtime.
chown -R l4d2-sandbox:l4d2-sandbox "$OVERLAY_DIR"
chmod 0755 "$OVERLAY_DIR"

SCRIPT_RC=0
systemd-run --quiet --collect --wait --pipe \
    --unit="left4me-script-${OVERLAY_ID}-$$" \
    --slice=l4d2-build.slice \
    -p OOMScoreAdjust=500 \
    -p User=l4d2-sandbox -p Group=l4d2-sandbox \
    -p UMask=0022 \
    -p NoNewPrivileges=yes \
    -p ProtectSystem=strict -p ProtectHome=yes \
    -p PrivateTmp=yes -p PrivateDevices=yes -p PrivateIPC=yes \
    -p ProtectKernelTunables=yes -p ProtectKernelModules=yes \
    -p ProtectKernelLogs=yes -p ProtectControlGroups=yes \
    -p RestrictNamespaces=yes \
    -p RestrictAddressFamilies="AF_INET AF_INET6 AF_UNIX" \
    -p RestrictSUIDSGID=yes -p LockPersonality=yes \
    -p MemoryDenyWriteExecute=yes \
    -p SystemCallFilter="@system-service @network-io" \
    -p SystemCallArchitectures=native \
    -p CapabilityBoundingSet= -p AmbientCapabilities= \
    -p IPAddressDeny="127.0.0.0/8 ::1/128 169.254.0.0/16 fe80::/10 224.0.0.0/4 ff00::/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 100.64.0.0/10 fc00::/7" \
    -p TemporaryFileSystem="/etc /var/lib" \
    -p BindReadOnlyPaths="/etc/left4me/sandbox-resolv.conf:/etc/resolv.conf /etc/ssl /etc/ca-certificates /etc/nsswitch.conf /etc/alternatives ${SCRIPT}:/script.sh" \
    -p BindPaths="${OVERLAY_DIR}:/overlay" \
    -p WorkingDirectory=/overlay \
    -p Environment="HOME=/tmp PATH=/usr/bin:/usr/sbin OVERLAY=/overlay" \
    -p MemoryMax=4G -p MemorySwapMax=0 -p TasksMax=512 \
    -p CPUQuota=200% -p RuntimeMaxSec=3600 \
    -- /bin/bash /script.sh || SCRIPT_RC=$?

# Normalize perms so the web service (left4me uid) can read overlay files
# directly via Python open() — needed by the file tree's download endpoint.
# UMask=0022 above takes care of *new* writes; this catches anything the
# script created with a tighter mode (e.g. cedapug_maps writes its
# .cedapug/manifest.tsv as 0600 by default).
find "$OVERLAY_DIR" -type f ! -perm -o+r -exec chmod o+r {} + 2>/dev/null || true
find "$OVERLAY_DIR" -type d ! -perm -o+rx -exec chmod o+rx {} + 2>/dev/null || true

exit $SCRIPT_RC
@@ -1,44 +0,0 @@
#!/bin/sh
set -eu

usage() {
    printf '%s\n' "usage: left4me-systemctl enable|disable|show <server-name>" >&2
    exit 2
}

validate_name() {
    name=$1
    [ -n "$name" ] || usage
    case "$name" in
        .*|*..*|*/*|*\\*) usage ;;
    esac
    case "$name" in
        *[!A-Za-z0-9_.-]*) usage ;;
    esac
}

[ "$#" -eq 2 ] || usage
action=$1
name=$2

case "$action" in
    enable|disable|show) ;;
    *) usage ;;
esac

validate_name "$name"
unit="left4me-server@${name}.service"
if [ -x /bin/systemctl ]; then
    systemctl=/bin/systemctl
elif [ -x /usr/bin/systemctl ]; then
    systemctl=/usr/bin/systemctl
else
    printf '%s\n' 'systemctl not found at /bin/systemctl or /usr/bin/systemctl' >&2
    exit 69
fi

case "$action" in
    enable)  exec "$systemctl" enable --now "$unit" ;;
    disable) exec "$systemctl" disable --now "$unit" ;;
    show)    exec "$systemctl" show --property=ActiveState --property=SubState "$unit" ;;
esac
@@ -1,17 +0,0 @@
#!/bin/sh
# Run l4d2web flask CLI commands as the left4me user with the deploy env loaded.
# Usage: left4me <flask-subcommand> [args...]
# Examples:
#   left4me create-user alice --admin
#   left4me seed-script-overlays /opt/left4me/src/examples/script-overlays
#   left4me routes
set -eu
exec sudo -u left4me sh -c '
    set -a
    . /etc/left4me/host.env
    . /etc/left4me/web.env
    set +a
    export JOB_WORKER_ENABLED=false
    export PYTHONPATH=/opt/left4me/src
    exec /opt/left4me/.venv/bin/flask --app l4d2web.app:create_app "$@"
' sh "$@"
@ -1,299 +0,0 @@
|
|||
# Items for the left4me bundle.
|
||||
# Systemd units come from metadata via bundles/systemd/ — there are no
|
||||
# .service or .slice files in this bundle's files/ tree. Cpuset drop-ins
|
||||
# for system.slice / user.slice are likewise emitted via systemd/units
|
||||
# in metadata.py (key: '<parent>.d/<basename>.conf').
|
||||
|
||||
directories = {
|
||||
'/opt/left4me': {
|
||||
'owner': 'left4me',
|
||||
'group': 'left4me',
|
||||
},
|
||||
'/opt/left4me/src': {
|
||||
'owner': 'left4me',
|
||||
'group': 'left4me',
|
||||
},
|
||||
'/etc/left4me': {
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
'mode': '0755',
|
||||
},
|
||||
'/var/lib/left4me': {
|
||||
# left4me's home dir — useradd creates with 0700; loosen to 0711 so
|
||||
# l4d2-sandbox can traverse (but not list) for bwrap bind-mounts.
|
||||
'owner': 'left4me',
|
||||
'group': 'left4me',
|
||||
'mode': '0711',
|
||||
},
|
||||
'/var/lib/left4me/installation': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/var/lib/left4me/overlays': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/var/lib/left4me/instances': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/var/lib/left4me/runtime': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/var/lib/left4me/workshop_cache': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/var/lib/left4me/tmp': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/opt/left4me/steam': {'owner': 'left4me', 'group': 'left4me'},
|
||||
'/usr/local/libexec/left4me': {
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
'mode': '0755',
|
||||
},
|
||||
}
|
||||
|
||||
groups = {
|
||||
'left4me': {'gid': 980},
|
||||
'l4d2-sandbox': {'gid': 981},
|
||||
}
|
||||
|
||||
users = {
|
||||
'left4me': {
|
||||
'uid': 980,
|
||||
'gid': 980,
|
||||
'home': '/var/lib/left4me',
|
||||
'shell': '/usr/sbin/nologin',
|
||||
},
|
||||
'l4d2-sandbox': {
|
||||
'uid': 981,
|
||||
'gid': 981,
|
||||
'shell': '/usr/sbin/nologin',
|
||||
},
|
||||
}
|
||||
# UIDs/GIDs pinned in the system-package range (100-999, per Debian
|
||||
# policy) so file ownership is deterministic across rebuilds and
|
||||
# backup restores. 980/981 are unused elsewhere in this repo.
|
||||
|
||||
# Privileged helpers (mode 0755 root:root). Listed by sudoers as the only
|
||||
# commands left4me can invoke as root NOPASSWD.
|
||||
HELPERS = (
|
||||
'left4me-systemctl',
|
||||
'left4me-journalctl',
|
||||
'left4me-overlay',
|
||||
'left4me-script-sandbox',
|
||||
)
|
||||
|
||||
files = {
|
||||
'/usr/local/sbin/left4me': {
|
||||
'source': 'usr/local/sbin/left4me', # explicit — basename collides with sudoers
|
||||
'mode': '0755',
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
},
|
||||
**{
|
||||
f'/usr/local/libexec/left4me/{h}': {
|
||||
'source': f'usr/local/libexec/left4me/{h}',
|
||||
'mode': '0755',
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
}
|
||||
for h in HELPERS
|
||||
},
|
||||
'/etc/left4me/sandbox-resolv.conf': {
|
||||
'source': 'etc/left4me/sandbox-resolv.conf',
|
||||
'mode': '0644',
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
},
|
||||
'/etc/sudoers.d/left4me': {
|
||||
'source': 'etc/sudoers.d/left4me',
|
||||
'mode': '0440',
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
'test_with': 'visudo -cf {}',
|
||||
},
|
||||
'/etc/sysctl.d/99-left4me.conf': {
|
||||
'source': 'etc/sysctl.d/99-left4me.conf',
|
||||
'mode': '0644',
|
||||
'owner': 'root',
|
||||
'group': 'root',
|
||||
'triggers': [
|
||||
'action:left4me_sysctl_reload',
|
||||
],
|
||||
},
|
||||
'/etc/left4me/host.env': {
|
||||
'source': 'etc/left4me/host.env.mako',
|
||||
'content_type': 'mako',
|
||||
'mode': '0640',
|
||||
'owner': 'root',
|
||||
# group=left4me so the alembic + seed-overlays actions (which run as
|
||||
# `sudo -u left4me sh -c '. /etc/left4me/host.env'`) can source it.
|
||||
# Same pattern as web.env below.
|
||||
'group': 'left4me',
|
||||
'needs': [
|
||||
'group:left4me',
|
||||
],
|
||||
},
|
||||
'/etc/left4me/web.env': {
|
||||
'source': 'etc/left4me/web.env.mako',
|
||||
'content_type': 'mako',
|
||||
'mode': '0640',
|
||||
'owner': 'root',
|
||||
'group': 'left4me',
|
||||
'needs': [
|
||||
'group:left4me',
|
||||
],
|
||||
},
|
||||
}
|
||||
|
||||
actions = {
|
||||
'left4me_sysctl_reload': {
|
||||
'command': 'sysctl --system >/dev/null',
|
||||
'triggered': True,
|
||||
},
|
||||
'left4me_dpkg_add_i386_arch': {
|
||||
# steamcmd is 32-bit and pulls libc6:i386 + lib32z1 from the i386 arch.
|
||||
# apt-get update is part of this action because newly-added foreign
|
||||
# archs need a fresh package list before any :i386 package resolves.
|
||||
'command': 'dpkg --add-architecture i386 && apt-get update',
|
||||
'unless': 'dpkg --print-foreign-architectures | grep -qx i386',
|
||||
'cascade_skip': False,
|
||||
},
|
||||
'left4me_install_steamcmd': {
|
||||
# Steam's tarball is rolling with no published checksum, so we can't
|
||||
# use download: (which requires a hash). Guard with a presence check
|
||||
# on steamcmd.sh — steamcmd self-updates at runtime, so chasing the
|
||||
# tarball version from bw isn't useful.
|
||||
'command': (
|
||||
'sudo -u left4me sh -c "'
|
||||
'cd /opt/left4me/steam && '
|
||||
'curl -fsSL https://media.steampowered.com/installer/steamcmd_linux.tar.gz | '
|
||||
'tar -xz'
|
||||
'"'
|
||||
),
|
||||
'unless': 'test -x /opt/left4me/steam/steamcmd.sh',
|
||||
'cascade_skip': False,
|
||||
'needs': [
|
||||
'directory:/opt/left4me/steam',
|
||||
'pkg_apt:curl',
|
||||
'pkg_apt:libc6_i386', # bw pkg_apt convention: _ → :
|
||||
'pkg_apt:lib32z1',
|
||||
'user:left4me',
|
||||
],
|
||||
},
|
||||
}
|
||||
|
||||
# steamcmd is invoked by absolute path (LEFT4ME_STEAMCMD in host.env),
|
||||
# not via PATH lookup — see l4d2host/cli.py:install. We don't need to put
|
||||
# anything in /usr/local/bin for it.
|
||||
|
||||
git_deploy = {
|
||||
'/opt/left4me/src': {
|
||||
'repo': node.metadata.get('left4me/git_url'),
|
||||
'rev': node.metadata.get('left4me/git_branch'),
|
||||
'triggers': [
|
||||
# On a code-update apply, refresh the DB schema. pip_install
|
||||
# would have triggered alembic in the create_venv path, but on
|
||||
# a normal apply pip_install's `unless` skips (packages still
|
||||
# importable from the previous editable install), and that
|
||||
# would leave alembic_upgrade dormant. Wiring git_deploy →
|
||||
# alembic directly ensures new migrations land whenever new
|
||||
# code lands. alembic upgrade head is idempotent (no-op when
|
||||
# already at head), so this is safe to fire on every code
|
||||
# update; the seed_overlays + service:restart cascade off
|
||||
# alembic also covers picking up the new code in gunicorn.
|
||||
'action:left4me_alembic_upgrade',
|
||||
],
|
||||
# chown_src and pip_install are NOT in triggers — they run every
|
||||
# apply gated by their own `unless` guards, which makes the chain
|
||||
# self-healing after a partial failure. (Items in a triggers list
|
||||
# must be triggered:True, which would lose that property.)
|
||||
},
|
||||
}
|
||||
|
||||
actions['left4me_chown_src'] = {
|
||||
# Runs every apply (cheap — chown -R on a small tree). Self-heals
|
||||
# whenever git_deploy extracts a new tarball as root-owned files.
|
||||
# Not in any triggers list so doesn't need triggered:True.
    'command': 'chown -R left4me:left4me /opt/left4me/src',
    'unless': 'test -z "$(find /opt/left4me/src \\! -user left4me -print -quit 2>/dev/null)"',
    'cascade_skip': False,
    'needs': [
        'git_deploy:/opt/left4me/src',
        'user:left4me',
        'group:left4me',
    ],
}

actions['left4me_create_venv'] = {
    'command': 'sudo -u left4me /usr/bin/python3 -m venv /opt/left4me/.venv',
    'unless': 'test -x /opt/left4me/.venv/bin/python',
    'cascade_skip': False,
    'needs': [
        'directory:/opt/left4me',
        'pkg_apt:python3-venv',
        'user:left4me',
    ],
    'triggers': [
        'action:left4me_pip_upgrade',
    ],
}

actions['left4me_pip_upgrade'] = {
    'command': 'sudo -u left4me /opt/left4me/.venv/bin/python -m pip install --upgrade pip',
    'triggered': True,
    'cascade_skip': False,
    'needs': [
        'pkg_apt:python3-pip',
    ],
    # No triggers — pip_install runs on every apply rather than being
    # chained from here. Keeps pip_upgrade scoped to exactly its purpose.
}

actions['left4me_pip_install'] = {
    # Single pip invocation installs both editable packages from the same
    # checkout. Runs on every apply: pip install -e is fast on no-op, and
    # any gate weaker than "egg-info matches pyproject.toml" can mask
    # script regeneration — e.g. adding [project.scripts] later wouldn't
    # be picked up if `unless` only checked importability.
    'command': 'sudo -u left4me /opt/left4me/.venv/bin/pip install -e /opt/left4me/src/l4d2host -e /opt/left4me/src/l4d2web',
    'cascade_skip': False,
    'needs': [
        'git_deploy:/opt/left4me/src',
        'action:left4me_create_venv',
        'action:left4me_chown_src',
    ],
    'triggers': [
        'action:left4me_alembic_upgrade',
    ],
}

actions['left4me_alembic_upgrade'] = {
    # Mirrors deploy-test-server.sh:239-242. Runs as left4me with both env
    # files sourced; JOB_WORKER_ENABLED=false so a stray worker doesn't race
    # with the migration.
    'command': (
        'sudo -u left4me sh -c "'
        'cd /opt/left4me/src/l4d2web && '
        'set -a && . /etc/left4me/host.env && . /etc/left4me/web.env && set +a && '
        'env JOB_WORKER_ENABLED=false PYTHONPATH=/opt/left4me/src '
        '/opt/left4me/.venv/bin/alembic -c /opt/left4me/src/l4d2web/alembic.ini upgrade head'
        '"'
    ),
    'triggered': True,
    'cascade_skip': False,
    'needs': [
        'action:left4me_pip_install',
        'file:/etc/left4me/host.env',
        'file:/etc/left4me/web.env',
    ],
    'triggers': [
        'action:left4me_seed_overlays',
        'svc_systemd:left4me-web.service:restart',
    ],
}

actions['left4me_seed_overlays'] = {
    # Idempotent: refreshes script bodies in place; existing overlay rows keep their ids.
    'command': (
        'sudo -u left4me sh -c "'
        'set -a && . /etc/left4me/host.env && . /etc/left4me/web.env && set +a && '
        'env JOB_WORKER_ENABLED=false PYTHONPATH=/opt/left4me/src '
        '/opt/left4me/.venv/bin/flask --app l4d2web.app:create_app '
        'seed-script-overlays /opt/left4me/src/examples/script-overlays'
        '"'
    ),
    'triggered': True,
    'cascade_skip': False,
    'needs': [
        'action:left4me_alembic_upgrade',
    ],
}
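Read as a graph, the actions above chain venv creation through pip install into the migration and the seed/restart steps. A minimal sketch of the ordering the dependency resolver has to produce (toy names standing in for the `action:` items; stdlib only, not the bundlewrap scheduler):

```python
# Toy topological sort over the 'needs' edges declared above.
from graphlib import TopologicalSorter

needs = {
    'pip_install': {'create_venv', 'chown_src'},
    'alembic_upgrade': {'pip_install'},
    'seed_overlays': {'alembic_upgrade'},
}
# Predecessors come first, so venv setup always precedes the migration.
order = list(TopologicalSorter(needs).static_order())
print(order)
```

The `triggers` edges (pip_upgrade, the web-service restart) add ordering only when the upstream action actually runs, which is why they are not part of this `needs` graph.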
@@ -1,304 +0,0 @@
assert node.has_bundle('nftables')
assert node.has_bundle('systemd')
assert node.has_bundle('systemd-timers')


defaults = {
    'left4me': {
        # Application-wide defaults; node only overrides if it really needs to.
        'git_url': 'https://git.sublimity.de/cronekorkn/left4me.git',
        'git_branch': 'master',
        'secret_key': repo.vault.random_bytes_as_base64_for(f'{node.name} left4me secret_key', length=32).value,
        'gunicorn_workers': 1,
        'gunicorn_threads': 32,
        'job_worker_threads': 4,
        # Steam Web API key for the live-state panel's GetPlayerSummaries
        # lookups (persona names + avatars). Empty default — nodes override
        # in their own metadata with the actual key. If left empty in prod,
        # the live-state panel still works but falls back to RCON in-game
        # names and placeholder avatars.
        'steam_web_api_key': '',
        # Whole 27000-block: covers Steam's defaults (27015 game, 27005
        # client/RCON) plus headroom for ad-hoc ports without further
        # nftables changes. Mirrored into LEFT4ME_PORT_RANGE_{START,END}
        # by web.env.mako and into the nftables input rule by the
        # nftables_input reactor below.
        'port_range_start': 27000,
        'port_range_end': 27999,
    },
    'apt': {
        'packages': {
            'p7zip-full': {},
            'nftables': {},
            'iproute2': {},
            'curl': {},
            'ca-certificates': {},
            'python3': {},
            'python3-venv': {},
            'python3-pip': {},
            'python3-dev': {},
            # steamcmd is a 32-bit ELF; needs i386 multiarch + these libs.
            # `_` → `:` is bundlewrap's pkg_apt convention for multiarch
            # names (see pkg_apt.py:48).
            'libc6_i386': {  # installs libc6:i386
                'needs': ['action:left4me_dpkg_add_i386_arch'],
            },
            'lib32z1': {
                'needs': ['action:left4me_dpkg_add_i386_arch'],
            },
        },
    },
    'nftables': {
        # Match deploy/files/usr/local/lib/left4me/nft/left4me-mark.nft.
        # Mark srcds UDP egress (uid left4me) with DSCP EF + skb priority 6
        # so CAKE classifies it into the priority tin.
        'output': {
            'meta skuid "left4me" meta l4proto udp ip dscp set ef meta priority set 0006:0000',
            'meta skuid "left4me" meta l4proto udp ip6 dscp set ef meta priority set 0006:0000',
        },
    },
    'systemd': {
        'services': {
            'left4me-web.service': {
                'enabled': True,
                'running': True,
                'needs': [
                    'action:left4me_alembic_upgrade',
                    'file:/etc/left4me/host.env',
                    'file:/etc/left4me/web.env',
                ],
            },
            # Note: left4me-server@.service is a TEMPLATE — instances are
            # started on-demand by the web app via the left4me-systemctl
            # helper. Don't enable/start it from here.
            # The slices are installed (file present) but don't need
            # enable/start — they're activated implicitly when a unit
            # uses Slice=.
        },
    },
    'backup': {
        # Application-owned paths. Set-merged with backup group / node-level paths.
        'paths': {
            '/var/lib/left4me',
            '/etc/left4me',
        },
    },
    'systemd-timers': {
        # Daily re-fetch of Steam Workshop metadata + .vpk downloads for any
        # item whose author published an update. The CLI just inserts a
        # `refresh_workshop_items` job; the web worker picks it up next.
        # Idempotent — a re-fire while a refresh is already queued/running
        # is a no-op (see l4d2web/cli.py:workshop_refresh).
        'left4me-workshop-refresh': {
            'command': '/opt/left4me/.venv/bin/flask --app l4d2web.app:create_app workshop-refresh',
            'when': '*-*-* 04:00:00',
            'persistent': True,
            'user': 'left4me',
            'working_dir': '/opt/left4me/src',
            'environment_files': (
                '/etc/left4me/host.env',
                '/etc/left4me/web.env',
            ),
            'after': {
                'network-online.target',
                'left4me-web.service',
            },
        },
    },
}


@metadata_reactor.provides(
    'nginx/vhosts',
)
def nginx_vhosts(metadata):
    # letsencrypt/domains and monitoring/services for the vhost are auto-
    # populated by bundles/nginx/metadata.py. We just declare check_path:
    # '/health' so the auto-check hits the Flask health endpoint, not '/'.
    domain = metadata.get('left4me/domain')
    return {
        'nginx': {
            'vhosts': {
                domain: {
                    'content': 'nginx/proxy_pass.conf',
                    'context': {
                        'target': 'http://127.0.0.1:8000',
                    },
                    'check_path': '/health',
                },
            },
        },
    }


@metadata_reactor.provides(
    'nftables/input',
)
def nftables_input(metadata):
    port_start = metadata.get('left4me/port_range_start')
    port_end = metadata.get('left4me/port_range_end')
    return {
        'nftables': {
            'input': {
                f'udp dport {port_start}-{port_end} accept',
                f'tcp dport {port_start}-{port_end} accept',
            },
        },
    }


@metadata_reactor.provides(
    'systemd/units',
)
def systemd_units(metadata):
    workers = metadata.get('left4me/gunicorn_workers')
    threads = metadata.get('left4me/gunicorn_threads')

    # cgroup-v2 cpuset. `system_cpus` (set of int CPU ids, declared per
    # node) pins system/user/build; the complement pins l4d2-game. On HT
    # hosts, list both siblings of a physical core so games don't share
    # L1/L2 with system work — pairings via
    # /sys/devices/system/cpu/cpu<n>/topology/thread_siblings_list.
    vm_threads = metadata.get('vm/threads', metadata.get('vm/cores'))
    all_cpus = set(range(vm_threads))
    system_cpus = metadata.get('left4me/system_cpus')
    if not system_cpus <= all_cpus:
        raise Exception(
            f'left4me/system_cpus={sorted(system_cpus)} on {vm_threads}-thread host '
            f'includes CPUs outside [0, {vm_threads})'
        )
    game_cpus = all_cpus - system_cpus
    if not game_cpus:
        raise Exception(
            f'left4me/system_cpus={sorted(system_cpus)} on {vm_threads}-thread host '
            f'leaves no cores for games'
        )
    system_cpus_string = ','.join(str(t) for t in sorted(system_cpus))
    game_cpus_string = ','.join(str(t) for t in sorted(game_cpus))

    # Drop-in for upstream system.slice / user.slice (units we don't own).
    # Same '<parent>.d/<basename>.conf' convention as nginx and autologin.
    cpuset_dropin = {'Slice': {'AllowedCPUs': system_cpus_string}}

    return {
        'systemd': {
            'units': {
                'left4me-web.service': {
                    'Unit': {
                        'Description': 'left4me web application',
                        'After': 'network-online.target',
                        'Wants': 'network-online.target',
                    },
                    'Service': {
                        'Type': 'simple',
                        'User': 'left4me',
                        'Group': 'left4me',
                        'WorkingDirectory': '/opt/left4me/src',
                        'Environment': {
                            'HOME=/var/lib/left4me',
                            'PATH=/opt/left4me/.venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
                        },
                        'EnvironmentFile': (
                            '/etc/left4me/host.env',
                            '/etc/left4me/web.env',
                        ),
                        'ExecStart': (
                            '/opt/left4me/.venv/bin/gunicorn '
                            f'--workers {workers} --threads {threads} '
                            "--bind 127.0.0.1:8000 'l4d2web.app:create_app()'"
                        ),
                        'Restart': 'on-failure',
                        'RestartSec': '3',
                        # NoNewPrivileges intentionally NOT set: workers sudo to the helpers.
                        'ProtectSystem': 'full',
                        'ReadWritePaths': '/var/lib/left4me',
                        'PrivateTmp': 'true',
                    },
                    'Install': {
                        'WantedBy': {'multi-user.target'},
                    },
                },
                'left4me-server@.service': {
                    'Unit': {
                        'Description': 'left4me server instance %i',
                        'After': 'network-online.target',
                        'Wants': 'network-online.target',
                        'StartLimitBurst': '5',
                        'StartLimitIntervalSec': '60s',
                    },
                    'Service': {
                        'Type': 'simple',
                        'User': 'left4me',
                        'Group': 'left4me',
                        'EnvironmentFile': (
                            '/etc/left4me/host.env',
                            '/var/lib/left4me/instances/%i/instance.env',
                        ),
                        'WorkingDirectory': '-/var/lib/left4me/runtime/%i/merged/left4dead2',
                        'ExecStartPre': (
                            '+/usr/bin/nsenter --mount=/proc/1/ns/mnt -- '
                            '/usr/local/libexec/left4me/left4me-overlay mount %i'
                        ),
                        'ExecStart': (
                            '/var/lib/left4me/runtime/%i/merged/srcds_run '
                            '-game left4dead2 +hostport ${L4D2_PORT} $L4D2_ARGS'
                        ),
                        'ExecStopPost': (
                            '+/usr/bin/nsenter --mount=/proc/1/ns/mnt -- '
                            '/usr/local/libexec/left4me/left4me-overlay umount %i'
                        ),
                        'Restart': 'on-failure',
                        'RestartSec': '5',
                        'Slice': 'l4d2-game.slice',
                        'Nice': '-5',
                        'IOSchedulingClass': 'best-effort',
                        'IOSchedulingPriority': '4',
                        'OOMScoreAdjust': '-200',
                        'MemoryHigh': '1.5G',
                        'MemoryMax': '2G',
                        'TasksMax': '256',
                        'LimitNOFILE': '65536',
                        'KillSignal': 'SIGINT',
                        'TimeoutStopSec': '15s',
                        'LogRateLimitIntervalSec': '0',
                        'NoNewPrivileges': 'true',
                        'PrivateTmp': 'true',
                        'PrivateDevices': 'true',
                        'ProtectHome': 'true',
                        'ProtectSystem': 'strict',
                        'ReadOnlyPaths': '/var/lib/left4me/installation /var/lib/left4me/overlays',
                        'ReadWritePaths': '/var/lib/left4me/runtime/%i',
                        'RestrictSUIDSGID': 'true',
                        'LockPersonality': 'true',
                    },
                    'Install': {
                        'WantedBy': {'multi-user.target'},
                    },
                },
                'l4d2-game.slice': {
                    'Unit': {
                        'Description': 'left4me game-server slice',
                        'Before': 'slices.target',
                    },
                    'Slice': {
                        'CPUWeight': '1000',
                        'IOWeight': '1000',
                        'AllowedCPUs': game_cpus_string,
                    },
                },
                'l4d2-build.slice': {
                    'Unit': {
                        'Description': 'left4me script-sandbox build slice',
                        'Before': 'slices.target',
                    },
                    'Slice': {
                        'CPUWeight': '10',
                        'IOWeight': '10',
                        'AllowedCPUs': system_cpus_string,
                    },
                },
                'system.slice.d/99-left4me-cpuset.conf': cpuset_dropin,
                'user.slice.d/99-left4me-cpuset.conf': cpuset_dropin,
            },
        },
    }
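The cpuset partition in `systemd_units` is easy to check in isolation; a minimal sketch with made-up node values (`vm_threads=8` and `system_cpus={0, 4}` standing in for the metadata):

```python
# Same partition logic as the reactor, outside bundlewrap.
vm_threads = 8          # stand-in for vm/threads
system_cpus = {0, 4}    # stand-in for left4me/system_cpus (HT siblings of core 0)
all_cpus = set(range(vm_threads))
assert system_cpus <= all_cpus, 'system_cpus outside host range'
game_cpus = all_cpus - system_cpus
assert game_cpus, 'no CPUs left for games'
# AllowedCPUs value for l4d2-game.slice:
print(','.join(str(c) for c in sorted(game_cpus)))  # -> 1,2,3,5,6,7
```

Because the system/user drop-ins get the complement, every CPU belongs to exactly one of the two pools and game servers never share a core with system work.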
@@ -1,60 +1,9 @@
# letsencrypt

Issues and renews Let's Encrypt certs via [dehydrated][upstream] with
DNS-01 against the in-house bind-acme server.

[upstream]: https://github.com/dehydrated-io/dehydrated/wiki/example-dns-01-nsupdate-script

## First-apply behaviour

Immediately after `bw apply <node>`, nginx serves a **self-signed
cert** for each declared domain — generated by
`/etc/dehydrated/letsencrypt-ensure-some-certificate` so nginx has
something to start with. The real Let's Encrypt cert arrives at most
24h later when the systemd timer fires
(`/usr/bin/dehydrated --cron --accept-terms --challenge dns-01`). To
shortcut the wait:

```sh
ssh <node> 'sudo /usr/bin/dehydrated --cron --accept-terms --challenge dns-01'
ssh <node> 'sudo systemctl reload nginx'
```

## DNS-01 prerequisites

`hook.sh` does `nsupdate` against the bind-acme server (referenced
by `letsencrypt/acme_node`). For the challenge to succeed:

1. The acme node must be in the same metadata graph (so
   `bw metadata <node> -k letsencrypt/acme_node` resolves).
2. **All NS servers** for the validated domain must serve the
   `_acme-challenge.<domain>` CNAME — Let's Encrypt validates from
   primary AND secondary geographic regions; both authoritative
   servers must agree. If a secondary NS is also a bw-managed node,
   `bw apply` it after adding the domain (see e.g. `ovh.secondary`).
3. The bind-acme server must be reachable with its TSIG key. `hook.sh`
   is rendered with the bind-acme server's `network/internal/ipv4` —
   for clients outside that LAN, the route must exist (typically via
   wireguard `s2s` peer membership).

## Negative-cache penalty

If the first DNS-01 attempt fails (e.g. zone not yet applied to the
secondary NS), Let's Encrypt's resolvers cache NXDOMAIN for the SOA's
negative TTL (often 900s = 15 min). Subsequent attempts during that
window also fail and refresh the cache. Combined with LE's rate limit
of **5 failed authorisations per domain per hour**, recovery requires
you to **stop retrying** for ~15 minutes after fixing the DNS, then
make at most one attempt.

## nsupdate sample

For interactive testing of the bind-acme TSIG path (TSIG secret
redacted):
https://github.com/dehydrated-io/dehydrated/wiki/example-dns-01-nsupdate-script

```sh
printf "server 127.0.0.1
zone acme.resolver.name.
update add _acme-challenge.ckn.li.acme.resolver.name. 600 IN TXT \"hello\"
send
" | nsupdate -y hmac-sha512:acme:XXXXXX
```
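The ~15 min figure above comes straight from the zone's SOA: per RFC 2308, NXDOMAIN answers are cached for min(SOA TTL, SOA "minimum" field). A small sketch that pulls the minimum field out of a captured `dig +short SOA` answer (sample record shown, not a live lookup):

```python
# The last field of an SOA RDATA is 'minimum', the negative-caching TTL.
# Sample answer in `dig +short` format (hypothetical zone data):
soa = 'ns1.example.org. hostmaster.example.org. 2024010101 7200 900 1209600 900'
negative_ttl = int(soa.split()[-1])
print(f'{negative_ttl}s ({negative_ttl // 60} min) until a failed DNS-01 ages out')
```

Checking this value before the first issuance attempt tells you exactly how long to back off after fixing a broken zone.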
@@ -31,12 +31,6 @@ deploy_cert() {
% for domain, conf in sorted(domains.items()):
<% if not conf: continue %>\
        ${domain})
% if conf.get('scp', None):
            scp "$KEYFILE" "${conf['scp']}/${conf.get('privkey_name', 'privkey.pem')}"
            scp "$CERTFILE" "${conf['scp']}/${conf.get('cert_name', 'cert.pem')}"
            scp "$FULLCHAINFILE" "${conf['scp']}/${conf.get('fullchain_name', 'fullchain.pem')}"
            scp "$CHAINFILE" "${conf['scp']}/${conf.get('chain_name', 'chain.pem')}"
% endif
% if conf.get('location', None):
            cat "$KEYFILE" > "${conf['location']}/${conf.get('privkey_name', 'privkey.pem')}"
            cat "$CERTFILE" > "${conf['location']}/${conf.get('cert_name', 'cert.pem')}"
@@ -42,7 +42,7 @@ files = {
}

actions['letsencrypt_update_certificates'] = {
-    'command': 'systemctl start letsencrypt.service',
+    'command': 'dehydrated --cron --accept-terms --challenge dns-01',
    'triggered': True,
    'skip': delegated,
    'needs': {
@@ -2,7 +2,7 @@ defaults = {
    'apt': {
        'packages': {
            'dehydrated': {},
-            'bind9-dnsutils': {},
+            'dnsutils': {},
        },
    },
    'letsencrypt': {
@@ -12,8 +12,9 @@ def generate_sysctl_key_value_pairs_from_json(json_data, parents=[]):

key_value_pairs = generate_sysctl_key_value_pairs_from_json(node.metadata.get('sysctl'))


files= {
-    '/etc/sysctl.d/managed.conf': {
+    '/etc/sysctl.conf': {
        'content': '\n'.join(
            sorted(
                f"{'.'.join(path)}={value}"

@@ -24,9 +25,6 @@ files= {
            'svc_systemd:systemd-sysctl.service:restart',
        ],
    },
-    '/etc/modules-load.d/managed.conf': {
-        'content': '\n'.join(sorted(node.metadata.get('modules-load'))),
-    }
}

svc_systemd = {
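The flattener that feeds the `content` expression is defined earlier in the same items.py; a plausible reimplementation (an assumption, since the function body is not shown in this hunk) that yields the `(path, value)` pairs the f-string turns into `a.b.c=value` lines:

```python
# Assumed behaviour of generate_sysctl_key_value_pairs_from_json:
# walk nested dicts, yielding (path-tuple, leaf-value) pairs.
def flatten(tree, parents=()):
    for key, value in tree.items():
        if isinstance(value, dict):
            yield from flatten(value, parents + (key,))
        else:
            yield parents + (key,), value

pairs = flatten({'net': {'ipv4': {'icmp_ratelimit': '100'}}})
lines = sorted(f"{'.'.join(path)}={value}" for path, value in pairs)
print('\n'.join(lines))  # -> net.ipv4.icmp_ratelimit=100
```

Sorting the rendered lines (as the bundle does) keeps the generated sysctl file stable across applies regardless of metadata-merge order.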
@@ -1,6 +1,3 @@
defaults = {
-    'sysctl': {
-        'net.ipv4.icmp_ratelimit': '100',
-    },
-    'modules-load': set(),
+    'sysctl': {},
}
@@ -7,7 +7,12 @@ defaults = {
    'locale': {
        'default': ('en_US.UTF-8', 'UTF-8'),
        'installed': {
            ('de_AT.UTF-8', 'UTF-8'),
            ('de_CH.UTF-8', 'UTF-8'),
            ('de_DE.UTF-8', 'UTF-8'),
            ('de_LU.UTF-8', 'UTF-8'),
            ('en_CA.UTF-8', 'UTF-8'),
            ('en_GB.UTF-8', 'UTF-8'),
            ('en_US.UTF-8', 'UTF-8'),
        },
    },
@@ -2,5 +2,5 @@

cd "$OLDPWD"

-export BW_ITEM_WORKERS=$(expr "$(sysctl -n hw.logicalcpu)" '*' 12 '/' 10)
+export BW_ITEM_WORKERS=$(expr "$(nproc)" '*' 12 '/' 10)
export BW_NODE_WORKERS=$(expr 320 '/' "$BW_ITEM_WORKERS")
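The two `expr` lines are plain integer arithmetic: oversubscribe item workers by 20% of the logical core count, then size node workers so item workers times node workers stays near 320. Worked in Python for a hypothetical 8-core machine:

```python
cores = 8                           # what $(nproc) would report on this machine
item_workers = cores * 12 // 10     # 20% oversubscription, integer division -> 9
node_workers = 320 // item_workers  # cap total concurrent workers near 320 -> 35
print(item_workers, node_workers)   # -> 9 35
```

`expr` truncates like `//` does, so the shell and Python versions agree for any core count.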
@@ -2,5 +2,7 @@

cd "$OLDPWD"

-PATH_add "/opt/homebrew/opt/gnu-sed/libexec/gnubin"
-PATH_add "/opt/homebrew/opt/grep/libexec/gnubin"
+GNU_PATH="$HOME/.local/gnu_bin"
+mkdir -p "$GNU_PATH"
+test -f "$GNU_PATH/sed" || ln -s "$(which gsed)" "$GNU_PATH/sed"
+PATH_add "$GNU_PATH"
@@ -2,8 +2,6 @@

cd "$OLDPWD"

pyenv install --skip-existing

if test -f .venv/bin/python && test "$(realpath .venv/bin/python)" != "$(realpath "$(pyenv which python)")"
then
    echo "rebuilding venv for new python version"
@@ -1,26 +0,0 @@
# Mailman

- django admin under /admin

## Testmail

`echo export REST_API_PASS=$(bw metadata mseibert.mailman -k mailman/api_password | jq -r .mailman.api_password)`

```sh
curl -s -o /dev/null \
  -w "Status: %{http_code}\nTime: %{time_total}s\n" \
  -u restadmin:$REST_API_PASS \
  -H "Content-Type: application/json" \
  -X POST http://localhost:8001/3.1/queues/in \
  -d "{
    \"list_id\": \"testlist-2.mailman.ckn.li\",
    \"text\": \"From: i@ckn.li\nTo: testlist-2@mailman.ckn.li\nSubject: Curl Test $(date '+%Y-%m-%d %H:%M:%S')\n\nThis message was sent at $(date '+%Y-%m-%d %H:%M:%S').\"
  }"
```

## Log locations

`tail -f /var/log/mailman3/*.log`

`journalctl -f | grep postfix/`

`mailq | head -20`
@@ -1,22 +0,0 @@
# This is the mailman extension configuration file to enable HyperKitty as an
# archiver. Remember to add the following lines in the mailman.cfg file:
#
# [archiver.hyperkitty]
# class: mailman_hyperkitty.Archiver
# enable: yes
# configuration: /etc/mailman3/mailman-hyperkitty.cfg
#

[general]

# This is your HyperKitty installation, preferably on the localhost. This
# address will be used by Mailman to forward incoming emails to HyperKitty
# for archiving. It does not need to be publicly available, in fact it's
# better if it is not.
# However, if your Mailman installation is accessed via HTTPS, the URL needs
# to match your SSL certificate (e.g. https://lists.example.com/hyperkitty).
base_url: http://${hostname}/mailman3/hyperkitty/

# The shared api_key, must be identical except for quoting to the value of
# MAILMAN_ARCHIVER_KEY in HyperKitty's settings.
api_key: ${archiver_key}
@@ -1,190 +0,0 @@
ACCOUNT_EMAIL_VERIFICATION='none'

# This file is imported by the Mailman Suite. It is used to override
# the default settings from /usr/share/mailman3-web/settings.py.

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '${secret_key}'

ADMINS = (
    ('Mailman Suite Admin', 'root@localhost'),
)

# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.8/ref/settings/#allowed-hosts
# Set to '*' per default in the Debian package to allow all hostnames. Mailman3
# is meant to run behind a webserver reverse proxy anyway.
ALLOWED_HOSTS = [
    '${hostname}',
]

# Mailman API credentials
MAILMAN_REST_API_URL = 'http://localhost:8001'
MAILMAN_REST_API_USER = 'restadmin'
MAILMAN_REST_API_PASS = '${api_password}'
MAILMAN_ARCHIVER_KEY = '${archiver_key}'
MAILMAN_ARCHIVER_FROM = ('127.0.0.1', '::1')

# Application definition

INSTALLED_APPS = (
    'hyperkitty',
    'postorius',
    'django_mailman3',
    # Uncomment the next line to enable the admin:
    'django.contrib.admin',
    # Uncomment the next line to enable admin documentation:
    # 'django.contrib.admindocs',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'django_gravatar',
    'compressor',
    'haystack',
    'django_extensions',
    'django_q',
    'allauth',
    'allauth.account',
    'allauth.socialaccount',
    'django_mailman3.lib.auth.fedora',
    #'allauth.socialaccount.providers.openid',
    #'allauth.socialaccount.providers.github',
    #'allauth.socialaccount.providers.gitlab',
    #'allauth.socialaccount.providers.google',
    #'allauth.socialaccount.providers.facebook',
    #'allauth.socialaccount.providers.twitter',
    #'allauth.socialaccount.providers.stackexchange',
)


# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases

DATABASES = {
    'default': {
        # Use 'sqlite3', 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
        #'ENGINE': 'django.db.backends.sqlite3',
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        #'ENGINE': 'django.db.backends.mysql',
        # DB name or path to database file if using sqlite3.
        #'NAME': '/var/lib/mailman3/web/mailman3web.db',
        'NAME': 'mailman',
        # The following settings are not used with sqlite3:
        'USER': 'mailman',
        'PASSWORD': '${db_password}',
        # HOST: empty for localhost through domain sockets or '127.0.0.1' for
        # localhost through TCP.
        'HOST': '127.0.0.1',
        # PORT: set to empty string for default.
        'PORT': '5432',
        # OPTIONS: Extra parameters to use when connecting to the database.
        'OPTIONS': {
            # Set sql_mode to 'STRICT_TRANS_TABLES' for MySQL. See
            # https://docs.djangoproject.com/en/1.11/ref/
            # databases/#setting-sql-mode
            #'init_command': "SET sql_mode='STRICT_TRANS_TABLES'",
        },
    }
}


# If you're behind a proxy, use the X-Forwarded-Host header
# See https://docs.djangoproject.com/en/1.8/ref/settings/#use-x-forwarded-host
USE_X_FORWARDED_HOST = True

# And if your proxy does your SSL encoding for you, set SECURE_PROXY_SSL_HEADER
# https://docs.djangoproject.com/en/1.8/ref/settings/#secure-proxy-ssl-header
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
# SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_SCHEME', 'https')

# Other security settings
# SECURE_SSL_REDIRECT = True
# If you set SECURE_SSL_REDIRECT to True, make sure the SECURE_REDIRECT_EXEMPT
# contains at least this line:
# SECURE_REDIRECT_EXEMPT = [
#     "archives/api/mailman/.*",  # Request from Mailman.
# ]
# SESSION_COOKIE_SECURE = True
# SECURE_CONTENT_TYPE_NOSNIFF = True
# SECURE_BROWSER_XSS_FILTER = True
# CSRF_COOKIE_SECURE = True
# CSRF_COOKIE_HTTPONLY = True
# X_FRAME_OPTIONS = 'DENY'


# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/

LANGUAGE_CODE = 'en-us'

TIME_ZONE = 'UTC'

USE_I18N = True
USE_L10N = True
USE_TZ = True


# Set default domain for email addresses.
EMAILNAME = 'localhost.local'

# If you enable internal authentication, this is the address that the emails
# will appear to be coming from. Make sure you set a valid domain name,
# otherwise the emails may get rejected.
# https://docs.djangoproject.com/en/1.8/ref/settings/#default-from-email
# DEFAULT_FROM_EMAIL = "mailing-lists@you-domain.org"
DEFAULT_FROM_EMAIL = 'postorius@{}'.format(EMAILNAME)

# If you enable email reporting for error messages, this is where those emails
# will appear to be coming from. Make sure you set a valid domain name,
# otherwise the emails may get rejected.
# https://docs.djangoproject.com/en/1.8/ref/settings/#std:setting-SERVER_EMAIL
# SERVER_EMAIL = 'root@your-domain.org'
SERVER_EMAIL = 'root@{}'.format(EMAILNAME)


# Django Allauth
ACCOUNT_DEFAULT_HTTP_PROTOCOL = "https"


#
# Social auth
#
SOCIALACCOUNT_PROVIDERS = {
    #'openid': {
    #    'SERVERS': [
    #        dict(id='yahoo',
    #             name='Yahoo',
    #             openid_url='http://me.yahoo.com'),
    #    ],
    #},
    #'google': {
    #    'SCOPE': ['profile', 'email'],
    #    'AUTH_PARAMS': {'access_type': 'online'},
    #},
    #'facebook': {
    #    'METHOD': 'oauth2',
    #    'SCOPE': ['email'],
    #    'FIELDS': [
    #        'email',
    #        'name',
    #        'first_name',
    #        'last_name',
    #        'locale',
    #        'timezone',
    #    ],
    #    'VERSION': 'v2.4',
    #},
}

# On a production setup, setting COMPRESS_OFFLINE to True will bring a
# significant performance improvement, as CSS files will not need to be
# recompiled on each request. It means running an additional "compress"
# management command after each code upgrade.
# http://django-compressor.readthedocs.io/en/latest/usage/#offline-compression
COMPRESS_OFFLINE = True

POSTORIUS_TEMPLATE_BASE_URL = 'http://${hostname}/mailman3/'
@ -1,271 +0,0 @@
|
|||
# Copyright (C) 2008-2017 by the Free Software Foundation, Inc.
|
||||
#
|
||||
# This file is part of GNU Mailman.
|
||||
#
|
||||
# GNU Mailman is free software: you can redistribute it and/or modify it under
|
||||
# the terms of the GNU General Public License as published by the Free
|
||||
# Software Foundation, either version 3 of the License, or (at your option)
|
||||
# any later version.
|
||||
#
|
||||
# GNU Mailman is distributed in the hope that it will be useful, but WITHOUT
|
||||
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
# more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License along with
|
||||
# GNU Mailman. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
# This file contains the Debian configuration for mailman. It uses ini-style
|
||||
# formats under the lazr.config regime to define all system configuration
|
||||
# options. See <https://launchpad.net/lazr.config> for details.
|
||||
|
||||
|
||||
[mailman]
|
||||
# This address is the "site owner" address. Certain messages which must be
|
||||
# delivered to a human, but which can't be delivered to a list owner (e.g. a
|
||||
# bounce from a list owner), will be sent to this address. It should point to
|
||||
# a human.
|
||||
site_owner: ${site_owner_email}
|
||||
|
||||
# This is the local-part of an email address used in the From field whenever a
|
||||
# message comes from some entity to which there is no natural reply recipient.
|
||||
# Mailman will append '@' and the host name of the list involved. This
|
||||
# address must not bounce and it must not point to a Mailman process.
|
||||
noreply_address: noreply
|
||||
|
||||
# The default language for this server.
|
||||
default_language: de
|
||||
|
||||
# Membership tests for posting purposes are usually performed by looking at a
|
||||
# set of headers, passing the test if any of their values match a member of
|
||||
# the list. Headers are checked in the order given in this variable. The
|
||||
# value From_ means to use the envelope sender. Field names are case
|
||||
# insensitive. This is a space separate list of headers.
|
||||
sender_headers: from from_ reply-to sender
|
||||
|
||||
# Mail command processor will ignore mail command lines after designated max.
|
||||
email_commands_max_lines: 10
|
||||
|
||||
# Default length of time a pending request is live before it is evicted from
|
||||
# the pending database.
pending_request_life: 3d

# How long should files be saved before they are evicted from the cache?
cache_life: 7d

# A callable to run with no arguments early in the initialization process.
# This runs before database initialization.
pre_hook:

# A callable to run with no arguments late in the initialization process.
# This runs after adapters are initialized.
post_hook:

# Which paths.* file system layout to use.
# You should not change this variable.
layout: debian

# Can MIME filtered messages be preserved by list owners?
filtered_messages_are_preservable: no

# How should text/html parts be converted to text/plain when the mailing list
# is set to convert HTML to plaintext? This names a command to be called,
# where the substitution variable $filename is filled in by Mailman, and
# contains the path to the temporary file that the command should read from.
# The command should print the converted text to stdout.
html_to_plain_text_command: /usr/bin/lynx -dump $filename

# Specify what characters are allowed in list names. Characters outside of
# the class [-_.+=!$*{}~0-9a-z] matched case insensitively are never allowed,
# but this specifies a subset as the only allowable characters. This must be
# a valid character class regexp or the effect on list creation is
# unpredictable.
listname_chars: [-_.0-9a-z]


[shell]
# `mailman shell` (also `withlist`) gives you an interactive prompt that you
# can use to interact with an initialized and configured Mailman system. Use
# --help for more information. This section allows you to configure certain
# aspects of this interactive shell.

# Customize the interpreter prompt.
prompt: >>>

# Banner to show on startup.
banner: Welcome to the GNU Mailman shell

# Use IPython as the shell, which must be found on the system. Valid values
# are `no`, `yes`, and `debug` where the latter is equivalent to `yes` except
# that any import errors will be displayed to stderr.
use_ipython: no

# Set this to allow for command line history if readline is available. This
# can be as simple as $var_dir/history.py to put the file in the var directory.
history_file:

[paths.debian]
# Important directories for Mailman operation. These are defined here so that
# different layouts can be supported. For example, a developer layout would
# be different from a FHS layout. Most paths are based off the var_dir, and
# often just setting that will do the right thing for all the other paths.
# You might also have to set spool_dir though.
#
# Substitutions are allowed, but must be of the form $var where 'var' names a
# configuration variable in the paths.* section. Substitutions are expanded
# recursively until no more $-variables are present. Beware of infinite
# expansion loops!
#
# This is the root of the directory structure that Mailman will use to store
# its run-time data.
var_dir: /var/lib/mailman3
# This is where the Mailman queue files directories will be created.
queue_dir: $var_dir/queue
# This is the directory containing the Mailman 'runner' and 'master' commands;
# if set to the string '$argv', it will be taken as the directory containing
# the 'mailman' command.
bin_dir: /usr/lib/mailman3/bin
# All list-specific data.
list_data_dir: $var_dir/lists
# Directory where log files go.
log_dir: /var/log/mailman3
# Directory for system-wide locks.
lock_dir: $var_dir/locks
# Directory for system-wide data.
data_dir: $var_dir/data
# Cache files.
cache_dir: $var_dir/cache
# Directory for configuration files and such.
etc_dir: /etc/mailman3
# Directory containing Mailman plugins.
ext_dir: $var_dir/ext
# Directory where the default IMessageStore puts its messages.
messages_dir: $var_dir/messages
# Directory for archive backends to store their messages in. Archivers should
# create a subdirectory in here to store their files.
archive_dir: $var_dir/archives
# Root directory for site-specific template override files.
template_dir: $var_dir/templates
# There are also a number of paths to specific file locations that can be
# defined. For these, the directory containing the file must already exist,
# or be one of the directories created by Mailman as per above.
#
# This is where the PID file for the master runner is stored.
pid_file: /run/mailman3/master.pid
# Lock file.
lock_file: $lock_dir/master.lck

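The recursive `$var` substitution described in the comments above can be sketched roughly as follows. This is a minimal illustration of the documented expansion behavior, not Mailman's actual implementation; the `expand` helper and the sample dictionary are hypothetical, using values from the `[paths.debian]` section:

```python
import re

def expand(value, section, max_depth=20):
    """Recursively expand $var references against a paths.* section dict."""
    for _ in range(max_depth):
        new = re.sub(r"\$(\w+)", lambda m: section[m.group(1)], value)
        if new == value:
            # No more $-variables were substituted; expansion is complete.
            return new
        value = new
    raise ValueError("possible infinite expansion loop: %r" % value)

# Hypothetical excerpt of the [paths.debian] section above
paths = {
    "var_dir": "/var/lib/mailman3",
    "lock_dir": "$var_dir/locks",
    "lock_file": "$lock_dir/master.lck",
}
print(expand(paths["lock_file"], paths))
# /var/lib/mailman3/locks/master.lck
```

Note how `lock_file` needs two passes: `$lock_dir` first expands to `$var_dir/locks`, which in turn expands to the absolute path. A self-referential value like `x: $x/sub` would loop forever, hence the depth limit (and the "Beware of infinite expansion loops!" warning above).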
[database]
# The class implementing the IDatabase.
class: mailman.database.sqlite.SQLiteDatabase
#class: mailman.database.mysql.MySQLDatabase
#class: mailman.database.postgresql.PostgreSQLDatabase

# Use this to set the Storm database engine URL. You generally have one
# primary database connection for all of Mailman. List data and most rosters
# will store their data in this database, although external rosters may access
# other databases in their own way. This string supports standard
# 'configuration' substitutions.
url: sqlite:///$DATA_DIR/mailman.db
#url: mysql+pymysql://mailman3:mmpass@localhost/mailman3?charset=utf8&use_unicode=1
#url: postgresql://mailman3:mmpass@localhost/mailman3

debug: no

[logging.debian]
# This defines various log settings. The options available are:
#
# - level     -- Overrides the default level; this may be any of the
#                standard Python logging levels, case insensitive.
# - format    -- Overrides the default format string
# - datefmt   -- Overrides the default date format string
# - path      -- Overrides the default logger path. This may be a relative
#                path name, in which case it is relative to Mailman's LOG_DIR,
#                or it may be an absolute path name. You cannot change the
#                handler class that will be used.
# - propagate -- Boolean specifying whether to propagate log message from this
#                logger to the root "mailman" logger. You cannot override
#                settings for the root logger.
#
# In this section, you can define defaults for all loggers, which will be
# prefixed by 'mailman.'. Use subsections to override settings for specific
# loggers. The names of the available loggers are:
#
# - archiver     -- All archiver output
# - bounce       -- All bounce processing logs go here
# - config       -- Configuration issues
# - database     -- Database logging (SQLAlchemy and Alembic)
# - debug        -- Only used for development
# - error        -- All exceptions go to this log
# - fromusenet   -- Information related to the Usenet to Mailman gateway
# - http         -- Internal wsgi-based web interface
# - locks        -- Lock state changes
# - mischief     -- Various types of hostile activity
# - runner       -- Runner process start/stops
# - smtp         -- Successful SMTP activity
# - smtp-failure -- Unsuccessful SMTP activity
# - subscribe    -- Information about leaves/joins
# - vette        -- Message vetting information
format: %(asctime)s (%(process)d) %(message)s
datefmt: %b %d %H:%M:%S %Y
propagate: no
level: info
path: mailman.log

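The `format` and `datefmt` values above are standard Python `logging` format strings. A quick, standalone sketch of the line they produce (run outside Mailman; the logger name and message are made up):

```python
import logging

# The same format/datefmt strings as in the [logging.debian] section above.
formatter = logging.Formatter(
    fmt="%(asctime)s (%(process)d) %(message)s",
    datefmt="%b %d %H:%M:%S %Y",
)
# Build a synthetic record the way a "mailman.smtp" logger would.
record = logging.LogRecord(
    "mailman.smtp", logging.INFO, __file__, 0, "message sent", None, None,
)
print(formatter.format(record))
# e.g. "Jun 03 12:00:00 2024 (1234) message sent"
```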
[webservice]
# The hostname at which admin web service resources are exposed.
hostname: localhost

# The port at which the admin web service resources are exposed.
port: 8001

# Whether or not requests to the web service are secured through SSL.
use_https: no

# Whether or not to show tracebacks in an HTTP response for a request that
# raised an exception.
show_tracebacks: yes

# The API version number for the current (highest) API.
api_version: 3.1

# The administrative username.
admin_user: restadmin

# The administrative password.
admin_pass: ${api_password}

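Mailman core serves its admin REST API on the hostname/port above, protected by HTTP basic auth with these credentials. A hedged sketch of constructing such a request with the standard library (the password here is a placeholder, not the templated secret; nothing is actually sent):

```python
import base64
from urllib.request import Request

admin_user = "restadmin"
admin_pass = "CHANGE-ME"  # placeholder; real value comes from ${api_password}

# Basic auth is just base64("user:pass") in the Authorization header.
token = base64.b64encode(f"{admin_user}:{admin_pass}".encode()).decode()

req = Request(
    "http://localhost:8001/3.1/lists",  # hostname, port, api_version from above
    headers={"Authorization": f"Basic {token}"},
)
print(req.get_header("Authorization"))
```

Issuing the request with `urllib.request.urlopen(req)` against a running core would return the JSON list roster; with `use_https: no` the service should only ever be bound to localhost.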
[mta]
# The class defining the interface to the incoming mail transport agent.
#incoming: mailman.mta.exim4.LMTP
incoming: mailman.mta.postfix.LMTP

# The callable implementing delivery to the outgoing mail transport agent.
# This must accept three arguments, the mailing list, the message, and the
# message metadata dictionary.
outgoing: mailman.mta.deliver.deliver

# How to connect to the outgoing MTA. If smtp_user and smtp_pass are given,
# then Mailman will attempt to log into the MTA when making a new connection.
smtp_host: 127.0.0.1
smtp_port: 25
smtp_user:
smtp_pass:

# Where the LMTP server listens for connections. Use 127.0.0.1 instead of
# localhost for Postfix integration, because Postfix only consults DNS
# (e.g. not /etc/hosts).
lmtp_host: 127.0.0.1
lmtp_port: 8024

# Where can we find the mail server specific configuration file? The path can
# be either a file system path or a Python import path. If the value starts
# with python: then it is a Python import path, otherwise it is a file system
# path. File system paths must be absolute since no guarantees are made about
# the current working directory. Python paths should not include the trailing
# .cfg, which the file must end with.
#configuration: python:mailman.config.exim4
configuration: python:mailman.config.postfix
@@ -1,53 +0,0 @@
# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
# fresh installs.
compatibility_level = 3.6

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may

smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = <%text>btree:${data_directory}/smtp_scache</%text>

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = ${hostname}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, localhost, localhost.localdomain, ${hostname}
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
#inet_protocols = all
inet_protocols = ipv4

unknown_local_recipient_reject_code = 550
owner_request_special = no

transport_maps =
    hash:/var/lib/mailman3/data/postfix_lmtp
local_recipient_maps =
    hash:/var/lib/mailman3/data/postfix_lmtp
relay_domains =
    hash:/var/lib/mailman3/data/postfix_domains
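Mailman's postfix integration writes the `postfix_lmtp` and `postfix_domains` source files referenced above as plain "key value" lines, which postmap compiles into the `hash:` databases Postfix consults. A rough sketch of that format as a parser (the sample addresses are invented; the LMTP target mirrors the `lmtp_host`/`lmtp_port` settings in mailman.cfg):

```python
def parse_postfix_map(text):
    """Parse a postfix key/value map source (comments and blanks skipped)."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Key and value are separated by the first run of whitespace.
        key, _, value = line.partition(" ")
        entries[key] = value.strip()
    return entries

sample = """\
# Hypothetical excerpt of /var/lib/mailman3/data/postfix_lmtp
announce@lists.example.org lmtp:[127.0.0.1]:8024
announce-request@lists.example.org lmtp:[127.0.0.1]:8024
"""
print(parse_postfix_map(sample))
```

On the live system you would query the compiled database directly instead, e.g. `postmap -q announce@lists.example.org hash:/var/lib/mailman3/data/postfix_lmtp` (assuming such a list exists).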