WordPress – Operations Guide
WordPress 6.9 on K3s (AlmaLinux 9) · Last updated: May 2026
Overview
WordPress runs on a single-node K3s cluster on AlmaLinux 9.
The entire infrastructure is built and managed with an Ansible playbook
(blog.yml).
The playbook is idempotent – it can be re-run at any time
without modifying data or unnecessarily interrupting services.
All persistent data lives on the host in HostPath volumes (/srv/wordpress-www and
/srv/wordpress-db).
There is no high availability. Updates cause a brief downtime
(typically: 30–60 seconds).
Architecture
Overall Overview
Why This Approach?
| Component | Where does it run? | Rationale |
|---|---|---|
| WordPress PHP-FPM + nginx | K3s Pod Container | Simple updates by swapping images; no php-fpm on the host |
| MariaDB | K3s Pod Container | Unlike Nextcloud, the DB runs in a container here; data on HostPath |
| nginx-ingress (F5) | K3s DaemonSet hostNetwork | Binds directly to port 80/443 on the host; no external load balancer needed |
| cert-manager | K3s Container | Automatic Let's Encrypt certificates; replaces certbot |
| WP-Cron | K3s CronJob hostNetwork | Replaces WordPress's internal pseudo-cron; runs reliably every 5 minutes |
WordPress Pod in Detail
Container 1: wordpress-fpm
The official wordpress:6.9-fpm image. Runs PHP-FPM on port 9000.
On first start, the Docker entrypoint automatically installs WordPress based on
the environment variables (DB host, DB name, DB user, DB password).
WordPress files are placed in /var/www/html on the shared
HostPath volume.
Container 2: nginx
nginx sidecar (nginx:1.30-alpine) on port 80. Serves
static requests directly from the shared volume; PHP requests are forwarded via
FastCGI to port 9000. TLS is already terminated by nginx-ingress (F5)
– this nginx only speaks plain HTTP.
Because both containers share the pod's network namespace, nginx reaches PHP-FPM at
127.0.0.1:9000 without needing a separate Kubernetes Service. In addition, both
containers read WordPress files from the same volume – nginx for static assets,
PHP-FPM for PHP scripts.
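The sidecar layout can be sketched as a deployment fragment. This is an illustrative sketch only – field names such as `wordpress-files` are assumptions; the real manifest is generated from wordpress-deployment.yml.j2:

```yaml
# Illustrative sketch – the real manifest comes from wordpress-deployment.yml.j2.
spec:
  volumes:
    - name: wordpress-files        # assumed volume name
      hostPath:
        path: /srv/wordpress-www
        type: Directory
  containers:
    - name: wordpress-fpm
      image: wordpress:6.9-fpm     # PHP-FPM listens on port 9000
      volumeMounts:
        - name: wordpress-files
          mountPath: /var/www/html
    - name: nginx
      image: nginx:1.30-alpine     # serves port 80, forwards PHP to 127.0.0.1:9000
      volumeMounts:
        - name: wordpress-files
          mountPath: /var/www/html
```

Both containers mount the same HostPath volume, which is why static assets and PHP scripts stay consistent between them.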
Storage & Data
| Path on Host | Mount Point in Container | Contents |
|---|---|---|
| /srv/wordpress-www | /var/www/html | WordPress PHP core, wp-content (themes, plugins, uploads), wp-config.php |
| /srv/wordpress-db | /var/lib/mysql | MariaDB database files (InnoDB, system tables) |
Uploaded media lands in /srv/wordpress-www/wp-content/uploads/.
This directory must be included in backups – it grows over time
and is not part of the container image.
Why are app and database separate volumes?
- Database data (/srv/wordpress-db) can be backed up and restored independently
- WordPress files are updated in place by a new image (the entrypoint refreshes core files on the volume)
- Both volumes survive a pod restart or image update unharmed
Ownership and Permissions
WordPress-FPM runs as www-data (UID 33 in the container).
The HostPath directory must be writable by this user:
# Set ownership for WordPress files (from the host):
chown -R 33:33 /srv/wordpress-www
# Database data belongs to root (MariaDB in the container uses root internally):
ls -la /srv/wordpress-db
Network & TLS
IP Address Ranges
| Range | Purpose | Relevant for |
|---|---|---|
| 10.42.0.0/16 | K3s Pod CIDR (Flannel) | iptables rules, set_real_ip_from in the nginx sidecar |
| 10.43.0.0/16 | K3s Service CIDR | ClusterIP addresses of K8s services |
| 217.154.101.78 | Public host IP | DNS, MariaDB service endpoint |
Required DNS Records
The DNS record must be created at the domain registrar before the first playbook run. cert-manager needs it for the Let's Encrypt HTTP-01 challenge – without a valid record the certificate issuance will fail.
| Name (Hostname) | Type | Value | Purpose |
|---|---|---|---|
| www.apt-upgrade.me | A | Public IP of the server (217.154.101.78) | WordPress blog, Let's Encrypt TLS |
The necessary firewall rules (fw_ipv4.sh, fw_ipv6.sh) are already configured.
TLS and Proxy Chain
Internet
→ nginx-ingress F5 (Port 443, TLS terminated, sets X-Forwarded-For / -Proto)
→ nginx sidecar (Port 80, plain HTTP)
→ PHP-FPM (Port 9000, FastCGI)
nginx-ingress forwards requests from a pod IP in the cluster network (10.42.x.x).
The nginx sidecar is configured to read the real client IP from the
X-Forwarded-For header:
# nginx ConfigMap (nginx-cm.yml.j2):
set_real_ip_from 10.42.0.0/16; # Pod CIDR – ingress packets originate from here
real_ip_header X-Forwarded-For;
real_ip_recursive on;
Rate Limiting and Security at the Ingress
Security measures are defined directly in the Ingress via nginx.org/ annotations:
| Measure | Annotation / Configuration | Value |
|---|---|---|
| HTTPS redirect | nginx.org/ssl-redirect | true – HTTP is redirected to HTTPS |
| Block xmlrpc.php | nginx.org/server-snippets | deny all; return 403; – prevents XML-RPC brute-force |
| wp-login.php rate limit | nginx.org/server-snippets | 5 r/s per IP, burst 5 – prevents login brute-force |
| Security headers | nginx.org/location-snippets | HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy |
| Upload limit | nginx.org/client-max-body-size | 500m – for media uploads via the WP backend |
The rate-limit zone wp_login is defined in the Helm values of the nginx-ingress controller
(http-snippets) and is thus globally available to all ingress resources.
The zone is actually applied via the server-snippets annotation in the WordPress ingress.
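How the zone definition and its application fit together can be sketched as follows. This is a hedged sketch: the zone name wp_login and the 5 r/s / burst 5 values come from this document, while the zone size (10m), the nodelay flag, and the location bodies are illustrative assumptions:

```nginx
# Sketch: Helm values → http-snippets (global http{} context)
limit_req_zone $binary_remote_addr zone=wp_login:10m rate=5r/s;   # 10m zone size assumed

# Sketch: ingress annotation nginx.org/server-snippets
location = /wp-login.php {
    limit_req zone=wp_login burst=5 nodelay;   # nodelay is an assumption
    # ...the usual proxying to the WordPress service continues here
}
location = /xmlrpc.php {
    deny all;
    return 403;
}
```

Defining the zone globally but applying it per ingress keeps the shared memory zone in one place while letting each site opt in.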
Important Files in the Repository
Inventory & Variables
| File | Contents |
|---|---|
| inventory/group_vars/all.yml | Global settings: container image tags, Helm chart versions, monitoring versions, K3s CIDRs |
| inventory/host_vars/www.apt-upgrade.me/vars.yml | Host-specific: hostname, IP, DB name, DB user, paths, plugins, mail config |
| inventory/host_vars/www.apt-upgrade.me/vault.yml | Encrypted! Passwords: DB root, DB user, WP admin, SMTP. Encrypt before git commit: ansible-vault encrypt inventory/host_vars/www.apt-upgrade.me/vault.yml |
Ansible Roles (WordPress-specific)
| Role | Task |
|---|---|
blog_packages | Install packages (e.g. SELinux tools) |
blog_selinux | SELinux enforcing, container_file_t for HostPath directories |
common_k3s | K3s, Helm, cert-manager, nginx-ingress (F5) – common base |
blog_k3s_deploy | K8s manifests: namespace, secrets, ConfigMaps, deployments, ingress, CronJob |
blog_wordpress | Post-deploy: WP-CLI commands (admin, plugins, settings) |
Templates (generate the K8s manifests)
| Template | Generates | Important because |
|---|---|---|
| wordpress-deployment.yml.j2 | Deployment + Service for WordPress | Image tags, volumes, env variables (DB credentials as secret ref) |
| mariadb-deployment.yml.j2 | Deployment + Service for MariaDB | Database initialization on first start, HostPath volume |
| nginx-cm.yml.j2 | ConfigMap wordpress-nginx-config | nginx routing for WordPress, PHP-FastCGI, real IP from X-Forwarded-For |
| secret.yml.j2 | Secret wordpress-secrets | DB host, DB name, DB user, DB password – as env variables in the pod |
| ingress.yml.j2 | Ingress wordpress | TLS, rate limiting, xmlrpc.php block, Grafana sub-path, security headers |
| cronjob.yml.j2 | K8s CronJob wordpress-cron | Runs wp-cron.php every 5 minutes; replaces WP's internal pseudo-cron |
Variables & Secrets
Where Credentials Are Stored
All passwords are stored in inventory/host_vars/www.apt-upgrade.me/vault.yml,
encrypted with Ansible Vault. This file must never be committed unencrypted
to git.
# Encrypt before committing:
ansible-vault encrypt inventory/host_vars/www.apt-upgrade.me/vault.yml
# Open directly in editor (preferred):
ansible-vault edit inventory/host_vars/www.apt-upgrade.me/vault.yml
Typical vault.yml Contents
blog_db_root_pass: "..." # MariaDB root password
blog_db_pass: "..." # WordPress DB user password
blog_admin_pass: "..." # WordPress admin password
mail_smtppass: "..." # SMTP password for mail delivery
grafana_admin_pass: "..." # Grafana admin password
Secrets in the K8s Cluster
The vault variables flow into the K8s secret
wordpress-secrets in the namespace wordpress during the playbook run.
# Show secret (decoded):
k3s kubectl get secret wordpress-secrets -n wordpress \
-o jsonpath='{.data}' | python3 -c \
"import sys,json,base64; [print(k,base64.b64decode(v).decode()) \
for k,v in json.load(sys.stdin).items()]"
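The jsonpath output is base64-encoded; the decoding step underneath the Python one-liner is plain base64. The example value here is illustrative and not a real secret:

```shell
# Decode a single value as it appears in the base64-encoded .data of a K8s secret.
# "d29yZHByZXNzX2Ri" is base64("wordpress_db") – an illustrative, non-secret value.
encoded="d29yZHByZXNzX2Ri"
printf '%s' "$encoded" | base64 -d
# → wordpress_db
```

The same pattern works for any single key, e.g. piping the output of `-o jsonpath='{.data.DB_NAME}'` through `base64 -d`.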
Container Images & Versions
All image tags are centrally defined in inventory/group_vars/all.yml.
With imagePullPolicy: IfNotPresent, K3s pulls an image only if it is not already
cached locally – in practice, only when the tag in the manifest changes.
# inventory/group_vars/all.yml
blog_image_wordpress: "wordpress:6.9-fpm" # WordPress PHP-FPM
blog_image_mariadb: "mariadb:10.11" # MariaDB LTS
blog_image_nginx: "nginx:1.30-alpine" # nginx Sidecar
Tag Strategy
| Tag Format | Meaning | Example |
|---|---|---|
| 6.9-fpm | Follows patch updates in WordPress 6.9.x automatically on pull | wordpress:6.9-fpm |
| 10.11 | MariaDB LTS branch; patch updates come automatically, major upgrade requires migration | mariadb:10.11 (LTS until June 2028) |
| 1.30-alpine | nginx stable branch; minor patches come automatically | nginx:1.30-alpine |
Check for New Versions
| Component | URL |
|---|---|
| WordPress | https://hub.docker.com/_/wordpress/tags (filter: *-fpm) |
| MariaDB | https://hub.docker.com/_/mariadb/tags |
| nginx | https://hub.docker.com/_/nginx/tags (filter: *-alpine) |
| nginx-ingress (F5) Helm | https://artifacthub.io/packages/helm/nginx-stable/nginx-ingress |
| cert-manager Helm | https://artifacthub.io/packages/helm/cert-manager/cert-manager |
When Docker Hub moves wordpress:6.9-fpm to a new patch release (e.g. 6.9.1),
nothing happens automatically on the server – the image with that tag is cached locally.
Update manually:
crictl pull docker.io/library/wordpress:6.9-fpm
k3s kubectl rollout restart deployment/wordpress -n wordpress
WordPress Configuration
wp-config.php
The wp-config.php is automatically generated by the WordPress Docker entrypoint
from the environment variables on first start. Important generated entries:
define('DB_HOST', 'mariadb'); // K8s service name in the namespace
define('DB_NAME', 'wordpress_db');
define('DB_USER', 'wordpress');
define('DB_PASSWORD', '...'); // from K8s Secret
// WordPress's own cron is disabled (K8s CronJob takes over):
define('DISABLE_WP_CRON', true);
// URL settings (set by Ansible):
define('WP_HOME', 'https://www.apt-upgrade.me');
define('WP_SITEURL', 'https://www.apt-upgrade.me');
WordPress's internal pseudo-cron stays disabled (DISABLE_WP_CRON = true), because a
dedicated K8s CronJob calls wp-cron.php every 5 minutes.
This is more reliable and does not add load to normal page requests.
Plugins (managed via Ansible)
Plugins are defined as a list in inventory/host_vars/www.apt-upgrade.me/vars.yml
and automatically installed and activated by the blog_wordpress playbook:
blog_wordpress_plugins:
- all-in-one-wp-migration # Import/export for backups
- classic-editor # Classic editor instead of Gutenberg
- limit-login-attempts-reloaded # Additional brute-force protection
Query the Database Directly
# In the MariaDB pod (interactive shell; -it is needed for the password prompt):
k3s kubectl exec -it -n wordpress deployment/mariadb -- \
mysql -u wordpress -p wordpress_db
# Or with explicit password:
k3s kubectl exec -n wordpress deployment/mariadb -- \
mysql -u wordpress -p'<password>' wordpress_db -e "SHOW TABLES;"
Run Playbook
Full Initial Run (new server installation)
- Set SSH port to 22 in inventory/host_vars/www.apt-upgrade.me/vars.yml: ansible_port: 22
- Enter vault passwords in vault.yml and encrypt
- Install collections: ansible-galaxy collection install -r requirements.yml
- Run playbook: ansible-playbook blog.yml --limit www.apt-upgrade.me --ask-vault-pass
- Change SSH port in vars.yml to 10022 (after SSH hardening by common_ssh)
Idempotent Re-run (e.g. after config change)
ansible-playbook blog.yml --limit www.apt-upgrade.me --ask-vault-pass
Run Only Specific Roles
# Update only K8s deployments
ansible-playbook blog.yml --limit www.apt-upgrade.me --tags blog_k3s_deploy --ask-vault-pass
# Only WordPress configuration (plugins, admin, settings)
ansible-playbook blog.yml --limit www.apt-upgrade.me --tags blog_wordpress --ask-vault-pass
Important Operations Commands
Pod Status
# All pods in the wordpress namespace
k3s kubectl get pods -n wordpress
# Detailed status
k3s kubectl describe pod -l app=wordpress -n wordpress
# Logs WordPress-FPM
k3s kubectl logs deployment/wordpress -c wordpress-fpm -n wordpress --tail=50
# Logs nginx sidecar
k3s kubectl logs deployment/wordpress -c nginx -n wordpress --tail=50
# Logs MariaDB
k3s kubectl logs deployment/mariadb -n wordpress --tail=50
WP-CLI Commands
# Prefix for all WP-CLI commands:
k3s kubectl exec -n wordpress deployment/wordpress -c wordpress-fpm -- \
wp --allow-root <COMMAND>
# Examples:
... wp core version
... wp core check-update
... wp plugin list
... wp plugin update --all
... wp theme list
... wp cache flush
... wp cron event list
... wp user list
... wp db check
... wp search-replace 'http://www.apt-upgrade.me' 'https://www.apt-upgrade.me' --dry-run
Restart Pod
# WordPress
k3s kubectl rollout restart deployment/wordpress -n wordpress
k3s kubectl rollout status deployment/wordpress -n wordpress --timeout=300s
# MariaDB (Caution: brief DB downtime!)
k3s kubectl rollout restart deployment/mariadb -n wordpress
Namespace Overview
k3s kubectl get all -n wordpress
Check Ingress and Certificate
# Ingress resource
k3s kubectl get ingress -n wordpress
k3s kubectl describe ingress wordpress -n wordpress
# TLS certificate
k3s kubectl get certificate -n wordpress
k3s kubectl describe certificate wordpress-tls -n wordpress
Monitoring
The monitoring stack (Prometheus + Grafana + Node Exporter + mysqld_exporter + php-fpm_exporter)
runs partly as host services, partly as sidecar containers in K3s pods.
Grafana is accessible via the ingress at the blog hostname under /grafana/.
https://www.apt-upgrade.me/grafana/
Dashboards: Node Exporter Full (1860), MySQL Overview (7362), PHP-FPM (4912)
Login: Grafana admin password from vault.yml
| Component | Endpoint | Details |
|---|---|---|
| Prometheus | 127.0.0.1:9090 | Locally accessible only; retention: 30 days |
| Node Exporter | 127.0.0.1:9100 | CPU, RAM, disk, network |
| mysqld_exporter | 127.0.0.1:30104 (NodePort) | Sidecar in the MariaDB pod; Prometheus scrapes via NodePort 30104; Prometheus job: mysqld_wordpress |
| php-fpm_exporter | 127.0.0.1:30253 (NodePort) | Sidecar in the WordPress pod; Prometheus scrapes via NodePort 30253; Prometheus job: php_fpm_wordpress |
MariaDB Slow Query Log
The MariaDB pod writes slow queries (≥ 1 second) to
/var/lib/mysql/slow.log – this resides on the HostPath volume and is readable
on the host at /srv/wordpress-db/slow.log.
(Background: /dev/stderr is not writable in the MariaDB container under K3s –
the mysql process runs as uid 999 and has no access to the fd.)
# Read slow queries on the host
tail -f /srv/wordpress-db/slow.log
# Or directly in the pod
k3s kubectl exec -n wordpress deployment/mariadb -c mariadb -- \
tail -f /var/lib/mysql/slow.log
Configuration: MariaDB pod args --slow-query-log=1 --slow-query-log-file=/var/lib/mysql/slow.log --long-query-time=1
PHP-FPM Slow Log
PHP-FPM logs requests that take longer than 5 seconds to /proc/1/fd/2
(corresponding to stderr of the pod's main process). Since ptrace is not
available in containers, no stack traces are written – only the request timestamp and
execution time.
# Monitor slow FPM requests
k3s kubectl logs -n wordpress deployment/wordpress -c wordpress-fpm -f | grep -i "slow"
Configuration: ConfigMap wordpress-fpm-pool-config →
pm.status_path = /fpm-status, slowlog = /proc/1/fd/2,
request_slowlog_timeout = 5s
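The pool settings mentioned above, collected as an ini fragment. This is a sketch of what the wordpress-fpm-pool-config ConfigMap contains – only the three directives named in this section are taken from the document; any other pool settings are out of scope:

```ini
; Sketch of the wordpress-fpm-pool-config ConfigMap contents (www pool excerpt)
pm.status_path = /fpm-status        ; status endpoint scraped by php-fpm_exporter
slowlog = /proc/1/fd/2              ; stderr of the pod's main process (see note above)
request_slowlog_timeout = 5s        ; log requests that run longer than 5 seconds
```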
Grafana Dashboards
Node Exporter Full (ID 1860) – Host Metrics
Shows the health of the entire server in real time.
| Panel | What you read |
|---|---|
| CPU Usage | Total utilization and breakdown per core (user / system / iowait). iowait > 20 % indicates a disk bottleneck. |
| Load Average (1/5/15 min) | System load relative to the number of CPU cores. Consistently above the core count → server overloaded. |
| RAM / Memory | Used RAM, buffers, cache and swap. Swap usage > 0 indicates RAM shortage – WordPress/MariaDB may need more memory. |
| Disk I/O | Read and write rate per device, latency. High latency on /srv/wordpress-www or /srv/wordpress-db slows the site down. |
| Disk Space | Filesystem utilization. /srv/wordpress-www (uploads) grows over time – alert when > 80 % used. |
| Network Traffic | Bytes in/out per interface. Unusual spikes can indicate attacks or data leaks. |
MySQL Overview (ID 7362) – MariaDB Database Metrics
Shows the performance of the WordPress database (MariaDB sidecar in K3s pod, NodePort 30104).
| Panel | What you read |
|---|---|
| Queries per Second (QPS) | Database activity. Sudden spikes can indicate inefficient plugins or attacks. |
| Slow Queries | Queries that take longer than 1 s. Consistently > 0 → missing indexes (common with poorly written plugins). |
| InnoDB Buffer Pool | Cache utilization and hit ratio. Hit ratio < 95 % → buffer pool too small, many read accesses go to disk (slow). |
| Connections | Active and maximum connections. Typically low for WordPress (< 10). |
| Table Locks / Threads running | Lock contention. Poorly optimized plugins in WordPress can cause lock problems. |
| Aborted Connections | Interrupted connections. Expected during WordPress pod restarts; persistently elevated = problem. |
PHP-FPM (ID 4912) – PHP Process Pool
Shows how well PHP-FPM processes WordPress requests (K3s pod, NodePort 30253).
| Panel | What you read |
|---|---|
| Active Processes | Concurrently busy PHP workers. If this panel consistently reaches pm.max_children → increase the pool or optimize the offending plugin. |
| Idle Processes | Free workers. Always 0 + queue > 0 = PHP-FPM is overloaded. |
| Request Queue | Waiting requests. Any value > 0 means a noticeable load time for the site visitor. |
| Requests per Second | Throughput. Useful for identifying traffic spikes (e.g. after a post). |
| Slow Requests | PHP requests > 5 s. Common causes: poorly optimized plugins, external API calls, or missing indexes. |
| Max Active (Peak) | Highest value since FPM start. Helps with sizing pm.max_children. |
Dashboards downloaded from grafana.com declare their data source in an __inputs section,
which the Grafana HTTP API import does not resolve.
The variables ds_prometheus, job, nodename and node
are therefore set after each import via a Python script in the common_grafana task.
Without this step, all panels show "No data". If a dashboard is empty:
re-run the playbook or manually select the dropdown variables at the top of the dashboard in Grafana.
mysqld_exporter – Configuration Note
Current mysqld_exporter releases no longer support the DATA_SOURCE_NAME environment
variable. The exporter reads credentials exclusively from a .my.cnf file (--config.my-cnf).
In the WordPress setup the credentials are stored as key mysqld-exporter-cnf in
K8s secret wordpress-secrets and mounted into the mysqld_exporter sidecar container
with defaultMode: 0444
(the exporter runs as user nobody – 0600 would be permission denied).
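A sketch of the mounted credentials file (the key mysqld-exporter-cnf in the wordpress-secrets secret). The user name, host, and port shown here are assumptions for illustration – only the file format and the read-only 0444 mount come from this document:

```ini
# Sketch: contents of the mysqld-exporter-cnf secret key, mounted with defaultMode 0444
[client]
user = exporter            # assumed monitoring user name
password = <from vault>    # elided – injected via the K8s secret
host = 127.0.0.1           # assumed: MariaDB in the same pod
port = 3306
```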
Prometheus Scrape Targets
# Check active scrape targets (in browser or curl)
curl -s http://127.0.0.1:9090/api/v1/targets | python3 -m json.tool | grep -E '"job"|"state"|"scrapeUrl"'
WP-Cron
WordPress requires a regular cron call for scheduled tasks: plugin updates, email delivery, publishing scheduled posts, etc. In this setup a Kubernetes CronJob handles this task.
A classic host-level cron cannot call wp-cron.php inside the pod
without kubectl workarounds. A K8s CronJob is the clean approach: it starts a
temporary pod in the same namespace every 5 minutes, with direct access to
the WordPress volume and database.
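Sketched as a manifest fragment. The real manifest is generated from cronjob.yml.j2; the container command, volume name, and concurrencyPolicy here are assumptions, while the name, schedule, and hostNetwork setting come from this document:

```yaml
# Illustrative sketch – the real manifest comes from cronjob.yml.j2.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: wordpress-cron
  namespace: wordpress
spec:
  schedule: "*/5 * * * *"          # every 5 minutes
  concurrencyPolicy: Forbid        # assumption: runs never overlap
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true        # per the architecture table above
          restartPolicy: Never
          containers:
            - name: wp-cron
              image: wordpress:6.9-fpm
              command: ["php", "/var/www/html/wp-cron.php"]   # assumed invocation
              volumeMounts:
                - name: wordpress-files
                  mountPath: /var/www/html
          volumes:
            - name: wordpress-files   # assumed volume name
              hostPath:
                path: /srv/wordpress-www
```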
Check CronJob Status
# Status and last run
k3s kubectl get cronjob wordpress-cron -n wordpress
# The last 5 jobs (completed pods)
k3s kubectl get jobs -n wordpress --sort-by=.metadata.creationTimestamp | tail -5
# Logs of a CronJob pod (ID from previous command):
k3s kubectl logs -n wordpress -l job-name=wordpress-cron-<id>
Manual Execution (for testing)
k3s kubectl exec -n wordpress deployment/wordpress -c wordpress-fpm -- \
php /var/www/html/wp-cron.php
Updates
Operating System (AlmaLinux 9)
Security updates are applied automatically daily by dnf-automatic. For a full system update:
- ssh -p 10022 root@217.154.101.78 'dnf upgrade -y'
- Check whether a reboot is necessary: needs-restarting -r
- If needed: reboot the server. K3s starts automatically, all pods come back up on their own.
  reboot  # then check: k3s kubectl get pods -n wordpress
WordPress Update: Patch Within the Same Tag
When Docker Hub updates wordpress:6.9-fpm to 6.9.1
and the tag in Ansible remains unchanged:
- Manually pull the new image:
  crictl pull docker.io/library/wordpress:6.9-fpm
- Restart the pod and wait for the rollout:
  k3s kubectl rollout restart deployment/wordpress -n wordpress
  k3s kubectl rollout status deployment/wordpress -n wordpress --timeout=300s
WordPress Update: Minor Version Jump (e.g. 6.9 → 7.0)
- Change the tag in inventory/group_vars/all.yml:
  blog_image_wordpress: "wordpress:7.0-fpm"
- Run the playbook – K3s pulls the new image and rolls out the pod:
  ansible-playbook blog.yml --limit www.apt-upgrade.me --ask-vault-pass
- Check for a WordPress DB update (sometimes needed for major versions):
  k3s kubectl exec -n wordpress deployment/wordpress -c wordpress-fpm -- \
    wp --allow-root core update-db
MariaDB Update
Patch updates within 10.11:
crictl pull docker.io/library/mariadb:10.11
k3s kubectl rollout restart deployment/mariadb -n wordpress
Backup & Restore
What Is Backed Up
| What | Path (Server) | Destination (local) | Method |
|---|---|---|---|
| WordPress files + uploads | /srv/wordpress-www | /data/wordpress/www.apt-upgrade.me/wordpress/ | rsync |
| Database | MariaDB pod wordpress_db | /data/wordpress/www.apt-upgrade.me/db.sql | mysqldump via kubectl exec |
The database dump is created with mysqldump via kubectl exec in the mariadb container.
The root credentials come from the MARIADB_ROOT_PASSWORD environment variable,
injected by the K8s secret – no password needs to be stored in the script.
Run Backup
The backup script has no environment argument – it always backs up
www.apt-upgrade.me to /data/wordpress/www.apt-upgrade.me/.
# Call from the repository directory:
cd ~/www_k3s
./scripts/wordpress-backup.sh
# Process:
# 1. WP-CLI maintenance-mode activate (in the wordpress-fpm container)
# 2. kubectl exec → mysqldump (in the mariadb container, password from K8s secret)
# 3. rsync /srv/wordpress-www/ → /data/wordpress/www.apt-upgrade.me/wordpress/
# 4. WP-CLI maintenance-mode deactivate
Restore Workflow
- Set up the server (infrastructure):
  ansible-playbook blog.yml --limit www.apt-upgrade.me --ask-vault-pass
- Run the restore script:
  cd ~/www_k3s
  ./scripts/wordpress-restore.sh
  Process: maintenance on → stop WordPress + MariaDB → rsync WordPress files → clear MariaDB data dir → start MariaDB (re-initialization from K8s secret) → import SQL dump → fix ownership → start WordPress → maintenance off
- Run Ansible again (ensures plugin list and WP config):
  ansible-playbook blog.yml --limit www.apt-upgrade.me --ask-vault-pass
The restore script deliberately clears /srv/wordpress-db/ so that MariaDB
initializes its database fresh from the K8s secret environment variables
(MARIADB_DATABASE, MARIADB_USER, MARIADB_PASSWORD) on the next start.
The SQL dump is then imported. Without this step an old, incompatible
database state would be loaded.
Manual Individual Commands (Reference)
# mysqldump from the MariaDB pod (MARIADB_ROOT_PASSWORD from K8s secret)
k3s kubectl exec -n wordpress deployment/mariadb -c mariadb -- \
sh -c 'mysqldump --single-transaction wordpress_db \
-u root -p"$MARIADB_ROOT_PASSWORD"'
# Import an SQL dump into the running MariaDB pod
# (the dump must already be inside the container, e.g. copied in with kubectl cp)
k3s kubectl exec -n wordpress deployment/mariadb -c mariadb -- \
sh -c 'mysql wordpress_db -u root -p"$MARIADB_ROOT_PASSWORD" < /tmp/dump.sql'
# WP-CLI Maintenance
k3s kubectl exec -n wordpress deployment/wordpress -c wordpress-fpm -- \
php /var/www/html/wp-cli.phar maintenance-mode activate --path=/var/www/html --allow-root
Troubleshooting
WordPress Pod Does Not Start / CrashLoopBackOff
k3s kubectl describe pod -l app=wordpress -n wordpress
k3s kubectl logs deployment/wordpress -c wordpress-fpm -n wordpress --previous
Common causes:
- Database not reachable: the MariaDB pod is not running. Check k3s kubectl get pods -n wordpress.
- Wrong DB password: the secret wordpress-secrets does not match the MariaDB user. Re-run the playbook.
- Volume permissions: chown -R 33:33 /srv/wordpress-www and restart the pod.
MariaDB Pod Does Not Start
k3s kubectl logs deployment/mariadb -n wordpress --previous
- Database files corrupted: MariaDB attempts a recovery. In the worst case the data must be restored from backup.
- Volume permissions: MariaDB expects /srv/wordpress-db to be writable by the internal mysql user (UID 999). Ansible sets the permissions automatically.
WordPress Site Shows 502 Bad Gateway
nginx sidecar cannot reach PHP-FPM. Check:
# Is PHP-FPM running?
k3s kubectl logs deployment/wordpress -c wordpress-fpm -n wordpress --tail=20
# Are both containers in the pod Ready?
k3s kubectl get pod -l app=wordpress -n wordpress
WordPress Admin Backend: "Database Update Required"
k3s kubectl exec -n wordpress deployment/wordpress -c wordpress-fpm -- \
wp --allow-root core update-db
Certificate Is Not Being Issued
# cert-manager logs:
k3s kubectl logs -n cert-manager deployment/cert-manager --tail=50
# Certificate object:
k3s kubectl describe certificate wordpress-tls -n wordpress
# ClusterIssuer status:
k3s kubectl describe clusterissuer letsencrypt-prod
Common cause: port 80 is blocked by iptables or the provider. cert-manager requires port 80 for the HTTP-01 challenge (even if the site should only be reachable via HTTPS).
Rate Limiting Too Aggressive (Own IP Blocked)
# Check ingress logs (nginx-ingress controller):
k3s kubectl logs -n ingress-nginx daemonset/ingress-nginx-nginx-ingress-controller --tail=50
# Temporarily: redeploy ingress without rate limit (for emergencies):
# → remove nginx.org/server-snippets annotation in the ingress and run the playbook
WP-Cron Is Not Running
# CronJob status:
k3s kubectl get cronjob wordpress-cron -n wordpress
# Last job pod (should be "Completed"):
k3s kubectl get pods -n wordpress --sort-by=.metadata.creationTimestamp | tail -5
# Trigger manually (for testing):
k3s kubectl exec -n wordpress deployment/wordpress -c wordpress-fpm -- \
php /var/www/html/wp-cron.php
Security
Multi-Layered Protection Strategy
| Layer | Measure | Role |
|---|---|---|
| Host Network | iptables IPv4/IPv6 – default policy DROP, port scan blocker, ICMP rate limit, only 80/443/10022 open | common_firewall |
| SSH | Port 10022, key-only, hardened sshd_config, Fail2Ban | common_ssh, common_fail2ban |
| HTTP – Rate Limiting | wp-login.php: 5 r/s per IP, burst 5 | common_k3s (nginx-ingress Helm values + ingress annotations) |
| HTTP – Endpoint Block | xmlrpc.php → 403 (prevents XML-RPC brute-force) | blog_k3s_deploy (ingress server-snippets) |
| HTTP – Security Headers | HSTS, X-Frame-Options, X-Content-Type-Options, Referrer-Policy | blog_k3s_deploy (ingress location-snippets) |
| WordPress | Limit Login Attempts plugin, plugin updates via Ansible | blog_wordpress |
| SELinux | Enforcing, container_file_t for HostPath volumes | blog_selinux |
| Audit | auditd with hardening rules (sudo, SSH, cron, kernel modules) | common_auditd |
| Rootkits | rkhunter daily scan at 03:15 | common_rkhunter |
| Patches | dnf-automatic: security updates applied automatically daily | common_dnf_automatic |
| Secrets | Ansible Vault, K8s secrets (Opaque), no_log on sensitive tasks | all roles |
Firewall in Detail
The script /root/fw/fw_ipv4.sh (from common_firewall) is applied via
an @reboot cron job after every restart. K3s then inserts its own
KUBE-* chains at the top of the INPUT/OUTPUT chains.
| Feature | Detail |
|---|---|
| Default Policy DROP | INPUT, OUTPUT and FORWARD are immediately set to DROP after the flush – no window in which unprotected traffic passes through |
| Port Scan Blocker | Every packet that does not match an ACCEPT rule enters its source IP into /proc/net/xt_recent/portscan. On the next packet from that IP, --rcheck triggers at the top → 24-hour block. Check: cat /proc/net/xt_recent/portscan |
| ICMP Rate Limit | Echo-request max. 5/s, burst 10 – protects against ICMP flood from many sources. Existing ping sessions continue via ESTABLISHED. |
| No server_ipv4 whitelist | The former rule -s server_ipv4 -j ACCEPT was removed – it accepted external packets with a spoofed server source IP. Local traffic runs safely via loopback (-i lo -j ACCEPT). |
| K3s OUTPUT CIDR | OUTPUT explicitly allows traffic to pod CIDR (10.42.0.0/16) and service CIDR (10.43.0.0/16) – required for kubelet probes, kube-proxy, metrics server. Without these rules K3s-internal port 10250 breaks. |
| IPv6 DROP Policy | ip6tables starts directly with policy DROP; ICMPv6 is rate-limited, echo-reply only for ESTABLISHED/RELATED |
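The xt_recent port-scan mechanism described above can be sketched as rule fragments. This is a hedged configuration excerpt – the list name portscan and the 24-hour window come from this document, but rule order and the log prefix are assumptions; the authoritative rules live in /root/fw/fw_ipv4.sh:

```shell
# Sketch only – illustrative excerpt of the xt_recent port-scan logic.
# Near the top of INPUT: drop anything whose source IP is already on the blocklist
iptables -A INPUT -m recent --name portscan --rcheck --seconds 86400 -j DROP

# ...ACCEPT rules for lo, ESTABLISHED/RELATED, and ports 80/443/10022 go here...

# At the bottom: record the source IP of any unmatched packet, log, then drop
iptables -A INPUT -m recent --name portscan --set -j LOG --log-prefix "IPTables-Dropped: "
iptables -A INPUT -j DROP
```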
# Monitor dropped packets live
journalctl -k -f --grep "IPTables-Dropped"
# Show port scan blocklist
cat /proc/net/xt_recent/portscan | awk '{print $1}'
# Reload firewall rules (happens automatically after reboot)
/root/fw/fw_ipv4.sh && /root/fw/fw_ipv6.sh
Important Security Notes
Before every git commit, verify that vault.yml is encrypted:
ansible-vault encrypt inventory/host_vars/www.apt-upgrade.me/vault.yml
Keep plugins current: run wp plugin update --all regularly or check in the admin panel.