feat: initial import os-upgrade-automation enterprise setup

This commit is contained in:
Automation Admin 2025-08-07 22:24:32 +00:00
commit 15c9b4f1e4
30 changed files with 1467 additions and 0 deletions

7
.ansible-lint Normal file

@@ -0,0 +1,7 @@
---
# Note: removed the unrecognized 'parser' key; .ansible-lint only accepts
# documented options such as warn_list / skip_list.
rules: {}
warn_list: []
skip_list:
  - yaml

27
.github-workflows-ci.yml Normal file

@@ -0,0 +1,27 @@
name: CI
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install ansible ansible-lint
      - run: ansible-lint -v
  dryrun:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install ansible
      - run: ansible-playbook playbook/playbook.yml --check --list-tasks

8
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,8 @@
# CODEOWNERS defines the reviewers/owners for directories and files
# Syntax: path @owner1 @owner2
* @linux-admins @devops-team
playbook/roles/** @linux-admins
playbook/group_vars/** @linux-admins
docs/** @tech-writer
scripts/** @devops-team

20
.gitlab-ci.yml Normal file

@@ -0,0 +1,20 @@
image: python:3.11
stages:
  - lint
  - dryrun
variables:
  PIP_DISABLE_PIP_VERSION_CHECK: '1'
lint:
  stage: lint
  script:
    - pip install ansible ansible-lint
    - ansible-lint -v
dryrun:
  stage: dryrun
  script:
    - pip install ansible
    - ansible-playbook playbook/playbook.yml --check --list-tasks

11
.pre-commit-config.yaml Normal file

@@ -0,0 +1,11 @@
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  - repo: https://github.com/ansible-community/ansible-lint
    rev: v6.22.1
    hooks:
      - id: ansible-lint

24
CONTRIBUTING.md Normal file

@@ -0,0 +1,24 @@
# CONTRIBUTING
Thanks for contributing! Please follow these guidelines:
## Branches/Commits
- Feature branches: `feature/<short-description>`
- Fix branches: `fix/<short-description>`
- Commit messages follow Conventional Commits (feat:, fix:, docs:, chore:, refactor:, perf:, test:)
## Code style
- Ansible: ansible-lint must pass (`make lint`)
- YAML: 2 spaces, no tabs
## Tests
- Dry run: `ansible-playbook playbook/playbook.yml --check --list-tasks`
- App-specific: `-l <app-group>`
## Security
- No plaintext secrets; use `make vault-encrypt`
- PRs are checked automatically by CI (linting, dry run)
## Review
- CODEOWNERS determines the reviewers
- At least 1 approval required

18
Makefile Normal file

@@ -0,0 +1,18 @@
SHELL := /bin/bash
.PHONY: deps run lint vault-encrypt vault-decrypt
deps:
	./scripts/install_collections.sh
run:
	./scripts/run_patch.sh $(APP) $(CLM) "$(EXTRA)"
lint:
	ansible-lint -v
vault-encrypt:
	ansible-vault encrypt playbook/group_vars/vault.yml
vault-decrypt:
	ansible-vault decrypt playbook/group_vars/vault.yml

18
ansible.cfg Normal file

@@ -0,0 +1,18 @@
[defaults]
inventory = playbook/servicenow_inventory.yml
roles_path = playbook/roles
collections_paths = ~/.ansible/collections:./collections
host_key_checking = False
retry_files_enabled = False
stdout_callback = yaml
bin_ansible_callbacks = True
forks = 25
interpreter_python = auto_silent
fact_caching = jsonfile
fact_caching_connection = .ansible_facts_cache
fact_caching_timeout = 86400
callbacks_enabled = profile_tasks, timer
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s

6
docs/CHANGELOG.md Normal file

@@ -0,0 +1,6 @@
# CHANGELOG
All changes and patch runs are documented here automatically.
Example:
2024-06-01T12:00:00Z: Patch/upgrade performed on pdp-portal-server1.example.com (FQDN: pdp-portal-server1.example.com). Result: OK

135
docs/README.md Normal file

@@ -0,0 +1,135 @@
# Enterprise Auto-Upgrade Playbook for SLES & RHEL
## Overview
This project provides a modular, enterprise-grade Ansible playbook for automated upgrades and patch management of SLES (SUSE Linux Enterprise Server) and RHEL (Red Hat Enterprise Linux) systems. It supports:
- Automatic OS detection
- Upgrades following vendor best practice
- **VMware snapshots for backup/rollback**
- Logging
- Mail notification (local & external via mailx)
- Dynamic assignment of CLM channels via the SUSE Manager API
## Directory structure
```
playbook/
├── group_vars/
│   ├── all.yml              # Global variables
│   └── vault.yml            # Encrypted credentials (Vault)
├── host_vars/               # (Optional) host-specific variables
├── inventories/             # (Optional) inventories
├── playbook.yml             # Main playbook
├── README.md                # This file
└── roles/
    ├── common/              # Shared tasks (e.g. logging, mailx)
    ├── rhel_upgrade/        # RHEL-specific upgrade tasks
    ├── sles_upgrade/        # SLES-specific upgrade tasks
    ├── post_upgrade/        # Reboot etc.
    ├── suma_api_assign_clm/ # SUSE Manager API integration
    └── vmware_snapshot/     # VMware snapshot handling
```
## Prerequisites
- Ansible >= 2.9
- python3-pyvmomi on the Ansible host (for VMware)
- Target systems: SLES or RHEL, registered with SUSE Manager (possibly venv-salt-minion)
- Access to the SUSE Manager API (XML-RPC, usually port 443)
- Access to vCenter (API)
- Optional: Ansible Vault for secure credentials
## Usage
1. **Store SUSE Manager & vCenter credentials securely**
- Create a file `vault.yml` in `group_vars` (keys carry the `vault_` prefix referenced from `all.yml` below):
```yaml
vault_suma_api_url: "https://susemanager.example.com/rpc/api"
vault_suma_api_user: "admin"
vault_suma_api_pass: "secret"
vault_vcenter_hostname: "vcenter.example.com"
vault_vcenter_user: "administrator@vsphere.local"
vault_vcenter_password: "your_password"
vault_vcenter_datacenter: "YourDatacenter"
vault_vcenter_folder: "/"
```
- Encrypt the file with Ansible Vault:
```bash
ansible-vault encrypt playbook/group_vars/vault.yml
```
- Adjust `group_vars/all.yml`:
```yaml
# ... existing variables ...
suma_api_url: "{{ vault_suma_api_url }}"
suma_api_user: "{{ vault_suma_api_user }}"
suma_api_pass: "{{ vault_suma_api_pass }}"
vcenter_hostname: "{{ vault_vcenter_hostname }}"
vcenter_user: "{{ vault_vcenter_user }}"
vcenter_password: "{{ vault_vcenter_password }}"
vcenter_datacenter: "{{ vault_vcenter_datacenter }}"
vcenter_folder: "{{ vault_vcenter_folder }}"
```
- Load the vault file in the playbook:
```yaml
vars_files:
  - group_vars/all.yml
  - group_vars/vault.yml
```
- Invoke the playbook with the vault password:
```bash
ansible-playbook playbook.yml --ask-vault-pass -e "target_clm_version=prod-2024-06"
```
2. **VMware snapshot handling**
- A snapshot is created automatically before every upgrade.
- With rollback enabled (variable `rollback: true`), the VM is reverted to the snapshot.
- The snapshot tasks run on the Ansible host (`delegate_to: localhost`).
3. **Upgrade to a specific CLM version**
- Pass the desired version when invoking the playbook:
```bash
ansible-playbook playbook.yml -e "target_clm_version=prod-2024-06"
```
- The system is assigned to the matching channel via the SUSE Manager API.
4. **Enable rollback (optional)**
- In `group_vars/all.yml`:
```yaml
rollback: true
```
- The playbook then reverts the VM to the snapshot.
5. **Configure mail notification**
- Local mail: `mail_to: "root@localhost"`
- External SMTP:
```yaml
mail_smtp_host: "smtp.example.com"
mail_smtp_port: 587
mail_smtp_user: "user@example.com"
mail_smtp_pass: "your_password"
```
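The channel assignment and the preflight checks talk to the SUSE Manager API by POSTing JSON bodies of the form `{"method": ..., "params": ..., "id": ...}`. As a minimal sketch of how such a request body is built (method name and credential placeholders follow the playbook; the HTTP transport itself is omitted):

```python
import json

def suma_rpc_body(method: str, params: list, request_id: int = 1) -> str:
    """Build the JSON body the playbook POSTs to suma_api_url,
    e.g. {"method": "auth.login", "params": [user, password], "id": 1}."""
    return json.dumps({"method": method, "params": params, "id": request_id})

# Login request as sent by the preflight role
# (real credentials normally come from vault.yml).
login_body = suma_rpc_body("auth.login", ["suma_admin", "secret"])
```

The session token returned by `auth.login` is then passed as the first parameter of follow-up calls such as `channel.software.getDetails`.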
## Important variables (group_vars/all.yml)
- `upgrade_dry_run`: true/false (simulation)
- `reboot_after_upgrade`: true/false
- `log_dir`: log directory
- `rollback`: true/false
- `mail_to`: recipient address
- `mail_smtp_*`: SMTP parameters (optional)
- `target_clm_version`: target CLM channel (e.g. prod-2024-06)
- `suma_api_url`, `suma_api_user`, `suma_api_pass`: SUSE Manager API (recommended: Vault)
- `vcenter_hostname`, `vcenter_user`, `vcenter_password`, `vcenter_datacenter`, `vcenter_folder`: VMware/vCenter (recommended: Vault)
## Security note
**Never store credentials (API, mail, vCenter) in plaintext!** Always use Ansible Vault for sensitive data.
## Ideas for extension
- Integration with monitoring/alerting
- Approval workflows
- Reporting
- Support for additional operating systems
## Support & docs
- [SUSE Manager API docs](https://documentation.suse.com/suma/)
- [Red Hat Upgrade Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/)
- [SLES Upgrade Guide](https://documentation.suse.com/sles/)
- [Ansible VMware docs](https://docs.ansible.com/ansible/latest/collections/community/vmware/vmware_guest_snapshot_module.html)
---
**Questions or requests? Just get in touch!**


@@ -0,0 +1,28 @@
# Self-Service Runbook for App Owners
## Goal
This runbook describes how app owners can run the auto-upgrade playbook for their own systems.
## Prerequisites
- Access to the Ansible server (SSH)
- Permission for the desired app group in the inventory
- Vault password for the encrypted variables
## Step-by-step guide
1. **Log in to the Ansible server**
2. **Run the playbook for your app group:**
```bash
ansible-playbook -i inventory_apps playbook.yml -l <app-group> --ask-vault-pass -e "target_clm_version=<channel>"
```
Example for pdp-portal:
```bash
ansible-playbook -i inventory_apps playbook.yml -l pdp-portal --ask-vault-pass -e "target_clm_version=prod-2024-06"
```
3. **Check the result:**
- Wait for the e-mail notification
- Check the log files in the configured directory
## Notes
- On errors, a failsafe mail is sent automatically to the app and Linux admins.
- On critical errors, an automatic rollback (VMware snapshot) is performed.
- For questions or approvals, contact the Linux admins.


@@ -0,0 +1,33 @@
upgrade_dry_run: false
reboot_after_upgrade: true
log_dir: /var/log/auto-upgrade
rollback: false
mail_to: "root@localhost"
# SMTP configuration for mailx (optional)
mail_smtp_host: "smtp.example.com"
mail_smtp_port: 587
mail_smtp_user: "user@example.com"
mail_smtp_pass: "your_password"
vcenter_hostname: "vcenter.example.com"
vcenter_user: "administrator@vsphere.local"
vcenter_password: "your_password"
vcenter_datacenter: "YourDatacenter"
vcenter_folder: "/"
linux_admins_mail: "linux-admins@example.com"
maintenance_window_start: "22:00"
maintenance_window_end: "04:00"
# Upgrade options
upgrade_security_only: false # true = security updates only
# Skip flags for optional steps
skip_smoke_tests: false
skip_compliance: false
skip_self_healing: false
skip_vmware_snapshot: false
skip_suma_api: false
skip_post_upgrade: false


@@ -0,0 +1,32 @@
# ServiceNow API
servicenow_instance: "https://mycompany.service-now.com"
servicenow_user: "ansible_api"
servicenow_pass: "SuperSicheresServiceNowPasswort123!"
# SUSE Manager API
suma_api_url: "https://susemanager.example.com/rpc/api"
suma_api_user: "suma_admin"
suma_api_pass: "NochSichereresSumaPasswort456!"
# vCenter/VMware
vcenter_hostname: "vcenter.example.com"
vcenter_user: "administrator@vsphere.local"
vcenter_password: "MegaSicheresVcenterPasswort789!"
vcenter_datacenter: "Datacenter1"
vcenter_folder: "/"
# Mail/SMTP
mail_smtp_host: "smtp.example.com"
mail_smtp_port: 587
mail_smtp_user: "mailuser@example.com"
mail_smtp_pass: "MailPasswort123!"
# Slack
slack_token: "xoxb-1234567890-abcdefghijklmnopqrstuvwx"
slack_enabled: true
# Database smoke tests
smoke_test_db_host: "db.example.com"
smoke_test_db_user: "dbuser"
smoke_test_db_pass: "DBPasswort456!"
smoke_test_db_name: "appdb"

29
playbook/inventory_apps Normal file

@@ -0,0 +1,29 @@
[pdp-portal]
pdp-portal-server1.example.com ansible_host=10.0.1.11 ansible_user=deploy host_email=admin-pdp@example.com
pdp-portal-server2.example.com ansible_host=10.0.1.12 host_email=admin-pdp2@example.com
[pdp-portal:vars]
app_mail=pdp-portal-app@example.com
[confluence]
confluence-server1.example.com ansible_host=10.0.2.21 ansible_user=confluence host_email=confluence-admin@example.com
confluence-server2.example.com ansible_host=10.0.2.22 host_email=confluence-admin2@example.com
[confluence:vars]
app_mail=confluence-app@example.com
[git]
git-server1.example.com ansible_host=10.0.3.31 ansible_user=gitadmin host_email=git-admin@example.com
git-server2.example.com ansible_host=10.0.3.32 host_email=git-admin2@example.com
[git:vars]
app_mail=git-app@example.com
# Optional: groups for environments (the stray uncommented 'git' would be
# parsed as a variable under [git:vars], so it is commented out here)
#[dev:children]
#pdp-portal
#git
#[prod:children]
#confluence
#pdp-portal

81
playbook/playbook.yml Normal file

@@ -0,0 +1,81 @@
---
- name: Enterprise auto-upgrade for SLES and RHEL
  hosts: all
  gather_facts: false
  become: yes
  serial: 5
  vars_files:
    - group_vars/all.yml
    - group_vars/vault.yml
  vars:
    target_clm_version: ""   # can be overridden at invocation
    debug_mode: false        # can be overridden at invocation
    skip_smoke_tests: false
    skip_compliance: false
    skip_self_healing: false
    skip_vmware_snapshot: false
    skip_suma_api: false
    skip_post_upgrade: false
  pre_tasks:
    - name: Gather only network and hardware facts
      setup:
        gather_subset:
          - network
          - hardware
      tags: always
    - name: Open ServiceNow change (optional)
      import_role:
        name: servicenow_tickets
      tags: snow
    - name: "Preflight check: verify disk space, reachability, channel, snapshots"
      import_role:
        name: preflight_check
      tags: preflight
    - name: Set target CLM version if passed in
      set_fact:
        target_clm_version: "{{ target_clm_version | default('') }}"
      tags: always
    - name: "Debug: show all relevant variables and facts"
      debug:
        msg:
          inventory_hostname: "{{ inventory_hostname }}"
          ansible_os_family: "{{ ansible_facts['os_family'] }}"
          ansible_distribution: "{{ ansible_facts['distribution'] }}"
          ansible_distribution_version: "{{ ansible_facts['distribution_version'] }}"
          target_clm_version: "{{ target_clm_version }}"
          rollback: "{{ rollback }}"
          mail_to: "{{ mail_to }}"
          vcenter_hostname: "{{ vcenter_hostname }}"
          suma_api_url: "{{ suma_api_url }}"
      when: debug_mode | bool
      tags: debug
    - name: Create VMware snapshot before upgrade (optional)
      import_role:
        name: vmware_snapshot
      when: not skip_vmware_snapshot
      tags: snapshot
    - name: Assign system to the desired CLM channel via SUSE Manager API (optional)
      import_role:
        name: suma_api_assign_clm
      when: target_clm_version != "" and not skip_suma_api
      tags: suma
  roles:
    - role: common
      tags: common
    - role: rhel_upgrade
      when: ansible_facts['os_family'] == "RedHat"
      tags: rhel
    - role: sles_upgrade
      when: ansible_facts['os_family'] == "Suse"
      tags: sles
    - role: post_upgrade
      when: not skip_post_upgrade
      tags: post


@@ -0,0 +1,4 @@
collections:
  - name: community.vmware
  - name: servicenow.servicenow
  - name: community.general


@@ -0,0 +1,160 @@
---
- name: Check OS type and version
  debug:
    msg: "OS: {{ ansible_facts['os_family'] }} Version: {{ ansible_facts['distribution_version'] }}"
- name: Create log directory
  file:
    path: "{{ log_dir }}"
    state: directory
    mode: '0755'
  register: logdir_result
  ignore_errors: true
- name: Abort if the log directory cannot be created
  fail:
    msg: "Log directory could not be created: {{ logdir_result.msg | default('Unknown error') }}"
  when: logdir_result is failed
- name: Configure mailx (sender address)
  lineinfile:
    path: /etc/mail.rc
    line: "set from=auto-upgrade@{{ inventory_hostname }}"
    create: yes
    state: present
  become: true
  register: mailx_from_result
  ignore_errors: true
- name: Log errors from mailx configuration (sender)
  copy:
    content: "mailx configuration error: {{ mailx_from_result.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/mailx_error_{{ inventory_hostname }}.log"
  when: mailx_from_result is failed
- name: Configure mailx for an external SMTP server (optional)
  blockinfile:
    path: /etc/mail.rc
    block: |
      set smtp=smtp://{{ mail_smtp_host }}:{{ mail_smtp_port }}
      set smtp-auth=login
      set smtp-auth-user={{ mail_smtp_user }}
      set smtp-auth-password={{ mail_smtp_pass }}
      set ssl-verify=ignore
      set nss-config-dir=/etc/pki/nssdb
  when: mail_smtp_host is defined and mail_smtp_user is defined and mail_smtp_pass is defined
  become: true
  register: mailx_smtp_result
  ignore_errors: true
- name: Log errors from mailx configuration (SMTP)
  copy:
    content: "mailx SMTP configuration error: {{ mailx_smtp_result.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/mailx_error_{{ inventory_hostname }}.log"
  when: mailx_smtp_result is failed
- name: Send failsafe mail to app_mail and host_email on error
  mail:
    host: "localhost"
    port: 25
    to: |
      {{ app_mail | default('') }}{{ ',' if app_mail is defined and app_mail != '' else '' }}{{ host_email | default(mail_to) }}
    subject: "[FAILSAFE] Error during patch/upgrade on {{ inventory_hostname }}"
    body: |
      An error occurred during patch/upgrade on {{ inventory_hostname }} (FQDN: {{ ansible_fqdn }}).
      See log directory: {{ log_dir }}
      Time: {{ ansible_date_time.iso8601 }}
  when: (ansible_failed_result is defined and ansible_failed_result is not none) or (rollback is defined and rollback)
  ignore_errors: true
- name: Extract log summary for the admin mail
  shell: |
    tail -n 20 {{ log_dir }}/rhel_upgrade_check.log 2>/dev/null; tail -n 20 {{ log_dir }}/sles_upgrade_check.log 2>/dev/null; tail -n 20 {{ log_dir }}/rhel_upgrade_error_{{ inventory_hostname }}.log 2>/dev/null; tail -n 20 {{ log_dir }}/sles_upgrade_error_{{ inventory_hostname }}.log 2>/dev/null
  register: log_summary
  changed_when: false
  ignore_errors: true
- name: Build dynamic list of log attachments
  set_fact:
    log_attachments: >-
      {{
        [
          log_dir + '/rhel_upgrade_check.log',
          log_dir + '/sles_upgrade_check.log',
          log_dir + '/rhel_upgrade_error_' + inventory_hostname + '.log',
          log_dir + '/sles_upgrade_error_' + inventory_hostname + '.log',
          log_dir + '/snapshot_error_' + inventory_hostname + '.log',
          log_dir + '/suma_api_error_' + inventory_hostname + '.log',
          log_dir + '/mailx_error_' + inventory_hostname + '.log',
          log_dir + '/package_report_' + inventory_hostname + '.log'
        ] | select('exists') | list
      }}
  # Note: the 'exists' test evaluates paths on the control node.
- name: Send log to Linux admins (always, with attachments and summary)
  mail:
    host: "localhost"
    port: 25
    to: "{{ linux_admins_mail }}"
    subject: "[LOG] Patch/upgrade log for {{ inventory_hostname }} at {{ ansible_date_time.iso8601 }}"
    body: |
      Patch/upgrade log for {{ inventory_hostname }} (FQDN: {{ ansible_fqdn }})
      Time: {{ ansible_date_time.iso8601 }}
      ---
      Log summary:
      {{ log_summary.stdout | default('No log data found.') }}
      ---
      See attachments for details.
    attach: "{{ log_attachments }}"
  ignore_errors: true
- name: Slack notification on critical errors (optional)
  slack:
    token: "{{ slack_token | default('xoxb-...') }}"
    msg: "[CRITICAL] Error during patch/upgrade on {{ inventory_hostname }}: {{ ansible_failed_result.msg | default('Unknown error') }}"
    channel: "#linux-admins"
  when: slack_enabled | default(false) and (ansible_failed_result is defined and ansible_failed_result is not none)
  ignore_errors: true
- name: Document the change in the CHANGELOG
  lineinfile:
    path: "{{ playbook_dir }}/../docs/CHANGELOG.md"
    line: "{{ ansible_date_time.iso8601 }}: Patch/upgrade performed on {{ inventory_hostname }} (FQDN: {{ ansible_fqdn }}). Result: {{ 'OK' if (ansible_failed_result is not defined or ansible_failed_result is none) else 'ERROR' }}"
    create: yes
  delegate_to: localhost
  ignore_errors: true
- name: Record installed package versions (RHEL)
  shell: rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'
  register: rpm_list
  when: ansible_facts['os_family'] == 'RedHat'
  changed_when: false
  ignore_errors: true
- name: Record installed package versions (SLES)
  shell: rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n'
  register: rpm_list
  when: ansible_facts['os_family'] == 'Suse'
  changed_when: false
  ignore_errors: true
- name: Write package report to the log
  copy:
    content: "{{ rpm_list.stdout | default('No package data found.') }}"
    dest: "{{ log_dir }}/package_report_{{ inventory_hostname }}.log"
  when: rpm_list is defined
  ignore_errors: true
- name: Send package report to Linux admins
  mail:
    host: "localhost"
    port: 25
    to: "{{ linux_admins_mail }}"
    subject: "[REPORT] Package versions after patch for {{ inventory_hostname }} at {{ ansible_date_time.iso8601 }}"
    body: |
      Package report for {{ inventory_hostname }} (FQDN: {{ ansible_fqdn }})
      Time: {{ ansible_date_time.iso8601 }}
      See attachment for details.
    attach:
      - "{{ log_dir }}/package_report_{{ inventory_hostname }}.log"
  when: rpm_list is defined
  ignore_errors: true
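The failsafe mail's `to:` field is assembled from `app_mail` and `host_email`, falling back to `mail_to`. A minimal Python sketch of that recipient logic (function name is illustrative, the behavior mirrors the Jinja2 expression):

```python
from typing import Optional

def failsafe_recipients(app_mail: Optional[str], host_email: Optional[str],
                        mail_to: str) -> str:
    """app_mail first (when set and non-empty), then host_email,
    falling back to the global mail_to address."""
    primary = host_email if host_email else mail_to
    if app_mail:
        return f"{app_mail},{primary}"
    return primary
```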


@@ -0,0 +1,29 @@
---
- name: "Compliance check: run OpenSCAP scan (if installed)"
  shell: oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard --results {{ log_dir }}/oscap_result_{{ inventory_hostname }}.xml /usr/share/xml/scap/ssg/content/ssg-$(lsb_release -si | tr '[:upper:]' '[:lower:]')-ds.xml
  register: oscap_result
  ignore_errors: true
  changed_when: false
- name: "Compliance check: run Lynis scan (if installed)"
  shell: lynis audit system --quiet --logfile {{ log_dir }}/lynis_{{ inventory_hostname }}.log
  register: lynis_result
  ignore_errors: true
  changed_when: false
- name: Send compliance report to Linux admins
  mail:
    host: "localhost"
    port: 25
    to: "{{ linux_admins_mail }}"
    subject: "[COMPLIANCE] Report for {{ inventory_hostname }} at {{ ansible_date_time.iso8601 }}"
    body: |
      Compliance report for {{ inventory_hostname }} (FQDN: {{ ansible_fqdn }})
      Time: {{ ansible_date_time.iso8601 }}
      OpenSCAP exit: {{ oscap_result.rc | default('N/A') }}
      Lynis exit: {{ lynis_result.rc | default('N/A') }}
      See attachments for details.
    attach:
      - "{{ log_dir }}/oscap_result_{{ inventory_hostname }}.xml"
      - "{{ log_dir }}/lynis_{{ inventory_hostname }}.log"
  ignore_errors: true


@@ -0,0 +1,39 @@
---
- name: Reboot after upgrade (optional)
  reboot:
    msg: "Reboot after auto-upgrade"
    pre_reboot_delay: 60
  when: reboot_after_upgrade
  tags: reboot
- name: "Health check: verify critical services are running"
  service_facts:
  tags: health
- name: Check status of critical services
  assert:
    that:
      - "(services[item].state == 'running') or (services[item].state == 'started')"
    fail_msg: "Critical service {{ item }} is not running!"
    success_msg: "Service {{ item }} is running."
  loop: "{{ critical_services | default(['sshd','cron']) }}"
  when: item in services
  tags: health
- name: Run automated smoke tests (optional)
  import_role:
    name: smoke_tests
  when: not skip_smoke_tests
  tags: smoke
- name: Run self-healing/remediation (optional)
  import_role:
    name: self_healing
  when: not skip_self_healing
  tags: selfheal
- name: Run compliance checks (optional)
  import_role:
    name: compliance_check
  when: not skip_compliance
  tags: compliance
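One subtlety of the health check: `service_facts` keys services as the init system reports them, which on systemd hosts is usually `sshd.service` rather than `sshd`, so a bare-name membership test can silently skip the check. A hedged sketch of a lookup that tolerates both spellings (function name is illustrative):

```python
from typing import Optional

def service_state(services: dict, name: str) -> Optional[str]:
    """Look up a service in a service_facts-style dict, trying the
    bare name first and then the systemd '<name>.service' spelling."""
    for key in (name, f"{name}.service"):
        if key in services:
            return services[key].get("state")
    return None

# Example facts as service_facts might report them.
facts = {"sshd.service": {"state": "running"}, "cron": {"state": "stopped"}}
```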


@@ -0,0 +1,131 @@
---
- name: Determine whether the current time lies within the maintenance window
  set_fact:
    now: "{{ lookup('pipe', 'date +%H:%M') }}"
    window_start: "{{ maintenance_window_start }}"
    window_end: "{{ maintenance_window_end }}"
  tags: preflight
- name: Maintenance window check (abort when outside)
  fail:
    msg: "Current time {{ now }} is outside the maintenance window ({{ window_start }} - {{ window_end }}). Aborting the upgrade!"
  when: >-
    (
      (window_start < window_end and (now < window_start or now > window_end))
      or
      (window_start > window_end and (now < window_start and now > window_end))
    )
  tags: preflight
- name: Determine free space on / (at least 5 GB recommended)
  # stat does not report free space; read it from the gathered mount facts
  # (the play gathers the 'hardware' subset, which includes mounts).
  set_fact:
    root_size_available: "{{ (ansible_facts.mounts | selectattr('mount', 'equalto', '/') | list | first | default({})).size_available | default(0) }}"
  tags: preflight
- name: Warn when free space is low
  assert:
    that:
      - root_size_available | int > 5368709120
    fail_msg: "Low free space on /: {{ root_size_available | int | human_readable }} (at least 5 GB recommended)"
    success_msg: "Sufficient free space on /: {{ root_size_available | int | human_readable }}"
  tags: preflight
- name: Check reachability of SUSE Manager
  uri:
    url: "{{ suma_api_url }}"
    method: GET
    validate_certs: no
    timeout: 10
  register: suma_reachable
  ignore_errors: true
  retries: 3
  delay: 5
  tags: preflight
- name: Warn when SUSE Manager is not reachable
  assert:
    that:
      - suma_reachable.status is defined and suma_reachable.status == 200
    fail_msg: "SUSE Manager API not reachable!"
    success_msg: "SUSE Manager API reachable."
  tags: preflight
- name: Check whether the VMware snapshot module is available
  shell: "python3 -c 'import pyVmomi'"
  register: pyvmomi_check
  ignore_errors: true
  changed_when: false
  tags: preflight
- name: Warn when pyVmomi is not installed
  assert:
    that:
      - pyvmomi_check.rc == 0
    fail_msg: "pyVmomi (VMware module) is not installed!"
    success_msg: "pyVmomi is installed."
  tags: preflight
- name: Log in to the SUSE Manager API
  # No async/poll here: with poll 0 the registered result would only hold a
  # job handle, and the next task needs suma_api_login.json.result.
  uri:
    url: "{{ suma_api_url }}"
    method: POST
    body_format: json
    headers:
      Content-Type: application/json
    body: |
      {
        "method": "auth.login",
        "params": ["{{ suma_api_user }}", "{{ suma_api_pass }}"],
        "id": 1
      }
    validate_certs: no
    timeout: 20
  register: suma_api_login
  ignore_errors: true
  retries: 3
  delay: 10
  tags: preflight
- name: Fetch channel details for the target CLM version
  uri:
    url: "{{ suma_api_url }}"
    method: POST
    body_format: json
    headers:
      Content-Type: application/json
    body: |
      {
        "method": "channel.software.getDetails",
        "params": ["{{ suma_api_login.json.result }}", "{{ target_clm_version }}"],
        "id": 2
      }
    validate_certs: no
    timeout: 20
  register: suma_channel_details
  ignore_errors: true
  retries: 3
  delay: 10
  tags: preflight
- name: Check channel sync status
  assert:
    that:
      - suma_channel_details.json.result.last_sync is defined
    fail_msg: "Channel {{ target_clm_version }} is not synchronized!"
    success_msg: "Channel {{ target_clm_version }} was last synchronized at {{ suma_channel_details.json.result.last_sync }}."
  tags: preflight
- name: Slack notification on critical errors (example)
  slack:
    token: "{{ slack_token | default('xoxb-...') }}"
    msg: "[CRITICAL] Error during preflight check on {{ inventory_hostname }}: {{ ansible_failed_result.msg | default('Unknown error') }}"
    channel: "#linux-admins"
  when: slack_enabled | default(false) and (ansible_failed_result is defined and ansible_failed_result is not none)
  ignore_errors: true
  tags: preflight
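The maintenance-window check compares zero-padded `HH:MM` strings lexicographically, which also covers windows that wrap past midnight (such as the default 22:00-04:00). The same decision as a small Python sketch, assuming zero-padded 24-hour times:

```python
def in_maintenance_window(now: str, start: str, end: str) -> bool:
    """True when 'now' (zero-padded HH:MM) lies inside the window.
    A start later than the end means the window wraps past midnight."""
    if start < end:                       # same-day window, e.g. 01:00-05:00
        return start <= now <= end
    return now >= start or now <= end     # overnight window, e.g. 22:00-04:00
```

The playbook's `fail` condition is exactly the negation of this predicate.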


@@ -0,0 +1,81 @@
---
- name: Check whether dnf is available (RHEL 8+)
  stat:
    path: /usr/bin/dnf
  register: dnf_exists
- name: Pre-upgrade check (yum/dnf)
  shell: |
    if [ -x /usr/bin/dnf ]; then
      dnf check-update || true
    else
      yum check-update || true
    fi
  register: rhel_check
  changed_when: false
- name: Record kernel version before upgrade
  shell: uname -r
  register: kernel_before
  changed_when: false
- name: Run upgrade (dnf/yum, full)
  package:
    name: "*"
    state: latest
  register: upgrade_result
  when: not upgrade_dry_run and not upgrade_security_only
  ignore_errors: true
- name: Run upgrade (dnf, security updates only)
  dnf:
    name: "*"
    state: latest
    security: yes
  register: upgrade_result
  when: not upgrade_dry_run and upgrade_security_only and dnf_exists.stat.exists
  ignore_errors: true
- name: Run upgrade (yum-plugin-security fallback)
  command: yum -y --security update
  register: upgrade_result
  when: not upgrade_dry_run and upgrade_security_only and not dnf_exists.stat.exists
  ignore_errors: true
- name: Log upgrade errors (RHEL)
  copy:
    content: "Upgrade error: {{ upgrade_result.stderr | default(upgrade_result.msg | default('Unknown error')) }}"
    dest: "{{ log_dir }}/rhel_upgrade_error_{{ inventory_hostname }}.log"
  when: upgrade_result is failed
- name: Set rollback flag if the upgrade failed
  set_fact:
    rollback: true
  when: upgrade_result is failed
- name: Abort the playbook if the upgrade failed
  fail:
    msg: "Upgrade failed, rollback recommended! See log: {{ log_dir }}/rhel_upgrade_error_{{ inventory_hostname }}.log"
  when: upgrade_result is failed
- name: Log upgrade output (RHEL)
  copy:
    content: "{{ rhel_check.stdout }}"
    dest: "{{ log_dir }}/rhel_upgrade_check.log"
  when: upgrade_result is not failed
- name: Record kernel version after upgrade
  shell: uname -r
  register: kernel_after
  changed_when: false
  when: upgrade_result is not failed
- name: Detect kernel upgrade and require a reboot
  set_fact:
    reboot_after_upgrade: true
  when: upgrade_result is not failed and (kernel_before.stdout != kernel_after.stdout)
- name: Note on EUS/Leapp (RHEL 7/8 only)
  debug:
    msg: "For major upgrades (e.g. 7->8) Red Hat recommends the 'leapp' tool or EUS strategies. See https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/upgrading_from_rhel_7_to_rhel_8/index.html"
  when: ansible_facts['distribution_major_version']|int >= 7
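The reboot decision in this role reduces to a string comparison of `uname -r` before and after a successful upgrade; as a minimal sketch of that rule:

```python
def reboot_required(kernel_before: str, kernel_after: str,
                    upgrade_failed: bool) -> bool:
    """A kernel change after a successful upgrade forces
    reboot_after_upgrade to true."""
    return not upgrade_failed and kernel_before != kernel_after
```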


@@ -0,0 +1,56 @@
---
- name: "Self-healing: restart critical services that are not running"
  service:
    name: "{{ item }}"
    state: restarted
  register: restart_result
  loop: "{{ critical_services | default(['sshd','cron']) }}"
  when: item in services and (services[item].state != 'running' and services[item].state != 'started')
  ignore_errors: true
- name: Refresh service facts after the restart
  service_facts:
- name: Log self-healing results
  copy:
    content: |
      Self-healing report for {{ inventory_hostname }}
      Time: {{ ansible_date_time.iso8601 }}
      Critical services: {{ critical_services | default(['sshd','cron']) }}
      Restart results: {{ restart_result.results | default(restart_result) | to_nice_json }}
      Service status after restart:
      {% for item in critical_services | default(['sshd','cron']) %}
      - {{ item }}: {{ services[item].state | default('unknown') }}
      {% endfor %}
    dest: "{{ log_dir }}/self_healing_{{ inventory_hostname }}.log"
  ignore_errors: true
- name: Escalate by mail when a restart failed
  mail:
    host: "localhost"
    port: 25
    to: "{{ linux_admins_mail }}"
    subject: "[SELF-HEALING-FAIL] Service could not be restarted on {{ inventory_hostname }}"
    body: |
      Self-healing could not successfully restart one or more critical services!
      See log: {{ log_dir }}/self_healing_{{ inventory_hostname }}.log
      Time: {{ ansible_date_time.iso8601 }}
  # Check for actually failed loop items; the service module reports
  # state 'started', so comparing against 'running' would always match.
  when: >-
    restart_result is defined and (
      restart_result.results | default([restart_result])
      | selectattr('failed', 'defined') | selectattr('failed')
      | list | length > 0
    )
  ignore_errors: true
- name: "Self-healing: clean /tmp, /var/tmp, /var/log/alt when disk space is low"
  shell: rm -rf /tmp/* /var/tmp/* /var/log/alt/*
  # Key off the '/' mount; the first entry of ansible_mounts is not
  # necessarily the root filesystem.
  when: (ansible_facts.mounts | selectattr('mount', 'equalto', '/') | list | first | default({})).size_available | default(0) | int < 10737418240  # < 10 GB free
  ignore_errors: true
- name: "Self-healing: restart the network service on network problems"
  service:
    name: network
    state: restarted
  when: ansible_default_ipv4 is not defined or ansible_default_ipv4['address'] is not defined
  ignore_errors: true
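The low-disk cleanup trigger needs the free space of the `/` mount specifically, selected from the gathered mounts list rather than taken from an arbitrary entry. A hedged Python sketch of that selection, with the same 10 GB threshold (function names are illustrative):

```python
def root_free_bytes(mounts: list) -> int:
    """Return size_available for the '/' mount, 0 when absent."""
    for m in mounts:
        if m.get("mount") == "/":
            return int(m.get("size_available", 0))
    return 0

def needs_tmp_cleanup(mounts: list, threshold: int = 10 * 1024**3) -> bool:
    """True when the root filesystem has less than 'threshold' bytes free."""
    return root_free_bytes(mounts) < threshold
```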


@@ -0,0 +1,54 @@
---
- name: Erstelle/aktualisiere Change in ServiceNow (vor Patch)
community.general.snow_record:
instance: "{{ servicenow_instance }}"
username: "{{ servicenow_user }}"
password: "{{ servicenow_pass }}"
state: present
table: change_request
data:
short_description: "OS Patch/Upgrade {{ inventory_hostname }}"
description: "Automatisiertes Upgrade via Ansible"
category: "Software"
risk: "2"
impact: "2"
priority: "3"
work_start: "{{ ansible_date_time.iso8601 }}"
requested_by: "{{ servicenow_requested_by | default('ansible_automation') }}"
register: snow_change
ignore_errors: true
- name: Dokumentiere Change-Nummer
debug:
msg: "ServiceNow Change: {{ snow_change.record.number | default('N/A') }}"
- name: Erstelle Incident bei Fehlern (optional)
community.general.snow_record:
instance: "{{ servicenow_instance }}"
username: "{{ servicenow_user }}"
password: "{{ servicenow_pass }}"
state: present
table: incident
data:
short_description: "Patch/Upgrade FAILED auf {{ inventory_hostname }}"
description: "Siehe Logs unter {{ log_dir }}. Zeitpunkt: {{ ansible_date_time.iso8601 }}"
severity: "2"
urgency: "2"
impact: "2"
when: ansible_failed_result is defined and ansible_failed_result is not none
ignore_errors: true
- name: Aktualisiere Change (Abschluss)
community.general.snow_record:
instance: "{{ servicenow_instance }}"
username: "{{ servicenow_user }}"
password: "{{ servicenow_pass }}"
state: present
table: change_request
number: "{{ snow_change.record.number | default(omit) }}"
data:
work_end: "{{ ansible_date_time.iso8601 }}"
      close_notes: "Upgrade completed on {{ inventory_hostname }}"
state: "3"
when: snow_change is defined and snow_change.record is defined
ignore_errors: true

---
- name: Pre-upgrade check (zypper)
shell: zypper list-updates || true
register: sles_check
changed_when: false
- name: Record the kernel version before the upgrade
shell: uname -r
register: kernel_before
changed_when: false
- name: Run the upgrade (zypper, full)
zypper:
name: '*'
state: latest
extra_args: '--non-interactive'
register: upgrade_result
when: not upgrade_dry_run and not upgrade_security_only
ignore_errors: true
- name: Run the upgrade (zypper, security patches only)
command: zypper --non-interactive patch --category security
register: upgrade_result
when: not upgrade_dry_run and upgrade_security_only
ignore_errors: true
- name: Log upgrade errors (SLES)
  copy:
    content: "Upgrade error: {{ upgrade_result.stderr | default(upgrade_result.msg | default('Unknown error')) }}"
dest: "{{ log_dir }}/sles_upgrade_error_{{ inventory_hostname }}.log"
when: upgrade_result is failed
- name: Set the rollback flag if the upgrade fails
set_fact:
rollback: true
when: upgrade_result is failed
- name: Abort the playbook if the upgrade fails
  fail:
    msg: "Upgrade failed, rollback is recommended! See log: {{ log_dir }}/sles_upgrade_error_{{ inventory_hostname }}.log"
when: upgrade_result is failed
- name: Log the upgrade check output (SLES)
copy:
content: "{{ sles_check.stdout }}"
dest: "{{ log_dir }}/sles_upgrade_check.log"
when: upgrade_result is not failed
- name: Record the kernel version after the upgrade
  shell: uname -r
  register: kernel_after
  changed_when: false
  when: upgrade_result is not failed
# uname -r still reports the old kernel until the host reboots, so comparing it
# to kernel_before cannot detect a kernel upgrade; ask zypper instead
# (exit code 102 means a reboot is required)
- name: Check whether a reboot is required (zypper needs-rebooting)
  command: zypper needs-rebooting
  register: zypper_reboot_check
  changed_when: false
  failed_when: false
  when: upgrade_result is not failed
- name: Mark a reboot as required after a kernel upgrade
  set_fact:
    reboot_after_upgrade: true
  when: upgrade_result is not failed and zypper_reboot_check.rc == 102
- name: Note on the SLE upgrade tooling
  debug:
    msg: "For major upgrades (e.g. SLES 12->15) SUSE recommends the tools 'SUSEConnect' and 'zypper migration'. See https://documentation.suse.com/sles/15-SP4/html/SLES-all/cha-upgrade.html"
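The debug note above only points operators at the migration tooling. For completeness, a hedged sketch of how such a major migration could be wired in as an opt-in task; `run_major_migration` is a hypothetical variable (disabled by default), and the task assumes the host is already registered via SUSEConnect:

```yaml
# Sketch only: 'zypper migration' is interactive by default and should be run
# deliberately; gate it behind an explicit, hypothetical opt-in variable.
- name: Run a major version migration (opt-in, zypper migration)
  command: zypper migration
  register: migration_result
  when: run_major_migration | default(false)
  ignore_errors: true
```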

---
- name: Check whether an HTTP service is installed (Apache/Nginx)
stat:
path: /usr/sbin/httpd
register: apache_check
ignore_errors: true
- name: Check whether Nginx is installed
stat:
path: /usr/sbin/nginx
register: nginx_check
ignore_errors: true
- name: "Smoke test: check the HTTP endpoint (only if a web server is installed)"
uri:
url: "{{ smoke_test_url | default('http://localhost') }}"
status_code: 200
return_content: no
register: http_result
ignore_errors: true
when: apache_check.stat.exists or nginx_check.stat.exists
- name: "Smoke test: check an open port (optional)"
wait_for:
port: "{{ smoke_test_port | default(80) }}"
host: "{{ smoke_test_host | default('localhost') }}"
timeout: 5
register: port_result
ignore_errors: true
- name: Check whether MySQL/MariaDB is installed
stat:
path: /usr/bin/mysql
register: mysql_check
ignore_errors: true
- name: "Smoke test: check the database connection (only if MySQL is installed)"
shell: "echo 'select 1' | mysql -h {{ smoke_test_db_host | default('localhost') }} -u {{ smoke_test_db_user | default('root') }} --password={{ smoke_test_db_pass | default('') }} {{ smoke_test_db_name | default('') }}"
register: db_result
ignore_errors: true
when: mysql_check.stat.exists and smoke_test_db_host is defined
- name: Check whether Oracle is installed
stat:
path: /u01/app/oracle/product
register: oracle_check
ignore_errors: true
- name: "Oracle DB: find all Oracle SIDs (only if Oracle is installed)"
  shell: |
    # pmon processes are named ora_pmon_<SID>; use full args to avoid truncation
    ps -eo args= | awk '$1 ~ /^ora_pmon_/ {print $1}' | sed 's/^ora_pmon_//'
register: oracle_sids
changed_when: false
ignore_errors: true
when: oracle_check.stat.exists
- name: "Oracle DB: find all Oracle listeners (only if Oracle is installed)"
  shell: |
    # the listener name is the first argument after the tnslsnr binary,
    # not the last field of the command line
    ps -eo args= | awk '$1 ~ /tnslsnr$/ {print $2}'
register: oracle_listeners
changed_when: false
ignore_errors: true
when: oracle_check.stat.exists
- name: "Oracle DB: check every discovered SID (only if Oracle is installed)"
  shell: |
    export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
    export PATH=$ORACLE_HOME/bin:$PATH
    export ORACLE_SID={{ item }}
    sqlplus -S / as sysdba <<EOF
    select 'OK' as status from dual;
    exit;
    EOF
  register: oracle_sid_check
  loop: "{{ oracle_sids.stdout_lines }}"
ignore_errors: true
when: oracle_check.stat.exists and oracle_sids.stdout_lines | length > 0
- name: "Oracle DB: check every discovered listener (only if Oracle is installed)"
  shell: |
    export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
    export PATH=$ORACLE_HOME/bin:$PATH
    lsnrctl status {{ item }}
  register: oracle_listener_check
  loop: "{{ oracle_listeners.stdout_lines }}"
ignore_errors: true
when: oracle_check.stat.exists and oracle_listeners.stdout_lines | length > 0
- name: "Oracle DB: log the Oracle check results (only if Oracle is installed)"
  copy:
    content: |
      Oracle DB check for {{ inventory_hostname }}
      Time: {{ ansible_date_time.iso8601 }}
      Discovered SIDs: {{ oracle_sids.stdout_lines | default([]) }}
      Discovered listeners: {{ oracle_listeners.stdout_lines | default([]) }}
      SID check results: {{ oracle_sid_check.results | default([]) | to_nice_json }}
      Listener check results: {{ oracle_listener_check.results | default([]) | to_nice_json }}
dest: "{{ log_dir }}/oracle_check_{{ inventory_hostname }}.log"
ignore_errors: true
when: oracle_check.stat.exists
- name: Summarize the smoke test results
  debug:
    msg:
      - "HTTP test: {{ http_result.status | default('NOT INSTALLED') }}"
      - "Port test: {{ port_result.state | default('FAILED') }}"
      - "DB test: {{ db_result.rc | default('NOT INSTALLED') }}"
      - "Oracle SIDs found: {{ (oracle_sids.stdout_lines | default([]) | length) if oracle_check.stat.exists else 'NOT INSTALLED' }}"
      - "Oracle listeners found: {{ (oracle_listeners.stdout_lines | default([]) | length) if oracle_check.stat.exists else 'NOT INSTALLED' }}"
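Listener discovery depends on parsing `ps` output, which is easy to get wrong. The name extraction can be checked in isolation against a sample process line (the path below is illustrative):

```shell
# Given a typical tnslsnr process line, the listener name is the first
# argument after the binary path, not the last field.
sample='/u01/app/oracle/product/19.0.0/dbhome_1/bin/tnslsnr LISTENER -inherit'
echo "$sample" | awk '$1 ~ /tnslsnr$/ {print $2}'
# prints: LISTENER
```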

---
# assigning the variables to themselves was a no-op; fail fast instead if the
# required API credentials are missing
- name: Ensure the SUSE Manager API variables are set
  assert:
    that:
      - suma_api_url is defined
      - suma_api_user is defined
      - suma_api_pass is defined
    fail_msg: "suma_api_url, suma_api_user and suma_api_pass must be defined"
- name: Log in to the SUSE Manager API
uri:
url: "{{ suma_api_url }}"
method: POST
body_format: json
headers:
Content-Type: application/json
body: |
{
"method": "auth.login",
"params": ["{{ suma_api_user }}", "{{ suma_api_pass }}"],
"id": 1
}
validate_certs: no
register: suma_api_login
ignore_errors: true
- name: Log API login errors
  copy:
    content: "API login error: {{ suma_api_login.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_api_login is failed
- name: Abort the playbook if the API login fails
  fail:
    msg: "SUSE Manager API login failed! See log: {{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_api_login is failed
- name: Store the session ID
set_fact:
suma_session: "{{ suma_api_login.json.result }}"
- name: Look up the system ID by hostname
uri:
url: "{{ suma_api_url }}"
method: POST
body_format: json
headers:
Content-Type: application/json
body: |
{
"method": "system.getId",
"params": ["{{ suma_session }}", "{{ inventory_hostname }}"],
"id": 2
}
validate_certs: no
register: suma_system_id
ignore_errors: true
- name: Log system ID lookup errors
  copy:
    content: "System ID error: {{ suma_system_id.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_system_id is failed
- name: Abort the playbook if the system ID is not found
  fail:
    msg: "System ID not found! See log: {{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_system_id is failed
- name: Look up the channel for the target CLM version
uri:
url: "{{ suma_api_url }}"
method: POST
body_format: json
headers:
Content-Type: application/json
body: |
{
"method": "channel.software.listAllChannels",
"params": ["{{ suma_session }}"],
"id": 3
}
validate_certs: no
register: suma_channels
ignore_errors: true
- name: Log channel lookup errors
  copy:
    content: "Channel lookup error: {{ suma_channels.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_channels is failed
- name: Abort the playbook if the channel lookup fails
  fail:
    msg: "Channel lookup failed! See log: {{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_channels is failed
- name: Find the channel label for the target CLM version
set_fact:
target_channel_label: "{{ item.label }}"
loop: "{{ suma_channels.json.result }}"
when: item.name is search(target_clm_version)
loop_control:
label: "{{ item.label }}"
- name: Abort if no matching channel was found
  fail:
    msg: "No matching CLM channel found for '{{ target_clm_version }}'!"
when: target_channel_label is not defined
- name: Assign the system to the channel
uri:
url: "{{ suma_api_url }}"
method: POST
body_format: json
headers:
Content-Type: application/json
body: |
{
"method": "system.setBaseChannel",
"params": ["{{ suma_session }}", {{ suma_system_id.json.result[0].id }}, "{{ target_channel_label }}"],
"id": 4
}
validate_certs: no
register: suma_assign_result
ignore_errors: true
- name: Log channel assignment errors
  copy:
    content: "Channel assignment error: {{ suma_assign_result.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_assign_result is failed
- name: Abort the playbook if the channel assignment fails
  fail:
    msg: "Channel assignment failed! See log: {{ log_dir }}/suma_api_error_{{ inventory_hostname }}.log"
  when: suma_assign_result is failed
- name: Log out of the SUSE Manager API
uri:
url: "{{ suma_api_url }}"
method: POST
body_format: json
headers:
Content-Type: application/json
body: |
{
"method": "auth.logout",
"params": ["{{ suma_session }}"],
"id": 5
}
validate_certs: no
when: suma_session is defined
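Read end to end, this role drives a five-step JSON conversation with the SUSE Manager API. A minimal sketch of the payloads it posts, with method names taken from the tasks above; the session token and system id are placeholders that the role fills in at runtime:

```python
def rpc_payload(method, params, call_id):
    """Shape of the JSON body each uri task posts to the SUSE Manager API."""
    return {"method": method, "params": list(params), "id": call_id}

def channel_assignment_calls(user, password, host, channel_label):
    """The ordered calls: login, resolve system, list channels, assign, logout."""
    session, system_id = "<session>", "<system-id>"  # filled in at runtime
    return [
        rpc_payload("auth.login", [user, password], 1),
        rpc_payload("system.getId", [session, host], 2),
        rpc_payload("channel.software.listAllChannels", [session], 3),
        rpc_payload("system.setBaseChannel", [session, system_id, channel_label], 4),
        rpc_payload("auth.logout", [session], 5),
    ]

calls = channel_assignment_calls("admin", "secret", "web01.example.com", "clm-sles15-sp6")
print([c["method"] for c in calls])  # the five method names in call order
```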

---
# compute the name once so create, revert and delete all refer to the same snapshot
- name: Compute the snapshot name for this run
  set_fact:
    upgrade_snapshot_name: "pre-upgrade-{{ inventory_hostname }}-{{ ansible_date_time.iso8601_basic }}"
- name: Create a VMware snapshot before the upgrade
  vmware_guest_snapshot:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ vcenter_folder | default('/') }}"
    name: "{{ inventory_hostname }}"
    state: present
    snapshot_name: "{{ upgrade_snapshot_name }}"
    description: "Snapshot before automated upgrade"
    # vSphere does not allow a memory snapshot and a quiesced snapshot at the
    # same time; prefer quiescing the guest filesystem for a pre-patch snapshot
    memory_dump: no
    quiesce: yes
  delegate_to: localhost
  register: snapshot_result
  # retries only take effect together with an until condition
  until: snapshot_result is not failed
  retries: 3
  delay: 10
  # do not abort here, so the logging and rollback tasks below still run
  ignore_errors: true
- name: Log snapshot creation errors
  copy:
    content: "Snapshot error: {{ snapshot_result.msg | default('Unknown error') }}"
    dest: "{{ log_dir }}/snapshot_error_{{ inventory_hostname }}.log"
  when: snapshot_result is failed
- name: Set the rollback flag if the snapshot creation fails
  set_fact:
    rollback: true
  when: snapshot_result is failed
- name: Abort the playbook if the snapshot creation fails
  fail:
    msg: "Snapshot creation failed, the upgrade is aborted! See log: {{ log_dir }}/snapshot_error_{{ inventory_hostname }}.log"
  when: snapshot_result is failed
- name: "Rollback: revert the VM to the snapshot (only on failure and if enabled)"
  vmware_guest_snapshot:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ vcenter_folder | default('/') }}"
    name: "{{ inventory_hostname }}"
    state: revert
    snapshot_name: "{{ upgrade_snapshot_name }}"
  when: rollback is defined and rollback
  delegate_to: localhost
- name: Delete the VMware snapshot after a successful patch run (optional)
  vmware_guest_snapshot:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ vcenter_folder | default('/') }}"
    name: "{{ inventory_hostname }}"
    state: absent
    snapshot_name: "{{ upgrade_snapshot_name }}"
  delegate_to: localhost
  when: (upgrade_result is defined and upgrade_result is not failed) and (snapshot_cleanup | default(true))
  ignore_errors: true

plugin: servicenow.servicenow.now
instance: "{{ servicenow_instance }}"
username: "{{ servicenow_user }}"
password: "{{ servicenow_pass }}"
table: 'cmdb_ci_server'
fields:
- fqdn
- name
- u_app_group
- u_app_mail
- u_host_email
keyed_groups:
- key: u_app_group
prefix: ''
separator: ''
compose:
ansible_host: fqdn
app_mail: u_app_mail
host_email: u_host_email

scripts/install_collections.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")/.."
ansible-galaxy collection install -r playbook/requirements.yml --force

scripts/run_patch.sh Executable file
#!/usr/bin/env bash
set -euo pipefail
APP_GROUP=${1:-}
TARGET_CLM=${2:-}
EXTRA_VARS=${3:-}
if [[ -z "$APP_GROUP" ]]; then
echo "Usage: $0 <app-group> [target_clm_version] [extra_vars]" >&2
exit 1
fi
CMD=(ansible-playbook playbook/playbook.yml -l "$APP_GROUP" --ask-vault-pass)
if [[ -n "$TARGET_CLM" ]]; then
CMD+=( -e "target_clm_version=$TARGET_CLM" )
fi
if [[ -n "$EXTRA_VARS" ]]; then
CMD+=( -e "$EXTRA_VARS" )
fi
# Tags can be adjusted as needed, e.g. only preflight+upgrade
# CMD+=( --tags preflight,common,rhel,sles,post )
exec "${CMD[@]}"
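run_patch.sh builds its command line as a bash array so optional arguments are appended only when non-empty, which avoids passing stray empty-string arguments to ansible-playbook. The pattern in isolation, using `printf` as a stand-in for the real command:

```shell
# Append optional flags only when their value is set; expanding with
# "${CMD[@]}" keeps each element a single argument even if it contains spaces.
CMD=(printf '%s\n')
TARGET_CLM="15-SP6"
EXTRA_VARS=""
if [[ -n "$TARGET_CLM" ]]; then CMD+=( "target_clm_version=$TARGET_CLM" ); fi
if [[ -n "$EXTRA_VARS" ]]; then CMD+=( "$EXTRA_VARS" ); fi
"${CMD[@]}"
# prints: target_clm_version=15-SP6
```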