This commit is contained in:
2025-06-18 20:19:29 +02:00
Parent 024a3f04d9
Commit 18c556a790
15 changed files with 0 additions and 3742 deletions


@@ -1,77 +0,0 @@
# Critical Database Field Fixes Required
## Summary
Found 62 field name inconsistencies across 6 Python files that need to be fixed to match the database schema.
## Most Critical Issues
### 1. **Sessions Table Field References**
The sessions table has duplicate columns that are causing confusion:
| Database Column | Code References (Wrong) | Files Affected |
|----------------|------------------------|----------------|
| `hardware_id` | `device_id` | api_routes.py, session_routes.py, export_routes.py |
| `is_active` | `active` | api_routes.py, session_routes.py, export_routes.py, batch_routes.py |
| `started_at` | `login_time` | session_routes.py, export_routes.py |
| `last_heartbeat` | `last_activity` | session_routes.py, export_routes.py |
| `ended_at` | `logout_time` | api_routes.py, session_routes.py, export_routes.py |
| `started_at` | `start_time` | models.py |
### 2. **Device Registration Issues**
- `device_registrations` table uses `device_id` column (line 324 in api_routes.py)
- But sessions table uses `hardware_id`
- This creates a mismatch when joining tables
## Immediate Action Required
### Option 1: Fix Code (Recommended)
Update all Python files to use the correct column names from the database schema.
### Option 2: Add Compatibility Columns (Temporary)
```sql
-- Add missing columns to sessions for backward compatibility
ALTER TABLE sessions ADD COLUMN IF NOT EXISTS device_id VARCHAR(100);
UPDATE sessions SET device_id = hardware_id WHERE device_id IS NULL;
-- Update device_registrations to use hardware_id
ALTER TABLE device_registrations RENAME COLUMN device_id TO hardware_id;
```
## Files That Need Updates
1. **routes/session_routes.py** (15 occurrences)
- Lines: 84, 134, 325 (device_id)
- Lines: 85, 109, 112, 119, 124, 135, 150 (login_time)
- Lines: 86, 119, 136, 202, 248 (logout_time)
- Lines: 87, 137 (last_activity)
- Lines: 88, 124, 138, 192, 202, 236, 248, 249, 328, 340, 361 (active)
2. **routes/api_routes.py** (12 occurrences)
- Lines: 203, 214, 345, 861 (device_id)
- Lines: 204, 344, 345, 453, 455, 457, 862 (active)
- Line: 344 (logout_time)
3. **routes/export_routes.py** (11 occurrences)
- Lines: 47, 72, 224, 244 (device_id)
- Lines: 46, 71, 228, 233, 248 (active)
- Lines: 225, 234, 245, 253, 254 (login_time)
- Lines: 226, 246 (logout_time)
- Lines: 227, 247 (last_activity)
4. **models.py** (2 occurrences)
- Line: 167 (start_time)
- Line: 177 (active - but this is just in error message)
5. **routes/batch_routes.py** (2 occurrences)
- Lines: 212, 213 (active)
6. **routes/customer_routes.py** (1 occurrence)
- Line: 392 (comment mentions correction already made)
## Testing After Fixes
1. Test all session-related functionality
2. Verify device registration/deregistration works
3. Check session history displays correctly
4. Ensure exports contain correct data
5. Validate batch operations still function
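One cheap automated complement to the checklist above is a scan of the application's SQL strings for the deprecated column names. A minimal sketch (the `deprecated_fields` helper is not part of the repo; the name list comes from the tables in this document):

```python
import re

# Deprecated session-column names listed in this document
DEPRECATED = ["device_id", "active", "login_time",
              "last_activity", "logout_time", "start_time"]

def deprecated_fields(sql: str) -> list[str]:
    """Return the deprecated column names referenced in a SQL string.

    Word boundaries keep correct names such as is_active from being flagged,
    since the underscore counts as a word character.
    """
    return [name for name in DEPRECATED
            if re.search(rf"\b{name}\b", sql)]

# A migrated query passes the check; a legacy query is flagged
assert deprecated_fields("SELECT hardware_id, is_active FROM sessions") == []
assert deprecated_fields("SELECT device_id FROM sessions WHERE active") == ["device_id", "active"]
```

Running this over every query string before release catches regressions that reintroduce the old names.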


@@ -1,124 +0,0 @@
# Database Field Name Inconsistencies Report
## Overview
This report documents all database field name inconsistencies found between the database schema (init.sql) and Python code usage in the v2_adminpanel application.
## 1. Sessions Table - Duplicate/Alias Fields
### Issue
The sessions table contains multiple duplicate columns that serve as aliases, causing confusion and inconsistent usage:
```sql
-- Current schema has these duplicate fields:
is_active BOOLEAN DEFAULT TRUE,
active BOOLEAN DEFAULT TRUE, -- Alias for is_active
started_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
login_time TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP, -- Alias for started_at
last_heartbeat TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
last_activity TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP, -- Alias for last_heartbeat
ended_at TIMESTAMP WITH TIME ZONE,
logout_time TIMESTAMP WITH TIME ZONE, -- Alias for ended_at
```
### Code Usage Examples
- `routes/session_routes.py`:
- Line 32: `WHERE s.is_active = TRUE`
- Line 88: `s.active`
- Line 192: `WHERE active = true`
- Line 201: `SET active = false`
## 2. Device ID vs Hardware ID Mismatch
### Issue
The database schema uses `hardware_id` but the code references `device_id`:
```sql
-- Database schema:
sessions.hardware_id VARCHAR(100)
device_registrations.hardware_id TEXT NOT NULL
```
### Code Usage Examples
- `routes/session_routes.py`:
- Line 84: `s.device_id`
- Line 134: `'device_id': row[3]`
- Line 190: `SELECT license_key, username, device_id`
- Line 326: `COUNT(DISTINCT s.device_id) as unique_devices`
## 3. Time Field Inconsistencies
### Issue
Mixed usage of time field names:
### Code Usage Examples
- `models.py` line 167: `ORDER BY s.start_time DESC` (but schema has `started_at`)
- Session history queries mix `login_time` and `started_at`
## 4. Field Naming Patterns
### Consistent Patterns Found
- ✅ All tables use `created_at` (not `created`)
- ✅ Most tables use `is_active` pattern
- ✅ Foreign keys use `_id` suffix consistently
### Inconsistent Patterns
- ❌ Sessions table has both `is_active` and `active`
- ❌ Time fields have multiple aliases
## Migration Scripts
### Step 1: Create a compatibility view for device_id references
```sql
-- Create view for backward compatibility
CREATE OR REPLACE VIEW sessions_compat AS
SELECT
*,
hardware_id as device_id -- Alias for compatibility
FROM sessions;
```
### Step 2: Remove duplicate columns (after code update)
```sql
-- Remove duplicate columns from sessions table
ALTER TABLE sessions
DROP COLUMN IF EXISTS active,
DROP COLUMN IF EXISTS login_time,
DROP COLUMN IF EXISTS last_activity,
DROP COLUMN IF EXISTS logout_time;
```
### Step 3: Update indexes if needed
```sql
-- Recreate any indexes that used the dropped columns
-- (Check existing indexes first)
```
## Recommended Code Changes
### 1. Update session_routes.py
Replace all occurrences of:
- `device_id` → `hardware_id`
- `active` → `is_active`
- `login_time` → `started_at`
- `last_activity` → `last_heartbeat`
- `logout_time` → `ended_at`
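A sketch of how these replacements can be applied safely. A plain `str.replace` would corrupt already-correct names (e.g. replacing `active` would turn `is_active` into `is_is_active`), so matches are anchored on word boundaries; the `rename_columns` helper is illustrative, not part of the repo:

```python
import re

# Rename map from the list above; word boundaries protect is_active and
# last_activity, because the underscore is a word character and therefore
# suppresses a \b match inside those identifiers.
RENAMES = {
    "device_id": "hardware_id",
    "active": "is_active",
    "login_time": "started_at",
    "last_activity": "last_heartbeat",
    "logout_time": "ended_at",
}

_PATTERN = re.compile(r"\b(" + "|".join(RENAMES) + r")\b")

def rename_columns(sql: str) -> str:
    """Rewrite deprecated session column names inside a SQL string."""
    return _PATTERN.sub(lambda m: RENAMES[m.group(1)], sql)

# Already-correct names pass through untouched:
assert rename_columns("WHERE is_active = TRUE") == "WHERE is_active = TRUE"
```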
### 2. Update models.py
- Line 167: Change `start_time` to `started_at`
### 3. Create database migration script
Create a migration that:
1. Updates all code references
2. Creates compatibility views
3. Removes duplicate columns
4. Updates any affected indexes
## Testing Checklist
- [ ] All session queries work correctly
- [ ] Session history displays properly
- [ ] Active session count is accurate
- [ ] Device tracking works correctly
- [ ] All time-based queries function properly


@@ -1,71 +0,0 @@
# Summary of Database Field Corrections
## Changes Made
### 1. Automatic Fixes (35 replacements)
The script `fix_database_fields.py` corrected the following files:
- **routes/session_routes.py** - 16 fixes
- **routes/api_routes.py** - 10 fixes
- **routes/export_routes.py** - 5 fixes
- **routes/batch_routes.py** - 1 fix
- **routes/customer_routes.py** - 1 fix
- **models.py** - 2 fixes
### 2. Manual Follow-up Corrections
- **api_routes.py**:
- Line 275: `device_id` → `hardware_id` in SQL parameter
- Line 301: `device_id` → `hardware_id` in audit log
- **batch_routes.py**:
- Line 212: SQL `"active = %s"` → `"is_active = %s"`
### 3. Compatibility View Created
File: `create_compatibility_views.sql`
- Creates a `sessions_compat` view with aliases for the old field names
- Enables a gradual migration without breaking changes
- Includes INSERT/UPDATE triggers for bidirectional compatibility
## Corrected Field Names
| Old Name | New Name | Affected Table |
|-----------------|------------------|-------------------|
| `device_id` | `hardware_id` | sessions |
| `active` | `is_active` | sessions, licenses |
| `login_time` | `started_at` | sessions |
| `last_activity` | `last_heartbeat` | sessions |
| `logout_time` | `ended_at` | sessions |
| `start_time` | `started_at` | sessions |
## Next Steps
1. **Run the database migration**:
```bash
psql -U postgres -d accountforge -f create_compatibility_views.sql
```
2. **Test the application**:
- Session management
- License status changes
- Export functions
- Batch updates
- Device registration
3. **After successful testing**:
- Remove the duplicate columns from the sessions table
- Remove the compatibility view
- Switch the code to direct table access
## Backup Files
All changed files have `.backup` copies:
- routes/session_routes.py.backup
- routes/api_routes.py.backup
- routes/export_routes.py.backup
- routes/batch_routes.py.backup
- routes/customer_routes.py.backup
- models.py.backup
## Important Notes
- The sessions table still contains the duplicate columns in the database
- The compatibility view should be created first
- All changes can be reverted via the .backup files
- The status-toggle bug should now be fixed


@@ -1,74 +0,0 @@
# Remaining Database Field Inconsistencies
## Cleanup Status
### ✅ Fully resolved:
1. **sessions table**:
- `device_id` → `hardware_id` (all references corrected)
- `active` → `is_active` (all references corrected)
- `login_time` → `started_at` (all references corrected)
- `logout_time` → `ended_at` (all references corrected)
- `last_activity` → `last_heartbeat` (all references corrected)
- `start_time` → `started_at` (corrected in models.py)
2. **licenses table**:
- `active` → `is_active` (all references corrected)
### ⚠️ Structural problems in device_registrations:
The `device_registrations` table has the following inconsistencies between schema and code:
| Database Schema | Code Expects | Problem |
|-----------------|---------------|---------|
| `license_id` (FK) | `license_key` | Code uses license_key directly |
| `first_seen` | `registration_date` | Different naming |
| - | `device_type` | Column missing in DB |
| - | `license_key` | Column missing in DB |
### 🔧 Adjustments made:
1. **API routes corrected**:
- Added JOINs to obtain license_key via license_id
- Added `first_seen as registration_date` aliases
- INSERT now uses `license_id` instead of `license_key`
2. **Export routes corrected**:
- Changed the last remaining `device_id` references to `hardware_id`
3. **Session routes corrected**:
- Statistics queries now use the correct field names
### 📋 Still to do:
1. **Run the database migrations**:
```bash
# Sessions compatibility view
psql -f create_compatibility_views.sql
# Device registrations fixes
psql -f fix_device_registrations.sql
```
2. **Add missing columns** (optional):
```sql
ALTER TABLE device_registrations
ADD COLUMN device_type VARCHAR(50) DEFAULT 'unknown';
```
3. **Remove duplicate columns** (after successful tests):
```sql
ALTER TABLE sessions
DROP COLUMN active,
DROP COLUMN login_time,
DROP COLUMN last_activity,
DROP COLUMN logout_time;
```
## Summary
- **83 inconsistencies** were fixed automatically
- **8 additional manual fixes** were applied
- **All critical field names** are now consistent
- **device_registrations** still needs structural adjustments
The application should now work, since all code references have been corrected. The database migrations are optional for long-term consistency.
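The JOIN-based workaround for the missing `license_key` column can be sketched as follows. This is a hedged illustration using the column names from the table above, not the exact query from api_routes.py:

```python
# Expose license_key and registration_date on top of the actual schema:
# device_registrations stores license_id (FK) and first_seen, so the key
# is fetched via a JOIN and first_seen is aliased.
DEVICE_QUERY = """
SELECT dr.hardware_id,
       dr.first_seen AS registration_date,
       l.license_key
FROM device_registrations dr
JOIN licenses l ON l.id = dr.license_id
"""

# Sanity check: the query no longer references the non-existent columns
for missing in ("dr.license_key", "dr.registration_date", "device_type"):
    assert missing not in DEVICE_QUERY
```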


@@ -1,122 +0,0 @@
-- Compatibility views for a gradual migration
-- These views allow the code to keep working
-- while we correct the field names step by step
-- 1. Sessions compatibility view
-- Maps all the incorrect field names to the correct ones
CREATE OR REPLACE VIEW sessions_compat AS
SELECT
id,
license_id,
license_key,
session_id,
username,
computer_name,
hardware_id,
hardware_id as device_id, -- Compatibility alias
ip_address,
user_agent,
app_version,
started_at,
started_at as login_time, -- Compatibility alias
started_at as start_time, -- Alias for models.py
last_heartbeat,
last_heartbeat as last_activity, -- Compatibility alias
ended_at,
ended_at as logout_time, -- Compatibility alias
is_active,
is_active as active -- Compatibility alias
FROM sessions;
-- Grant permissions
GRANT SELECT, INSERT, UPDATE, DELETE ON sessions_compat TO PUBLIC;
-- 2. INSERT trigger on sessions_compat
CREATE OR REPLACE FUNCTION insert_sessions_compat() RETURNS TRIGGER AS $$
BEGIN
-- Map compatibility fields back to real columns
INSERT INTO sessions (
license_id, license_key, session_id, username, computer_name,
hardware_id, ip_address, user_agent, app_version,
started_at, last_heartbeat, ended_at, is_active
) VALUES (
NEW.license_id,
NEW.license_key,
NEW.session_id,
NEW.username,
NEW.computer_name,
COALESCE(NEW.hardware_id, NEW.device_id), -- Accept both
NEW.ip_address,
NEW.user_agent,
NEW.app_version,
COALESCE(NEW.started_at, NEW.login_time), -- Accept both
COALESCE(NEW.last_heartbeat, NEW.last_activity), -- Accept both
COALESCE(NEW.ended_at, NEW.logout_time), -- Accept both
COALESCE(NEW.is_active, NEW.active) -- Accept both
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_insert_sessions_compat
INSTEAD OF INSERT ON sessions_compat
FOR EACH ROW EXECUTE FUNCTION insert_sessions_compat();
-- 3. UPDATE trigger on sessions_compat
CREATE OR REPLACE FUNCTION update_sessions_compat() RETURNS TRIGGER AS $$
BEGIN
UPDATE sessions SET
license_id = NEW.license_id,
license_key = NEW.license_key,
session_id = NEW.session_id,
username = NEW.username,
computer_name = NEW.computer_name,
hardware_id = COALESCE(NEW.hardware_id, NEW.device_id),
ip_address = NEW.ip_address,
user_agent = NEW.user_agent,
app_version = NEW.app_version,
started_at = COALESCE(NEW.started_at, NEW.login_time),
last_heartbeat = COALESCE(NEW.last_heartbeat, NEW.last_activity),
ended_at = COALESCE(NEW.ended_at, NEW.logout_time),
is_active = COALESCE(NEW.is_active, NEW.active)
WHERE id = NEW.id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trigger_update_sessions_compat
INSTEAD OF UPDATE ON sessions_compat
FOR EACH ROW EXECUTE FUNCTION update_sessions_compat();
-- 4. Sync existing duplicate columns (one-time sync)
-- This ensures data consistency before we start using the view
UPDATE sessions SET
hardware_id = COALESCE(hardware_id, device_id),
started_at = COALESCE(started_at, login_time),
last_heartbeat = COALESCE(last_heartbeat, last_activity),
ended_at = COALESCE(ended_at, logout_time),
is_active = COALESCE(is_active, active)
WHERE hardware_id IS NULL
OR started_at IS NULL
OR last_heartbeat IS NULL
OR is_active IS NULL;
-- 5. Verification Query
SELECT
'Sessions with NULL hardware_id' as check_name,
COUNT(*) as count
FROM sessions WHERE hardware_id IS NULL
UNION ALL
SELECT
'Sessions with NULL started_at' as check_name,
COUNT(*) as count
FROM sessions WHERE started_at IS NULL
UNION ALL
SELECT
'Sessions with NULL is_active' as check_name,
COUNT(*) as count
FROM sessions WHERE is_active IS NULL;
-- 6. Create index on hardware_id if not exists
CREATE INDEX IF NOT EXISTS idx_sessions_hardware_id ON sessions(hardware_id);
CREATE INDEX IF NOT EXISTS idx_sessions_is_active ON sessions(is_active);


@@ -1,178 +0,0 @@
#!/usr/bin/env python3
"""
Script to fix database field name inconsistencies in the v2_adminpanel codebase.
This script identifies and optionally fixes incorrect field references.
"""
import os
import re
from typing import Dict, List, Tuple
import argparse
# Define the field mappings (incorrect -> correct)
FIELD_MAPPINGS = {
# Sessions table
'device_id': 'hardware_id',
'active': 'is_active',
'login_time': 'started_at',
'last_activity': 'last_heartbeat',
'logout_time': 'ended_at',
'start_time': 'started_at'
}
# Files to check
FILES_TO_CHECK = [
'routes/session_routes.py',
'routes/api_routes.py',
'routes/export_routes.py',
'routes/batch_routes.py',
'routes/customer_routes.py',
'models.py'
]
# Patterns to identify database field usage
PATTERNS = [
# SQL queries
(r'SELECT\s+.*?(\w+).*?FROM\s+sessions', 'SQL SELECT'),
(r'WHERE\s+.*?(\w+)\s*=', 'SQL WHERE'),
(r'SET\s+(\w+)\s*=', 'SQL SET'),
(r'ORDER\s+BY\s+.*?(\w+)', 'SQL ORDER BY'),
# Dictionary/JSON access
(r'\[[\'"](device_id|active|login_time|last_activity|logout_time|start_time)[\'"]\]', 'Dict access'),
(r'\.get\([\'"](device_id|active|login_time|last_activity|logout_time|start_time)[\'"]', 'Dict get'),
# Row access patterns
(r'row\[\d+\]\s*#.*?(device_id|active|login_time|last_activity|logout_time|start_time)', 'Row comment'),
# Column references in queries
(r's\.(device_id|active|login_time|last_activity|logout_time|start_time)', 'Table alias')
]
def find_field_usage(file_path: str) -> List[Tuple[int, str, str, str]]:
"""Find all incorrect field usages in a file."""
issues = []
try:
with open(file_path, 'r', encoding='utf-8') as f:
lines = f.readlines()
for line_num, line in enumerate(lines, 1):
for field_old, field_new in FIELD_MAPPINGS.items():
# Simple pattern matching for field names
if field_old in line:
# Check if it's in a string or actual code
if f"'{field_old}'" in line or f'"{field_old}"' in line:
issues.append((line_num, line.strip(), field_old, field_new))
elif f'.{field_old}' in line or f' {field_old} ' in line:
issues.append((line_num, line.strip(), field_old, field_new))
elif f'[{field_old}]' in line:
issues.append((line_num, line.strip(), field_old, field_new))
except Exception as e:
print(f"Error reading {file_path}: {e}")
return issues
def generate_fix_report(base_path: str) -> Dict[str, List[Tuple[int, str, str, str]]]:
"""Generate a report of all field usage issues."""
report = {}
for file_name in FILES_TO_CHECK:
file_path = os.path.join(base_path, file_name)
if os.path.exists(file_path):
issues = find_field_usage(file_path)
if issues:
report[file_name] = issues
return report
def apply_fixes(base_path: str, report: Dict[str, List[Tuple[int, str, str, str]]], dry_run: bool = True):
"""Apply fixes to the files."""
for file_name, issues in report.items():
file_path = os.path.join(base_path, file_name)
if dry_run:
print(f"\n--- DRY RUN: Would fix {file_name} ---")
for line_num, line, old_field, new_field in issues:
print(f" Line {line_num}: {old_field} -> {new_field}")
continue
# Read the file
with open(file_path, 'r', encoding='utf-8') as f:
content = f.read()
# Apply replacements
original_content = content
replacements_made = 0
for old_field, new_field in FIELD_MAPPINGS.items():
# Replace in strings
patterns = [
(f"'{old_field}'", f"'{new_field}'"),
(f'"{old_field}"', f'"{new_field}"'),
(f'.{old_field}', f'.{new_field}'),
(f' {old_field} ', f' {new_field} '),
(f' {old_field},', f' {new_field},'),
(f' {old_field}\n', f' {new_field}\n'),
]
for pattern_old, pattern_new in patterns:
if pattern_old in content:
content = content.replace(pattern_old, pattern_new)
replacements_made += 1
# Write back only if changes were made
if content != original_content:
# Create backup
backup_path = f"{file_path}.backup"
with open(backup_path, 'w', encoding='utf-8') as f:
f.write(original_content)
# Write fixed content
with open(file_path, 'w', encoding='utf-8') as f:
f.write(content)
print(f"\nFixed {file_name} ({replacements_made} replacements made)")
print(f" Backup saved to: {backup_path}")
def main():
parser = argparse.ArgumentParser(description='Fix database field name inconsistencies')
parser.add_argument('--apply', action='store_true', help='Apply fixes (default is dry run)')
parser.add_argument('--path', default='.', help='Base path of the project')
args = parser.parse_args()
print("Database Field Name Fixer")
print("=" * 50)
# Generate report
report = generate_fix_report(args.path)
if not report:
print("No issues found!")
return
# Display report
total_issues = 0
for file_name, issues in report.items():
print(f"\n{file_name}: {len(issues)} issues")
total_issues += len(issues)
for line_num, line, old_field, new_field in issues[:5]: # Show first 5
print(f" Line {line_num}: {old_field} -> {new_field}")
print(f" {line[:80]}...")
if len(issues) > 5:
print(f" ... and {len(issues) - 5} more")
print(f"\nTotal issues found: {total_issues}")
# Apply fixes if requested
if args.apply:
print("\nApplying fixes...")
apply_fixes(args.path, report, dry_run=False)
print("\nFixes applied! Please test all functionality.")
else:
print("\nRun with --apply to fix these issues.")
apply_fixes(args.path, report, dry_run=True)
if __name__ == '__main__':
main()


@@ -1,29 +0,0 @@
-- Fixes for the device_registrations table
-- These columns are missing but are used by the code
-- 1. Add the missing columns
ALTER TABLE device_registrations
ADD COLUMN IF NOT EXISTS device_type VARCHAR(50) DEFAULT 'unknown',
ADD COLUMN IF NOT EXISTS license_key VARCHAR(60);
-- 2. Add registration_date as an alias for first_seen
-- (or use first_seen in the code)
-- 3. Populate license_key from the licenses table
UPDATE device_registrations dr
SET license_key = l.license_key
FROM licenses l
WHERE dr.license_id = l.id
AND dr.license_key IS NULL;
-- 4. Create an index on license_key
CREATE INDEX IF NOT EXISTS idx_device_license_key ON device_registrations(license_key);
-- 5. Compatibility view
CREATE OR REPLACE VIEW device_registrations_compat AS
SELECT
dr.*,
dr.first_seen as registration_date,
l.license_key as computed_license_key
FROM device_registrations dr
LEFT JOIN licenses l ON dr.license_id = l.id;


@@ -1,171 +0,0 @@
-- Migration Script: Fix Database Field Name Inconsistencies
-- Created: 2025-06-18
-- Purpose: Standardize field names and remove duplicate columns
-- =====================================================
-- STEP 1: Create backup of affected data
-- =====================================================
-- Create backup table for sessions data
CREATE TABLE IF NOT EXISTS sessions_backup_20250618 AS
SELECT * FROM sessions;
-- =====================================================
-- STEP 2: Create compatibility views (temporary)
-- =====================================================
-- Drop existing view if exists
DROP VIEW IF EXISTS sessions_compat;
-- Create compatibility view for smooth transition
CREATE VIEW sessions_compat AS
SELECT
id,
license_id,
license_key,
session_id,
username,
computer_name,
hardware_id,
hardware_id as device_id, -- Compatibility alias
ip_address,
user_agent,
app_version,
started_at,
started_at as login_time, -- Compatibility alias
last_heartbeat,
last_heartbeat as last_activity, -- Compatibility alias
ended_at,
ended_at as logout_time, -- Compatibility alias
is_active,
is_active as active -- Compatibility alias
FROM sessions;
-- =====================================================
-- STEP 3: Update data in duplicate columns
-- =====================================================
-- Sync data from primary columns to alias columns (safety measure)
UPDATE sessions SET
login_time = COALESCE(login_time, started_at),
last_activity = COALESCE(last_activity, last_heartbeat),
logout_time = COALESCE(logout_time, ended_at),
active = COALESCE(active, is_active)
WHERE login_time IS NULL
OR last_activity IS NULL
OR logout_time IS NULL
OR active IS NULL;
-- Sync data from alias columns to primary columns (if primary is null)
UPDATE sessions SET
started_at = COALESCE(started_at, login_time),
last_heartbeat = COALESCE(last_heartbeat, last_activity),
ended_at = COALESCE(ended_at, logout_time),
is_active = COALESCE(is_active, active)
WHERE started_at IS NULL
OR last_heartbeat IS NULL
OR ended_at IS NULL
OR is_active IS NULL;
-- =====================================================
-- STEP 4: Create function to check code dependencies
-- =====================================================
-- NOTE: requires the pg_stat_statements extension to be installed and enabled
CREATE OR REPLACE FUNCTION check_field_usage()
RETURNS TABLE (
query_count INTEGER,
field_name TEXT,
usage_type TEXT
) AS $$
BEGIN
-- Check for references to old field names
RETURN QUERY
SELECT
COUNT(*)::INTEGER,
'device_id'::TEXT,
'Should use hardware_id'::TEXT
FROM pg_stat_statements
WHERE query ILIKE '%device_id%'
UNION ALL
SELECT
COUNT(*)::INTEGER,
'active'::TEXT,
'Should use is_active'::TEXT
FROM pg_stat_statements
WHERE query ILIKE '%active%'
AND query NOT ILIKE '%is_active%'
UNION ALL
SELECT
COUNT(*)::INTEGER,
'login_time'::TEXT,
'Should use started_at'::TEXT
FROM pg_stat_statements
WHERE query ILIKE '%login_time%';
END;
$$ LANGUAGE plpgsql;
-- =====================================================
-- STEP 5: Migration queries for code updates
-- =====================================================
-- These queries help identify code that needs updating:
-- Find sessions queries using old field names
COMMENT ON VIEW sessions_compat IS
'Compatibility view for sessions table during field name migration.
Old fields mapped:
- device_id → hardware_id
- active → is_active
- login_time → started_at
- last_activity → last_heartbeat
- logout_time → ended_at';
-- =====================================================
-- STEP 6: Final cleanup (run after code is updated)
-- =====================================================
-- DO NOT RUN THIS UNTIL ALL CODE IS UPDATED!
/*
-- Remove duplicate columns
ALTER TABLE sessions
DROP COLUMN IF EXISTS active,
DROP COLUMN IF EXISTS login_time,
DROP COLUMN IF EXISTS last_activity,
DROP COLUMN IF EXISTS logout_time;
-- Drop compatibility view
DROP VIEW IF EXISTS sessions_compat;
-- Drop helper function
DROP FUNCTION IF EXISTS check_field_usage();
*/
-- =====================================================
-- VERIFICATION QUERIES
-- =====================================================
-- Check for null values in primary columns
SELECT
COUNT(*) FILTER (WHERE started_at IS NULL) as null_started_at,
COUNT(*) FILTER (WHERE last_heartbeat IS NULL) as null_last_heartbeat,
COUNT(*) FILTER (WHERE ended_at IS NULL AND is_active = FALSE) as null_ended_at,
COUNT(*) FILTER (WHERE is_active IS NULL) as null_is_active,
COUNT(*) as total_sessions
FROM sessions;
-- Check data consistency between duplicate columns
SELECT
COUNT(*) FILTER (WHERE started_at != login_time) as started_login_diff,
COUNT(*) FILTER (WHERE last_heartbeat != last_activity) as heartbeat_activity_diff,
COUNT(*) FILTER (WHERE ended_at != logout_time) as ended_logout_diff,
COUNT(*) FILTER (WHERE is_active != active) as active_diff,
COUNT(*) as total_sessions
FROM sessions
WHERE login_time IS NOT NULL
OR last_activity IS NOT NULL
OR logout_time IS NOT NULL
OR active IS NOT NULL;


@@ -1,170 +0,0 @@
#!/usr/bin/env python3
"""
Script to find and fix database field name inconsistencies in Python code
"""
import os
import re
from pathlib import Path
from typing import List, Tuple, Dict
# Field mappings (old_name -> new_name)
FIELD_MAPPINGS = {
'device_id': 'hardware_id',
'active': 'is_active',
'login_time': 'started_at',
'last_activity': 'last_heartbeat',
'logout_time': 'ended_at',
'start_time': 'started_at' # Found in models.py
}
# Patterns to identify database queries
QUERY_PATTERNS = [
r'SELECT.*FROM\s+sessions',
r'UPDATE\s+sessions',
r'INSERT\s+INTO\s+sessions',
r'WHERE.*sessions\.',
r's\.\w+', # Table alias pattern
r'row\[\d+\]', # Row index access
]
def find_python_files(directory: Path) -> List[Path]:
"""Find all Python files in directory"""
return list(directory.rglob("*.py"))
def check_file_for_inconsistencies(filepath: Path) -> Dict[str, List[Tuple[int, str]]]:
"""Check a file for field name inconsistencies"""
inconsistencies = {}
try:
with open(filepath, 'r', encoding='utf-8') as f:
lines = f.readlines()
for line_num, line in enumerate(lines, 1):
# Skip comments
if line.strip().startswith('#'):
continue
# Check for old field names
for old_field, new_field in FIELD_MAPPINGS.items():
# Look for field references in various contexts
patterns = [
rf'\b{old_field}\b', # Word boundary
rf'["\']{{1}}{old_field}["\']{{1}}', # In quotes
rf's\.{old_field}\b', # Table alias
rf'row\[.*{old_field}.*\]', # In row access
]
for pattern in patterns:
if re.search(pattern, line, re.IGNORECASE):
# Check if it's in a database query context
is_db_context = any(re.search(qp, line, re.IGNORECASE) for qp in QUERY_PATTERNS)
# Also check previous lines for query context
if not is_db_context and line_num > 1:
for i in range(max(0, line_num - 5), line_num):
if any(re.search(qp, lines[i], re.IGNORECASE) for qp in QUERY_PATTERNS):
is_db_context = True
break
if is_db_context or 'cur.execute' in line or 'execute_query' in line:
if old_field not in inconsistencies:
inconsistencies[old_field] = []
inconsistencies[old_field].append((line_num, line.strip()))
break
except Exception as e:
print(f"Error reading {filepath}: {e}")
return inconsistencies
def generate_fix_suggestions(inconsistencies: Dict[Path, Dict[str, List[Tuple[int, str]]]]) -> None:
"""Generate fix suggestions for found inconsistencies"""
print("\n" + "="*80)
print("FIELD NAME INCONSISTENCIES FOUND")
print("="*80 + "\n")
total_issues = 0
for filepath, file_issues in inconsistencies.items():
if not file_issues:
continue
print(f"\n📄 {filepath}")
print("-" * 80)
for old_field, occurrences in file_issues.items():
new_field = FIELD_MAPPINGS[old_field]
print(f"\n ❌ Found '{old_field}' (should be '{new_field}'):")
for line_num, line_content in occurrences:
print(f" Line {line_num}: {line_content[:100]}...")
total_issues += 1
print(f"\n\n📊 Total issues found: {total_issues}")
print("\n" + "="*80)
print("RECOMMENDED FIXES")
print("="*80 + "\n")
for old_field, new_field in FIELD_MAPPINGS.items():
print(f" • Replace '{old_field}' with '{new_field}'")
print("\n⚠️ Note: Review each change carefully, as some occurrences might not be database-related")
def create_compatibility_queries() -> None:
"""Generate SQL queries for creating compatibility views"""
print("\n" + "="*80)
print("COMPATIBILITY SQL QUERIES")
print("="*80 + "\n")
print("-- Use this view during migration:")
print("CREATE OR REPLACE VIEW sessions_compat AS")
    print("SELECT")
    print("    *,")
    # Join the alias lines with commas so the final alias carries no trailing
    # comma -- a comma directly before FROM would make the emitted SQL invalid.
    alias_lines = [
        f"    {new_field} as {old_field}"
        for old_field, new_field in FIELD_MAPPINGS.items()
        if old_field != 'start_time'  # start_time maps to the same column as login_time
    ]
    print(",\n".join(alias_lines))
    print("FROM sessions;\n")
def main():
"""Main function"""
# Get the v2_adminpanel directory
base_dir = Path(__file__).parent
print(f"🔍 Scanning directory: {base_dir}")
# Find all Python files
python_files = find_python_files(base_dir)
print(f"📁 Found {len(python_files)} Python files")
# Check each file for inconsistencies
all_inconsistencies = {}
for filepath in python_files:
# Skip this script and migration files
if filepath.name in ['fix_field_references.py', '__pycache__']:
continue
inconsistencies = check_file_for_inconsistencies(filepath)
if inconsistencies:
all_inconsistencies[filepath] = inconsistencies
# Generate report
generate_fix_suggestions(all_inconsistencies)
# Generate compatibility queries
create_compatibility_queries()
# Summary of affected files
print("\n" + "="*80)
print("AFFECTED FILES SUMMARY")
print("="*80 + "\n")
affected_files = [str(f.relative_to(base_dir)) for f in all_inconsistencies.keys()]
for i, filepath in enumerate(sorted(affected_files), 1):
print(f" {i}. {filepath}")
print(f"\n✅ Scan complete! Found issues in {len(affected_files)} files.")
if __name__ == "__main__":
main()


@@ -1,178 +0,0 @@
# Temporary models file - will be expanded in Phase 3
from db import execute_query, get_db_connection, get_db_cursor
import logging
logger = logging.getLogger(__name__)
def get_user_by_username(username):
"""Get user from database by username"""
result = execute_query(
"""
SELECT id, username, password_hash, email, totp_secret, totp_enabled,
backup_codes, last_password_change, failed_2fa_attempts
FROM users WHERE username = %s
""",
(username,),
fetch_one=True
)
if result:
return {
'id': result[0],
'username': result[1],
'password_hash': result[2],
'email': result[3],
'totp_secret': result[4],
'totp_enabled': result[5],
'backup_codes': result[6],
'last_password_change': result[7],
'failed_2fa_attempts': result[8]
}
return None
def get_licenses(show_test=False):
"""Get all licenses from database"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
if show_test:
cur.execute("""
SELECT l.*, c.name as customer_name
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
ORDER BY l.created_at DESC
""")
else:
cur.execute("""
SELECT l.*, c.name as customer_name
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
WHERE l.is_test = false
ORDER BY l.created_at DESC
""")
columns = [desc[0] for desc in cur.description]
licenses = []
for row in cur.fetchall():
license_dict = dict(zip(columns, row))
licenses.append(license_dict)
return licenses
except Exception as e:
logger.error(f"Error fetching licenses: {str(e)}")
return []
def get_license_by_id(license_id):
"""Get a specific license by ID"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT l.*, c.name as customer_name
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
WHERE l.id = %s
""", (license_id,))
row = cur.fetchone()
if row:
columns = [desc[0] for desc in cur.description]
return dict(zip(columns, row))
return None
except Exception as e:
logger.error(f"Error fetching license {license_id}: {str(e)}")
return None
def get_customers(show_test=False, search=None):
"""Get all customers from database"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
query = """
SELECT c.*,
COUNT(DISTINCT l.id) as license_count,
COUNT(DISTINCT CASE WHEN l.is_active THEN l.id END) as active_licenses
FROM customers c
LEFT JOIN licenses l ON c.id = l.customer_id
"""
where_clauses = []
params = []
if not show_test:
where_clauses.append("c.is_test = false")
if search:
where_clauses.append("(LOWER(c.name) LIKE LOWER(%s) OR LOWER(c.email) LIKE LOWER(%s))")
search_pattern = f'%{search}%'
params.extend([search_pattern, search_pattern])
if where_clauses:
query += " WHERE " + " AND ".join(where_clauses)
query += " GROUP BY c.id ORDER BY c.name"
cur.execute(query, params)
columns = [desc[0] for desc in cur.description]
customers = []
for row in cur.fetchall():
customer_dict = dict(zip(columns, row))
customers.append(customer_dict)
return customers
except Exception as e:
logger.error(f"Error fetching customers: {str(e)}")
return []
def get_customer_by_id(customer_id):
"""Get a specific customer by ID"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT c.*,
COUNT(DISTINCT l.id) as license_count,
COUNT(DISTINCT CASE WHEN l.is_active THEN l.id END) as active_licenses
FROM customers c
LEFT JOIN licenses l ON c.id = l.customer_id
WHERE c.id = %s
GROUP BY c.id
""", (customer_id,))
row = cur.fetchone()
if row:
columns = [desc[0] for desc in cur.description]
return dict(zip(columns, row))
return None
except Exception as e:
logger.error(f"Error fetching customer {customer_id}: {str(e)}")
return None
def get_active_sessions():
"""Get all active sessions"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT s.*, l.license_key, c.name as customer_name
FROM sessions s
JOIN licenses l ON s.license_id = l.id
LEFT JOIN customers c ON l.customer_id = c.id
WHERE s.is_active = true
ORDER BY s.started_at DESC
""")
columns = [desc[0] for desc in cur.description]
sessions = []
for row in cur.fetchall():
session_dict = dict(zip(columns, row))
sessions.append(session_dict)
return sessions
except Exception as e:
logger.error(f"Error fetching active sessions: {str(e)}")
return []
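Each helper above rebuilds the same `dict(zip(columns, row))` mapping from `cur.description`. A shared helper could factor that out (a sketch only; `rows_to_dicts` is not part of the original module):

```python
def rows_to_dicts(cur):
    """Turn all rows of an executed cursor into dicts keyed by column name."""
    columns = [desc[0] for desc in cur.description]
    return [dict(zip(columns, row)) for row in cur.fetchall()]
```

With this helper, `get_licenses`, `get_customers`, and `get_active_sessions` would each reduce their fetch loop to a single call.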


@@ -1,921 +0,0 @@
import logging
from datetime import datetime
from zoneinfo import ZoneInfo
from flask import Blueprint, request, jsonify, session
import config
from auth.decorators import login_required
from utils.audit import log_audit
from utils.network import get_client_ip
from utils.license import generate_license_key
from db import get_connection, get_db_connection, get_db_cursor
from models import get_license_by_id, get_customers
# Create Blueprint
api_bp = Blueprint('api', __name__, url_prefix='/api')
@api_bp.route("/customers", methods=["GET"])
@login_required
def api_customers():
"""API endpoint for customer search (used by Select2)"""
search = request.args.get('q', '').strip()
page = int(request.args.get('page', 1))
per_page = 20
try:
# Get all customers (with optional search)
customers = get_customers(show_test=True, search=search)
# Pagination
start = (page - 1) * per_page
end = start + per_page
paginated_customers = customers[start:end]
# Format for Select2
results = []
for customer in paginated_customers:
results.append({
'id': customer['id'],
'text': f"{customer['name']} ({customer['email'] or 'keine E-Mail'})"
})
return jsonify({
'results': results,
'pagination': {
'more': len(customers) > end
}
})
except Exception as e:
logging.error(f"Error in api_customers: {str(e)}")
return jsonify({'error': 'Fehler beim Laden der Kunden'}), 500
@api_bp.route("/license/<int:license_id>/toggle", methods=["POST"])
@login_required
def toggle_license(license_id):
"""Toggle license active status"""
conn = get_connection()
cur = conn.cursor()
try:
# Get current status
license_data = get_license_by_id(license_id)
if not license_data:
return jsonify({'error': 'Lizenz nicht gefunden'}), 404
new_status = not license_data['is_active']
# Update status
cur.execute("UPDATE licenses SET is_active = %s WHERE id = %s", (new_status, license_id))
conn.commit()
# Log change
log_audit('TOGGLE', 'license', license_id,
old_values={'is_active': license_data['is_active']},
new_values={'is_active': new_status})
return jsonify({'success': True, 'is_active': new_status})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Umschalten der Lizenz: {str(e)}", exc_info=True)
return jsonify({'error': 'Fehler beim Umschalten der Lizenz'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/licenses/bulk-activate", methods=["POST"])
@login_required
def bulk_activate_licenses():
"""Aktiviere mehrere Lizenzen gleichzeitig"""
data = request.get_json()
license_ids = data.get('license_ids', [])
if not license_ids:
return jsonify({'error': 'Keine Lizenzen ausgewählt'}), 400
conn = get_connection()
cur = conn.cursor()
try:
# Update all selected licenses
cur.execute("""
UPDATE licenses
SET is_active = true
WHERE id = ANY(%s) AND is_active = false
RETURNING id
""", (license_ids,))
updated_ids = [row[0] for row in cur.fetchall()]
conn.commit()
# Log changes
for license_id in updated_ids:
log_audit('BULK_ACTIVATE', 'license', license_id,
new_values={'is_active': True})
return jsonify({
'success': True,
'updated_count': len(updated_ids)
})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Bulk-Aktivieren: {str(e)}")
return jsonify({'error': 'Fehler beim Aktivieren der Lizenzen'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/licenses/bulk-deactivate", methods=["POST"])
@login_required
def bulk_deactivate_licenses():
"""Deaktiviere mehrere Lizenzen gleichzeitig"""
data = request.get_json()
license_ids = data.get('license_ids', [])
if not license_ids:
return jsonify({'error': 'Keine Lizenzen ausgewählt'}), 400
conn = get_connection()
cur = conn.cursor()
try:
# Update all selected licenses
cur.execute("""
UPDATE licenses
SET is_active = false
WHERE id = ANY(%s) AND is_active = true
RETURNING id
""", (license_ids,))
updated_ids = [row[0] for row in cur.fetchall()]
conn.commit()
# Log changes
for license_id in updated_ids:
log_audit('BULK_DEACTIVATE', 'license', license_id,
new_values={'is_active': False})
return jsonify({
'success': True,
'updated_count': len(updated_ids)
})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Bulk-Deaktivieren: {str(e)}")
return jsonify({'error': 'Fehler beim Deaktivieren der Lizenzen'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/license/<int:license_id>/devices")
@login_required
def get_license_devices(license_id):
"""Hole alle Geräte einer Lizenz"""
conn = get_connection()
cur = conn.cursor()
try:
# Hole Lizenz-Info
license_data = get_license_by_id(license_id)
if not license_data:
return jsonify({'error': 'Lizenz nicht gefunden'}), 404
# Hole registrierte Geräte
cur.execute("""
SELECT
dr.id,
dr.device_id,
dr.device_name,
dr.device_type,
dr.registration_date,
dr.last_seen,
dr.is_active,
(SELECT COUNT(*) FROM sessions s
WHERE s.license_key = dr.license_key
AND s.hardware_id = dr.device_id
AND s.is_active = true) as active_sessions
FROM device_registrations dr
WHERE dr.license_key = %s
ORDER BY dr.registration_date DESC
""", (license_data['license_key'],))
devices = []
for row in cur.fetchall():
devices.append({
'id': row[0],
'device_id': row[1],
'device_name': row[2],
'device_type': row[3],
'registration_date': row[4].isoformat() if row[4] else None,
'last_seen': row[5].isoformat() if row[5] else None,
'is_active': row[6],
'active_sessions': row[7]
})
return jsonify({
'license_key': license_data['license_key'],
'device_limit': license_data['device_limit'],
'devices': devices,
'device_count': len(devices)
})
except Exception as e:
logging.error(f"Fehler beim Abrufen der Geräte: {str(e)}")
return jsonify({'error': 'Fehler beim Abrufen der Geräte'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/license/<int:license_id>/register-device", methods=["POST"])
@login_required
def register_device(license_id):
"""Registriere ein neues Gerät für eine Lizenz"""
data = request.get_json()
device_id = data.get('device_id')
device_name = data.get('device_name')
device_type = data.get('device_type', 'unknown')
if not device_id or not device_name:
return jsonify({'error': 'Geräte-ID und Name erforderlich'}), 400
conn = get_connection()
cur = conn.cursor()
try:
# Get license info
license_data = get_license_by_id(license_id)
if not license_data:
return jsonify({'error': 'Lizenz nicht gefunden'}), 404
# Check device limit
cur.execute("""
SELECT COUNT(*) FROM device_registrations
WHERE license_key = %s AND is_active = true
""", (license_data['license_key'],))
active_device_count = cur.fetchone()[0]
if active_device_count >= license_data['device_limit']:
return jsonify({'error': 'Gerätelimit erreicht'}), 400
# Check whether the device is already registered
cur.execute("""
SELECT id, is_active FROM device_registrations
WHERE license_key = %s AND device_id = %s
""", (license_data['license_key'], device_id))
existing = cur.fetchone()
if existing:
if existing[1]: # is_active
return jsonify({'error': 'Gerät bereits registriert'}), 400
else:
# Reactivate device
cur.execute("""
UPDATE device_registrations
SET is_active = true, last_seen = CURRENT_TIMESTAMP
WHERE id = %s
""", (existing[0],))
else:
# Register new device
cur.execute("""
INSERT INTO device_registrations
(license_key, device_id, device_name, device_type, is_active)
VALUES (%s, %s, %s, %s, true)
""", (license_data['license_key'], device_id, device_name, device_type))
conn.commit()
# Audit-Log
log_audit('DEVICE_REGISTER', 'license', license_id,
additional_info=f"Gerät {device_name} ({device_id}) registriert")
return jsonify({'success': True})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Registrieren des Geräts: {str(e)}")
return jsonify({'error': 'Fehler beim Registrieren des Geräts'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/license/<int:license_id>/deactivate-device/<int:device_id>", methods=["POST"])
@login_required
def deactivate_device(license_id, device_id):
"""Deaktiviere ein Gerät einer Lizenz"""
conn = get_connection()
cur = conn.cursor()
try:
# Prüfe ob Gerät zur Lizenz gehört
cur.execute("""
SELECT dr.device_name, dr.device_id, l.license_key
FROM device_registrations dr
JOIN licenses l ON dr.license_key = l.license_key
WHERE dr.id = %s AND l.id = %s
""", (device_id, license_id))
device = cur.fetchone()
if not device:
return jsonify({'error': 'Gerät nicht gefunden'}), 404
# Deactivate device
cur.execute("""
UPDATE device_registrations
SET is_active = false
WHERE id = %s
""", (device_id,))
# End active sessions
cur.execute("""
UPDATE sessions
SET is_active = false, ended_at = CURRENT_TIMESTAMP
WHERE license_key = %s AND hardware_id = %s AND is_active = true
""", (device[2], device[1]))
conn.commit()
# Audit-Log
log_audit('DEVICE_DEACTIVATE', 'license', license_id,
additional_info=f"Gerät {device[0]} ({device[1]}) deaktiviert")
return jsonify({'success': True})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Deaktivieren des Geräts: {str(e)}")
return jsonify({'error': 'Fehler beim Deaktivieren des Geräts'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/licenses/bulk-delete", methods=["POST"])
@login_required
def bulk_delete_licenses():
"""Lösche mehrere Lizenzen gleichzeitig"""
data = request.get_json()
license_ids = data.get('license_ids', [])
if not license_ids:
return jsonify({'error': 'Keine Lizenzen ausgewählt'}), 400
conn = get_connection()
cur = conn.cursor()
try:
deleted_count = 0
for license_id in license_ids:
# Get license info for audit
cur.execute("SELECT license_key FROM licenses WHERE id = %s", (license_id,))
result = cur.fetchone()
if result:
license_key = result[0]
# Delete sessions
cur.execute("DELETE FROM sessions WHERE license_key = %s", (license_key,))
# Delete device registrations
cur.execute("DELETE FROM device_registrations WHERE license_key = %s", (license_key,))
# Delete license
cur.execute("DELETE FROM licenses WHERE id = %s", (license_id,))
# Audit-Log
log_audit('BULK_DELETE', 'license', license_id,
old_values={'license_key': license_key})
deleted_count += 1
conn.commit()
return jsonify({
'success': True,
'deleted_count': deleted_count
})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Bulk-Löschen: {str(e)}")
return jsonify({'error': 'Fehler beim Löschen der Lizenzen'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/license/<int:license_id>/quick-edit", methods=['POST'])
@login_required
def quick_edit_license(license_id):
"""Schnellbearbeitung einer Lizenz"""
data = request.get_json()
conn = get_connection()
cur = conn.cursor()
try:
# Get current license for comparison
current_license = get_license_by_id(license_id)
if not current_license:
return jsonify({'error': 'Lizenz nicht gefunden'}), 404
# Update only the provided fields
updates = []
params = []
old_values = {}
new_values = {}
if 'device_limit' in data:
updates.append("device_limit = %s")
params.append(int(data['device_limit']))
old_values['device_limit'] = current_license['device_limit']
new_values['device_limit'] = int(data['device_limit'])
if 'valid_until' in data:
updates.append("valid_until = %s")
params.append(data['valid_until'])
old_values['valid_until'] = str(current_license['valid_until'])
new_values['valid_until'] = data['valid_until']
if 'active' in data:
updates.append("is_active = %s")
params.append(bool(data['active']))
old_values['is_active'] = current_license['is_active']
new_values['is_active'] = bool(data['active'])
if not updates:
return jsonify({'error': 'Keine Änderungen angegeben'}), 400
# Execute update
params.append(license_id)
cur.execute(f"""
UPDATE licenses
SET {', '.join(updates)}
WHERE id = %s
""", params)
conn.commit()
# Audit-Log
log_audit('QUICK_EDIT', 'license', license_id,
old_values=old_values,
new_values=new_values)
return jsonify({'success': True})
except Exception as e:
conn.rollback()
logging.error(f"Fehler bei Schnellbearbeitung: {str(e)}")
return jsonify({'error': 'Fehler bei der Bearbeitung'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/license/<int:license_id>/resources")
@login_required
def get_license_resources(license_id):
"""Hole alle Ressourcen einer Lizenz"""
conn = get_connection()
cur = conn.cursor()
try:
# Hole Lizenz-Info
license_data = get_license_by_id(license_id)
if not license_data:
return jsonify({'error': 'Lizenz nicht gefunden'}), 404
# Hole zugewiesene Ressourcen
cur.execute("""
SELECT
rp.id,
rp.resource_type,
rp.resource_value,
rp.is_test,
rp.status_changed_at,
lr.assigned_at,
lr.assigned_by
FROM resource_pools rp
JOIN license_resources lr ON rp.id = lr.resource_id
WHERE lr.license_id = %s
ORDER BY rp.resource_type, rp.resource_value
""", (license_id,))
resources = []
for row in cur.fetchall():
resources.append({
'id': row[0],
'type': row[1],
'value': row[2],
'is_test': row[3],
'status_changed_at': row[4].isoformat() if row[4] else None,
'assigned_at': row[5].isoformat() if row[5] else None,
'assigned_by': row[6]
})
# Group by type
grouped = {}
for resource in resources:
res_type = resource['type']
if res_type not in grouped:
grouped[res_type] = []
grouped[res_type].append(resource)
return jsonify({
'license_key': license_data['license_key'],
'resources': resources,
'grouped': grouped,
'total_count': len(resources)
})
except Exception as e:
logging.error(f"Fehler beim Abrufen der Ressourcen: {str(e)}")
return jsonify({'error': 'Fehler beim Abrufen der Ressourcen'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/resources/allocate", methods=['POST'])
@login_required
def allocate_resources():
"""Weise Ressourcen einer Lizenz zu"""
data = request.get_json()
license_id = data.get('license_id')
resource_ids = data.get('resource_ids', [])
if not license_id or not resource_ids:
return jsonify({'error': 'Lizenz-ID und Ressourcen erforderlich'}), 400
conn = get_connection()
cur = conn.cursor()
try:
# Check license
license_data = get_license_by_id(license_id)
if not license_data:
return jsonify({'error': 'Lizenz nicht gefunden'}), 404
allocated_count = 0
errors = []
for resource_id in resource_ids:
try:
# Check whether the resource is available
cur.execute("""
SELECT resource_value, status, is_test
FROM resource_pools
WHERE id = %s
""", (resource_id,))
resource = cur.fetchone()
if not resource:
errors.append(f"Ressource {resource_id} nicht gefunden")
continue
if resource[1] != 'available':
errors.append(f"Ressource {resource[0]} ist nicht verfügbar")
continue
# Check test/production compatibility
if resource[2] != license_data['is_test']:
errors.append(f"Ressource {resource[0]} ist {'Test' if resource[2] else 'Produktion'}, Lizenz ist {'Test' if license_data['is_test'] else 'Produktion'}")
continue
# Assign resource
cur.execute("""
UPDATE resource_pools
SET status = 'allocated',
allocated_to_license = %s,
status_changed_at = CURRENT_TIMESTAMP,
status_changed_by = %s
WHERE id = %s
""", (license_id, session['username'], resource_id))
# Create link record
cur.execute("""
INSERT INTO license_resources (license_id, resource_id, assigned_by)
VALUES (%s, %s, %s)
""", (license_id, resource_id, session['username']))
# History entry
cur.execute("""
INSERT INTO resource_history (resource_id, license_id, action, action_by, ip_address)
VALUES (%s, %s, 'allocated', %s, %s)
""", (resource_id, license_id, session['username'], get_client_ip()))
allocated_count += 1
except Exception as e:
errors.append(f"Fehler bei Ressource {resource_id}: {str(e)}")
conn.commit()
# Audit-Log
if allocated_count > 0:
log_audit('RESOURCE_ALLOCATE', 'license', license_id,
additional_info=f"{allocated_count} Ressourcen zugewiesen")
return jsonify({
'success': True,
'allocated_count': allocated_count,
'errors': errors
})
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Zuweisen der Ressourcen: {str(e)}")
return jsonify({'error': 'Fehler beim Zuweisen der Ressourcen'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/resources/check-availability", methods=['GET'])
@login_required
def check_resource_availability():
"""Prüfe Verfügbarkeit von Ressourcen"""
# Einzelne Ressource prüfen (alte API)
resource_type = request.args.get('type')
if resource_type:
count = int(request.args.get('count', 1))
is_test = request.args.get('is_test', 'false') == 'true'
show_test = request.args.get('show_test', 'false') == 'true'
conn = get_connection()
cur = conn.cursor()
try:
# Get available resources with details
if show_test:
# Show all available resources (test and production)
cur.execute("""
SELECT id, resource_value, is_test
FROM resource_pools
WHERE resource_type = %s
AND status = 'available'
ORDER BY is_test, resource_value
LIMIT %s
""", (resource_type, count))
else:
# Show production resources only
cur.execute("""
SELECT id, resource_value, is_test
FROM resource_pools
WHERE resource_type = %s
AND status = 'available'
AND is_test = false
ORDER BY resource_value
LIMIT %s
""", (resource_type, count))
available_resources = []
for row in cur.fetchall():
available_resources.append({
'id': row[0],
'value': row[1],
'is_test': row[2]
})
return jsonify({
'resource_type': resource_type,
'requested': count,
'available': available_resources,
'sufficient': len(available_resources) >= count,
'show_test': show_test
})
except Exception as e:
logging.error(f"Fehler beim Prüfen der Verfügbarkeit: {str(e)}")
return jsonify({'error': 'Fehler beim Prüfen der Verfügbarkeit'}), 500
finally:
cur.close()
conn.close()
# Check several resource types at once (for batch)
domain_count = int(request.args.get('domain', 0))
ipv4_count = int(request.args.get('ipv4', 0))
phone_count = int(request.args.get('phone', 0))
is_test = request.args.get('is_test', 'false') == 'true'
conn = get_connection()
cur = conn.cursor()
try:
# Count available resources for each type
result = {}
# Domains
cur.execute("""
SELECT COUNT(*)
FROM resource_pools
WHERE resource_type = 'domain'
AND status = 'available'
AND is_test = %s
""", (is_test,))
domain_available = cur.fetchone()[0]
# IPv4
cur.execute("""
SELECT COUNT(*)
FROM resource_pools
WHERE resource_type = 'ipv4'
AND status = 'available'
AND is_test = %s
""", (is_test,))
ipv4_available = cur.fetchone()[0]
# Phones
cur.execute("""
SELECT COUNT(*)
FROM resource_pools
WHERE resource_type = 'phone'
AND status = 'available'
AND is_test = %s
""", (is_test,))
phone_available = cur.fetchone()[0]
return jsonify({
'domain_requested': domain_count,
'domain_available': domain_available,
'domain_sufficient': domain_available >= domain_count,
'ipv4_requested': ipv4_count,
'ipv4_available': ipv4_available,
'ipv4_sufficient': ipv4_available >= ipv4_count,
'phone_requested': phone_count,
'phone_available': phone_available,
'phone_sufficient': phone_available >= phone_count,
'all_sufficient': (
domain_available >= domain_count and
ipv4_available >= ipv4_count and
phone_available >= phone_count
),
'is_test': is_test
})
except Exception as e:
logging.error(f"Fehler beim Prüfen der Verfügbarkeit: {str(e)}")
return jsonify({'error': 'Fehler beim Prüfen der Verfügbarkeit'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/global-search", methods=['GET'])
@login_required
def global_search():
"""Globale Suche über alle Entitäten"""
query = request.args.get('q', '').strip()
if not query or len(query) < 3:
return jsonify({'error': 'Suchbegriff muss mindestens 3 Zeichen haben'}), 400
conn = get_connection()
cur = conn.cursor()
results = {
'licenses': [],
'customers': [],
'resources': [],
'sessions': []
}
try:
# Search licenses
cur.execute("""
SELECT id, license_key, customer_name, is_active
FROM licenses
WHERE license_key ILIKE %s
OR customer_name ILIKE %s
OR customer_email ILIKE %s
LIMIT 10
""", (f'%{query}%', f'%{query}%', f'%{query}%'))
for row in cur.fetchall():
results['licenses'].append({
'id': row[0],
'license_key': row[1],
'customer_name': row[2],
'is_active': row[3]
})
# Search customers
cur.execute("""
SELECT id, name, email
FROM customers
WHERE name ILIKE %s OR email ILIKE %s
LIMIT 10
""", (f'%{query}%', f'%{query}%'))
for row in cur.fetchall():
results['customers'].append({
'id': row[0],
'name': row[1],
'email': row[2]
})
# Search resources
cur.execute("""
SELECT id, resource_type, resource_value, status
FROM resource_pools
WHERE resource_value ILIKE %s
LIMIT 10
""", (f'%{query}%',))
for row in cur.fetchall():
results['resources'].append({
'id': row[0],
'type': row[1],
'value': row[2],
'status': row[3]
})
# Search sessions
cur.execute("""
SELECT id, license_key, username, hardware_id, is_active
FROM sessions
WHERE username ILIKE %s OR hardware_id ILIKE %s
ORDER BY started_at DESC
LIMIT 10
""", (f'%{query}%', f'%{query}%'))
for row in cur.fetchall():
results['sessions'].append({
'id': row[0],
'license_key': row[1],
'username': row[2],
'device_id': row[3],
'active': row[4]
})
return jsonify(results)
except Exception as e:
logging.error(f"Fehler bei der globalen Suche: {str(e)}")
return jsonify({'error': 'Fehler bei der Suche'}), 500
finally:
cur.close()
conn.close()
@api_bp.route("/generate-license-key", methods=['POST'])
@login_required
def api_generate_key():
"""API Endpoint zur Generierung eines neuen Lizenzschlüssels"""
try:
# Lizenztyp aus Request holen (default: full)
data = request.get_json() or {}
license_type = data.get('type', 'full')
# Key generieren
key = generate_license_key(license_type)
# Prüfen ob Key bereits existiert (sehr unwahrscheinlich aber sicher ist sicher)
conn = get_connection()
cur = conn.cursor()
# Retry until a unique key is found
attempts = 0
while attempts < 10:  # max 10 attempts
cur.execute("SELECT 1 FROM licenses WHERE license_key = %s", (key,))
if not cur.fetchone():
break  # key is unique
key = generate_license_key(license_type)
attempts += 1
cur.close()
conn.close()
# Log for audit
log_audit('GENERATE_KEY', 'license',
additional_info={'type': license_type, 'key': key})
return jsonify({
'success': True,
'key': key,
'type': license_type
})
except Exception as e:
logging.error(f"Fehler bei Key-Generierung: {str(e)}")
return jsonify({
'success': False,
'error': 'Fehler bei der Key-Generierung'
}), 500
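The `api_customers` endpoint above paginates the full customer list in memory for Select2. That slice-and-check logic can be expressed as a small standalone helper (a sketch; `paginate` is a hypothetical name, not part of the original module):

```python
def paginate(items, page, per_page=20):
    """Return the page slice plus a flag telling Select2 whether more pages remain."""
    start = (page - 1) * per_page
    end = start + per_page
    return items[start:end], len(items) > end
```

The returned flag maps directly onto Select2's `pagination.more` field in the JSON response.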


@@ -1,373 +0,0 @@
import os
import logging
import secrets
import string
from datetime import datetime, timedelta
from pathlib import Path
from flask import Blueprint, render_template, request, redirect, session, url_for, flash, send_file
import config
from auth.decorators import login_required
from utils.audit import log_audit
from utils.network import get_client_ip
from utils.export import create_batch_export
from db import get_connection, get_db_connection, get_db_cursor
from models import get_customers
# Create Blueprint
batch_bp = Blueprint('batch', __name__)
def generate_license_key():
"""Generiert einen zufälligen Lizenzschlüssel"""
chars = string.ascii_uppercase + string.digits
return '-'.join([''.join(secrets.choice(chars) for _ in range(4)) for _ in range(4)])
@batch_bp.route("/batch", methods=["GET", "POST"])
@login_required
def batch_create():
"""Batch-Erstellung von Lizenzen"""
customers = get_customers()
if request.method == "POST":
conn = get_connection()
cur = conn.cursor()
try:
# Form data
customer_id = int(request.form['customer_id'])
license_type = request.form['license_type']
count = int(request.form['quantity'])  # corrected from 'count' to 'quantity'
valid_from = request.form['valid_from']
valid_until = request.form['valid_until']
device_limit = int(request.form['device_limit'])
is_test = 'is_test' in request.form
# Validation
if count < 1 or count > 100:
flash('Anzahl muss zwischen 1 und 100 liegen!', 'error')
return redirect(url_for('batch.batch_create'))
# Get customer data
cur.execute("SELECT name, email FROM customers WHERE id = %s", (customer_id,))
customer = cur.fetchone()
if not customer:
flash('Kunde nicht gefunden!', 'error')
return redirect(url_for('batch.batch_create'))
created_licenses = []
# Create licenses
for i in range(count):
license_key = generate_license_key()
# Check whether the key already exists
while True:
cur.execute("SELECT id FROM licenses WHERE license_key = %s", (license_key,))
if not cur.fetchone():
break
license_key = generate_license_key()
# Create license
cur.execute("""
INSERT INTO licenses (
license_key, customer_id,
license_type, valid_from, valid_until, device_limit,
is_test, created_at
) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
""", (
license_key, customer_id,
license_type, valid_from, valid_until, device_limit,
is_test, datetime.now()
))
license_id = cur.fetchone()[0]
created_licenses.append({
'id': license_id,
'license_key': license_key
})
# Audit-Log
log_audit('CREATE', 'license', license_id,
new_values={
'license_key': license_key,
'customer_name': customer[0],
'batch_creation': True
})
conn.commit()
# Store created licenses in the session for export
session['batch_created_licenses'] = created_licenses
session['batch_customer_name'] = customer[0]
session['batch_customer_email'] = customer[1]
flash(f'{count} Lizenzen erfolgreich erstellt!', 'success')
# Redirect to export
return redirect(url_for('batch.batch_export'))
except Exception as e:
conn.rollback()
logging.error(f"Fehler bei Batch-Erstellung: {str(e)}")
flash('Fehler bei der Batch-Erstellung!', 'error')
finally:
cur.close()
conn.close()
return render_template("batch_form.html", customers=customers)
@batch_bp.route("/batch/export")
@login_required
def batch_export():
"""Exportiert die zuletzt erstellten Batch-Lizenzen"""
created_licenses = session.get('batch_created_licenses', [])
if not created_licenses:
flash('Keine Lizenzen zum Exportieren gefunden!', 'error')
return redirect(url_for('batch.batch_create'))
conn = get_connection()
cur = conn.cursor()
try:
# Get full license data
license_ids = [l['id'] for l in created_licenses]
cur.execute("""
SELECT
l.license_key, c.name, c.email,
l.license_type, l.valid_from, l.valid_until,
l.device_limit, l.is_test, l.created_at
FROM licenses l
JOIN customers c ON l.customer_id = c.id
WHERE l.id = ANY(%s)
ORDER BY l.id
""", (license_ids,))
licenses = []
for row in cur.fetchall():
licenses.append({
'license_key': row[0],
'customer_name': row[1],
'customer_email': row[2],
'license_type': row[3],
'valid_from': row[4],
'valid_until': row[5],
'device_limit': row[6],
'is_test': row[7],
'created_at': row[8]
})
# Remove from session
session.pop('batch_created_licenses', None)
session.pop('batch_customer_name', None)
session.pop('batch_customer_email', None)
# Create and send Excel export
return create_batch_export(licenses)
except Exception as e:
logging.error(f"Fehler beim Export: {str(e)}")
flash('Fehler beim Exportieren der Lizenzen!', 'error')
return redirect(url_for('batch.batch_create'))
finally:
cur.close()
conn.close()
@batch_bp.route("/batch/update", methods=["GET", "POST"])
@login_required
def batch_update():
"""Batch-Update von Lizenzen"""
if request.method == "POST":
conn = get_connection()
cur = conn.cursor()
try:
# Form data
license_keys = request.form.get('license_keys', '').strip().split('\n')
license_keys = [key.strip() for key in license_keys if key.strip()]
if not license_keys:
flash('Keine Lizenzschlüssel angegeben!', 'error')
return redirect(url_for('batch.batch_update'))
# Update parameters
updates = []
params = []
if 'update_valid_until' in request.form and request.form['valid_until']:
updates.append("valid_until = %s")
params.append(request.form['valid_until'])
if 'update_device_limit' in request.form and request.form['device_limit']:
updates.append("device_limit = %s")
params.append(int(request.form['device_limit']))
if 'update_active' in request.form:
updates.append("active = %s")
params.append('active' in request.form)
if not updates:
flash('Keine Änderungen angegeben!', 'error')
return redirect(url_for('batch.batch_update'))
# Execute updates
updated_count = 0
not_found = []
for license_key in license_keys:
# Check whether the license exists
cur.execute("SELECT id FROM licenses WHERE license_key = %s", (license_key,))
result = cur.fetchone()
if not result:
not_found.append(license_key)
continue
license_id = result[0]
# Execute update
update_params = params + [license_id]
cur.execute(f"""
UPDATE licenses
SET {', '.join(updates)}
WHERE id = %s
""", update_params)
# Audit-Log
log_audit('BATCH_UPDATE', 'license', license_id,
additional_info=f"Batch-Update: {', '.join(updates)}")
updated_count += 1
conn.commit()
# Feedback
flash(f'{updated_count} Lizenzen erfolgreich aktualisiert!', 'success')
if not_found:
flash(f'{len(not_found)} Lizenzen nicht gefunden: {", ".join(not_found[:5])}{"..." if len(not_found) > 5 else ""}', 'warning')
except Exception as e:
conn.rollback()
logging.error(f"Fehler bei Batch-Update: {str(e)}")
flash('Fehler beim Batch-Update!', 'error')
finally:
cur.close()
conn.close()
return render_template("batch_update.html")
@batch_bp.route("/batch/import", methods=["GET", "POST"])
@login_required
def batch_import():
"""Import von Lizenzen aus CSV/Excel"""
if request.method == "POST":
if 'file' not in request.files:
flash('Keine Datei ausgewählt!', 'error')
return redirect(url_for('batch.batch_import'))
file = request.files['file']
if file.filename == '':
flash('Keine Datei ausgewählt!', 'error')
return redirect(url_for('batch.batch_import'))
# Process the uploaded file
try:
import pandas as pd
# Read the file
if file.filename.endswith('.csv'):
df = pd.read_csv(file)
elif file.filename.endswith(('.xlsx', '.xls')):
df = pd.read_excel(file)
else:
flash('Ungültiges Dateiformat! Nur CSV und Excel erlaubt.', 'error')
return redirect(url_for('batch.batch_import'))
# Validate columns
required_columns = ['customer_email', 'license_type', 'valid_from', 'valid_until', 'device_limit']
missing_columns = [col for col in required_columns if col not in df.columns]
if missing_columns:
flash(f'Fehlende Spalten: {", ".join(missing_columns)}', 'error')
return redirect(url_for('batch.batch_import'))
conn = get_connection()
cur = conn.cursor()
imported_count = 0
errors = []
for index, row in df.iterrows():
try:
# Find or create the customer
email = row['customer_email']
cur.execute("SELECT id, name FROM customers WHERE email = %s", (email,))
customer = cur.fetchone()
if not customer:
# Create a new customer
name = row.get('customer_name', email.split('@')[0])
cur.execute("""
INSERT INTO customers (name, email, created_at)
VALUES (%s, %s, %s)
RETURNING id
""", (name, email, datetime.now()))
customer_id = cur.fetchone()[0]
customer_name = name
else:
customer_id = customer[0]
customer_name = customer[1]
# Generate a license key
license_key = row.get('license_key', generate_license_key())
# Create the license
cur.execute("""
INSERT INTO licenses (
license_key, customer_id,
license_type, valid_from, valid_until, device_limit,
is_test, created_at
) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
""", (
license_key, customer_id,
row['license_type'], row['valid_from'], row['valid_until'],
int(row['device_limit']), row.get('is_test', False),
datetime.now()
))
license_id = cur.fetchone()[0]
imported_count += 1
# Audit log
log_audit('IMPORT', 'license', license_id,
additional_info=f"Importiert aus {file.filename}")
except Exception as e:
# Note: after a failed INSERT, PostgreSQL aborts the whole transaction;
# subsequent rows will also fail unless a per-row savepoint is rolled back
errors.append(f"Zeile {index + 2}: {str(e)}")
conn.commit()
# Feedback
flash(f'{imported_count} Lizenzen erfolgreich importiert!', 'success')
if errors:
flash(f'{len(errors)} Fehler aufgetreten. Erste Fehler: {"; ".join(errors[:3])}', 'warning')
except Exception as e:
if 'conn' in locals():
conn.rollback()
logging.error(f"Fehler beim Import: {str(e)}")
flash(f'Fehler beim Import: {str(e)}', 'error')
finally:
if 'conn' in locals():
cur.close()
conn.close()
return render_template("batch_import.html")
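The per-row `try/except` in the import loop has a subtle problem on PostgreSQL: once one INSERT fails, the whole transaction is aborted and every later row fails with "current transaction is aborted". The usual fix is a savepoint per row. A hedged sketch of that pattern, using sqlite3 only because it is self-contained and shares the SAVEPOINT syntax (this is not the original code):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE licenses (license_key TEXT PRIMARY KEY)")
conn.execute("BEGIN")
errors = []
for key in ["A", "B", "A", "C"]:  # the third row violates the PRIMARY KEY
    conn.execute("SAVEPOINT row_sp")
    try:
        conn.execute("INSERT INTO licenses VALUES (?)", (key,))
        conn.execute("RELEASE SAVEPOINT row_sp")
    except sqlite3.IntegrityError as exc:
        conn.execute("ROLLBACK TO SAVEPOINT row_sp")  # undo only this row
        errors.append(str(exc))
conn.execute("COMMIT")
imported = conn.execute("SELECT COUNT(*) FROM licenses").fetchone()[0]
```

With psycopg2 the same shape applies: `cur.execute("SAVEPOINT row_sp")` before each row, `ROLLBACK TO SAVEPOINT row_sp` in the row-level except, and one final commit.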


@@ -1,461 +0,0 @@
import os
import logging
from datetime import datetime
from zoneinfo import ZoneInfo
from flask import Blueprint, render_template, request, redirect, session, url_for, flash, jsonify
import config
from auth.decorators import login_required
from utils.audit import log_audit
from db import get_connection, get_db_connection, get_db_cursor
from models import get_customers, get_customer_by_id
# Create Blueprint
customer_bp = Blueprint('customers', __name__)
# Test route
@customer_bp.route("/test-customers")
def test_customers():
return "Customer blueprint is working!"
@customer_bp.route("/customers")
@login_required
def customers():
show_test = request.args.get('show_test', 'false').lower() == 'true'
search = request.args.get('search', '').strip()
page = request.args.get('page', 1, type=int)
per_page = 20
sort = request.args.get('sort', 'name')
order = request.args.get('order', 'asc')
customers_list = get_customers(show_test=show_test, search=search)
# Sorting
if sort == 'name':
customers_list.sort(key=lambda x: x['name'].lower(), reverse=(order == 'desc'))
elif sort == 'email':
customers_list.sort(key=lambda x: x['email'].lower(), reverse=(order == 'desc'))
elif sort == 'created_at':
customers_list.sort(key=lambda x: x['created_at'], reverse=(order == 'desc'))
# Pagination
total_customers = len(customers_list)
total_pages = (total_customers + per_page - 1) // per_page
start = (page - 1) * per_page
end = start + per_page
paginated_customers = customers_list[start:end]
return render_template("customers.html",
customers=paginated_customers,
show_test=show_test,
search=search,
page=page,
per_page=per_page,
total_pages=total_pages,
total_customers=total_customers,
sort=sort,
order=order,
current_order=order)
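The sort-then-slice pagination in the view above reduces to a small pure helper. A sketch under assumed names (`paginate` is not in the original code); the page count uses ceiling division:

```python
def paginate(items, page, per_page):
    """Return (items for this page, total page count) using ceil division."""
    total_pages = (len(items) + per_page - 1) // per_page  # ceil(len/per_page)
    start = (page - 1) * per_page
    return items[start:start + per_page], total_pages
```

For 45 customers at 20 per page this yields 3 pages, with the last page holding the remaining 5 entries.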
@customer_bp.route("/customer/edit/<int:customer_id>", methods=["GET", "POST"])
@login_required
def edit_customer(customer_id):
if request.method == "POST":
try:
# Get current customer data for comparison
current_customer = get_customer_by_id(customer_id)
if not current_customer:
flash('Kunde nicht gefunden!', 'error')
return redirect(url_for('customers.customers'))
with get_db_connection() as conn:
cur = conn.cursor()
try:
# Update customer data
new_values = {
'name': request.form['name'],
'email': request.form['email'],
'is_test': 'is_test' in request.form
}
cur.execute("""
UPDATE customers
SET name = %s, email = %s, is_test = %s
WHERE id = %s
""", (
new_values['name'],
new_values['email'],
new_values['is_test'],
customer_id
))
conn.commit()
# Log changes
log_audit('UPDATE', 'customer', customer_id,
old_values={
'name': current_customer['name'],
'email': current_customer['email'],
'is_test': current_customer.get('is_test', False)
},
new_values=new_values)
flash('Kunde erfolgreich aktualisiert!', 'success')
# Redirect with show_test parameter if needed
redirect_url = url_for('customers.customers')
if request.form.get('show_test') == 'true':
redirect_url += '?show_test=true'
return redirect(redirect_url)
finally:
cur.close()
except Exception as e:
logging.error(f"Fehler beim Aktualisieren des Kunden: {str(e)}")
flash('Fehler beim Aktualisieren des Kunden!', 'error')
return redirect(url_for('customers.customers'))
# GET request
customer_data = get_customer_by_id(customer_id)
if not customer_data:
flash('Kunde nicht gefunden!', 'error')
return redirect(url_for('customers.customers'))
return render_template("edit_customer.html", customer=customer_data)
@customer_bp.route("/customer/create", methods=["GET", "POST"])
@login_required
def create_customer():
if request.method == "POST":
conn = get_connection()
cur = conn.cursor()
try:
# Insert new customer
name = request.form['name']
email = request.form['email']
cur.execute("""
INSERT INTO customers (name, email, created_at)
VALUES (%s, %s, %s)
RETURNING id
""", (name, email, datetime.now()))
customer_id = cur.fetchone()[0]
conn.commit()
# Log creation
log_audit('CREATE', 'customer', customer_id,
new_values={
'name': name,
'email': email
})
flash(f'Kunde {name} erfolgreich erstellt!', 'success')
return redirect(url_for('customers.customers'))
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Erstellen des Kunden: {str(e)}")
flash('Fehler beim Erstellen des Kunden!', 'error')
finally:
cur.close()
conn.close()
return render_template("create_customer.html")
@customer_bp.route("/customer/delete/<int:customer_id>", methods=["POST"])
@login_required
def delete_customer(customer_id):
conn = get_connection()
cur = conn.cursor()
try:
# Get customer data before deletion
customer_data = get_customer_by_id(customer_id)
if not customer_data:
flash('Kunde nicht gefunden!', 'error')
return redirect(url_for('customers.customers'))
# Check if customer has licenses
cur.execute("SELECT COUNT(*) FROM licenses WHERE customer_id = %s", (customer_id,))
license_count = cur.fetchone()[0]
if license_count > 0:
flash(f'Kunde kann nicht gelöscht werden - hat noch {license_count} Lizenz(en)!', 'error')
return redirect(url_for('customers.customers'))
# Delete the customer
cur.execute("DELETE FROM customers WHERE id = %s", (customer_id,))
conn.commit()
# Log deletion
log_audit('DELETE', 'customer', customer_id,
old_values={
'name': customer_data['name'],
'email': customer_data['email']
})
flash(f'Kunde {customer_data["name"]} erfolgreich gelöscht!', 'success')
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Löschen des Kunden: {str(e)}")
flash('Fehler beim Löschen des Kunden!', 'error')
finally:
cur.close()
conn.close()
return redirect(url_for('customers.customers'))
@customer_bp.route("/customers-licenses")
@login_required
def customers_licenses():
"""Shows the overview of customers and their licenses"""
import logging
import psycopg2
logging.info("=== CUSTOMERS-LICENSES ROUTE CALLED ===")
# Get show_test parameter from URL
show_test = request.args.get('show_test', 'false').lower() == 'true'
logging.info(f"show_test parameter: {show_test}")
try:
# Direct connection without helper functions
conn = psycopg2.connect(
host=os.getenv("POSTGRES_HOST", "postgres"),
port=os.getenv("POSTGRES_PORT", "5432"),
dbname=os.getenv("POSTGRES_DB"),
user=os.getenv("POSTGRES_USER"),
password=os.getenv("POSTGRES_PASSWORD")
)
conn.set_client_encoding('UTF8')
cur = conn.cursor()
try:
# Fetch all customers with their licenses
# If show_test=false, show only non-test customers
query = """
SELECT
c.id,
c.name,
c.email,
c.created_at,
COUNT(l.id),
COUNT(CASE WHEN l.is_active = true THEN 1 END),
COUNT(CASE WHEN l.is_test = true THEN 1 END),
MAX(l.created_at),
c.is_test
FROM customers c
LEFT JOIN licenses l ON c.id = l.customer_id
WHERE (%s OR c.is_test = false)
GROUP BY c.id, c.name, c.email, c.created_at, c.is_test
ORDER BY c.name
"""
cur.execute(query, (show_test,))
customers = []
results = cur.fetchall()
logging.info(f"=== QUERY RETURNED {len(results)} ROWS ===")
for idx, row in enumerate(results):
logging.info(f"Row {idx}: Type={type(row)}, Length={len(row) if hasattr(row, '__len__') else 'N/A'}")
customers.append({
'id': row[0],
'name': row[1],
'email': row[2],
'created_at': row[3],
'license_count': row[4],
'active_licenses': row[5],
'test_licenses': row[6],
'last_license_created': row[7],
'is_test': row[8]
})
return render_template("customers_licenses.html",
customers=customers,
show_test=show_test)
finally:
cur.close()
conn.close()
except Exception as e:
import traceback
error_details = f"Fehler beim Laden der Kunden-Lizenz-Übersicht: {str(e)}\nType: {type(e)}\nTraceback: {traceback.format_exc()}"
logging.error(error_details)
flash(f'Datenbankfehler: {str(e)}', 'error')
return redirect(url_for('admin.dashboard'))
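The `WHERE (%s OR c.is_test = false)` clause keeps a customer when `show_test` is true or the customer is not a test customer. The same predicate expressed in Python (helper name assumed for illustration):

```python
def filter_customers(customers, show_test):
    """Mirror of WHERE (%s OR c.is_test = false): test customers only on request."""
    return [c for c in customers if show_test or not c.get("is_test", False)]
```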
@customer_bp.route("/api/customer/<int:customer_id>/licenses")
@login_required
def api_customer_licenses(customer_id):
"""API endpoint for a customer's licenses"""
conn = get_connection()
cur = conn.cursor()
try:
# Fetch customer information
customer = get_customer_by_id(customer_id)
if not customer:
return jsonify({'error': 'Kunde nicht gefunden'}), 404
# Fetch all licenses of the customer
cur.execute("""
SELECT
l.id,
l.license_key,
l.license_type,
l.is_active,
l.is_test,
l.valid_from,
l.valid_until,
l.device_limit,
l.created_at,
(SELECT COUNT(*) FROM sessions s WHERE s.license_id = l.id AND s.is_active = true) as active_sessions,
(SELECT COUNT(DISTINCT hardware_id) FROM device_registrations dr WHERE dr.license_id = l.id) as registered_devices,
CASE
WHEN l.valid_until < CURRENT_DATE THEN 'abgelaufen'
WHEN l.valid_until < CURRENT_DATE + INTERVAL '30 days' THEN 'läuft bald ab'
WHEN l.is_active = false THEN 'inaktiv'
ELSE 'aktiv'
END as status,
l.domain_count,
l.ipv4_count,
l.phone_count,
(SELECT COUNT(*) FROM device_registrations WHERE license_id = l.id AND is_active = TRUE) as active_devices,
-- Actual resource counts
(SELECT COUNT(*) FROM license_resources lr
JOIN resource_pools rp ON lr.resource_id = rp.id
WHERE lr.license_id = l.id AND lr.is_active = true AND rp.resource_type = 'domain') as actual_domain_count,
(SELECT COUNT(*) FROM license_resources lr
JOIN resource_pools rp ON lr.resource_id = rp.id
WHERE lr.license_id = l.id AND lr.is_active = true AND rp.resource_type = 'ipv4') as actual_ipv4_count,
(SELECT COUNT(*) FROM license_resources lr
JOIN resource_pools rp ON lr.resource_id = rp.id
WHERE lr.license_id = l.id AND lr.is_active = true AND rp.resource_type = 'phone') as actual_phone_count
FROM licenses l
WHERE l.customer_id = %s
ORDER BY l.created_at DESC
""", (customer_id,))
licenses = []
for row in cur.fetchall():
license_id = row[0]
# Fetch the concrete resources assigned to this license
conn2 = get_connection()
cur2 = conn2.cursor()
cur2.execute("""
SELECT rp.id, rp.resource_type, rp.resource_value, lr.assigned_at
FROM resource_pools rp
JOIN license_resources lr ON rp.id = lr.resource_id
WHERE lr.license_id = %s AND lr.is_active = true
ORDER BY rp.resource_type, rp.resource_value
""", (license_id,))
resources = {
'domains': [],
'ipv4s': [],
'phones': []
}
for res_row in cur2.fetchall():
resource_data = {
'id': res_row[0],
'value': res_row[2],
'assigned_at': res_row[3].strftime('%Y-%m-%d %H:%M:%S') if res_row[3] else None
}
if res_row[1] == 'domain':
resources['domains'].append(resource_data)
elif res_row[1] == 'ipv4':
resources['ipv4s'].append(resource_data)
elif res_row[1] == 'phone':
resources['phones'].append(resource_data)
cur2.close()
conn2.close()
licenses.append({
'id': row[0],
'license_key': row[1],
'license_type': row[2],
'is_active': row[3], # corrected from 'active' to 'is_active'
'is_test': row[4],
'valid_from': row[5].strftime('%Y-%m-%d') if row[5] else None,
'valid_until': row[6].strftime('%Y-%m-%d') if row[6] else None,
'device_limit': row[7],
'created_at': row[8].strftime('%Y-%m-%d %H:%M:%S') if row[8] else None,
'active_sessions': row[9],
'registered_devices': row[10],
'status': row[11],
'domain_count': row[12],
'ipv4_count': row[13],
'phone_count': row[14],
'active_devices': row[15],
'actual_domain_count': row[16],
'actual_ipv4_count': row[17],
'actual_phone_count': row[18],
'resources': resources
})
return jsonify({
'success': True, # Important: the frontend expects this field
'customer': {
'id': customer['id'],
'name': customer['name'],
'email': customer['email']
},
'licenses': licenses
})
except Exception as e:
logging.error(f"Fehler beim Laden der Kundenlizenzen: {str(e)}")
return jsonify({'error': 'Fehler beim Laden der Daten'}), 500
finally:
cur.close()
conn.close()
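`api_customer_licenses` above opens a second connection and runs one resource query per license row, a classic N+1 pattern. A hedged sketch of the alternative: fetch every resource row for the customer in a single query and group them in Python (function name and row shape `(license_id, resource_type, resource_value)` are assumptions, not the original code):

```python
from collections import defaultdict

# Maps the DB resource_type values to the JSON keys used by the route.
TYPE_KEYS = {"domain": "domains", "ipv4": "ipv4s", "phone": "phones"}

def group_resources(rows):
    """Group (license_id, resource_type, resource_value) rows per license."""
    grouped = defaultdict(lambda: {"domains": [], "ipv4s": [], "phones": []})
    for license_id, resource_type, resource_value in rows:
        key = TYPE_KEYS.get(resource_type)
        if key:
            grouped[license_id][key].append(resource_value)
    return dict(grouped)
```

One `WHERE lr.license_id = ANY(%s)` query plus this grouping replaces N connection open/close cycles.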
@customer_bp.route("/api/customer/<int:customer_id>/quick-stats")
@login_required
def api_customer_quick_stats(customer_id):
"""Quick statistics for a customer"""
conn = get_connection()
cur = conn.cursor()
try:
cur.execute("""
SELECT
COUNT(l.id) as total_licenses,
COUNT(CASE WHEN l.is_active = true THEN 1 END) as active_licenses,
COUNT(CASE WHEN l.is_test = true THEN 1 END) as test_licenses,
SUM(l.device_limit) as total_device_limit
FROM licenses l
WHERE l.customer_id = %s
""", (customer_id,))
row = cur.fetchone()
return jsonify({
'total_licenses': row[0] or 0,
'active_licenses': row[1] or 0,
'test_licenses': row[2] or 0,
'total_device_limit': row[3] or 0
})
except Exception as e:
logging.error(f"Fehler beim Laden der Kundenstatistiken: {str(e)}")
return jsonify({'error': 'Fehler beim Laden der Daten'}), 500
finally:
cur.close()
conn.close()


@@ -1,364 +0,0 @@
import logging
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
from flask import Blueprint, request, send_file
import config
from auth.decorators import login_required
from utils.export import create_excel_export, prepare_audit_export_data
from db import get_connection
# Create Blueprint
export_bp = Blueprint('export', __name__, url_prefix='/export')
@export_bp.route("/licenses")
@login_required
def export_licenses():
"""Exports licenses as an Excel file"""
conn = get_connection()
cur = conn.cursor()
try:
# Filters from the request
show_test = request.args.get('show_test', 'false') == 'true'
# SQL query with optional test filter
if show_test:
query = """
SELECT
l.id,
l.license_key,
c.name as customer_name,
c.email as customer_email,
l.license_type,
l.valid_from,
l.valid_until,
l.active,
l.device_limit,
l.created_at,
l.is_test,
CASE
WHEN l.valid_until < CURRENT_DATE THEN 'Abgelaufen'
WHEN l.active = false THEN 'Deaktiviert'
ELSE 'Aktiv'
END as status,
(SELECT COUNT(*) FROM sessions s WHERE s.license_key = l.license_key AND s.active = true) as active_sessions,
(SELECT COUNT(DISTINCT device_id) FROM sessions s WHERE s.license_key = l.license_key) as registered_devices
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
ORDER BY l.created_at DESC
"""
else:
query = """
SELECT
l.id,
l.license_key,
c.name as customer_name,
c.email as customer_email,
l.license_type,
l.valid_from,
l.valid_until,
l.active,
l.device_limit,
l.created_at,
l.is_test,
CASE
WHEN l.valid_until < CURRENT_DATE THEN 'Abgelaufen'
WHEN l.active = false THEN 'Deaktiviert'
ELSE 'Aktiv'
END as status,
(SELECT COUNT(*) FROM sessions s WHERE s.license_key = l.license_key AND s.active = true) as active_sessions,
(SELECT COUNT(DISTINCT device_id) FROM sessions s WHERE s.license_key = l.license_key) as registered_devices
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
WHERE l.is_test = false
ORDER BY l.created_at DESC
"""
cur.execute(query)
# Prepare data for export
data = []
columns = ['ID', 'Lizenzschlüssel', 'Kunde', 'E-Mail', 'Typ', 'Gültig von',
'Gültig bis', 'Aktiv', 'Gerätelimit', 'Erstellt am', 'Test-Lizenz',
'Status', 'Aktive Sessions', 'Registrierte Geräte']
for row in cur.fetchall():
data.append(list(row))
# Create the Excel file
excel_file = create_excel_export(data, columns, 'Lizenzen')
# Send the file
filename = f"lizenzen_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
return send_file(
excel_file,
mimetype='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
as_attachment=True,
download_name=filename
)
except Exception as e:
logging.error(f"Fehler beim Export: {str(e)}")
return "Fehler beim Exportieren der Lizenzen", 500
finally:
cur.close()
conn.close()
@export_bp.route("/audit")
@login_required
def export_audit():
"""Exports audit logs as an Excel file"""
conn = get_connection()
cur = conn.cursor()
try:
# Filters from the request
days = int(request.args.get('days', 30))
action_filter = request.args.get('action', '')
entity_type_filter = request.args.get('entity_type', '')
# Prepare data for export
data = prepare_audit_export_data(days, action_filter, entity_type_filter)
# Create the Excel file
columns = ['Zeitstempel', 'Benutzer', 'Aktion', 'Entität', 'Entität ID',
'IP-Adresse', 'Alte Werte', 'Neue Werte', 'Zusatzinfo']
excel_file = create_excel_export(data, columns, 'Audit-Log')
# Send the file
filename = f"audit_log_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
return send_file(
excel_file,
mimetype='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
as_attachment=True,
download_name=filename
)
except Exception as e:
logging.error(f"Fehler beim Export: {str(e)}")
return "Fehler beim Exportieren der Audit-Logs", 500
finally:
cur.close()
conn.close()
@export_bp.route("/customers")
@login_required
def export_customers():
"""Exports customers as an Excel file"""
conn = get_connection()
cur = conn.cursor()
try:
# SQL Query
cur.execute("""
SELECT
c.id,
c.name,
c.email,
c.phone,
c.address,
c.created_at,
c.is_test,
COUNT(l.id) as license_count,
COUNT(CASE WHEN l.active = true THEN 1 END) as active_licenses,
COUNT(CASE WHEN l.valid_until < CURRENT_DATE THEN 1 END) as expired_licenses
FROM customers c
LEFT JOIN licenses l ON c.id = l.customer_id
GROUP BY c.id, c.name, c.email, c.phone, c.address, c.created_at, c.is_test
ORDER BY c.name
""")
# Prepare data for export
data = []
columns = ['ID', 'Name', 'E-Mail', 'Telefon', 'Adresse', 'Erstellt am',
'Test-Kunde', 'Anzahl Lizenzen', 'Aktive Lizenzen', 'Abgelaufene Lizenzen']
for row in cur.fetchall():
data.append(list(row))
# Create the Excel file
excel_file = create_excel_export(data, columns, 'Kunden')
# Send the file
filename = f"kunden_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
return send_file(
excel_file,
mimetype='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
as_attachment=True,
download_name=filename
)
except Exception as e:
logging.error(f"Fehler beim Export: {str(e)}")
return "Fehler beim Exportieren der Kunden", 500
finally:
cur.close()
conn.close()
@export_bp.route("/sessions")
@login_required
def export_sessions():
"""Exports sessions as an Excel file"""
conn = get_connection()
cur = conn.cursor()
try:
# Filters from the request
days = int(request.args.get('days', 7))
active_only = request.args.get('active_only', 'false') == 'true'
# SQL Query
if active_only:
query = """
SELECT
s.id,
s.license_key,
l.customer_name,
s.username,
s.device_id,
s.login_time,
s.logout_time,
s.last_activity,
s.active,
l.license_type,
l.is_test
FROM sessions s
LEFT JOIN licenses l ON s.license_key = l.license_key
WHERE s.active = true
ORDER BY s.login_time DESC
"""
cur.execute(query)
else:
query = """
SELECT
s.id,
s.license_key,
l.customer_name,
s.username,
s.device_id,
s.login_time,
s.logout_time,
s.last_activity,
s.active,
l.license_type,
l.is_test
FROM sessions s
LEFT JOIN licenses l ON s.license_key = l.license_key
WHERE s.login_time >= CURRENT_TIMESTAMP - INTERVAL '%s days'
ORDER BY s.login_time DESC
"""
cur.execute(query, (days,))
# Prepare data for export
data = []
columns = ['ID', 'Lizenzschlüssel', 'Kunde', 'Benutzer', 'Geräte-ID',
'Login-Zeit', 'Logout-Zeit', 'Letzte Aktivität', 'Aktiv',
'Lizenztyp', 'Test-Lizenz']
for row in cur.fetchall():
data.append(list(row))
# Create the Excel file
excel_file = create_excel_export(data, columns, 'Sessions')
# Send the file
filename = f"sessions_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
return send_file(
excel_file,
mimetype='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
as_attachment=True,
download_name=filename
)
except Exception as e:
logging.error(f"Fehler beim Export: {str(e)}")
return "Fehler beim Exportieren der Sessions", 500
finally:
cur.close()
conn.close()
@export_bp.route("/resources")
@login_required
def export_resources():
"""Exports resources as an Excel file"""
conn = get_connection()
cur = conn.cursor()
try:
# Filters from the request
resource_type = request.args.get('type', 'all')
status_filter = request.args.get('status', 'all')
show_test = request.args.get('show_test', 'false') == 'true'
# Build the SQL query
query = """
SELECT
rp.id,
rp.resource_type,
rp.resource_value,
rp.status,
rp.is_test,
l.license_key,
c.name as customer_name,
rp.created_at,
rp.created_by,
rp.status_changed_at,
rp.status_changed_by,
rp.quarantine_reason
FROM resource_pools rp
LEFT JOIN licenses l ON rp.allocated_to_license = l.id
LEFT JOIN customers c ON l.customer_id = c.id
WHERE 1=1
"""
params = []
if resource_type != 'all':
query += " AND rp.resource_type = %s"
params.append(resource_type)
if status_filter != 'all':
query += " AND rp.status = %s"
params.append(status_filter)
if not show_test:
query += " AND rp.is_test = false"
query += " ORDER BY rp.resource_type, rp.resource_value"
cur.execute(query, params)
# Prepare data for export
data = []
columns = ['ID', 'Typ', 'Wert', 'Status', 'Test-Ressource', 'Lizenzschlüssel',
'Kunde', 'Erstellt am', 'Erstellt von', 'Status geändert am',
'Status geändert von', 'Quarantäne-Grund']
for row in cur.fetchall():
data.append(list(row))
# Create the Excel file
excel_file = create_excel_export(data, columns, 'Ressourcen')
# Send the file
filename = f"ressourcen_export_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
return send_file(
excel_file,
mimetype='application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
as_attachment=True,
download_name=filename
)
except Exception as e:
logging.error(f"Fehler beim Export: {str(e)}")
return "Fehler beim Exportieren der Ressourcen", 500
finally:
cur.close()
conn.close()
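The resources export above assembles its WHERE clause incrementally, appending one `AND` fragment and one parameter per active filter. The same pattern as a standalone helper (name and defaults assumed for illustration):

```python
def build_resource_filter(resource_type="all", status="all", show_test=False):
    """Build the dynamic WHERE clause and parameter list for the resource export."""
    query = "WHERE 1=1"  # base clause so every filter can append with AND
    params = []
    if resource_type != "all":
        query += " AND rp.resource_type = %s"
        params.append(resource_type)
    if status != "all":
        query += " AND rp.status = %s"
        params.append(status)
    if not show_test:
        query += " AND rp.is_test = false"  # no parameter needed for a constant
    return query, params
```

The `WHERE 1=1` anchor keeps the fragment logic uniform: every condition can start with `AND` regardless of which filters are set.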


@@ -1,429 +0,0 @@
import logging
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
from flask import Blueprint, render_template, request, redirect, session, url_for, flash
import config
from auth.decorators import login_required
from utils.audit import log_audit
from utils.network import get_client_ip
from db import get_connection, get_db_connection, get_db_cursor
from models import get_active_sessions
# Create Blueprint
session_bp = Blueprint('sessions', __name__)
@session_bp.route("/sessions")
@login_required
def sessions():
conn = get_connection()
cur = conn.cursor()
try:
# Get active sessions with calculated inactive time
cur.execute("""
SELECT s.id, s.session_id, l.license_key, c.name, s.ip_address,
s.user_agent, s.started_at, s.last_heartbeat,
EXTRACT(EPOCH FROM (NOW() - s.last_heartbeat))/60 as minutes_inactive
FROM sessions s
JOIN licenses l ON s.license_id = l.id
JOIN customers c ON l.customer_id = c.id
WHERE s.is_active = TRUE
ORDER BY s.last_heartbeat DESC
""")
active_sessions = cur.fetchall()
# Get recent ended sessions
cur.execute("""
SELECT s.id, s.session_id, l.license_key, c.name, s.ip_address,
s.started_at, s.ended_at,
EXTRACT(EPOCH FROM (s.ended_at - s.started_at))/60 as duration_minutes
FROM sessions s
JOIN licenses l ON s.license_id = l.id
JOIN customers c ON l.customer_id = c.id
WHERE s.is_active = FALSE
AND s.ended_at > NOW() - INTERVAL '24 hours'
ORDER BY s.ended_at DESC
LIMIT 50
""")
recent_sessions = cur.fetchall()
return render_template("sessions.html",
active_sessions=active_sessions,
recent_sessions=recent_sessions)
except Exception as e:
logging.error(f"Error loading sessions: {str(e)}")
flash('Fehler beim Laden der Sessions!', 'error')
return redirect(url_for('admin.dashboard'))
finally:
cur.close()
conn.close()
@session_bp.route("/sessions/history")
@login_required
def session_history():
"""Shows the session history"""
conn = get_connection()
cur = conn.cursor()
try:
# Query parameters
license_key = request.args.get('license_key', '')
username = request.args.get('username', '')
days = int(request.args.get('days', 7))
# Base query
query = """
SELECT
s.id,
s.license_key,
s.username,
s.device_id,
s.login_time,
s.logout_time,
s.last_activity,
s.active,
l.customer_name,
l.license_type,
l.is_test
FROM sessions s
LEFT JOIN licenses l ON s.license_key = l.license_key
WHERE 1=1
"""
params = []
# Apply filters
if license_key:
query += " AND s.license_key = %s"
params.append(license_key)
if username:
query += " AND s.username ILIKE %s"
params.append(f'%{username}%')
# Time filter
query += " AND s.login_time >= CURRENT_TIMESTAMP - INTERVAL '%s days'"
params.append(days)
query += " ORDER BY s.login_time DESC LIMIT 1000"
cur.execute(query, params)
sessions_list = []
for row in cur.fetchall():
session_duration = None
if row[4] and row[5]: # login_time and logout_time
duration = row[5] - row[4]
hours = int(duration.total_seconds() // 3600)
minutes = int((duration.total_seconds() % 3600) // 60)
session_duration = f"{hours}h {minutes}m"
elif row[4] and row[7]: # login_time and active
duration = datetime.now(ZoneInfo("UTC")) - row[4]
hours = int(duration.total_seconds() // 3600)
minutes = int((duration.total_seconds() % 3600) // 60)
session_duration = f"{hours}h {minutes}m (aktiv)"
sessions_list.append({
'id': row[0],
'license_key': row[1],
'username': row[2],
'device_id': row[3],
'login_time': row[4],
'logout_time': row[5],
'last_activity': row[6],
'active': row[7],
'customer_name': row[8],
'license_type': row[9],
'is_test': row[10],
'duration': session_duration
})
# Get unique license keys for filter dropdown
cur.execute("""
SELECT DISTINCT s.license_key, l.customer_name
FROM sessions s
LEFT JOIN licenses l ON s.license_key = l.license_key
WHERE s.login_time >= CURRENT_TIMESTAMP - INTERVAL '30 days'
ORDER BY l.customer_name, s.license_key
""")
available_licenses = []
for row in cur.fetchall():
available_licenses.append({
'license_key': row[0],
'customer_name': row[1] or 'Unbekannt'
})
return render_template("session_history.html",
sessions=sessions_list,
available_licenses=available_licenses,
filters={
'license_key': license_key,
'username': username,
'days': days
})
except Exception as e:
logging.error(f"Fehler beim Laden der Session-Historie: {str(e)}")
flash('Fehler beim Laden der Session-Historie!', 'error')
return redirect(url_for('sessions.sessions'))
finally:
cur.close()
conn.close()
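The session-history view computes durations inline from `total_seconds()`. The same logic as a standalone helper (the function name is an assumption; the " (aktiv)" suffix matches the German UI string used by the route):

```python
from datetime import timedelta

def format_duration(delta, active=False):
    """Format a timedelta as 'Hh Mm', optionally marking a still-active session."""
    total = int(delta.total_seconds())
    hours, minutes = total // 3600, (total % 3600) // 60
    suffix = " (aktiv)" if active else ""  # same German UI suffix as the route
    return f"{hours}h {minutes}m{suffix}"
```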
@session_bp.route("/session/end/<int:session_id>", methods=["POST"])
@login_required
def terminate_session(session_id):
"""Ends an active session"""
conn = get_connection()
cur = conn.cursor()
try:
# Get session info
cur.execute("""
SELECT license_key, username, device_id
FROM sessions
WHERE id = %s AND active = true
""", (session_id,))
session_info = cur.fetchone()
if not session_info:
flash('Session nicht gefunden oder bereits beendet!', 'error')
return redirect(url_for('sessions.sessions'))
# Terminate session
cur.execute("""
UPDATE sessions
SET active = false, logout_time = CURRENT_TIMESTAMP
WHERE id = %s
""", (session_id,))
conn.commit()
# Audit log
log_audit('SESSION_TERMINATE', 'session', session_id,
additional_info=f"Session beendet für {session_info[1]} auf Lizenz {session_info[0]}")
flash('Session erfolgreich beendet!', 'success')
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Beenden der Session: {str(e)}")
flash('Fehler beim Beenden der Session!', 'error')
finally:
cur.close()
conn.close()
return redirect(url_for('sessions.sessions'))
@session_bp.route("/sessions/terminate-all/<license_key>", methods=["POST"])
@login_required
def terminate_all_sessions(license_key):
"""Ends all active sessions of a license"""
conn = get_connection()
cur = conn.cursor()
try:
# Count active sessions
cur.execute("""
SELECT COUNT(*) FROM sessions
WHERE license_key = %s AND active = true
""", (license_key,))
active_count = cur.fetchone()[0]
if active_count == 0:
flash('Keine aktiven Sessions gefunden!', 'info')
return redirect(url_for('sessions.sessions'))
# Terminate all sessions
cur.execute("""
UPDATE sessions
SET active = false, logout_time = CURRENT_TIMESTAMP
WHERE license_key = %s AND active = true
""", (license_key,))
conn.commit()
# Audit log
log_audit('SESSION_TERMINATE_ALL', 'license', None,
additional_info=f"{active_count} Sessions beendet für Lizenz {license_key}")
flash(f'{active_count} Sessions erfolgreich beendet!', 'success')
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Beenden der Sessions: {str(e)}")
flash('Fehler beim Beenden der Sessions!', 'error')
finally:
cur.close()
conn.close()
return redirect(url_for('sessions.sessions'))
@session_bp.route("/sessions/cleanup", methods=["POST"])
@login_required
def cleanup_sessions():
"""Cleans up old inactive sessions"""
conn = get_connection()
cur = conn.cursor()
try:
days = int(request.form.get('days', 30))
# Delete old inactive sessions
cur.execute("""
DELETE FROM sessions
WHERE active = false
AND logout_time < CURRENT_TIMESTAMP - INTERVAL '%s days'
RETURNING id
""", (days,))
deleted_ids = [row[0] for row in cur.fetchall()]
deleted_count = len(deleted_ids)
conn.commit()
# Audit log
if deleted_count > 0:
log_audit('SESSION_CLEANUP', 'system', None,
additional_info=f"{deleted_count} Sessions älter als {days} Tage gelöscht")
flash(f'{deleted_count} alte Sessions bereinigt!', 'success')
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Bereinigen der Sessions: {str(e)}")
flash('Fehler beim Bereinigen der Sessions!', 'error')
finally:
cur.close()
conn.close()
return redirect(url_for('sessions.session_history'))
@session_bp.route("/sessions/statistics")
@login_required
def session_statistics():
"""Shows session statistics"""
conn = get_connection()
cur = conn.cursor()
try:
# Current statistics
cur.execute("""
SELECT
COUNT(DISTINCT s.license_key) as active_licenses,
COUNT(DISTINCT s.username) as unique_users,
COUNT(DISTINCT s.device_id) as unique_devices,
COUNT(*) as total_active_sessions
FROM sessions s
WHERE s.active = true
""")
current_stats = cur.fetchone()
# Sessions by license type
cur.execute("""
SELECT
l.license_type,
COUNT(*) as session_count
FROM sessions s
JOIN licenses l ON s.license_key = l.license_key
WHERE s.active = true
GROUP BY l.license_type
ORDER BY session_count DESC
""")
sessions_by_type = []
for row in cur.fetchall():
sessions_by_type.append({
'license_type': row[0],
'count': row[1]
})
# Top 10 licenses by active sessions
cur.execute("""
SELECT
s.license_key,
l.customer_name,
COUNT(*) as session_count,
l.device_limit
FROM sessions s
JOIN licenses l ON s.license_key = l.license_key
WHERE s.active = true
GROUP BY s.license_key, l.customer_name, l.device_limit
ORDER BY session_count DESC
LIMIT 10
""")
top_licenses = []
for row in cur.fetchall():
top_licenses.append({
'license_key': row[0],
'customer_name': row[1],
'session_count': row[2],
'device_limit': row[3]
})
# Session history (last 7 days)
cur.execute("""
SELECT
DATE(login_time) as date,
COUNT(*) as login_count,
COUNT(DISTINCT license_key) as unique_licenses,
COUNT(DISTINCT username) as unique_users
FROM sessions
WHERE login_time >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY DATE(login_time)
ORDER BY date
""")
session_history = []
for row in cur.fetchall():
session_history.append({
'date': row[0].strftime('%Y-%m-%d'),
'login_count': row[1],
'unique_licenses': row[2],
'unique_users': row[3]
})
# Average session duration
cur.execute("""
SELECT
AVG(EXTRACT(EPOCH FROM (logout_time - login_time))/3600) as avg_duration_hours
FROM sessions
WHERE active = false
AND logout_time IS NOT NULL
AND logout_time - login_time < INTERVAL '24 hours'
AND login_time >= CURRENT_DATE - INTERVAL '30 days'
""")
avg_duration = cur.fetchone()[0] or 0
return render_template("session_statistics.html",
current_stats={
'active_licenses': current_stats[0],
'unique_users': current_stats[1],
'unique_devices': current_stats[2],
'total_sessions': current_stats[3]
},
sessions_by_type=sessions_by_type,
top_licenses=top_licenses,
session_history=session_history,
avg_duration=round(avg_duration, 1))
except Exception as e:
logging.error(f"Fehler beim Laden der Session-Statistiken: {str(e)}")
flash('Fehler beim Laden der Statistiken!', 'error')
return redirect(url_for('sessions.sessions'))
finally:
cur.close()
conn.close()