Compare commits

...

10 Commits

Author SHA1 Message Date
6f6cde65db Add latest changes 2025-07-03 20:38:33 +00:00
63f3d92724 Server backup 20250703_203754 - Full system backup before changes 2025-07-03 20:37:56 +00:00
482fa3bd52 Server backup 20250703_172107 - Full system backup before changes 2025-07-03 17:21:09 +00:00
0c66d16a8a Server backup 20250703_153414 - Full system backup before changes 2025-07-03 15:34:16 +00:00
c4fb4c0699 Server backup 20250703_145459 - Full system backup before changes 2025-07-03 14:55:01 +00:00
91075406e4 Server backup 20250703_141921 - Full system backup before changes 2025-07-03 14:19:23 +00:00
fdecfbdd5a Server backup 20250702_215036 - Full system backup before changes 2025-07-02 21:50:38 +00:00
ffc6aa744a Fix version check endpoint authentication
Changed version check endpoints to use X-API-Key authentication instead of Bearer token authentication. This makes them consistent with all other license server endpoints.

Changes:
- Updated /api/version/check to use validate_api_key dependency
- Updated /api/version/latest to use validate_api_key dependency
- Both endpoints now expect X-API-Key header instead of Authorization Bearer
- Fixes HTTP 403 errors reported by client applications

This resolves the issue where session heartbeat worked but version check failed with 403 Forbidden.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-02 21:37:58 +00:00
735e42a9c4 Server backup 20250702_213331 - Full system backup before changes 2025-07-02 21:33:33 +00:00
740dc706c5 Server backup 20250702_211500 - Full system backup before changes 2025-07-02 21:15:02 +00:00
122 changed files with 4004 additions and 949 deletions


@@ -80,7 +80,7 @@ Content-Type: application/json
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
- "hardware_hash": "unique-hardware-identifier",
+ "hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"app_version": "1.0.0"
}
@@ -93,7 +93,7 @@ Content-Type: application/json
"activation": {
"id": 123,
"license_key": "XXXX-XXXX-XXXX-XXXX",
- "hardware_hash": "unique-hardware-identifier",
+ "hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"activated_at": "2025-06-19T10:30:00Z",
"last_heartbeat": "2025-06-19T10:30:00Z",
@@ -115,7 +115,7 @@ Content-Type: application/json
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
- "hardware_hash": "unique-hardware-identifier",
+ "hardware_fingerprint": "unique-hardware-identifier",
"app_version": "1.0.0"
}
```
@@ -128,7 +128,7 @@ Content-Type: application/json
"license": {
"key": "XXXX-XXXX-XXXX-XXXX",
"valid_until": "2026-01-01",
- "max_users": 10
+ "concurrent_sessions_limit": 10
},
"update_available": false,
"latest_version": "1.0.0"
@@ -153,14 +153,14 @@ X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
"type": "perpetual",
"valid_from": "2025-01-01",
"valid_until": "2026-01-01",
- "max_activations": 5,
- "max_users": 10,
+ "device_limit": 5,
+ "concurrent_sessions_limit": 10,
"is_active": true
},
"activations": [
{
"id": 456,
- "hardware_hash": "unique-hardware-identifier",
+ "hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"activated_at": "2025-06-19T10:00:00Z",
"last_heartbeat": "2025-06-19T14:30:00Z",
@@ -187,8 +187,8 @@ Content-Type: application/json
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
- "machine_id": "DESKTOP-ABC123",
- "hardware_hash": "unique-hardware-identifier",
+ "machine_name": "DESKTOP-ABC123",
+ "hardware_fingerprint": "unique-hardware-identifier",
"version": "1.0.0"
}
```
@@ -359,7 +359,7 @@ Get devices for a license.
"devices": [
{
"id": 123,
- "hardware_hash": "unique-hardware-identifier",
+ "hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"activated_at": "2025-01-01T10:00:00Z",
"last_heartbeat": "2025-06-19T14:30:00Z",
@@ -376,7 +376,7 @@ Register a new device.
**Request:**
```json
{
- "hardware_hash": "unique-hardware-identifier",
+ "hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-XYZ789",
"app_version": "1.0.0"
}

API_REFERENCE_DOWNLOAD.md (new file, 711 lines)

@@ -0,0 +1,711 @@
# V2-Docker API Reference
## Authentication
### API Key Authentication
All License Server API endpoints require authentication using an API key. The API key must be included in the request headers.
**Header Format:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**API Key Management:**
- API keys can be managed through the Admin Panel under "Lizenzserver Administration" → "System-API-Key generieren"
- Keys follow the format: `AF-YYYY-[32 random characters]`
- Only one system API key is active at a time
- Regenerating the key will immediately invalidate the old key
- The initial API key is automatically generated on first startup
- To retrieve the initial API key from database: `SELECT api_key FROM system_api_key WHERE id = 1;`
**Error Response (401 Unauthorized):**
```json
{
"error": "Invalid or missing API key",
"code": "INVALID_API_KEY",
"status": 401
}
```
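A minimal Python sketch of attaching the required header and recognizing the documented 401 error body. The helper names are illustrative, not part of the server API, and the key value is a placeholder:

```python
import json

def build_headers(api_key: str) -> dict:
    """Headers required by every License Server endpoint."""
    return {"X-API-Key": api_key, "Content-Type": "application/json"}

def is_auth_error(status: int, body: str) -> bool:
    """True if a response matches the documented 401 error format."""
    if status != 401:
        return False
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return payload.get("code") == "INVALID_API_KEY"
```

Treat `is_auth_error` returning true as a signal to re-fetch the system API key, since regenerating the key invalidates the old one immediately.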
## License Server API
**Base URL:** `https://api-software-undso.intelsight.de`
### Public Endpoints
#### GET /
Root endpoint - Service status.
**Response:**
```json
{
"status": "ok",
"service": "V2 License Server",
"timestamp": "2025-06-19T10:30:00Z"
}
```
#### GET /health
Health check endpoint.
**Response:**
```json
{
"status": "healthy",
"timestamp": "2025-06-19T10:30:00Z"
}
```
#### GET /metrics
Prometheus metrics endpoint.
**Response:**
Prometheus metrics in CONTENT_TYPE_LATEST format.
### License API Endpoints
All license endpoints require API key authentication via `X-API-Key` header.
#### POST /api/license/activate
Activate a license on a new system.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"app_version": "1.0.0"
}
```
**Response:**
```json
{
"message": "License activated successfully",
"activation": {
"id": 123,
"license_key": "XXXX-XXXX-XXXX-XXXX",
"hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"activated_at": "2025-06-19T10:30:00Z",
"last_heartbeat": "2025-06-19T10:30:00Z",
"is_active": true
}
}
```
#### POST /api/license/verify
Verify an active license.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"hardware_fingerprint": "unique-hardware-identifier",
"app_version": "1.0.0"
}
```
**Response:**
```json
{
"valid": true,
"message": "License is valid",
"license": {
"key": "XXXX-XXXX-XXXX-XXXX",
"valid_until": "2026-01-01",
"concurrent_sessions_limit": 10
},
"update_available": false,
"latest_version": "1.0.0"
}
```
#### GET /api/license/info/{license_key}
Get license information.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**Response:**
```json
{
"license": {
"id": 123,
"key": "XXXX-XXXX-XXXX-XXXX",
"customer_name": "ACME Corp",
"type": "perpetual",
"valid_from": "2025-01-01",
"valid_until": "2026-01-01",
"device_limit": 5,
"concurrent_sessions_limit": 10,
"is_active": true
},
"activations": [
{
"id": 456,
"hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"activated_at": "2025-06-19T10:00:00Z",
"last_heartbeat": "2025-06-19T14:30:00Z",
"is_active": true
}
]
}
```
### Session Management API Endpoints
**Note:** Session endpoints require that the client application is configured in the `client_configs` table. The default client "Account Forger" is pre-configured.
#### POST /api/license/session/start
Start a new session for a license.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"machine_name": "DESKTOP-ABC123",
"hardware_fingerprint": "unique-hardware-identifier",
"version": "1.0.0"
}
```
**Response:**
- 200 OK: Returns session_token and optional update info
- 409 Conflict: "Es ist nur eine Sitzung erlaubt..." (single session enforcement)
#### POST /api/license/session/heartbeat
Keep session alive with heartbeat.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"session_token": "550e8400-e29b-41d4-a716-446655440000",
"license_key": "XXXX-XXXX-XXXX-XXXX"
}
```
**Response:** 200 OK with last_heartbeat timestamp
#### POST /api/license/session/end
End an active session.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"session_token": "550e8400-e29b-41d4-a716-446655440000"
}
```
**Response:** 200 OK with session duration and end reason
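The three session endpoints above form a start → heartbeat → end lifecycle. A sketch of that loop, with the transport injected as a `post(path, body)` callable so it can be stubbed; the 30-second interval is the documented default, everything else follows the request bodies shown above:

```python
import time

def run_session(post, license_key: str, heartbeats: int, interval: float = 30.0):
    """Start a session, send `heartbeats` heartbeats, then end it."""
    start = post("/api/license/session/start", {
        "license_key": license_key,
        "machine_name": "DESKTOP-ABC123",
        "hardware_fingerprint": "unique-hardware-identifier",
        "version": "1.0.0",
    })
    token = start["session_token"]
    for _ in range(heartbeats):
        time.sleep(interval)  # documented heartbeat interval: 30 s
        post("/api/license/session/heartbeat",
             {"session_token": token, "license_key": license_key})
    # Ending explicitly avoids waiting for the 60 s timeout cleanup.
    return post("/api/license/session/end", {"session_token": token})
```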
### Version API Endpoints
#### POST /api/version/check
Check for available updates.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**Request:**
```json
{
"current_version": "1.0.0",
"license_key": "XXXX-XXXX-XXXX-XXXX"
}
```
**Response:**
```json
{
"update_available": true,
"latest_version": "1.1.0",
"download_url": "https://example.com/download/v1.1.0",
"release_notes": "Bug fixes and performance improvements"
}
```
#### GET /api/version/latest
Get latest version information.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**Response:**
```json
{
"version": "1.1.0",
"release_date": "2025-06-20",
"download_url": "https://example.com/download/v1.1.0",
"release_notes": "Bug fixes and performance improvements"
}
```
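The `update_available` flag can also be derived client-side by comparing the dotted version strings numerically. A minimal sketch, assuming plain `major.minor.patch` versions with no pre-release suffixes:

```python
def is_newer(latest: str, current: str) -> bool:
    """Numeric comparison of dotted version strings, e.g. '1.0.10' > '1.0.9'."""
    def to_tuple(v: str) -> tuple:
        return tuple(int(part) for part in v.split("."))
    return to_tuple(latest) > to_tuple(current)
```

String comparison would get `"1.0.10" < "1.0.9"` wrong, which is why the parts are compared as integers.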
## Admin Panel API
**Base URL:** `https://admin-panel-undso.intelsight.de`
### Customer API Endpoints
#### GET /api/customers
Search customers for Select2 dropdown.
**Query Parameters:**
- `q`: Search query
- `page`: Page number (default: 1)
**Response:**
```json
{
"results": [
{
"id": 123,
"text": "ACME Corp - admin@acme.com"
}
],
"pagination": {
"more": false
}
}
```
### License Management API
- `POST /api/license/{id}/toggle` - Toggle active status
- `POST /api/licenses/bulk-activate` - Activate multiple (license_ids array)
- `POST /api/licenses/bulk-deactivate` - Deactivate multiple
- `POST /api/licenses/bulk-delete` - Delete multiple
- `POST /api/license/{id}/quick-edit` - Update validity/limits
- `GET /api/license/{id}/devices` - List registered devices
#### POST /api/license/{license_id}/quick-edit
Quick edit license properties.
**Request:**
```json
{
"valid_until": "2027-01-01",
"max_activations": 10,
"max_users": 50
}
```
**Response:**
```json
{
"success": true,
"message": "License updated successfully"
}
```
#### POST /api/generate-license-key
Generate a new license key.
**Response:**
```json
{
"license_key": "NEW1-NEW2-NEW3-NEW4"
}
```
### Device Management API
#### GET /api/license/{license_id}/devices
Get devices for a license.
**Response:**
```json
{
"devices": [
{
"id": 123,
"hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"activated_at": "2025-01-01T10:00:00Z",
"last_heartbeat": "2025-06-19T14:30:00Z",
"is_active": true,
"app_version": "1.0.0"
}
]
}
```
#### POST /api/license/{license_id}/register-device
Register a new device.
**Request:**
```json
{
"hardware_fingerprint": "unique-hardware-identifier",
"machine_name": "DESKTOP-XYZ789",
"app_version": "1.0.0"
}
```
**Response:**
```json
{
"success": true,
"device_id": 456,
"message": "Device registered successfully"
}
```
#### POST /api/license/{license_id}/deactivate-device/{device_id}
Deactivate a device.
**Response:**
```json
{
"success": true,
"message": "Device deactivated successfully"
}
```
### Resource Management API
#### GET /api/license/{license_id}/resources
Get resources for a license.
**Response:**
```json
{
"resources": [
{
"id": 789,
"type": "server",
"identifier": "SRV-001",
"status": "allocated",
"allocated_at": "2025-06-01T10:00:00Z"
}
]
}
```
#### POST /api/resources/allocate
Allocate resources to a license.
**Request:**
```json
{
"license_id": 123,
"resource_ids": [789, 790]
}
```
**Response:**
```json
{
"success": true,
"allocated": 2,
"message": "2 resources allocated successfully"
}
```
#### GET /api/resources/check-availability
Check resource availability.
**Query Parameters:**
- `type`: Resource type
- `count`: Number of resources needed
**Response:**
```json
{
"available": true,
"count": 5,
"resources": [
{
"id": 791,
"type": "server",
"identifier": "SRV-002"
}
]
}
```
### Search API
#### GET /api/global-search
Global search across all entities.
**Query Parameters:**
- `q`: Search query
- `type`: Entity type filter (customer, license, device)
- `limit`: Maximum results (default: 20)
**Response:**
```json
{
"results": [
{
"type": "customer",
"id": 123,
"title": "ACME Corp",
"subtitle": "admin@acme.com",
"url": "/customer/edit/123"
},
{
"type": "license",
"id": 456,
"title": "XXXX-XXXX-XXXX-XXXX",
"subtitle": "ACME Corp - Active",
"url": "/license/edit/456"
}
],
"total": 15
}
```
### Lead Management API
#### GET /leads/api/institutions
Get all institutions with pagination.
**Query Parameters:**
- `page`: Page number (default: 1)
- `per_page`: Items per page (default: 20)
- `search`: Search query
**Response:**
```json
{
"institutions": [
{
"id": 1,
"name": "Tech University",
"contact_count": 5,
"created_at": "2025-06-19T10:00:00Z"
}
],
"total": 100,
"page": 1,
"per_page": 20
}
```
#### POST /leads/api/institutions
Create a new institution.
**Request:**
```json
{
"name": "New University"
}
```
**Response:**
```json
{
"id": 101,
"name": "New University",
"created_at": "2025-06-19T15:00:00Z"
}
```
#### GET /leads/api/contacts/{contact_id}
Get contact details.
**Response:**
```json
{
"id": 1,
"first_name": "John",
"last_name": "Doe",
"position": "IT Manager",
"institution_id": 1,
"details": [
{
"id": 1,
"type": "email",
"value": "john.doe@example.com",
"label": "Work"
},
{
"id": 2,
"type": "phone",
"value": "+49 123 456789",
"label": "Mobile"
}
],
"notes": [
{
"id": 1,
"content": "Initial contact",
"version": 1,
"created_at": "2025-06-19T10:00:00Z",
"created_by": "admin"
}
]
}
```
#### POST /leads/api/contacts/{contact_id}/details
Add contact detail (phone/email).
**Request:**
```json
{
"type": "email",
"value": "secondary@example.com",
"label": "Secondary"
}
```
**Response:**
```json
{
"id": 3,
"type": "email",
"value": "secondary@example.com",
"label": "Secondary"
}
```
### Resource Management API
#### POST /api/resources/allocate
Allocate resources to a license.
**Request:**
```json
{
"license_id": 123,
"resource_type": "domain",
"resource_ids": [45, 46, 47]
}
```
**Response:**
```json
{
"success": true,
"allocated": 3,
"message": "3 resources allocated successfully"
}
```
## Lead Management API
### GET /leads/api/stats
Get lead statistics.
**Response:**
```json
{
"total_institutions": 150,
"total_contacts": 450,
"recent_activities": 25,
"conversion_rate": 12.5,
"by_type": {
"university": 50,
"company": 75,
"government": 25
}
}
```
### Lead Routes (HTML Pages)
- `GET /leads/` - Lead overview page
- `GET /leads/create` - Create lead form
- `POST /leads/create` - Save new lead
- `GET /leads/edit/{lead_id}` - Edit lead form
- `POST /leads/update/{lead_id}` - Update lead
- `POST /leads/delete/{lead_id}` - Delete lead
- `GET /leads/export` - Export leads
- `POST /leads/import` - Import leads
## Common Response Codes
- `200 OK`: Successful request
- `201 Created`: Resource created
- `400 Bad Request`: Invalid request data
- `401 Unauthorized`: Missing or invalid authentication
- `403 Forbidden`: Insufficient permissions
- `404 Not Found`: Resource not found
- `409 Conflict`: Resource conflict (e.g., duplicate)
- `429 Too Many Requests`: Rate limit exceeded
- `500 Internal Server Error`: Server error
## Rate Limiting
- API endpoints: 100 requests/minute
- Login attempts: 5 per minute
- Configurable via Admin Panel
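Clients that approach the 100 requests/minute limit should expect 429 responses. A hedged sketch of retrying with exponential backoff; the backoff scheme is a client-side convention, not something the server mandates, and `send` is injected so it can be stubbed:

```python
import time

def send_with_backoff(send, request, max_retries: int = 3, base_delay: float = 1.0):
    """Retry `send(request)` while it returns HTTP 429, backing off between tries."""
    for attempt in range(max_retries + 1):
        status, body = send(request)
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))  # 1 s, 2 s, 4 s, ...
    return status, body
```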
## Error Response Format
All errors return JSON with `error`, `code`, and `status` fields.
## Client Integration
Example request with required headers:
```bash
curl -X POST https://api-software-undso.intelsight.de/api/license/activate \
-H "X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
-H "Content-Type: application/json" \
-d '{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"hardware_fingerprint": "unique-hardware-id",
"machine_name": "DESKTOP-123",
"app_version": "1.0.0"
}'
```
## Testing
### Test Credentials
- Admin Users:
- Username: `rac00n` / Password: `1248163264`
- Username: `w@rh@mm3r` / Password: `Warhammer123!`
- API Key: Generated in Admin Panel under "Lizenzserver Administration"
### Getting the Initial API Key
If you need to retrieve the API key directly from the database:
```bash
docker exec -it v2_postgres psql -U postgres -d v2_db -c "SELECT api_key FROM system_api_key WHERE id = 1;"
```
### Test Endpoints
- Admin Panel: `https://admin-panel-undso.intelsight.de/`
- License Server API: `https://api-software-undso.intelsight.de/`

CLAUDE.md (new file, 66 lines)

@@ -0,0 +1,66 @@
## CRITICAL RULES - ALWAYS FOLLOW
### 1. BACKUP BEFORE ANY CHANGES
**MANDATORY**: Create backup before ANY code changes:
```bash
./create_full_backup.sh
```
- Creates full server backup and pushes to GitHub automatically
- Local copy remains for quick rollback
- Restore if needed: `./restore_full_backup.sh server_backup_YYYYMMDD_HHMMSS`
### 2. GITHUB BACKUPS ARE PERMANENT
- **NEVER DELETE** backups from GitHub repository (hetzner-backup)
- Only local backups can be deleted after successful upload
- GitHub serves as permanent backup archive
### 3. BACKUP TROUBLESHOOTING
If `create_full_backup.sh` fails to push:
- SSH key configured at: `~/.ssh/github_backup`
- Fix "Author identity unknown": `git -c user.email="backup@intelsight.de" -c user.name="Backup System" commit -m "..."`
- Repository: `git@github.com:UserIsMH/hetzner-backup.git`
### 4. BACKUP SCHEDULE
- Manual backups: Before EVERY change using `./create_full_backup.sh`
- Automatic backups: Daily at 3:00 AM via Admin Panel
- Admin Panel backup interface: https://admin-panel-undso.intelsight.de/backups
## SYSTEM OVERVIEW
Production license management system at intelsight.de with:
- **Admin Panel** (Flask): Web interface for customer/license/resource management
- **License Server** (FastAPI): API for license validation and heartbeat monitoring
- **PostgreSQL**: Database with partitioned tables for performance
- **Nginx**: SSL termination and routing
## KEY FEATURES
### 1. License Management
- **Device Limit**: Each license has a `device_limit` (1-10 devices)
- **Concurrent Sessions**: Each license has a `concurrent_sessions_limit` (max simultaneous users)
- **Constraint**: concurrent_sessions_limit ≤ device_limit
- **Resource Allocation**: Domains, IPv4 addresses, phone numbers per license
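The two limits and their constraint can be expressed as a small validation sketch (hypothetical helper name, encoding exactly the rules stated above: device_limit between 1 and 10, concurrent_sessions_limit ≤ device_limit):

```python
def valid_limits(device_limit: int, concurrent_sessions_limit: int) -> bool:
    """Check the documented license limit constraints."""
    return (1 <= device_limit <= 10
            and 1 <= concurrent_sessions_limit <= device_limit)
```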
### 2. Device Management
- **Single Table**: `device_registrations` stores all device information
- **Device Fields**: `hardware_fingerprint` (unique ID), `device_name`, `device_type`
- **Tracking**: First activation, last seen, active status
- **No automatic termination**: When session limit reached, new sessions are denied
### 3. Authentication & Security
- **API Authentication**: X-API-Key header (format: AF-YYYY-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX)
- **API Key Management**: Admin Panel → "Lizenzserver Administration" → "System-API-Key generieren"
- **2FA Support**: TOTP-based two-factor authentication for admin users
- **Audit Logging**: All changes tracked in audit_log table
### 4. Session Management
- **Heartbeat**: 30-second intervals (configurable)
- **Timeout**: 60 seconds without heartbeat = automatic cleanup
- **Single Device Resume**: Same device can resume existing session
- **Session Token**: UUID v4 for session identification
### 5. Database Structure
- **Partitioned Tables**: license_heartbeats (monthly partitions)
- **Resource Pools**: Centralized management of domains/IPs/phones
- **Session History**: Complete tracking with end reasons
- **Lead CRM**: Institution and contact management system

RESSOURCE_API_PLAN.md (new file, 652 lines)

@@ -0,0 +1,652 @@
# Resource Management API Implementation Plan
## Executive Summary
This plan outlines the implementation of a **Dynamic Resource Pool API** for the License Server that allows client applications to:
1. View available resources from their license's pool
2. Reserve resources for their active session
3. Automatically release resources when session ends
4. Prevent resource conflicts between concurrent sessions
The system works like a "hotel room" model - resources are assigned to a license (the hotel), but individual sessions (guests) reserve specific resources during their stay.
## Current State Analysis
### Existing Infrastructure
- **Database**: Complete resource management tables (`resource_pools`, `license_resources`, `resource_history`)
- **Admin Panel**: Full CRUD operations for resource management
- **License Server**: No resource endpoints (critical gap)
- **Resource Types**: domain, ipv4, phone
- **Limits**: 0-10 resources per type per license
### Gap Analysis
- Clients cannot retrieve their assigned resources
- No validation endpoint for resource ownership
- Resource data not included in license activation/verification
## Proposed API Architecture
### Resource Pool Concept
**Key Principles**:
- Resources belong to a license pool, but reservation rules differ by type
- **Domains**: Can be shared (soft reservation) - prefer free, but allow sharing
- **IPv4 & Phone**: Exclusive use (hard reservation) - one session only
```
License Pool (Admin-defined)
├── 5 Domains total
│ ├── 2 Used by Session A (can be shared)
│ ├── 1 Used by Session B (can be shared)
│ ├── 1 Used by both A & B (shared)
│ └── 1 Completely free
├── 3 IPv4 Addresses total
│ ├── 1 Reserved by Session A (exclusive)
│ └── 2 Available
└── 2 Phone Numbers total
├── 1 Reserved by Session B (exclusive)
└── 1 Available
```
**Reservation Rules**:
- **Domains**: `prefer_exclusive = true` - Try to get unused domain first, but allow sharing if needed
- **IPv4**: `exclusive_only = true` - Fail if none available
- **Phone**: `exclusive_only = true` - Fail if none available
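The client-side selection implied by these rules can be sketched as follows; a hypothetical helper, assuming each available entry carries the `usage_count` field shown in the responses below:

```python
def pick_candidates(resource_type: str, available: list, count: int) -> list:
    """Rank available resources per the reservation rules and take `count`."""
    if resource_type == "domain":
        # Soft reservation: unused domains first, shared ones as fallback.
        ranked = sorted(available, key=lambda r: r.get("usage_count", 0))
        return ranked[:count]
    # Hard reservation: ipv4/phone entries listed as available are
    # exclusive-only, so just take the first `count`.
    return available[:count]
```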
### 1. New API Endpoints
#### GET /api/resources/available
**Purpose**: Get available (unreserved) resources from the license pool
**Request Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
X-License-Key: XXXX-XXXX-XXXX-XXXX
X-Session-Token: 550e8400-e29b-41d4-a716-446655440000
```
**Response:**
```json
{
"success": true,
"pool_status": {
"domains": {
"total": 5,
"reserved": 3,
"available": 2,
"reserved_by_you": 1
},
"ipv4_addresses": {
"total": 3,
"reserved": 1,
"available": 2,
"reserved_by_you": 0
},
"phone_numbers": {
"total": 2,
"reserved": 0,
"available": 2,
"reserved_by_you": 0
}
},
"available_resources": {
"domains": [
{
"id": 124,
"value": "example2.com",
"usage_count": 0, // Completely free
"is_shared": false
},
{
"id": 125,
"value": "example3.com",
"usage_count": 1, // Already used by 1 session
"is_shared": true
}
],
"ipv4_addresses": [
{
"id": 457,
"value": "192.168.1.101"
},
{
"id": 458,
"value": "192.168.1.102"
}
],
"phone_numbers": [
{
"id": 789,
"value": "+49123456789"
},
{
"id": 790,
"value": "+49123456790"
}
]
},
"your_reservations": {
"domains": [
{
"id": 123,
"value": "example.com",
"reserved_at": "2025-01-15T10:00:00Z"
}
]
}
}
```
#### POST /api/resources/reserve
**Purpose**: Reserve specific resources for the current session
**Request:**
```json
{
"resource_type": "domain",
"resource_id": 124,
"prefer_exclusive": true // Optional, default true for domains
}
```
**Response (Domain - Exclusive):**
```json
{
"success": true,
"reservation": {
"resource_id": 124,
"resource_type": "domain",
"resource_value": "example2.com",
"session_token": "550e8400-e29b-41d4-a716-446655440000",
"reserved_at": "2025-01-15T10:30:00Z",
"is_shared": false,
"usage_count": 1
},
"message": "Domain reserved exclusively"
}
```
**Response (Domain - Shared):**
```json
{
"success": true,
"reservation": {
"resource_id": 125,
"resource_type": "domain",
"resource_value": "example3.com",
"session_token": "550e8400-e29b-41d4-a716-446655440000",
"reserved_at": "2025-01-15T10:30:00Z",
"is_shared": true,
"usage_count": 2 // Now used by 2 sessions
},
"message": "Domain reserved (shared with 1 other session)"
}
```
**Response (IPv4/Phone - Failed):**
```json
{
"success": false,
"error": "Resource already exclusively reserved by another session",
"code": "RESOURCE_UNAVAILABLE"
}
```
#### POST /api/resources/release
**Purpose**: Release a reserved resource (or auto-release on session end)
**Request:**
```json
{
"resource_id": 124
}
```
**Response:**
```json
{
"success": true,
"message": "Resource released successfully"
}
```
#### GET /api/resources/my-reservations
**Purpose**: Get all resources reserved by current session
**Response:**
```json
{
"success": true,
"reservations": {
"domains": [
{
"id": 123,
"value": "example.com",
"reserved_at": "2025-01-15T10:00:00Z"
}
],
"ipv4_addresses": [],
"phone_numbers": []
}
}
```
#### POST /api/resources/validate
**Purpose**: Validate if a specific resource belongs to the license
**Request:**
```json
{
"resource_type": "domain",
"resource_value": "example.com"
}
```
**Response:**
```json
{
"valid": true,
"resource_id": 123,
"assigned_at": "2025-01-15T10:00:00Z",
"message": "Resource is assigned to your license"
}
```
#### GET /api/resources/types
**Purpose**: Get available resource types and current usage
**Response:**
```json
{
"resource_types": [
{
"type": "domain",
"display_name": "Domains",
"limit": 5,
"used": 2,
"available": 3,
"validation_pattern": "^[a-zA-Z0-9][a-zA-Z0-9-]{0,61}[a-zA-Z0-9]?\\.[a-zA-Z]{2,}$"
},
{
"type": "ipv4",
"display_name": "IPv4 Addresses",
"limit": 3,
"used": 1,
"available": 2,
"validation_pattern": "^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$"
},
{
"type": "phone",
"display_name": "Phone Numbers",
"limit": 2,
"used": 0,
"available": 2,
"validation_pattern": "^\\+?[0-9]{1,15}$"
}
]
}
```
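The validation patterns above can be applied client-side before calling `/api/resources/validate`, avoiding a round trip for obviously malformed values. A minimal sketch using the exact patterns from the response:

```python
import re

PATTERNS = {
    "domain": r"^[a-zA-Z0-9][a-zA-Z0-9-]{0,61}[a-zA-Z0-9]?\.[a-zA-Z]{2,}$",
    "ipv4": r"^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$",
    "phone": r"^\+?[0-9]{1,15}$",
}

def matches_pattern(resource_type: str, value: str) -> bool:
    """Pre-check a resource value against the server's validation pattern."""
    pattern = PATTERNS.get(resource_type)
    return pattern is not None and re.fullmatch(pattern, value) is not None
```

Note the ipv4 pattern is purely syntactic (it accepts octets above 255), so the server-side check remains authoritative.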
### 2. Enhanced Existing Endpoints
#### POST /api/license/activate (Enhanced)
Add resource information to activation response:
```json
{
"message": "License activated successfully",
"activation": { ... },
"resources": {
"domains": ["example.com", "test.com"],
"ipv4_addresses": ["192.168.1.100"],
"phone_numbers": []
},
"resource_limits": {
"domain_count": 5,
"ipv4_count": 3,
"phone_count": 2
}
}
```
#### POST /api/license/verify (Enhanced)
Include resource summary:
```json
{
"valid": true,
"license": { ... },
"resources_assigned": {
"domains": 2,
"ipv4_addresses": 1,
"phone_numbers": 0
}
}
```
## Implementation Strategy
### Phase 1: Database Schema Extension
1. **New Table: resource_reservations**
```sql
CREATE TABLE resource_reservations (
    id SERIAL PRIMARY KEY,
    resource_id INTEGER REFERENCES resource_pools(id) ON DELETE CASCADE,
    session_token VARCHAR(255) NOT NULL,
    license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
    reserved_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    last_accessed TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(resource_id, session_token)
);

CREATE INDEX idx_reservations_session ON resource_reservations(session_token);
CREATE INDEX idx_reservations_resource ON resource_reservations(resource_id);

-- View for usage counts
CREATE VIEW resource_usage_counts AS
SELECT
    rp.id,
    rp.resource_type,
    rp.resource_value,
    COUNT(rr.id) AS usage_count,
    CASE
        WHEN rp.resource_type = 'domain' AND COUNT(rr.id) > 0 THEN true
        ELSE false
    END AS is_shared
FROM resource_pools rp
LEFT JOIN resource_reservations rr ON rp.id = rr.resource_id
WHERE rp.status = 'allocated'
GROUP BY rp.id, rp.resource_type, rp.resource_value;
```
2. **Automatic Cleanup Trigger**
```sql
-- When a session ends, release all of its reserved resources
CREATE OR REPLACE FUNCTION release_session_resources()
RETURNS TRIGGER AS $$
BEGIN
    DELETE FROM resource_reservations
    WHERE session_token = OLD.session_token;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cleanup_session_resources
AFTER DELETE ON license_sessions
FOR EACH ROW EXECUTE FUNCTION release_session_resources();
```
### Phase 2: License Server Backend (Week 1)
1. **Create Resource Models** (`v2_lizenzserver/app/models/resource.py`)
```python
from datetime import datetime
from pydantic import BaseModel

# ResourceType is assumed to be an enum defined alongside these models.
class ResourceReservation(BaseModel):
    id: int
    resource_id: int
    resource_type: ResourceType
    value: str
    session_token: str
    reserved_at: datetime

class PoolStatus(BaseModel):
    total: int
    reserved: int
    available: int
    reserved_by_you: int
```
2. **Create Resource Schemas** (`v2_lizenzserver/app/schemas/resource.py`)
```python
from typing import Dict, List
from pydantic import BaseModel

# AssignedResource is assumed to be defined elsewhere in the schemas module.
class ResourcesResponse(BaseModel):
    success: bool
    resources: Dict[str, List[AssignedResource]]
    limits: Dict[str, int]
```
3. **Create Resource Service** (`v2_lizenzserver/app/services/resource_service.py`)
- Query resource_pools and license_resources tables
- Implement caching for performance
- Add resource validation logic
4. **Create Resource API Routes** (`v2_lizenzserver/app/api/resource.py`)
- Implement all new endpoints
- Add proper error handling
- Include rate limiting
### Phase 2 (continued): Security & Authentication (Week 1)
1. **Multi-Factor Authentication**
- Require API Key + License Key + Session Token
- Validate session is active and belongs to license
- Rate limit: 100 requests/minute per license
2. **Data Access Control**
- Resources only visible to owning license
- No cross-license data leakage
- Audit log all resource API access
3. **Caching Strategy**
- Cache resource assignments for 5 minutes
- Invalidate on Admin Panel changes
- Use Redis if available, in-memory fallback
### Phase 3: Integration & Testing (Week 2)
1. **Integration Tests**
- Test all resource endpoints
- Verify security boundaries
- Load test with multiple concurrent requests
2. **Client SDK Updates**
- Update Python client example
- Update C# client example
- Create resource caching example
3. **Documentation**
- Update API_REFERENCE.md
- Create resource API examples
- Add troubleshooting guide
### Phase 4: Monitoring & Optimization (Week 2)
1. **Metrics**
- Resource API request count
- Cache hit/miss ratio
- Response time percentiles
2. **Performance Optimization**
- Database query optimization
- Add indexes if needed
- Implement connection pooling
## Security Considerations
1. **Authentication Layers**
- API Key (system-level)
- License Key (license-level)
- Session Token (session-level)
2. **Rate Limiting**
- Per-license: 100 req/min
- Per-IP: 1000 req/min
- Burst allowance: 10 requests
3. **Data Isolation**
- Strict license-based filtering
- No enumeration attacks possible
- Resource IDs not sequential
4. **Audit Trail**
- Log all resource API access
- Track abnormal access patterns
- Alert on suspicious activity
## Client Integration Guide
### Python Example
```python
import requests

class AccountForgerClient:
    def __init__(self, base_url, api_key, license_key, session_token):
        self.base_url = base_url
        self.api_key = api_key
        self.license_key = license_key
        self.session_token = session_token
        self.reserved_resources = {}

    def _get_headers(self):
        """Authentication headers required by all resource endpoints."""
        return {
            'X-API-Key': self.api_key,
            'X-License-Key': self.license_key,
            'X-Session-Token': self.session_token,
        }

    def get_available_resources(self):
        """Get available resources from the pool"""
        response = requests.get(
            f"{self.base_url}/api/resources/available",
            headers=self._get_headers()
        )
        return response.json()

    def reserve_resource(self, resource_type, resource_id):
        """Reserve a specific resource for this session"""
        data = {
            'resource_type': resource_type,
            'resource_id': resource_id
        }
        response = requests.post(
            f"{self.base_url}/api/resources/reserve",
            headers=self._get_headers(),
            json=data
        )
        if response.status_code == 200:
            result = response.json()
            # Cache locally for quick access
            self.reserved_resources.setdefault(resource_type, []).append(
                result['reservation'])
        return response.json()

    def release_resource(self, resource_id):
        """Release a reserved resource"""
        response = requests.post(
            f"{self.base_url}/api/resources/release",
            headers=self._get_headers(),
            json={'resource_id': resource_id}
        )
        return response.json()

    def get_my_reservations(self):
        """Get all my reserved resources"""
        response = requests.get(
            f"{self.base_url}/api/resources/my-reservations",
            headers=self._get_headers()
        )
        return response.json()

    def auto_reserve_resources(self, needed):
        """Automatically reserve needed resources with smart domain selection"""
        # Example: needed = {'domains': 2, 'ipv4_addresses': 1}
        # Map the plural response keys to the singular resource_type values.
        singular = {'domains': 'domain', 'ipv4_addresses': 'ipv4',
                    'phone_numbers': 'phone'}
        available = self.get_available_resources()
        reserved = {}
        for resource_type, count in needed.items():
            reserved[resource_type] = []
            available_list = available['available_resources'].get(resource_type, [])
            if resource_type == 'domains':
                # For domains: prefer free ones, but take shared if needed.
                # Sort by usage_count (free domains first).
                sorted_domains = sorted(available_list,
                                        key=lambda x: x.get('usage_count', 0))
                for domain in sorted_domains[:count]:
                    result = self.reserve_resource('domain', domain['id'])
                    if result['success']:
                        reserved['domains'].append(result['reservation'])
            else:
                # For IPv4/Phone: only take truly available ones
                for resource in available_list[:count]:
                    result = self.reserve_resource(singular[resource_type],
                                                   resource['id'])
                    if result['success']:
                        reserved[resource_type].append(result['reservation'])
                    else:
                        # Stop trying if we hit an exclusive reservation error
                        break
        return reserved
```
### Resource Reservation Flow Example
```python
# 1. Start session
session = client.start_session(license_key)

# 2. Check available resources
available = client.get_available_resources()
print(f"Available domains: {available['pool_status']['domains']['available']}")

# 3. Reserve what you need
if available['available_resources']['domains']:
    domain = available['available_resources']['domains'][0]
    client.reserve_resource('domain', domain['id'])

# 4. Use the resources ...

# 5. Resources are auto-released when the session ends
client.end_session()  # Triggers automatic cleanup
```
### Handling Resource Conflicts
```python
def reserve_with_retry(client, resource_type, resource_id, max_retries=3):
    """Handle race conditions when multiple sessions reserve simultaneously"""
    for attempt in range(max_retries):
        try:
            result = client.reserve_resource(resource_type, resource_id)
            if result.get('success'):
                return result
        except Exception as e:
            if 'already reserved' in str(e) and attempt < max_retries - 1:
                # Get a fresh list of available resources and pick a
                # different resource before the next attempt
                available = client.get_available_resources()
                continue
            raise
    return None
```
## Database Migrations
### Required Indexes
```sql
-- Optimize resource queries
CREATE INDEX idx_license_resources_active_license
ON license_resources(license_id, is_active)
WHERE is_active = TRUE;
CREATE INDEX idx_resource_pools_allocated
ON resource_pools(allocated_to_license, resource_type)
WHERE status = 'allocated';
```
## Rollout Plan
1. **Week 1**: Backend implementation + Security
2. **Week 2**: Testing + Client integration
3. **Week 3**: Staged rollout (10% → 50% → 100%)
4. **Week 4**: Monitoring + Optimization
## Success Metrics
- API response time < 100ms (p95)
- Cache hit ratio > 80%
- Zero security incidents
- Client adoption > 90% within 30 days
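The latency and cache targets above can be checked against collected samples with straightforward arithmetic; a small sketch (function and variable names are illustrative, not part of the server):

```python
import math

def p95(samples):
    """95th percentile via the nearest-rank method on sorted samples."""
    s = sorted(samples)
    k = math.ceil(0.95 * len(s))
    return s[k - 1]

def cache_hit_ratio(hits, misses):
    total = hits + misses
    return hits / total if total else 0.0

latencies_ms = [40, 55, 60, 62, 70, 71, 75, 80, 85, 120]
print(p95(latencies_ms))          # → 120 (nearest rank 10 of 10 samples)
print(cache_hit_ratio(850, 150))  # → 0.85
```

With these numbers the sample set would miss the < 100 ms p95 target while meeting the > 80% cache hit ratio target.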
## Risk Mitigation
1. **Performance Risk**: Pre-emptive caching and query optimization
2. **Security Risk**: Multi-layer authentication and rate limiting
3. **Compatibility Risk**: Maintain backward compatibility
4. **Scalability Risk**: Architecture prepared for horizontal scaling
## Future Enhancements
1. **Webhook Notifications**: Notify clients of resource changes
2. **Resource Usage Analytics**: Track actual resource utilization
3. **Dynamic Resource Allocation**: Auto-assign based on usage patterns
4. **Resource Sharing**: Allow resource sharing between licenses

backup-repo Submodule

Submodule backup-repo added at 3736a28334


@@ -1 +0,0 @@
vJgDckVjr3cSictLNFLGl8QIfqSXVD5skPU7kVhkyfc=

logs/cron_backup.log Normal file

@@ -0,0 +1,40 @@
Traceback (most recent call last):
File "/opt/v2-Docker/v2_adminpanel/scheduled_backup.py", line 6, in <module>
from utils.backup import create_backup_with_github, create_server_backup
File "/opt/v2-Docker/v2_adminpanel/utils/backup.py", line 12, in <module>
from db import get_db_connection, get_db_cursor
File "/opt/v2-Docker/v2_adminpanel/db.py", line 1, in <module>
import psycopg2
ModuleNotFoundError: No module named 'psycopg2'
Traceback (most recent call last):
File "/opt/v2-Docker/v2_adminpanel/scheduled_backup.py", line 6, in <module>
from utils.backup import create_backup_with_github, create_server_backup
File "/opt/v2-Docker/v2_adminpanel/utils/backup.py", line 12, in <module>
from db import get_db_connection, get_db_cursor
File "/opt/v2-Docker/v2_adminpanel/db.py", line 1, in <module>
import psycopg2
ModuleNotFoundError: No module named 'psycopg2'
Traceback (most recent call last):
File "/opt/v2-Docker/v2_adminpanel/scheduled_backup.py", line 6, in <module>
from utils.backup import create_backup_with_github, create_server_backup
File "/opt/v2-Docker/v2_adminpanel/utils/backup.py", line 12, in <module>
from db import get_db_connection, get_db_cursor
File "/opt/v2-Docker/v2_adminpanel/db.py", line 1, in <module>
import psycopg2
ModuleNotFoundError: No module named 'psycopg2'
Traceback (most recent call last):
File "/opt/v2-Docker/v2_adminpanel/scheduled_backup.py", line 6, in <module>
from utils.backup import create_backup_with_github, create_server_backup
File "/opt/v2-Docker/v2_adminpanel/utils/backup.py", line 12, in <module>
from db import get_db_connection, get_db_cursor
File "/opt/v2-Docker/v2_adminpanel/db.py", line 1, in <module>
import psycopg2
ModuleNotFoundError: No module named 'psycopg2'
Traceback (most recent call last):
File "/opt/v2-Docker/v2_adminpanel/scheduled_backup.py", line 6, in <module>
from utils.backup import create_backup_with_github, create_server_backup
File "/opt/v2-Docker/v2_adminpanel/utils/backup.py", line 12, in <module>
from db import get_db_connection, get_db_cursor
File "/opt/v2-Docker/v2_adminpanel/db.py", line 1, in <module>
import psycopg2
ModuleNotFoundError: No module named 'psycopg2'


@@ -0,0 +1,69 @@
-- Hardware ID Cleanup Migration
-- Phase 1: Add new columns alongside old ones
-- 1. Update sessions table
ALTER TABLE sessions
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- 2. Update device_registrations table
ALTER TABLE device_registrations
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint TEXT;
-- 3. Update license_tokens table
ALTER TABLE license_tokens
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- 4. Update license_heartbeats table (partitioned)
ALTER TABLE license_heartbeats
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- 5. Update activation_events table
ALTER TABLE activation_events
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255),
ADD COLUMN IF NOT EXISTS previous_hardware_fingerprint VARCHAR(255);
-- 6. Update active_sessions table
ALTER TABLE active_sessions
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- 7. Update license_sessions table
ALTER TABLE license_sessions
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- 8. Update session_history table
ALTER TABLE session_history
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- Copy existing data to new columns
-- For now, we'll copy hardware_id to hardware_fingerprint
-- machine_name will be populated by the application
UPDATE sessions SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE device_registrations SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE license_tokens SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE license_heartbeats SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE activation_events SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE activation_events SET previous_hardware_fingerprint = previous_hardware_id WHERE previous_hardware_fingerprint IS NULL;
UPDATE active_sessions SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE license_sessions SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
UPDATE session_history SET hardware_fingerprint = hardware_id WHERE hardware_fingerprint IS NULL;
-- Create indexes for new columns
CREATE INDEX IF NOT EXISTS idx_sessions_hardware_fingerprint ON sessions(hardware_fingerprint);
CREATE INDEX IF NOT EXISTS idx_device_registrations_hardware_fingerprint ON device_registrations(hardware_fingerprint);
CREATE INDEX IF NOT EXISTS idx_license_tokens_hardware_fingerprint ON license_tokens(hardware_fingerprint);
CREATE INDEX IF NOT EXISTS idx_license_heartbeats_hardware_fingerprint ON license_heartbeats(hardware_fingerprint, timestamp DESC);
CREATE INDEX IF NOT EXISTS idx_active_sessions_hardware_fingerprint ON active_sessions(hardware_fingerprint);
CREATE INDEX IF NOT EXISTS idx_license_sessions_hardware_fingerprint ON license_sessions(license_id, hardware_fingerprint);
CREATE INDEX IF NOT EXISTS idx_session_history_hardware_fingerprint ON session_history(hardware_fingerprint);
-- Note: Old columns are NOT dropped yet. This will be done in a later migration
-- after verifying the system works with new columns.


@@ -0,0 +1,38 @@
-- Remove old hardware columns after migration to new names
-- This should only be run after all clients have been updated!
-- 1. Drop old columns from sessions table
ALTER TABLE sessions DROP COLUMN IF EXISTS hardware_id;
-- 2. Drop old columns from device_registrations table
ALTER TABLE device_registrations DROP COLUMN IF EXISTS hardware_id;
-- 3. Drop old columns from license_tokens table
ALTER TABLE license_tokens DROP COLUMN IF EXISTS hardware_id;
-- 4. Drop old columns from license_heartbeats table (partitioned)
ALTER TABLE license_heartbeats DROP COLUMN IF EXISTS hardware_id;
-- 5. Drop old columns from activation_events table
ALTER TABLE activation_events
DROP COLUMN IF EXISTS hardware_id,
DROP COLUMN IF EXISTS previous_hardware_id;
-- 6. Drop old columns from active_sessions table
ALTER TABLE active_sessions DROP COLUMN IF EXISTS hardware_id;
-- 7. Drop old columns from license_sessions table
ALTER TABLE license_sessions DROP COLUMN IF EXISTS hardware_id;
-- 8. Drop old columns from session_history table
ALTER TABLE session_history DROP COLUMN IF EXISTS hardware_id;
-- Drop old indexes that referenced hardware_id
DROP INDEX IF EXISTS idx_device_hardware;
DROP INDEX IF EXISTS idx_hardware;
DROP INDEX IF EXISTS idx_heartbeat_hardware_time;
DROP INDEX IF EXISTS idx_license_sessions_license_hardware;
-- Note: The activations table in the license server database
-- still uses machine_id and hardware_hash columns.
-- Those are handled separately in the license server.


@@ -0,0 +1,27 @@
-- Migrate activations table to new column names
-- Add new columns
ALTER TABLE activations
ADD COLUMN IF NOT EXISTS machine_name VARCHAR(255),
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255);
-- Copy data from old to new columns
UPDATE activations
SET machine_name = COALESCE(device_name, machine_id),
hardware_fingerprint = hardware_hash
WHERE machine_name IS NULL OR hardware_fingerprint IS NULL;
-- Make new columns NOT NULL after data is copied
ALTER TABLE activations
ALTER COLUMN machine_name SET NOT NULL,
ALTER COLUMN hardware_fingerprint SET NOT NULL;
-- Drop old columns
ALTER TABLE activations
DROP COLUMN IF EXISTS machine_id,
DROP COLUMN IF EXISTS hardware_hash,
DROP COLUMN IF EXISTS device_name;
-- Update any indexes
CREATE INDEX IF NOT EXISTS idx_activations_machine_name ON activations(machine_name);
CREATE INDEX IF NOT EXISTS idx_activations_hardware_fingerprint ON activations(hardware_fingerprint);


@@ -0,0 +1,241 @@
-- =====================================================
-- Migration: Device Management System Cleanup
-- Date: 2025-01-03
-- Purpose: Consolidate device tracking into single source of truth
-- =====================================================
-- STEP 1: Backup existing data
-- =====================================================
CREATE TABLE IF NOT EXISTS backup_device_data_20250103 AS
SELECT
'activations' as source_table,
a.id,
a.license_id,
COALESCE(a.device_name, a.machine_id) as device_name,
a.hardware_hash as hardware_fingerprint,
a.first_seen as first_activated_at,
a.last_seen as last_seen_at,
a.is_active,
a.os_info,
a.app_version
FROM activations a
UNION ALL
SELECT
'device_registrations' as source_table,
dr.id,
dr.license_id,
dr.device_name,
dr.hardware_id as hardware_fingerprint,
dr.first_seen as first_activated_at,
dr.last_seen as last_seen_at,
dr.is_active,
dr.operating_system::jsonb as os_info,
NULL as app_version
FROM device_registrations dr;
-- STEP 2: Standardize device_registrations table
-- =====================================================
-- Add missing columns to device_registrations
ALTER TABLE device_registrations
ADD COLUMN IF NOT EXISTS hardware_fingerprint VARCHAR(255),
ADD COLUMN IF NOT EXISTS app_version VARCHAR(20),
ADD COLUMN IF NOT EXISTS metadata JSONB DEFAULT '{}',
ADD COLUMN IF NOT EXISTS first_activated_at TIMESTAMP WITH TIME ZONE,
ADD COLUMN IF NOT EXISTS last_seen_at TIMESTAMP WITH TIME ZONE;
-- Migrate existing hardware_id to hardware_fingerprint
UPDATE device_registrations
SET hardware_fingerprint = hardware_id
WHERE hardware_fingerprint IS NULL AND hardware_id IS NOT NULL;
-- Migrate timestamp columns
UPDATE device_registrations
SET first_activated_at = first_seen,
last_seen_at = last_seen
WHERE first_activated_at IS NULL OR last_seen_at IS NULL;
-- STEP 3: Migrate data from activations to device_registrations
-- =====================================================
-- Insert activations that don't exist in device_registrations
INSERT INTO device_registrations (
license_id,
hardware_fingerprint,
device_name,
device_type,
operating_system,
first_activated_at,
last_seen_at,
is_active,
ip_address,
user_agent,
app_version,
metadata
)
SELECT
a.license_id,
a.hardware_hash as hardware_fingerprint,
COALESCE(a.device_name, a.machine_id) as device_name,
CASE
WHEN a.os_info->>'os' ILIKE '%windows%' THEN 'desktop'
WHEN a.os_info->>'os' ILIKE '%mac%' THEN 'desktop'
WHEN a.os_info->>'os' ILIKE '%linux%' THEN 'desktop'
ELSE 'unknown'
END as device_type,
a.os_info->>'os' as operating_system,
a.first_seen,
a.last_seen,
a.is_active,
NULL as ip_address,
NULL as user_agent,
a.app_version,
a.os_info as metadata
FROM activations a
WHERE NOT EXISTS (
SELECT 1 FROM device_registrations dr
WHERE dr.license_id = a.license_id
AND dr.hardware_fingerprint = a.hardware_hash
);
-- Update existing device_registrations with latest data from activations
UPDATE device_registrations dr
SET
last_seen_at = GREATEST(dr.last_seen_at, a.last_seen),
is_active = a.is_active,
app_version = COALESCE(a.app_version, dr.app_version),
metadata = COALESCE(dr.metadata, '{}')::jsonb || COALESCE(a.os_info, '{}')::jsonb
FROM activations a
WHERE dr.license_id = a.license_id
AND dr.hardware_fingerprint = a.hardware_hash
AND a.last_seen > dr.last_seen_at;
-- STEP 4: Standardize licenses table
-- =====================================================
-- Remove duplicate device limit columns, keep only device_limit
ALTER TABLE licenses DROP COLUMN IF EXISTS max_devices;
ALTER TABLE licenses DROP COLUMN IF EXISTS max_activations;
-- Ensure device_limit has proper constraints
ALTER TABLE licenses
DROP CONSTRAINT IF EXISTS licenses_device_limit_check,
ADD CONSTRAINT licenses_device_limit_check CHECK (device_limit >= 1);
-- Rename max_concurrent_sessions to concurrent_sessions_limit for clarity
ALTER TABLE licenses
RENAME COLUMN max_concurrent_sessions TO concurrent_sessions_limit;
-- Update constraint to reference device_limit
ALTER TABLE licenses
DROP CONSTRAINT IF EXISTS check_concurrent_sessions,
ADD CONSTRAINT check_concurrent_sessions CHECK (concurrent_sessions_limit <= device_limit);
-- STEP 5: Clean up session tables
-- =====================================================
-- Consolidate into single license_sessions table
-- First, backup existing session data
CREATE TABLE IF NOT EXISTS backup_sessions_20250103 AS
SELECT
id,
license_id,
license_key,
session_id,
username,
computer_name,
hardware_id,
ip_address,
user_agent,
app_version,
started_at,
last_heartbeat,
ended_at,
is_active
FROM sessions;
-- Add missing columns to license_sessions
ALTER TABLE license_sessions
ADD COLUMN IF NOT EXISTS device_registration_id INTEGER REFERENCES device_registrations(id),
ADD COLUMN IF NOT EXISTS ended_at TIMESTAMP WITH TIME ZONE,
ADD COLUMN IF NOT EXISTS end_reason VARCHAR(50),
ADD COLUMN IF NOT EXISTS user_agent TEXT;
-- Link license_sessions to device_registrations
UPDATE license_sessions ls
SET device_registration_id = dr.id
FROM device_registrations dr
WHERE ls.license_id = dr.license_id
AND ls.hardware_id = dr.hardware_fingerprint
AND ls.device_registration_id IS NULL;
-- STEP 6: Create indexes for performance
-- =====================================================
CREATE INDEX IF NOT EXISTS idx_device_registrations_license_fingerprint
ON device_registrations(license_id, hardware_fingerprint);
CREATE INDEX IF NOT EXISTS idx_device_registrations_active
ON device_registrations(license_id, is_active) WHERE is_active = true;
CREATE INDEX IF NOT EXISTS idx_device_registrations_last_seen
ON device_registrations(last_seen_at DESC);
-- STEP 7: Drop old columns (after verification)
-- =====================================================
-- These will be dropped after confirming migration success
ALTER TABLE device_registrations
DROP COLUMN IF EXISTS hardware_id,
DROP COLUMN IF EXISTS first_seen,
DROP COLUMN IF EXISTS last_seen;
-- STEP 8: Create views for backwards compatibility (temporary)
-- =====================================================
CREATE OR REPLACE VIEW v_activations AS
SELECT
id,
license_id,
device_name as machine_id,
device_name,
hardware_fingerprint as hardware_hash,
first_activated_at as activation_date,
first_activated_at as first_seen,
last_seen_at as last_seen,
last_seen_at as last_heartbeat,
is_active,
metadata as os_info,
operating_system,
app_version,
ip_address,
user_agent,
device_type,
deactivated_at,
deactivated_by
FROM device_registrations;
-- STEP 9: Update system_api_key usage tracking
-- =====================================================
UPDATE system_api_key
SET last_used_at = CURRENT_TIMESTAMP,
usage_count = usage_count + 1
WHERE id = 1;
-- STEP 10: Add audit log entry for migration
-- =====================================================
INSERT INTO audit_log (
timestamp,
username,
action,
entity_type,
entity_id,
additional_info
) VALUES (
CURRENT_TIMESTAMP,
'system',
'device_management_migration',
'database',
0,
'Consolidated device management system - merged activations into device_registrations'
);
-- Summary of changes:
-- 1. Consolidated device tracking into device_registrations table
-- 2. Removed duplicate columns: max_devices, max_activations
-- 3. Standardized on device_limit column
-- 4. Renamed max_concurrent_sessions to concurrent_sessions_limit
-- 5. Added proper foreign key relationships
-- 6. Created backwards compatibility view for activations
-- 7. Improved indexing for performance


@@ -0,0 +1,26 @@
-- Migration: Cleanup old device management structures
-- Date: 2025-01-03
-- Description: Remove old tables and compatibility views after successful migration
BEGIN;
-- Drop compatibility view
DROP VIEW IF EXISTS v_activations CASCADE;
-- Drop old activations table
DROP TABLE IF EXISTS activations CASCADE;
-- Drop any backup tables if they exist
DROP TABLE IF EXISTS device_registrations_backup CASCADE;
DROP TABLE IF EXISTS licenses_backup CASCADE;
-- Drop old columns that might still exist
ALTER TABLE licenses DROP COLUMN IF EXISTS max_devices CASCADE;
ALTER TABLE licenses DROP COLUMN IF EXISTS max_activations CASCADE;
-- Add comment to document the cleanup
COMMENT ON TABLE device_registrations IS 'Main table for device management - replaces old activations table';
COMMENT ON COLUMN device_registrations.hardware_fingerprint IS 'Unique hardware identifier - replaces old hardware_id/hardware_hash';
COMMENT ON COLUMN device_registrations.device_name IS 'Device name - replaces old machine_name/machine_id';
COMMIT;


@@ -0,0 +1,27 @@
-- Migration: Cleanup license_sessions hardware_id column
-- Date: 2025-01-03
-- Description: Migrate hardware_id data to hardware_fingerprint and remove old column
BEGIN;
-- Copy data from hardware_id to hardware_fingerprint where it's null
UPDATE license_sessions
SET hardware_fingerprint = hardware_id
WHERE hardware_fingerprint IS NULL AND hardware_id IS NOT NULL;
-- Make hardware_fingerprint NOT NULL (it should have data now)
ALTER TABLE license_sessions
ALTER COLUMN hardware_fingerprint SET NOT NULL;
-- Drop the old hardware_id column
ALTER TABLE license_sessions
DROP COLUMN hardware_id CASCADE;
-- Update the index to use hardware_fingerprint
DROP INDEX IF EXISTS idx_license_sessions_license_hardware;
CREATE INDEX idx_license_sessions_license_hardware ON license_sessions(license_id, hardware_fingerprint);
-- Add comment
COMMENT ON COLUMN license_sessions.hardware_fingerprint IS 'Unique hardware identifier for the session';
COMMIT;


@@ -0,0 +1,23 @@
-- Migration: Cleanup session_history hardware_id column
-- Date: 2025-01-03
-- Description: Migrate hardware_id data to hardware_fingerprint and remove old column
BEGIN;
-- Copy data from hardware_id to hardware_fingerprint where it's null
UPDATE session_history
SET hardware_fingerprint = hardware_id
WHERE hardware_fingerprint IS NULL AND hardware_id IS NOT NULL;
-- Make hardware_fingerprint NOT NULL
ALTER TABLE session_history
ALTER COLUMN hardware_fingerprint SET NOT NULL;
-- Drop the old hardware_id column
ALTER TABLE session_history
DROP COLUMN hardware_id CASCADE;
-- Add comment
COMMENT ON COLUMN session_history.hardware_fingerprint IS 'Unique hardware identifier for the session';
COMMIT;


@@ -1,10 +0,0 @@
# Ignore all SSL certificates
*.pem
*.crt
*.key
*.p12
*.pfx
# But keep the README
!README.md
!.gitignore


@@ -1,29 +0,0 @@
# SSL Certificate Directory
This directory should contain the following files for SSL to work:
1. **fullchain.pem** - The full certificate chain
2. **privkey.pem** - The private key (keep this secure!)
3. **dhparam.pem** - Diffie-Hellman parameters for enhanced security
## For intelsight.de deployment:
Copy your SSL certificates here:
```bash
cp /path/to/fullchain.pem ./
cp /path/to/privkey.pem ./
```
Generate dhparam.pem if not exists:
```bash
openssl dhparam -out dhparam.pem 2048
```
## File Permissions:
```bash
chmod 644 fullchain.pem
chmod 600 privkey.pem
chmod 644 dhparam.pem
```
**IMPORTANT**: Never commit actual SSL certificates to the repository!


@@ -1,5 +0,0 @@
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
admin-panel v2-admin-panel "python app.py" admin-panel 21 hours ago Up 21 hours 5000/tcp
db v2-postgres "docker-entrypoint.s…" postgres 21 hours ago Up 21 hours 5432/tcp
license-server v2-license-server "uvicorn app.main:ap…" license-server 21 hours ago Up 21 hours 8443/tcp
nginx-proxy v2-nginx "/docker-entrypoint.…" nginx 21 hours ago Up 21 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp


@@ -1,5 +0,0 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e19a609cc5c v2-nginx "/docker-entrypoint.…" 21 hours ago Up 21 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp nginx-proxy
60acd5642854 v2-admin-panel "python app.py" 21 hours ago Up 21 hours 5000/tcp admin-panel
d2aa58e670bc v2-license-server "uvicorn app.main:ap…" 21 hours ago Up 21 hours 8443/tcp license-server
6f40b240e975 v2-postgres "docker-entrypoint.s…" 21 hours ago Up 21 hours 5432/tcp db


@@ -1,10 +0,0 @@
bad7324 Backup after importing licenses and resources (77 licenses, 31 resources)
b28b60e backups only
f105039 Backup after restoring customer data from an old backup
a77c34c Backup after user migration to the database
85c7499 Add full server backup with Git LFS
8aa79c6 Merge branch 'main' of https://github.com/UserIsMH/v2-Docker
4ab51a7 Hetzner deploy version (hopefully)
35fd8fd Update SYSTEM_DOCUMENTATION.md
5b71a1d Naming consistency + license expiry
cdf81e2 Dashboard adjusted


@@ -1,70 +0,0 @@
# PostgreSQL database
POSTGRES_DB=meinedatenbank
POSTGRES_USER=adminuser
POSTGRES_PASSWORD=supergeheimespasswort
# Admin panel credentials
ADMIN1_USERNAME=rac00n
ADMIN1_PASSWORD=1248163264
ADMIN2_USERNAME=w@rh@mm3r
ADMIN2_PASSWORD=Warhammer123!
# License server API key for authentication
# Domains (can be read by the app, e.g. for links or CORS)
API_DOMAIN=api-software-undso.intelsight.de
ADMIN_PANEL_DOMAIN=admin-panel-undso.intelsight.de
# ===================== OPTIONAL VARIABLES =====================
# JWT for API auth (IMPORTANT: for secure token encryption!)
JWT_SECRET=xY9ZmK2pL7nQ4wF6jH8vB3tG5aZ1dE7fR9hT2kM4nP6qS8uW0xC3yA5bD7eF9gH2jK4
# E-mail configuration (e.g. for expiry warnings)
# MAIL_SERVER=smtp.meinedomain.de
# MAIL_PORT=587
# MAIL_USERNAME=deinemail
# MAIL_PASSWORD=geheim
# MAIL_FROM=no-reply@meinedomain.de
# Logging
# LOG_LEVEL=info
# Allowed CORS domains (for the web frontend)
# ALLOWED_ORIGINS=https://admin.meinedomain.de
# ===================== VERSION =====================
# Current software version, maintained server-side
# Used by the license server to compare against the client version
LATEST_CLIENT_VERSION=1.0.0
# ===================== BACKUP CONFIGURATION =====================
# E-mail for backup notifications
EMAIL_ENABLED=false
# Backup encryption (optional, generated automatically if empty)
# BACKUP_ENCRYPTION_KEY=
# ===================== CAPTCHA CONFIGURATION =====================
# Google reCAPTCHA v2 keys (https://www.google.com/recaptcha/admin)
# Commented out for the PoC phase - CAPTCHA is skipped if keys are missing
# RECAPTCHA_SITE_KEY=your-site-key-here
# RECAPTCHA_SECRET_KEY=your-secret-key-here
# ===================== MONITORING CONFIGURATION =====================
# Grafana admin credentials
GRAFANA_USER=admin
GRAFANA_PASSWORD=admin
# SMTP settings for Alertmanager (optional)
# SMTP_USERNAME=your-email@gmail.com
# SMTP_PASSWORD=your-app-password
# Webhook URLs for critical alerts (optional)
# WEBHOOK_CRITICAL=https://your-webhook-url/critical
# WEBHOOK_SECURITY=https://your-webhook-url/security


@@ -1,164 +0,0 @@
services:
postgres:
build:
context: ../v2_postgres
container_name: db
restart: always
env_file: .env
environment:
POSTGRES_HOST: postgres
POSTGRES_INITDB_ARGS: '--encoding=UTF8 --locale=de_DE.UTF-8'
POSTGRES_COLLATE: 'de_DE.UTF-8'
POSTGRES_CTYPE: 'de_DE.UTF-8'
TZ: Europe/Berlin
PGTZ: Europe/Berlin
volumes:
# Persistent storage of the database on the Windows host
- postgres_data:/var/lib/postgresql/data
# Init script for tables
- ../v2_adminpanel/init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- internal_net
deploy:
resources:
limits:
cpus: '2'
memory: 4g
license-server:
build:
context: ../v2_lizenzserver
container_name: license-server
restart: always
# Port mapping removed - now only reachable via Nginx
env_file: .env
environment:
TZ: Europe/Berlin
depends_on:
- postgres
networks:
- internal_net
deploy:
resources:
limits:
cpus: '2'
memory: 4g
# auth-service:
# build:
# context: ../lizenzserver/services/auth
# container_name: auth-service
# restart: always
# # Port 5001 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/1
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 1g
# analytics-service:
# build:
# context: ../lizenzserver/services/analytics
# container_name: analytics-service
# restart: always
# # Port 5003 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/2
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# - rabbitmq
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 2g
# admin-api-service:
# build:
# context: ../lizenzserver/services/admin_api
# container_name: admin-api-service
# restart: always
# # Port 5004 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/3
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# - rabbitmq
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 2g
admin-panel:
build:
context: ../v2_adminpanel
container_name: admin-panel
restart: always
# Port mapping removed - only reachable via nginx
env_file: .env
environment:
TZ: Europe/Berlin
depends_on:
- postgres
networks:
- internal_net
volumes:
# Backup directory
- ../backups:/app/backups
deploy:
resources:
limits:
cpus: '2'
memory: 4g
nginx:
build:
context: ../v2_nginx
container_name: nginx-proxy
restart: always
ports:
- "80:80"
- "443:443"
environment:
TZ: Europe/Berlin
depends_on:
- admin-panel
- license-server
networks:
- internal_net
networks:
internal_net:
driver: bridge
volumes:
postgres_data:


@@ -1,122 +0,0 @@
events {
worker_connections 1024;
}
http {
# Enable nginx status page for monitoring
server {
listen 8080;
server_name localhost;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow 172.16.0.0/12; # Docker networks
deny all;
}
}
# Modern SSL settings for maximum security
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
# SSL session settings
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# DH parameters for Perfect Forward Secrecy
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
# Admin Panel
server {
listen 80;
server_name admin-panel-undso.intelsight.de;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name admin-panel-undso.intelsight.de;
# SSL certificates (real certificates)
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
# Security Headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Proxy settings
location / {
proxy_pass http://admin-panel:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (if needed)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Auth Service API (internal only) - temporarily disabled
# location /api/v1/auth/ {
# proxy_pass http://auth-service:5001/api/v1/auth/;
# proxy_set_header Host $host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto $scheme;
# proxy_set_header Authorization $http_authorization;
# }
}
# API server (for later use)
server {
listen 80;
server_name api-software-undso.intelsight.de;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name api-software-undso.intelsight.de;
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
# Security Headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
location / {
proxy_pass http://license-server:8443;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (if needed)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}
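
The `stub_status` page enabled at the top of this config emits a fixed plain-text report (active connections, accepts/handled/requests counters, and reading/writing/waiting states). A minimal monitoring-side sketch for parsing that report; the helper name and the sample values are ours, not part of the repo:

```python
import re

def parse_stub_status(text: str) -> dict:
    """Parse the plain-text output of nginx's stub_status module."""
    lines = text.strip().splitlines()
    # Line 0: "Active connections: N"
    active = int(re.search(r"\d+", lines[0]).group())
    # Line 2: " accepts handled requests"
    accepts, handled, requests = (int(n) for n in lines[2].split())
    # Line 3: "Reading: N Writing: N Waiting: N"
    reading, writing, waiting = (int(n) for n in re.findall(r"\d+", lines[3]))
    return {
        "active": active,
        "accepts": accepts,
        "handled": handled,
        "requests": requests,
        "reading": reading,
        "writing": writing,
        "waiting": waiting,
    }

# Sample report in the format stub_status produces
sample = """Active connections: 3
server accepts handled requests
 100 100 250
Reading: 0 Writing: 1 Waiting: 2
"""
print(parse_stub_status(sample))
```

In this setup the report is only reachable from 127.0.0.1 and the Docker networks allowed above, so a collector would have to run on the host or inside the Compose network.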

View file

@@ -1,5 +0,0 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e19a609cc5c v2-nginx "/docker-entrypoint.…" 25 hours ago Up About an hour 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp nginx-proxy
60acd5642854 v2-admin-panel "python app.py" 25 hours ago Up About an hour 5000/tcp admin-panel
d2aa58e670bc v2-license-server "uvicorn app.main:ap…" 25 hours ago Up About an hour 8443/tcp license-server
6f40b240e975 v2-postgres "docker-entrypoint.s…" 25 hours ago Up About an hour 5432/tcp db

View file

@@ -1,50 +0,0 @@
553c376 Test backup
98bee9c Backup vor Admin Panel Backup-System Erweiterung
bad7324 Backup nach Import von Lizenzen und Ressourcen (77 Lizenzen, 31 Ressourcen)
b28b60e nur backups
f105039 Backup nach Wiederherstellung der Kundendaten aus altem Backup
a77c34c Backup nach User-Migration zu Datenbank
85c7499 Add full server backup with Git LFS
8aa79c6 Merge branch 'main' of https://github.com/UserIsMH/v2-Docker
4ab51a7 Hetzner Deploy Version (hoffentlich)
35fd8fd Aktualisieren von SYSTEM_DOCUMENTATION.md
5b71a1d Namenskonsistenz + Ablauf der Lizenzen
cdf81e2 Dashboard angepasst
4a13946 Lead Management Usability Upgrade
45e236f Lead Management - Zwischenstand
8cb483a Documentation gerade gezogen
4b093fa log Benutzer Fix
b9b943e Export Button geht jetzt
74391e6 Lizenzübersicjht DB Data Problem Fix
9982f14 Lizenzübersicht fix
ce03b90 Lizenzübersicht besser
1146406 BUG fix - API
4ed8889 API-Key - Fix - Nicht mehr mehrere
889a7b4 Documentation Update
1b5b7d0 API Key Config ist fertig
b420452 lizenzserver API gedöns
6d1577c Create TODO_LIZENZSERVER_CONFIG.md
20be02d CLAUDE.md als Richtlinie
75c2f0d Monitoring fix
0a994fa Error handling
08e4e93 Die UNterscheidung von Test und Echt Lizenzen ist strikter
fdf74c1 Monitoring Anpassung
3d02c7a Service Status im Dashboard
e2b5247 System Status - License Server fix
1e6012a Unnötige Reddis und Rabbit MQ entfernt
e6799c6 Garfana und sowas aufgeräumt
3d899b1 Test zu Fake geändert, weil Namensproblem
fec588b Löschen Lizenz Schutz
1451a23 Alle Lkzenzen in der Navbar
627c6c3 Dashboard zeigt Realdaten
fff82f4 Session zu Aktive Nutzung im Dashboard
ae30b74 Lizenzserver (Backend) - Erstellt
afa2b52 Kunden & Lizenzen Fix
b822504 Kontakte - Telefonnummern und E-Mail-Adressen Bearbeiten ist drin
9e5843a Übersicht der Kontakte
0e79e5e Alle .md einmal aufgeräumt
f73c64a Notizen kann man bearbeiten
72e328a Leads sind integriert
c349469 Stand geupdatet
f82131b Vorläufig fertiger server
c30d974 Zwischenstand - ohne Prometheus

View file

@@ -1,39 +0,0 @@
On branch main
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
deleted: server-backups/server_backup_20250628_145911.tar.gz
deleted: server-backups/server_backup_20250628_153152.tar.gz
deleted: server-backups/server_backup_20250628_160032.tar.gz
deleted: server-backups/server_backup_20250628_165902.tar.gz
deleted: server-backups/server_backup_20250628_171741.tar.gz
deleted: server-backups/server_backup_20250628_190433.tar.gz
Untracked files:
(use "git add <file>..." to include in what will be committed)
.gitattributes
API_REFERENCE.md
JOURNAL.md
SSL/
backup_before_cleanup.sh
backups/
cloud-init.yaml
create_full_backup.sh
generate-secrets.py
lizenzserver/
migrations/
restore_full_backup.sh
scripts/
server-backups/server_backup_20250628_171705/
server-backups/server_backup_20250628_203904/
setup_backup_cron.sh
v2/
v2_adminpanel/
v2_lizenzserver/
v2_nginx/
v2_postgreSQL/
v2_postgres/
v2_testing/
verify-deployment.sh
no changes added to commit (use "git add" and/or "git commit -a")

Binary file not shown.

View file

@@ -1,6 +1,6 @@
V2-Docker Server Backup
Created: Sat Jun 28 08:39:06 PM UTC 2025
Timestamp: 20250628_203904
Created: Thu Jul 3 08:37:56 PM UTC 2025
Timestamp: 20250703_203754
Type: Full Server Backup
Contents:
- Configuration files (docker-compose, nginx, SSL)

View file

@@ -134,6 +134,15 @@ services:
volumes:
# Backup directory
- ../backups:/app/backups
# Server backup directories
- ../server-backups:/app/server-backups
- ../database-backups:/app/database-backups
# Full access to v2-Docker for server backups
- /opt/v2-Docker:/opt/v2-Docker
# Git SSH key for GitHub push
- ~/.ssh:/root/.ssh:ro
# Git Config
- ~/.gitconfig:/root/.gitconfig:ro
deploy:
resources:
limits:

View file

@@ -112,6 +112,7 @@ http {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-API-Key $http_x_api_key;
# WebSocket support (if needed)
proxy_http_version 1.1;
@@ -119,4 +120,47 @@ http {
proxy_set_header Connection "upgrade";
}
}
# Gitea Server
server {
listen 80;
server_name gitea-undso.intelsight.de;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name gitea-undso.intelsight.de;
# SSL certificates (real certificates)
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
# Security Headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Proxy settings
location / {
proxy_pass http://gitea:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Increase buffer sizes for Gitea
proxy_buffering off;
client_max_body_size 50M;
}
}
}

View file

@@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID3TCCA2OgAwIBAgISBimcX2wwj3Z1U/Qlfu5y5keoMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTA2MjYxNjAwMjBaFw0yNTA5MjQxNjAwMTlaMBgxFjAUBgNVBAMTDWlu
dGVsc2lnaHQuZGUwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATEQD6vfDoXM7Yz
iT75OmB/kvxoEebMFRBCzpTOdUZpThlFmLijjCsYnxc8DeWDn8/eLltrBWhuM4Yx
gX8tseO0o4ICcTCCAm0wDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQWMBQGCCsGAQUF
BwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBSM5CYyn//CSmLp
JADwjccRtsnZFDAfBgNVHSMEGDAWgBSTJ0aYA6lRaI6Y1sRCSNsjv1iU0jAyBggr
BgEFBQcBAQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly9lNi5pLmxlbmNyLm9yZy8w
bgYDVR0RBGcwZYIfYWRtaW4tcGFuZWwtdW5kc28uaW50ZWxzaWdodC5kZYIgYXBp
LXNvZnR3YXJlLXVuZHNvLmludGVsc2lnaHQuZGWCDWludGVsc2lnaHQuZGWCEXd3
dy5pbnRlbHNpZ2h0LmRlMBMGA1UdIAQMMAowCAYGZ4EMAQIBMC0GA1UdHwQmMCQw
IqAgoB6GHGh0dHA6Ly9lNi5jLmxlbmNyLm9yZy80MS5jcmwwggEEBgorBgEEAdZ5
AgQCBIH1BIHyAPAAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAA
AZetLYOmAAAEAwBHMEUCIB8bQYn7h64sSmHZavNbIM6ScHDBxmMWN6WqjyaTz75I
AiEArz5mC+TaVMsofIIFkEj+dOMD1/oj6w10zgVunTPb01wAdgCkQsUGSWBhVI8P
1Oqc+3otJkVNh6l/L99FWfYnTzqEVAAAAZetLYRWAAAEAwBHMEUCIFVulS2bEmSQ
HYcE2UbsHhn7WJl8MeWZJSKGG1LbtnvyAiEAsLHL/VyIfXVhOmcMf1gmPL/eu7xj
W/2JuPHVWgjUDhQwCgYIKoZIzj0EAwMDaAAwZQIxANaSy/SOYXq9+oQJNhpXIlMJ
i0HBvwebvhNVkNGJN2QodV5gE2yi4s4q19XkpFO+fQIwCCqLSQvaC+AcOTFT9XL5
6hk8bFapLf/b2EFv3DE06qKIrDVPWhtYwyEYBRT4Ii4p
-----END CERTIFICATE-----

View file

@@ -0,0 +1,26 @@
-----BEGIN CERTIFICATE-----
MIIEVzCCAj+gAwIBAgIRALBXPpFzlydw27SHyzpFKzgwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMjQwMzEzMDAwMDAw
WhcNMjcwMzEyMjM1OTU5WjAyMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNTGV0J3Mg
RW5jcnlwdDELMAkGA1UEAxMCRTYwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATZ8Z5G
h/ghcWCoJuuj+rnq2h25EqfUJtlRFLFhfHWWvyILOR/VvtEKRqotPEoJhC6+QJVV
6RlAN2Z17TJOdwRJ+HB7wxjnzvdxEP6sdNgA1O1tHHMWMxCcOrLqbGL0vbijgfgw
gfUwDgYDVR0PAQH/BAQDAgGGMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcD
ATASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBSTJ0aYA6lRaI6Y1sRCSNsj
v1iU0jAfBgNVHSMEGDAWgBR5tFnme7bl5AFzgAiIyBpY9umbbjAyBggrBgEFBQcB
AQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly94MS5pLmxlbmNyLm9yZy8wEwYDVR0g
BAwwCjAIBgZngQwBAgEwJwYDVR0fBCAwHjAcoBqgGIYWaHR0cDovL3gxLmMubGVu
Y3Iub3JnLzANBgkqhkiG9w0BAQsFAAOCAgEAfYt7SiA1sgWGCIpunk46r4AExIRc
MxkKgUhNlrrv1B21hOaXN/5miE+LOTbrcmU/M9yvC6MVY730GNFoL8IhJ8j8vrOL
pMY22OP6baS1k9YMrtDTlwJHoGby04ThTUeBDksS9RiuHvicZqBedQdIF65pZuhp
eDcGBcLiYasQr/EO5gxxtLyTmgsHSOVSBcFOn9lgv7LECPq9i7mfH3mpxgrRKSxH
pOoZ0KXMcB+hHuvlklHntvcI0mMMQ0mhYj6qtMFStkF1RpCG3IPdIwpVCQqu8GV7
s8ubknRzs+3C/Bm19RFOoiPpDkwvyNfvmQ14XkyqqKK5oZ8zhD32kFRQkxa8uZSu
h4aTImFxknu39waBxIRXE4jKxlAmQc4QjFZoq1KmQqQg0J/1JF8RlFvJas1VcjLv
YlvUB2t6npO6oQjB3l+PNf0DpQH7iUx3Wz5AjQCi6L25FjyE06q6BZ/QlmtYdl/8
ZYao4SRqPEs/6cAiF+Qf5zg2UkaWtDphl1LKMuTNLotvsX99HP69V2faNyegodQ0
LyTApr/vT01YPE46vNsDLgK+4cL6TrzC/a4WcmF5SRJ938zrv/duJHLXQIku5v0+
EwOy59Hdm0PT/Er/84dDV0CSjdR/2XuZM3kpysSKLgD1cKiDA+IRguODCxfO9cyY
Ig46v9mFmBvyH04=
-----END CERTIFICATE-----

View file

@@ -0,0 +1,50 @@
-----BEGIN CERTIFICATE-----
MIID+jCCA4GgAwIBAgISBk2wQoy66uSHlfTq30661D5IMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTA3MDIxODI2MDBaFw0yNTA5MzAxODI1NTlaMBgxFjAUBgNVBAMTDWlu
dGVsc2lnaHQuZGUwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAQGXJ8nfR4c72Lf
MiaSx4G9mQKiQwBP1GKSijuP3+rB7/7JgTI9gbbE1phr9muJX+rfBpatQMGMqkta
Eh9aYKfpo4ICjzCCAoswDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQWMBQGCCsGAQUF
BwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBS2reTabf1b11dw
VDFkxC32ky2KUjAfBgNVHSMEGDAWgBSTJ0aYA6lRaI6Y1sRCSNsjv1iU0jAyBggr
BgEFBQcBAQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly9lNi5pLmxlbmNyLm9yZy8w
gYsGA1UdEQSBgzCBgIIfYWRtaW4tcGFuZWwtdW5kc28uaW50ZWxzaWdodC5kZYIg
YXBpLXNvZnR3YXJlLXVuZHNvLmludGVsc2lnaHQuZGWCGWdpdGVhLXVuZHNvLmlu
dGVsc2lnaHQuZGWCDWludGVsc2lnaHQuZGWCEXd3dy5pbnRlbHNpZ2h0LmRlMBMG
A1UdIAQMMAowCAYGZ4EMAQIBMC0GA1UdHwQmMCQwIqAgoB6GHGh0dHA6Ly9lNi5j
LmxlbmNyLm9yZy83My5jcmwwggEEBgorBgEEAdZ5AgQCBIH1BIHyAPAAdgCkQsUG
SWBhVI8P1Oqc+3otJkVNh6l/L99FWfYnTzqEVAAAAZfMmQcDAAAEAwBHMEUCIQCs
NbpSr/Zc+pOVES7nYqSZEO1W8aoCs3kSsyC3eVD/nwIgBUjt448hY9XnWZ3bS6h9
CsUXd5xx0wxtjlqBrR7HHEYAdgDd3Mo0ldfhFgXnlTL6x5/4PRxQ39sAOhQSdgos
rLvIKgAAAZfMmQdmAAAEAwBHMEUCIC1LmUYFCt/Zz5UZERN/yrNs+AtJNc8W+UZ+
p0ylID67AiEAoxyvkN3QJA/w05v7yjrOjVUGKDTskJttfQfw/wEuwoEwCgYIKoZI
zj0EAwMDZwAwZAIwBr2iNJZftQ/CA3uhZ4aVvYQdNL4FQNVQHgT0PzIe8EgfaMUv
yTrNl0uaE3tQsXa/AjBp5WxzivMsO/HPJuS1MGbhIrVZic40ndla/IHwBAm32rYC
MKv7XMKJ7vu+Sqd60y0=
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIEVzCCAj+gAwIBAgIRALBXPpFzlydw27SHyzpFKzgwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMjQwMzEzMDAwMDAw
WhcNMjcwMzEyMjM1OTU5WjAyMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNTGV0J3Mg
RW5jcnlwdDELMAkGA1UEAxMCRTYwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATZ8Z5G
h/ghcWCoJuuj+rnq2h25EqfUJtlRFLFhfHWWvyILOR/VvtEKRqotPEoJhC6+QJVV
6RlAN2Z17TJOdwRJ+HB7wxjnzvdxEP6sdNgA1O1tHHMWMxCcOrLqbGL0vbijgfgw
gfUwDgYDVR0PAQH/BAQDAgGGMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcD
ATASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBSTJ0aYA6lRaI6Y1sRCSNsj
v1iU0jAfBgNVHSMEGDAWgBR5tFnme7bl5AFzgAiIyBpY9umbbjAyBggrBgEFBQcB
AQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly94MS5pLmxlbmNyLm9yZy8wEwYDVR0g
BAwwCjAIBgZngQwBAgEwJwYDVR0fBCAwHjAcoBqgGIYWaHR0cDovL3gxLmMubGVu
Y3Iub3JnLzANBgkqhkiG9w0BAQsFAAOCAgEAfYt7SiA1sgWGCIpunk46r4AExIRc
MxkKgUhNlrrv1B21hOaXN/5miE+LOTbrcmU/M9yvC6MVY730GNFoL8IhJ8j8vrOL
pMY22OP6baS1k9YMrtDTlwJHoGby04ThTUeBDksS9RiuHvicZqBedQdIF65pZuhp
eDcGBcLiYasQr/EO5gxxtLyTmgsHSOVSBcFOn9lgv7LECPq9i7mfH3mpxgrRKSxH
pOoZ0KXMcB+hHuvlklHntvcI0mMMQ0mhYj6qtMFStkF1RpCG3IPdIwpVCQqu8GV7
s8ubknRzs+3C/Bm19RFOoiPpDkwvyNfvmQ14XkyqqKK5oZ8zhD32kFRQkxa8uZSu
h4aTImFxknu39waBxIRXE4jKxlAmQc4QjFZoq1KmQqQg0J/1JF8RlFvJas1VcjLv
YlvUB2t6npO6oQjB3l+PNf0DpQH7iUx3Wz5AjQCi6L25FjyE06q6BZ/QlmtYdl/8
ZYao4SRqPEs/6cAiF+Qf5zg2UkaWtDphl1LKMuTNLotvsX99HP69V2faNyegodQ0
LyTApr/vT01YPE46vNsDLgK+4cL6TrzC/a4WcmF5SRJ938zrv/duJHLXQIku5v0+
EwOy59Hdm0PT/Er/84dDV0CSjdR/2XuZM3kpysSKLgD1cKiDA+IRguODCxfO9cyY
Ig46v9mFmBvyH04=
-----END CERTIFICATE-----

View file

@@ -0,0 +1,5 @@
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgPGfcuJkq/qSnOGde
EIrhSbQQ5jT5WeQRXxg/CCtG2BqhRANCAAQGXJ8nfR4c72LfMiaSx4G9mQKiQwBP
1GKSijuP3+rB7/7JgTI9gbbE1phr9muJX+rfBpatQMGMqktaEh9aYKfp
-----END PRIVATE KEY-----

View file

@@ -0,0 +1,13 @@
version = 4.1.1
archive_dir = /etc/letsencrypt/archive/intelsight.de
cert = /etc/letsencrypt/live/intelsight.de/cert.pem
privkey = /etc/letsencrypt/live/intelsight.de/privkey.pem
chain = /etc/letsencrypt/live/intelsight.de/chain.pem
fullchain = /etc/letsencrypt/live/intelsight.de/fullchain.pem
# Options used in the renewal process
[renewalparams]
account = 4cf4b39b4e945d8b93d829e56273ba75
authenticator = standalone
server = https://acme-v02.api.letsencrypt.org/directory
key_type = ecdsa

View file

@@ -0,0 +1,6 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a5a7cd7baf84 gitea/gitea:latest "/usr/bin/entrypoint…" 16 minutes ago Up 16 minutes 0.0.0.0:2222->2222/tcp, [::]:2222->2222/tcp, 22/tcp, 0.0.0.0:3000->3000/tcp, [::]:3000->3000/tcp gitea
c7fdb8477ae6 v2-nginx "/docker-entrypoint.…" 2 hours ago Up 2 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp nginx-proxy
f9a1a0e73902 v2-license-server "uvicorn app.main:ap…" 2 hours ago Up 2 hours 8443/tcp license-server
292c508dbe6b v2-admin-panel "python app.py" 2 hours ago Up 2 hours 5000/tcp admin-panel
7318afc0161c v2-postgres "docker-entrypoint.s…" 2 hours ago Up 2 hours 5432/tcp db

View file

@@ -0,0 +1,50 @@
482fa3b Server backup 20250703_172107 - Full system backup before changes
0c66d16 Server backup 20250703_153414 - Full system backup before changes
c4fb4c0 Server backup 20250703_145459 - Full system backup before changes
9107540 Server backup 20250703_141921 - Full system backup before changes
fdecfbd Server backup 20250702_215036 - Full system backup before changes
ffc6aa7 Fix version check endpoint authentication
735e42a Server backup 20250702_213331 - Full system backup before changes
740dc70 Server backup 20250702_211500 - Full system backup before changes
aed6b39 Server backup 20250702_174353 - Full system backup before changes
4d56b64 Server backup 20250702_172458 - Full system backup before changes
3f172cf Server backup 20250702_163546 - Full system backup before changes
7437ee1 Server backup 20250702_162138 - Full system backup before changes
3722642 Server backup 20250702_160711 - Full system backup before changes
d822242 Server backup 20250702_155437 - Full system backup before changes
50690ad Server backup 20250702_135941 - Full system backup before changes
766bfdf Server backup 20250702_133229 - Full system backup before changes
2e0764a Server backup 20250702_131851 - Full system backup before changes
89f5105 Server backup 20250702_102014 - Full system backup before changes
f4fce74 Server backup 20250702_002930 - Full system backup before changes
d1747fe Server backup 20250702_002743 - Full system backup before changes
fec6a86 Server backup 20250702_001604 - Full system backup before changes
1474098 Server backup 20250702_000750 - Full system backup before changes
4f387ae Server backup 20250701_234343 - Full system backup before changes
7ee86ee Server backup 20250701_233321 - Full system backup before changes
56efbde Server backup 20250701_232409 - Full system backup before changes
e53b503 Server backup 20250701_230231 - Full system backup before changes
6379250 Server backup 20250701_222336 - Full system backup before changes
a6fff8d Server backup 20250701_215728 - Full system backup before changes
76570eb Server backup 20250701_213925 - Full system backup before changes
927d6f6 Server backup 20250630_171826 - Full system backup before changes
981f039 Server backup 20250628_232101 - Full system backup before changes
7dc37f4 Server backup 20250628_230701 - Full system backup before changes
8c66e9e Server backup 20250628_225351 - Full system backup before changes
17071c4 Server backup 20250628_224534 - Full system backup before changes
3a75523 Local changes before sync
972401c Server backup 20250628_203904 - Full system backup before changes
4cf8c41 Test backup
f90eb61 Server backup 20250628_203904 - Backup before fixing monitoring SQL queries
c7413ac Merge branch 'main' of https://github.com/UserIsMH/hetzner-backup
98bee9c Backup vor Admin Panel Backup-System Erweiterung
22522b8 Merge branch 'main' of https://github.com/UserIsMH/hetzner-backup
bad7324 Backup nach Import von Lizenzen und Ressourcen (77 Lizenzen, 31 Ressourcen)
7970004 Backup nach Import von Lizenzen und Ressourcen (77 Lizenzen, 31 Ressourcen)
b28b60e nur backups
f105039 Backup nach Wiederherstellung der Kundendaten aus altem Backup
a77c34c Backup nach User-Migration zu Datenbank
85c7499 Add full server backup with Git LFS
8aa79c6 Merge branch 'main' of https://github.com/UserIsMH/v2-Docker
4ab51a7 Hetzner Deploy Version (hoffentlich)
35fd8fd Aktualisieren von SYSTEM_DOCUMENTATION.md

View file

@@ -0,0 +1,127 @@
warning: could not open directory 'v2_nginx/ssl/accounts/': Permission denied
warning: could not open directory 'v2_nginx/ssl/archive/': Permission denied
warning: could not open directory 'v2_nginx/ssl/live/': Permission denied
On branch main
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: API_REFERENCE.md
deleted: backups/.backup_key
deleted: server-backups/server_backup_20250628_171705/configs/.env
deleted: server-backups/server_backup_20250628_171705/configs/docker-compose.yaml
deleted: server-backups/server_backup_20250628_171705/configs/nginx.conf
deleted: server-backups/server_backup_20250628_171705/configs/ssl/.gitignore
deleted: server-backups/server_backup_20250628_171705/configs/ssl/README.md
deleted: server-backups/server_backup_20250628_171705/database_backup.sql.gz
deleted: server-backups/server_backup_20250628_171705/docker_compose_status.txt
deleted: server-backups/server_backup_20250628_171705/docker_containers.txt
deleted: server-backups/server_backup_20250628_171705/git_recent_commits.txt
deleted: server-backups/server_backup_20250628_171705/git_status.txt
deleted: server-backups/server_backup_20250628_171705/volumes/v2_postgres_data.tar.gz
deleted: server-backups/server_backup_20250628_203904.tar.gz
deleted: server-backups/server_backup_20250628_203904/backup_info.txt
deleted: server-backups/server_backup_20250628_203904/configs/.env
deleted: server-backups/server_backup_20250628_203904/configs/docker-compose.yaml
deleted: server-backups/server_backup_20250628_203904/configs/nginx.conf
deleted: server-backups/server_backup_20250628_203904/configs/ssl/dhparam.pem
deleted: server-backups/server_backup_20250628_203904/database_backup.sql.gz
deleted: server-backups/server_backup_20250628_203904/docker_compose_status.txt
deleted: server-backups/server_backup_20250628_203904/docker_containers.txt
deleted: server-backups/server_backup_20250628_203904/git_recent_commits.txt
deleted: server-backups/server_backup_20250628_203904/git_status.txt
deleted: server-backups/server_backup_20250628_203904/volumes/postgres_data.tar.gz
deleted: server-backups/server_backup_20250628_224534.tar.gz
deleted: server-backups/server_backup_20250628_225351.tar.gz
deleted: server-backups/server_backup_20250628_230701.tar.gz
deleted: server-backups/server_backup_20250628_232101.tar.gz
deleted: server-backups/server_backup_20250630_171826.tar.gz
deleted: server-backups/server_backup_20250701_213925.tar.gz
deleted: server-backups/server_backup_20250701_215728.tar.gz
deleted: server-backups/server_backup_20250701_222336.tar.gz
deleted: server-backups/server_backup_20250701_230231.tar.gz
deleted: server-backups/server_backup_20250701_232409.tar.gz
deleted: server-backups/server_backup_20250701_233321.tar.gz
deleted: server-backups/server_backup_20250701_234343.tar.gz
deleted: server-backups/server_backup_20250702_000750.tar.gz
deleted: server-backups/server_backup_20250702_001604.tar.gz
deleted: server-backups/server_backup_20250702_002743.tar.gz
deleted: server-backups/server_backup_20250702_002930.tar.gz
deleted: server-backups/server_backup_20250702_102014.tar.gz
deleted: server-backups/server_backup_20250702_131851.tar.gz
deleted: server-backups/server_backup_20250702_133229.tar.gz
deleted: server-backups/server_backup_20250702_135941.tar.gz
deleted: server-backups/server_backup_20250702_155437.tar.gz
deleted: server-backups/server_backup_20250702_160711.tar.gz
deleted: server-backups/server_backup_20250702_162138.tar.gz
deleted: server-backups/server_backup_20250702_163546.tar.gz
deleted: server-backups/server_backup_20250702_172458.tar.gz
deleted: server-backups/server_backup_20250702_173643.tar.gz
deleted: server-backups/server_backup_20250702_174353.tar.gz
deleted: server-backups/server_backup_20250702_211500.tar.gz
deleted: server-backups/server_backup_20250702_213331.tar.gz
deleted: server-backups/server_backup_20250702_215036.tar.gz
deleted: server-backups/server_backup_20250703_141921.tar.gz
deleted: server-backups/server_backup_20250703_145459.tar.gz
deleted: server-backups/server_backup_20250703_153414.tar.gz
deleted: server-backups/server_backup_20250703_172107.tar.gz
deleted: test_client_version_check.py
deleted: test_version_endpoint.py
modified: v2/docker-compose.yaml
modified: v2_adminpanel/Dockerfile
deleted: v2_adminpanel/__pycache__/db.cpython-312.pyc
modified: v2_adminpanel/config.py
modified: v2_adminpanel/init.sql
modified: v2_adminpanel/models.py
modified: v2_adminpanel/routes/admin_routes.py
modified: v2_adminpanel/routes/api_routes.py
modified: v2_adminpanel/routes/batch_routes.py
modified: v2_adminpanel/routes/customer_routes.py
modified: v2_adminpanel/routes/export_routes.py
modified: v2_adminpanel/routes/license_routes.py
modified: v2_adminpanel/routes/monitoring_routes.py
modified: v2_adminpanel/routes/session_routes.py
modified: v2_adminpanel/scheduler.py
modified: v2_adminpanel/templates/backups_new.html
modified: v2_adminpanel/templates/base.html
modified: v2_adminpanel/templates/batch_form.html
modified: v2_adminpanel/templates/customers_licenses.html
modified: v2_adminpanel/templates/dashboard.html
modified: v2_adminpanel/templates/edit_license.html
modified: v2_adminpanel/templates/index.html
modified: v2_adminpanel/templates/licenses.html
modified: v2_adminpanel/templates/monitoring/analytics.html
modified: v2_adminpanel/templates/monitoring/live_dashboard.html
modified: v2_adminpanel/templates/monitoring/unified_monitoring.html
deleted: v2_adminpanel/utils/__pycache__/__init__.cpython-312.pyc
deleted: v2_adminpanel/utils/__pycache__/backup.cpython-312.pyc
modified: v2_adminpanel/utils/backup.py
modified: v2_adminpanel/utils/export.py
modified: v2_adminpanel/utils/github_backup.py
modified: v2_lizenzserver/app/api/license.py
modified: v2_lizenzserver/app/core/api_key_auth.py
modified: v2_lizenzserver/app/core/config.py
modified: v2_lizenzserver/app/main.py
modified: v2_lizenzserver/app/models/__init__.py
modified: v2_lizenzserver/app/models/models.py
modified: v2_lizenzserver/app/schemas/license.py
modified: v2_lizenzserver/requirements.txt
modified: v2_nginx/nginx.conf
Untracked files:
(use "git add <file>..." to include in what will be committed)
API_REFERENCE_DOWNLOAD.md
CLAUDE.md
RESSOURCE_API_PLAN.md
backup-repo/
logs/
migrations/completed/
server-backups/server_backup_20250703_203754/
temp_check/
v2_adminpanel/db_license.py
v2_adminpanel/templates/monitoring/device_limits.html
v2_adminpanel/test_device_count.py
v2_adminpanel/utils/device_monitoring.py
v2_lizenzserver/app/core/scheduler.py
v2_nginx/ssl/renewal/
no changes added to commit (use "git add" and/or "git commit -a")

Submodule temp_check/hetzner-backup added at 3736a28334

View file

@@ -134,6 +134,15 @@ services:
volumes:
# Backup directory
- ../backups:/app/backups
# Server backup directories
- ../server-backups:/app/server-backups
- ../database-backups:/app/database-backups
# Full access to v2-Docker for server backups
- /opt/v2-Docker:/opt/v2-Docker
# Git SSH key for GitHub push
- ~/.ssh:/root/.ssh:ro
# Git Config
- ~/.gitconfig:/root/.gitconfig:ro
deploy:
resources:
limits:

View file

@@ -15,13 +15,18 @@ RUN apt-get update && apt-get install -y \
locales \
postgresql-client \
tzdata \
git \
git-lfs \
openssh-client \
&& sed -i '/de_DE.UTF-8/s/^# //g' /etc/locale.gen \
&& locale-gen \
&& update-locale LANG=de_DE.UTF-8 \
&& ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime \
&& echo "Europe/Berlin" > /etc/timezone \
&& git lfs install \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
&& rm -rf /var/lib/apt/lists/* \
&& git config --global --add safe.directory /opt/v2-Docker
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

View file

@@ -10,7 +10,7 @@ SECRET_KEY = os.urandom(24)
SESSION_TYPE = 'filesystem'
JSON_AS_ASCII = False
JSONIFY_MIMETYPE = 'application/json; charset=utf-8'
PERMANENT_SESSION_LIFETIME = timedelta(minutes=5)
PERMANENT_SESSION_LIFETIME = timedelta(minutes=15)
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = os.getenv("SESSION_COOKIE_SECURE", "true").lower() == "true" # Default True for HTTPS
SESSION_COOKIE_SAMESITE = 'Lax'

62
v2_adminpanel/db_license.py Normal file
View file

@@ -0,0 +1,62 @@
"""
Database connection helper for License Server database
"""
import psycopg2
from psycopg2.extras import RealDictCursor
import logging
from contextlib import contextmanager
# License Server DB configuration
LICENSE_DB_CONFIG = {
'host': 'db', # Same container name as in docker network
'port': 5432,
'database': 'meinedatenbank', # License Server database name
'user': 'adminuser',
'password': 'supergeheimespasswort'
}
logger = logging.getLogger(__name__)
def get_license_db_connection():
"""Get a connection to the license server database"""
try:
conn = psycopg2.connect(**LICENSE_DB_CONFIG)
return conn
except Exception as e:
logger.error(f"Failed to connect to license server database: {str(e)}")
raise
@contextmanager
def get_license_db_cursor(dict_cursor=False):
"""Context manager for license server database cursor"""
conn = None
cur = None
try:
conn = get_license_db_connection()
cursor_factory = RealDictCursor if dict_cursor else None
cur = conn.cursor(cursor_factory=cursor_factory)
yield cur
conn.commit()
except Exception as e:
if conn:
conn.rollback()
logger.error(f"License DB error: {str(e)}")
raise
finally:
if cur:
cur.close()
if conn:
conn.close()
def test_license_db_connection():
"""Test the connection to license server database"""
try:
with get_license_db_cursor() as cur:
cur.execute("SELECT 1")
result = cur.fetchone()
if result:
logger.info("Successfully connected to license server database")
return True
except Exception as e:
logger.error(f"Failed to test license server database connection: {str(e)}")
return False
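
Note that the credentials in `LICENSE_DB_CONFIG` above are hardcoded into the image. A hedged sketch of the same config read from environment variables instead; the `LICENSE_DB_*` variable names are assumptions for illustration, not an existing repo convention:

```python
import os

# Hypothetical env-based variant of LICENSE_DB_CONFIG; the defaults
# mirror the hardcoded values from the file above, except that the
# password has no fallback baked into the image.
LICENSE_DB_CONFIG = {
    'host': os.getenv('LICENSE_DB_HOST', 'db'),
    'port': int(os.getenv('LICENSE_DB_PORT', '5432')),
    'database': os.getenv('LICENSE_DB_NAME', 'meinedatenbank'),
    'user': os.getenv('LICENSE_DB_USER', 'adminuser'),
    'password': os.environ.get('LICENSE_DB_PASSWORD', ''),
}
```

In a Compose setup the values would come from the service's `environment:` section or an `.env` file, so rotating the password no longer requires rebuilding the admin-panel image.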

View file

@@ -570,6 +570,37 @@ BEGIN
END IF;
END $$;
-- Migration: Add max_concurrent_sessions column to licenses if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'max_concurrent_sessions') THEN
ALTER TABLE licenses ADD COLUMN max_concurrent_sessions INTEGER DEFAULT 1 CHECK (max_concurrent_sessions >= 1);
-- Set initial value to same as max_devices for existing licenses
UPDATE licenses SET max_concurrent_sessions = max_devices WHERE max_concurrent_sessions IS NULL;
-- Add constraint to ensure concurrent sessions don't exceed device limit
ALTER TABLE licenses ADD CONSTRAINT check_concurrent_sessions CHECK (max_concurrent_sessions <= max_devices);
END IF;
END $$;
-- Migration: Remove UNIQUE constraint on license_sessions.license_id to allow multiple concurrent sessions
DO $$
BEGIN
-- Check if the unique constraint exists and drop it
IF EXISTS (SELECT 1 FROM pg_constraint
WHERE conname = 'license_sessions_license_id_key'
AND conrelid = 'license_sessions'::regclass) THEN
ALTER TABLE license_sessions DROP CONSTRAINT license_sessions_license_id_key;
END IF;
-- Add a compound index for better performance on concurrent session queries
IF NOT EXISTS (SELECT 1 FROM pg_indexes
WHERE tablename = 'license_sessions'
AND indexname = 'idx_license_sessions_license_hardware') THEN
CREATE INDEX idx_license_sessions_license_hardware ON license_sessions(license_id, hardware_id);
END IF;
END $$;
-- Migration: Add device_type column to device_registrations table
DO $$

View file

@@ -39,14 +39,16 @@ def get_licenses(show_fake=False):
with get_db_cursor(conn) as cur:
if show_fake:
cur.execute("""
SELECT l.*, c.name as customer_name
SELECT l.*, c.name as customer_name,
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
ORDER BY l.created_at DESC
""")
else:
cur.execute("""
SELECT l.*, c.name as customer_name
SELECT l.*, c.name as customer_name,
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
WHERE l.is_fake = false
@@ -70,7 +72,8 @@ def get_license_by_id(license_id):
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT l.*, c.name as customer_name
SELECT l.*, c.name as customer_name,
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
WHERE l.id = %s
@@ -86,6 +89,37 @@ def get_license_by_id(license_id):
return None
def get_license_session_stats(license_id):
"""Get session statistics for a specific license"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT
l.device_limit,
l.concurrent_sessions_limit,
(SELECT COUNT(*) FROM device_registrations dr WHERE dr.license_id = l.id AND dr.is_active = true) as registered_devices,
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions,
l.concurrent_sessions_limit - (SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as available_sessions
FROM licenses l
WHERE l.id = %s
""", (license_id,))
row = cur.fetchone()
if row:
return {
'device_limit': row[0],
'concurrent_sessions_limit': row[1],
'registered_devices': row[2],
'active_sessions': row[3],
'available_sessions': row[4]
}
return None
except Exception as e:
logger.error(f"Error fetching session stats for license {license_id}: {str(e)}")
return None
def get_customers(show_fake=False, search=None):
"""Get all customers from database"""
try:
@@ -175,4 +209,67 @@ def get_active_sessions():
return sessions
except Exception as e:
logger.error(f"Error fetching active sessions: {str(e)}")
return []
return []
def get_devices_for_license(license_id):
"""Get all registered devices for a specific license"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT
id,
hardware_fingerprint,
device_name,
device_type,
operating_system,
app_version,
first_activated_at,
last_seen_at,
is_active,
ip_address,
(SELECT COUNT(*) FROM license_sessions ls
WHERE ls.device_registration_id = dr.id) as active_sessions
FROM device_registrations dr
WHERE dr.license_id = %s
ORDER BY dr.last_seen_at DESC
""", (license_id,))
columns = [desc[0] for desc in cur.description]
devices = []
for row in cur.fetchall():
device_dict = dict(zip(columns, row))
devices.append(device_dict)
return devices
except Exception as e:
logger.error(f"Error fetching devices for license {license_id}: {str(e)}")
return []
def check_device_limit(license_id):
"""Check if license has reached its device limit"""
try:
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.execute("""
SELECT
l.device_limit,
COUNT(dr.id) as active_devices
FROM licenses l
LEFT JOIN device_registrations dr ON l.id = dr.license_id AND dr.is_active = true
WHERE l.id = %s
GROUP BY l.device_limit
""", (license_id,))
row = cur.fetchone()
if row:
return {
'device_limit': row[0],
'active_devices': row[1],
'limit_reached': row[1] >= row[0]
}
return None
except Exception as e:
logger.error(f"Error checking device limit for license {license_id}: {str(e)}")
return None
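The limit logic in `check_device_limit` reduces to a single comparison once the counts are in hand. A minimal sketch, with an illustrative helper name that is not part of the codebase:

```python
def device_limit_status(device_limit, active_devices):
    # limit_reached mirrors the check in check_device_limit above:
    # the limit counts as reached once active devices meet or exceed it.
    return {
        'device_limit': device_limit,
        'active_devices': active_devices,
        'limit_reached': active_devices >= device_limit,
    }
```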

View file

@@ -4,6 +4,7 @@ from zoneinfo import ZoneInfo
from pathlib import Path
from flask import Blueprint, render_template, request, redirect, session, url_for, flash, send_file, jsonify, current_app
import requests
import traceback
import config
from config import DATABASE_CONFIG
@@ -12,6 +13,7 @@ from utils.audit import log_audit
from utils.backup import create_backup, restore_backup, create_backup_with_github, create_server_backup
from utils.network import get_client_ip
from db import get_connection, get_db_connection, get_db_cursor, execute_query
from db_license import get_license_db_cursor
from utils.export import create_excel_export, prepare_audit_export_data
# Create Blueprint
@@ -116,9 +118,17 @@ def dashboard():
cur.execute("SELECT COUNT(*) FROM licenses WHERE is_fake = true")
test_licenses_count = cur.fetchone()[0] if cur.rowcount > 0 else 0
# Number of active sessions (admin panel)
cur.execute("SELECT COUNT(*) FROM sessions WHERE is_active = true")
active_sessions = cur.fetchone()[0] if cur.rowcount > 0 else 0
# Number of active sessions from the license server DB
active_sessions = 0
try:
with get_license_db_cursor() as license_cur:
license_cur.execute("SELECT COUNT(*) FROM license_sessions WHERE ended_at IS NULL")
active_sessions = license_cur.fetchone()[0] if license_cur.rowcount > 0 else 0
except Exception as e:
current_app.logger.warning(f"Could not get active sessions from license server: {str(e)}")
# Fall back to the admin DB
cur.execute("SELECT COUNT(*) FROM sessions WHERE is_active = true")
active_sessions = cur.fetchone()[0] if cur.rowcount > 0 else 0
# Active usage (customer software): licenses with heartbeats in the last 15 minutes
active_usage = 0
@@ -333,6 +343,52 @@ def dashboard():
except:
pass
# Session utilization (concurrent sessions)
try:
cur.execute("""
SELECT
SUM(active_sessions) as total_active_sessions,
SUM(max_concurrent_sessions) as total_max_sessions,
COUNT(CASE WHEN active_sessions >= max_concurrent_sessions THEN 1 END) as at_limit_count
FROM (
SELECT
l.id,
l.max_concurrent_sessions,
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions
FROM licenses l
WHERE l.is_fake = false AND l.is_active = true
) session_data
""")
session_stats = cur.fetchone()
if session_stats:
total_active = session_stats[0] or 0
total_max = session_stats[1] or 0
at_limit = session_stats[2] or 0
utilization = int((total_active / total_max * 100)) if total_max > 0 else 0
stats['session_stats'] = {
'total_active_sessions': total_active,
'total_max_sessions': total_max,
'utilization_percent': utilization,
'licenses_at_limit': at_limit
}
else:
stats['session_stats'] = {
'total_active_sessions': 0,
'total_max_sessions': 0,
'utilization_percent': 0,
'licenses_at_limit': 0
}
except Exception as e:
current_app.logger.warning(f"Could not get session statistics: {str(e)}")
stats['session_stats'] = {
'total_active_sessions': 0,
'total_max_sessions': 0,
'utilization_percent': 0,
'licenses_at_limit': 0
}
conn.rollback()
license_distribution = []
hourly_sessions = []
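The utilization figure in the dashboard hunk above is a guarded integer percentage. A small framework-free sketch (the function name is illustrative):

```python
def utilization_percent(total_active, total_max):
    # Integer utilization as computed for stats['session_stats'] above;
    # guards against division by zero when no session limits exist.
    return int(total_active / total_max * 100) if total_max > 0 else 0
```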
@@ -621,7 +677,12 @@ def create_backup_route():
def restore_backup_route(backup_id):
"""Restore a backup"""
from flask import jsonify
encryption_key = request.form.get('encryption_key')
# Handle both JSON and form data
if request.is_json:
encryption_key = request.json.get('encryption_key')
else:
encryption_key = request.form.get('encryption_key')
success, message = restore_backup(backup_id, encryption_key)
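The JSON-or-form change in `restore_backup_route` can be captured as a plain function over the request's pieces, which keeps it testable without Flask. Names and signature are assumptions for illustration:

```python
def extract_encryption_key(is_json, json_body, form):
    # Prefer the JSON body when the request is JSON (as request.is_json
    # indicates above), otherwise fall back to form data.
    if is_json:
        return (json_body or {}).get('encryption_key')
    return form.get('encryption_key')
```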
@@ -967,7 +1028,7 @@ def license_analytics():
AVG(device_count) as avg_usage
FROM licenses l
LEFT JOIN (
SELECT license_id, COUNT(DISTINCT hardware_id) as device_count
SELECT license_id, COUNT(DISTINCT hardware_fingerprint) as device_count
FROM license_heartbeats
WHERE timestamp > NOW() - INTERVAL '30 days'
GROUP BY license_id
@@ -1282,7 +1343,7 @@ def terminate_session(session_id):
# Get session info
cur.execute("""
SELECT license_id, hardware_id, ip_address, client_version, started_at
SELECT license_id, hardware_fingerprint, ip_address, client_version, started_at
FROM license_sessions
WHERE id = %s
""", (session_id,))
@@ -1424,6 +1485,9 @@ def regenerate_api_key():
random_part = ''.join(random.choices(string.ascii_uppercase + string.digits, k=32))
new_api_key = f"AF-{year_part}-{random_part}"
# Log what we're attempting
app.logger.info(f"Attempting to regenerate API key. New key: {new_api_key[:10]}...")
# Update the API key
cur.execute("""
UPDATE system_api_key
@@ -1433,15 +1497,27 @@ def regenerate_api_key():
WHERE id = 1
""", (new_api_key, session.get('username')))
# Log rows affected
app.logger.info(f"Rows affected by UPDATE: {cur.rowcount}")
conn.commit()
flash('API Key wurde erfolgreich regeneriert', 'success')
# Verify the update
cur.execute("SELECT api_key FROM system_api_key WHERE id = 1")
result = cur.fetchone()
if result and result[0] == new_api_key:
app.logger.info("API key successfully updated in database")
flash('API Key wurde erfolgreich regeneriert', 'success')
else:
app.logger.error(f"API key update verification failed. Expected: {new_api_key[:10]}..., Found: {result[0][:10] if result else 'None'}...")
flash('API Key wurde regeneriert, aber Verifizierung fehlgeschlagen', 'warning')
# Log action
log_audit('API_KEY_REGENERATED', 'system_api_key', 1,
additional_info="API Key regenerated")
except Exception as e:
app.logger.error(f"Error regenerating API key: {str(e)}", exc_info=True)
conn.rollback()
flash(f'Fehler beim Regenerieren des API Keys: {str(e)}', 'error')
@@ -1452,6 +1528,63 @@ def regenerate_api_key():
return redirect(url_for('admin.license_config'))
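The key format used by the regeneration code above is `AF-<year>-<32 uppercase/digit characters>`. A self-contained sketch of the same construction (the `now` parameter is added here for testability):

```python
import random
import string
from datetime import datetime

def generate_api_key(now=None):
    # Builds an "AF-YYYY-XXXXXXXX..." key with a 32-character random part,
    # matching the shape produced in regenerate_api_key above.
    now = now or datetime.now()
    year_part = now.strftime('%Y')
    random_part = ''.join(random.choices(string.ascii_uppercase + string.digits, k=32))
    return f"AF-{year_part}-{random_part}"
```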
@admin_bp.route("/api-key/test-regenerate", methods=["GET"])
@login_required
def test_regenerate_api_key():
"""Test endpoint to check if regeneration works"""
import string
import random
conn = get_connection()
cur = conn.cursor()
try:
# Check current API key
cur.execute("SELECT api_key, regenerated_at FROM system_api_key WHERE id = 1")
current = cur.fetchone()
# Generate new API key
year_part = datetime.now().strftime('%Y')
random_part = ''.join(random.choices(string.ascii_uppercase + string.digits, k=32))
new_api_key = f"AF-{year_part}-{random_part}"
# Update the API key
cur.execute("""
UPDATE system_api_key
SET api_key = %s,
regenerated_at = CURRENT_TIMESTAMP,
regenerated_by = %s
WHERE id = 1
""", (new_api_key, session.get('username')))
rows_affected = cur.rowcount
conn.commit()
# Verify the update
cur.execute("SELECT api_key, regenerated_at FROM system_api_key WHERE id = 1")
updated = cur.fetchone()
result = {
'current_api_key': current[0] if current else None,
'current_regenerated_at': str(current[1]) if current and current[1] else None,
'new_api_key': new_api_key,
'rows_affected': rows_affected,
'updated_api_key': updated[0] if updated else None,
'updated_regenerated_at': str(updated[1]) if updated and updated[1] else None,
'success': updated and updated[0] == new_api_key
}
return jsonify(result)
except Exception as e:
conn.rollback()
return jsonify({'error': str(e), 'traceback': traceback.format_exc()})
finally:
cur.close()
conn.close()
@admin_bp.route("/test-api-key")
@login_required
def test_api_key():

View file

@@ -193,29 +193,27 @@ def get_license_devices(license_id):
cur.execute("""
SELECT
dr.id,
dr.hardware_id,
dr.hardware_fingerprint,
dr.device_name,
dr.device_type,
dr.first_seen as registration_date,
dr.last_seen,
dr.first_activated_at as registration_date,
dr.last_seen_at,
dr.is_active,
dr.operating_system,
dr.ip_address,
(SELECT COUNT(*) FROM sessions s
WHERE s.license_key = l.license_key
AND s.hardware_id = dr.hardware_id
AND s.is_active = true) as active_sessions
(SELECT COUNT(*) FROM license_sessions ls
WHERE ls.device_registration_id = dr.id
AND ls.ended_at IS NULL) as active_sessions
FROM device_registrations dr
JOIN licenses l ON dr.license_id = l.id
WHERE l.license_key = %s
ORDER BY dr.first_seen DESC
""", (license_data['license_key'],))
WHERE dr.license_id = %s
ORDER BY dr.last_seen_at DESC
""", (license_id,))
devices = []
for row in cur.fetchall():
devices.append({
'id': row[0],
'hardware_id': row[1],
'hardware_fingerprint': row[1],
'device_name': row[2],
'device_type': row[3],
'registration_date': row[4].isoformat() if row[4] else None,
@@ -268,22 +266,20 @@ def register_device(license_id):
# Check the device limit
cur.execute("""
SELECT COUNT(*) FROM device_registrations dr
JOIN licenses l ON dr.license_id = l.id
WHERE l.license_key = %s AND dr.is_active = true
""", (license_data['license_key'],))
SELECT COUNT(*) FROM device_registrations
WHERE license_id = %s AND is_active = true
""", (license_id,))
active_device_count = cur.fetchone()[0]
if active_device_count >= license_data['device_limit']:
if active_device_count >= license_data.get('device_limit', 3):
return jsonify({'error': 'Gerätelimit erreicht'}), 400
# Check whether the device is already registered
cur.execute("""
SELECT dr.id, dr.is_active FROM device_registrations dr
JOIN licenses l ON dr.license_id = l.id
WHERE l.license_key = %s AND dr.hardware_id = %s
""", (license_data['license_key'], hardware_id))
SELECT id, is_active FROM device_registrations
WHERE license_id = %s AND hardware_fingerprint = %s
""", (license_id, hardware_id))
existing = cur.fetchone()
@@ -294,16 +290,18 @@ def register_device(license_id):
# Reactivate the device
cur.execute("""
UPDATE device_registrations
SET is_active = true, last_seen = CURRENT_TIMESTAMP
SET is_active = true, last_seen_at = CURRENT_TIMESTAMP
WHERE id = %s
""", (existing[0],))
else:
# Register a new device
cur.execute("""
INSERT INTO device_registrations
(license_id, hardware_id, device_name, device_type, is_active)
VALUES (%s, %s, %s, %s, true)
""", (license_id, hardware_id, device_name, device_type))
(license_id, hardware_fingerprint, device_name, device_type, is_active,
first_activated_at, last_seen_at, operating_system, app_version)
VALUES (%s, %s, %s, %s, true, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP, %s, %s)
""", (license_id, hardware_id, device_name, device_type,
data.get('operating_system', 'unknown'), data.get('app_version')))
conn.commit()
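The guard in the device-limit check above now falls back to a default limit when the license row carries none. A minimal sketch of that decision, assuming the same default of 3 (`may_register` is a hypothetical name):

```python
def may_register(active_device_count, license_data):
    # Mirrors the guard above: license_data.get('device_limit', 3) supplies
    # a default when the column is missing from the fetched row.
    return active_device_count < license_data.get('device_limit', 3)
```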
@@ -332,7 +330,7 @@ def deactivate_device(license_id, device_id):
try:
# Check whether the device belongs to the license
cur.execute("""
SELECT dr.device_name, dr.hardware_id, l.license_key
SELECT dr.device_name, dr.hardware_fingerprint, l.license_key
FROM device_registrations dr
JOIN licenses l ON dr.license_id = l.id
WHERE dr.id = %s AND l.id = %s
@@ -345,15 +343,15 @@ def deactivate_device(license_id, device_id):
# Deactivate the device
cur.execute("""
UPDATE device_registrations
SET is_active = false
SET is_active = false, deactivated_at = CURRENT_TIMESTAMP, deactivated_by = %s
WHERE id = %s
""", (device_id,))
""", (session.get('username'), device_id))
# End active sessions
cur.execute("""
UPDATE sessions
SET is_active = false, ended_at = CURRENT_TIMESTAMP
WHERE license_key = %s AND hardware_id = %s AND is_active = true
WHERE license_key = %s AND hardware_fingerprint = %s AND is_active = true
""", (device[2], device[1]))
conn.commit()
@@ -440,7 +438,7 @@ def bulk_delete_licenses():
try:
cur.execute("""
SELECT COUNT(*)
FROM activations
FROM device_registrations
WHERE license_id = %s
AND is_active = true
""", (license_id,))
@@ -451,7 +449,7 @@ def bulk_delete_licenses():
skipped_licenses.append(license_id)
continue
except:
# If activations table doesn't exist, continue
# If device_registrations table doesn't exist, continue
pass
# Delete associated data
@@ -468,7 +466,7 @@ def bulk_delete_licenses():
pass
try:
cur.execute("DELETE FROM activations WHERE license_id = %s", (license_id,))
cur.execute("DELETE FROM license_sessions WHERE license_id = %s", (license_id,))
except:
pass
@@ -946,9 +944,9 @@ def global_search():
# Search sessions
cur.execute("""
SELECT id, license_key, username, hardware_id, is_active
SELECT id, license_key, username, hardware_fingerprint as hardware_id, is_active
FROM sessions
WHERE username ILIKE %s OR hardware_id ILIKE %s
WHERE username ILIKE %s OR hardware_fingerprint ILIKE %s
ORDER BY started_at DESC
LIMIT 10
""", (f'%{query}%', f'%{query}%'))

View file

@@ -81,12 +81,14 @@ def batch_create():
INSERT INTO licenses (
license_key, customer_id,
license_type, valid_from, valid_until, device_limit,
max_devices, max_concurrent_sessions,
is_fake, created_at
) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
""", (
license_key, customer_id,
license_type, valid_from, valid_until, device_limit,
device_limit, 1, # max_devices = device_limit, max_concurrent_sessions = 1 (default)
is_fake, datetime.now()
))

View file

@@ -338,7 +338,9 @@ def api_customer_licenses(customer_id):
END as status,
COALESCE(l.domain_count, 0) as domain_count,
COALESCE(l.ipv4_count, 0) as ipv4_count,
COALESCE(l.phone_count, 0) as phone_count
COALESCE(l.phone_count, 0) as phone_count,
l.max_concurrent_sessions,
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions
FROM licenses l
WHERE l.customer_id = %s
ORDER BY l.created_at DESC, l.id DESC
@@ -379,6 +381,13 @@ def api_customer_licenses(customer_id):
elif res_row[1] == 'phone':
resources['phones'].append(resource_data)
# Count active devices from activations table
cur2.execute("""
SELECT COUNT(*) FROM activations
WHERE license_id = %s AND is_active = true
""", (license_id,))
active_device_count = cur2.fetchone()[0]
cur2.close()
conn2.close()
@@ -396,9 +405,10 @@ def api_customer_licenses(customer_id):
'domain_count': row[10],
'ipv4_count': row[11],
'phone_count': row[12],
'active_sessions': 0, # Platzhalter
'registered_devices': 0, # Platzhalter
'active_devices': 0, # Platzhalter
'max_concurrent_sessions': row[13],
'active_sessions': row[14],
'registered_devices': active_device_count,
'active_devices': active_device_count,
'actual_domain_count': len(resources['domains']),
'actual_ipv4_count': len(resources['ipv4s']),
'actual_phone_count': len(resources['phones']),

View file

@@ -32,6 +32,7 @@ def export_licenses():
l.valid_until,
l.is_active,
l.device_limit,
l.max_concurrent_sessions,
l.created_at,
l.is_fake,
CASE
@@ -39,8 +40,8 @@ def export_licenses():
WHEN l.is_active = false THEN 'Deaktiviert'
ELSE 'Aktiv'
END as status,
(SELECT COUNT(*) FROM sessions s WHERE s.license_key = l.license_key AND s.is_active = true) as active_sessions,
(SELECT COUNT(DISTINCT hardware_id) FROM sessions s WHERE s.license_key = l.license_key) as registered_devices
(SELECT COUNT(*) FROM license_sessions ls WHERE ls.license_id = l.id) as active_sessions,
(SELECT COUNT(DISTINCT hardware_fingerprint) FROM device_registrations dr WHERE dr.license_id = l.id AND dr.is_active = true) as registered_devices
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
WHERE l.is_fake = false
@@ -52,7 +53,7 @@ def export_licenses():
# Prepare data for export
data = []
columns = ['ID', 'Lizenzschlüssel', 'Kunde', 'E-Mail', 'Typ', 'Gültig von',
'Gültig bis', 'Aktiv', 'Gerätelimit', 'Erstellt am', 'Fake-Lizenz',
'Gültig bis', 'Aktiv', 'Gerätelimit', 'Max. Sessions', 'Erstellt am', 'Fake-Lizenz',
'Status', 'Aktive Sessions', 'Registrierte Geräte']
for row in cur.fetchall():
@@ -62,8 +63,8 @@ def export_licenses():
row_data[5] = format_datetime_for_export(row_data[5])
if row_data[6]: # valid_until
row_data[6] = format_datetime_for_export(row_data[6])
if row_data[9]: # created_at
row_data[9] = format_datetime_for_export(row_data[9])
if row_data[10]: # created_at (index shifted due to max_concurrent_sessions)
row_data[10] = format_datetime_for_export(row_data[10])
data.append(row_data)
# Check the requested export format
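The index shift handled in the export loop above follows directly from the column list: inserting 'Max. Sessions' pushes 'Erstellt am' from index 9 to 10. A quick sketch that locates the datetime columns (helper name is illustrative):

```python
# Column list as declared in export_licenses above.
EXPORT_COLUMNS = ['ID', 'Lizenzschlüssel', 'Kunde', 'E-Mail', 'Typ', 'Gültig von',
                  'Gültig bis', 'Aktiv', 'Gerätelimit', 'Max. Sessions', 'Erstellt am',
                  'Fake-Lizenz', 'Status', 'Aktive Sessions', 'Registrierte Geräte']

def datetime_column_indexes(columns):
    # Returns the positions the export loop must format as datetimes;
    # with 'Max. Sessions' inserted, created_at lands at index 10.
    return [i for i, name in enumerate(columns)
            if name in ('Gültig von', 'Gültig bis', 'Erstellt am')]
```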
@@ -239,7 +240,7 @@ def export_sessions():
s.license_key,
l.customer_name,
s.username,
s.hardware_id,
s.hardware_fingerprint as hardware_id,
s.started_at,
s.ended_at,
s.last_heartbeat,
@@ -259,7 +260,7 @@ def export_sessions():
s.license_key,
l.customer_name,
s.username,
s.hardware_id,
s.hardware_fingerprint as hardware_id,
s.started_at,
s.ended_at,
s.last_heartbeat,
@@ -416,7 +417,7 @@ def export_monitoring():
lh.license_id,
l.license_key,
c.name as customer_name,
lh.hardware_id,
lh.hardware_fingerprint as hardware_id,
lh.ip_address,
'Heartbeat' as event_type,
'Normal' as severity,
@@ -447,7 +448,7 @@ def export_monitoring():
ad.license_id,
l.license_key,
c.name as customer_name,
ad.details->>'hardware_id' as hardware_id,
ad.details->>'hardware_fingerprint' as hardware_id,
ad.details->>'ip_address' as ip_address,
ad.anomaly_type as event_type,
ad.severity,

View file

@@ -118,13 +118,14 @@ def edit_license(license_id):
'valid_from': request.form['valid_from'],
'valid_until': request.form['valid_until'],
'is_active': 'is_active' in request.form,
'device_limit': int(request.form.get('device_limit', 3))
'max_devices': int(request.form.get('device_limit', 3)), # Form still uses device_limit
'max_concurrent_sessions': int(request.form.get('max_concurrent_sessions', 1))
}
cur.execute("""
UPDATE licenses
SET license_key = %s, license_type = %s, valid_from = %s,
valid_until = %s, is_active = %s, device_limit = %s
valid_until = %s, is_active = %s, max_devices = %s, max_concurrent_sessions = %s
WHERE id = %s
""", (
new_values['license_key'],
@@ -132,7 +133,8 @@ def edit_license(license_id):
new_values['valid_from'],
new_values['valid_until'],
new_values['is_active'],
new_values['device_limit'],
new_values['max_devices'],
new_values['max_concurrent_sessions'],
license_id
))
@@ -146,7 +148,8 @@ def edit_license(license_id):
'valid_from': str(current_license.get('valid_from', '')),
'valid_until': str(current_license.get('valid_until', '')),
'is_active': current_license.get('is_active'),
'device_limit': current_license.get('device_limit', 3)
'max_devices': current_license.get('max_devices', 3),
'max_concurrent_sessions': current_license.get('max_concurrent_sessions', 1)
},
new_values=new_values)
@@ -313,6 +316,7 @@ def create_license():
ipv4_count = int(request.form.get("ipv4_count", 1))
phone_count = int(request.form.get("phone_count", 1))
device_limit = int(request.form.get("device_limit", 3))
max_concurrent_sessions = int(request.form.get("max_concurrent_sessions", 1))
conn = get_connection()
cur = conn.cursor()
@@ -365,11 +369,11 @@ def create_license():
# Insert the license
cur.execute("""
INSERT INTO licenses (license_key, customer_id, license_type, valid_from, valid_until, is_active,
domain_count, ipv4_count, phone_count, device_limit, is_fake)
VALUES (%s, %s, %s, %s, %s, TRUE, %s, %s, %s, %s, %s)
domain_count, ipv4_count, phone_count, device_limit, max_devices, max_concurrent_sessions, is_fake)
VALUES (%s, %s, %s, %s, %s, TRUE, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
""", (license_key, customer_id, license_type, valid_from, valid_until,
domain_count, ipv4_count, phone_count, device_limit, is_fake))
domain_count, ipv4_count, phone_count, device_limit, device_limit, max_concurrent_sessions, is_fake))
license_id = cur.fetchone()[0]
# Assign resources

View file

@@ -91,7 +91,7 @@ def unified_monitoring():
SELECT
COUNT(DISTINCT license_id) as active_licenses,
COUNT(*) as total_validations,
COUNT(DISTINCT hardware_id) as unique_devices,
COUNT(DISTINCT hardware_fingerprint) as unique_devices,
COUNT(DISTINCT ip_address) as unique_ips,
0 as avg_response_time
FROM license_heartbeats
@@ -126,7 +126,7 @@ def unified_monitoring():
l.license_key,
c.name as customer_name,
lh.ip_address,
lh.hardware_id,
lh.hardware_fingerprint,
NULL as anomaly_type,
NULL as description
FROM license_heartbeats lh
@@ -143,8 +143,8 @@ def unified_monitoring():
ad.severity,
l.license_key,
c.name as customer_name,
ad.details->>'ip_address' as ip_address,
ad.details->>'hardware_id' as hardware_id,
(ad.details->>'ip_address')::inet as ip_address,
ad.details->>'hardware_fingerprint' as hardware_fingerprint,
ad.anomaly_type,
ad.details->>'description' as description
FROM anomaly_detections ad
@@ -166,7 +166,7 @@ def unified_monitoring():
l.license_key,
c.name as customer_name,
lh.ip_address,
lh.hardware_id,
lh.hardware_fingerprint,
NULL as anomaly_type,
NULL as description
FROM license_heartbeats lh
@@ -199,7 +199,7 @@ def unified_monitoring():
l.id,
l.license_key,
c.name as customer_name,
COUNT(DISTINCT lh.hardware_id) as device_count,
COUNT(DISTINCT lh.hardware_fingerprint) as device_count,
COUNT(lh.*) as validation_count,
MAX(lh.timestamp) as last_seen,
COUNT(DISTINCT ad.id) as anomaly_count
@@ -220,7 +220,7 @@ def unified_monitoring():
l.id,
l.license_key,
c.name as customer_name,
COUNT(DISTINCT lh.hardware_id) as device_count,
COUNT(DISTINCT lh.hardware_fingerprint) as device_count,
COUNT(lh.*) as validation_count,
MAX(lh.timestamp) as last_seen,
0 as anomaly_count
@@ -345,7 +345,7 @@ def analytics():
SELECT
COUNT(DISTINCT license_id) as active_licenses,
COUNT(*) as total_validations,
COUNT(DISTINCT hardware_id) as unique_devices,
COUNT(DISTINCT hardware_fingerprint) as unique_devices,
COUNT(DISTINCT ip_address) as unique_ips
FROM license_heartbeats
WHERE timestamp > NOW() - INTERVAL '5 minutes'
@@ -403,7 +403,7 @@ def analytics_stream():
SELECT
COUNT(DISTINCT license_id) as active_licenses,
COUNT(*) as total_validations,
COUNT(DISTINCT hardware_id) as unique_devices,
COUNT(DISTINCT hardware_fingerprint) as unique_devices,
COUNT(DISTINCT ip_address) as unique_ips
FROM license_heartbeats
WHERE timestamp > NOW() - INTERVAL '5 minutes'
@@ -425,4 +425,15 @@ def analytics_stream():
time.sleep(5) # Update every 5 seconds
from flask import Response
return Response(generate(), mimetype="text/event-stream")
return Response(generate(), mimetype="text/event-stream")
@monitoring_bp.route('/device_limits')
@login_required
def device_limits():
"""Device limit monitoring dashboard"""
from utils.device_monitoring import check_device_limits, get_device_usage_stats
warnings = check_device_limits()
stats = get_device_usage_stats()
return render_template('monitoring/device_limits.html', warnings=warnings, stats=stats)

View file

@@ -8,6 +8,7 @@ from auth.decorators import login_required
from utils.audit import log_audit
from utils.network import get_client_ip
from db import get_connection, get_db_connection, get_db_cursor
from db_license import get_license_db_cursor
from models import get_active_sessions
# Create Blueprint
@@ -17,37 +18,72 @@ session_bp = Blueprint('sessions', __name__)
@session_bp.route("/sessions")
@login_required
def sessions():
# Use regular DB for customer/license info
conn = get_connection()
cur = conn.cursor()
try:
# Get is_active sessions with calculated inactive time
cur.execute("""
SELECT s.id, s.session_id, l.license_key, c.name, s.ip_address,
s.user_agent, s.started_at, s.last_heartbeat,
EXTRACT(EPOCH FROM (NOW() - s.last_heartbeat))/60 as minutes_inactive
FROM sessions s
JOIN licenses l ON s.license_id = l.id
JOIN customers c ON l.customer_id = c.id
WHERE s.is_active = TRUE
ORDER BY s.last_heartbeat DESC
""")
active_sessions = cur.fetchall()
# First get license mapping from admin DB
cur.execute("SELECT id, license_key FROM licenses")
license_map = {row[0]: row[1] for row in cur.fetchall()}
# Get recent ended sessions
cur.execute("""
SELECT s.id, s.session_id, l.license_key, c.name, s.ip_address,
s.started_at, s.ended_at,
EXTRACT(EPOCH FROM (s.ended_at - s.started_at))/60 as duration_minutes
FROM sessions s
JOIN licenses l ON s.license_id = l.id
JOIN customers c ON l.customer_id = c.id
WHERE s.is_active = FALSE
AND s.ended_at > NOW() - INTERVAL '24 hours'
ORDER BY s.ended_at DESC
LIMIT 50
""")
recent_sessions = cur.fetchall()
# Get customer mapping
cur.execute("SELECT l.id, c.name FROM licenses l JOIN customers c ON l.customer_id = c.id")
customer_map = {row[0]: row[1] for row in cur.fetchall()}
cur.close()
conn.close()
# Now get sessions from license server DB
with get_license_db_cursor() as license_cur:
# Get active sessions
license_cur.execute("""
SELECT id, license_id, session_token, ip_address, client_version,
started_at, last_heartbeat, hardware_id,
EXTRACT(EPOCH FROM (NOW() - last_heartbeat))/60 as minutes_inactive
FROM license_sessions
WHERE ended_at IS NULL
ORDER BY last_heartbeat DESC
""")
active_sessions = []
for row in license_cur.fetchall():
active_sessions.append((
row[0], # id
row[2], # session_token
license_map.get(row[1], 'Unknown'), # license_key
customer_map.get(row[1], 'Unknown'), # customer name
row[3], # ip_address
row[4], # client_version
row[5], # started_at
row[6], # last_heartbeat
row[8] # minutes_inactive
))
# Get recent ended sessions
license_cur.execute("""
SELECT id, license_id, session_token, ip_address,
started_at, ended_at,
EXTRACT(EPOCH FROM (ended_at - started_at))/60 as duration_minutes
FROM license_sessions
WHERE ended_at IS NOT NULL
AND ended_at > NOW() - INTERVAL '24 hours'
ORDER BY ended_at DESC
LIMIT 50
""")
recent_sessions = []
for row in license_cur.fetchall():
recent_sessions.append((
row[0], # id
row[2], # session_token
license_map.get(row[1], 'Unknown'), # license_key
customer_map.get(row[1], 'Unknown'), # customer name
row[3], # ip_address
row[4], # started_at
row[5], # ended_at
row[6] # duration_minutes
))
return render_template("sessions.html",
active_sessions=active_sessions,
@@ -57,9 +93,6 @@ def sessions():
logging.error(f"Error loading sessions: {str(e)}")
flash('Fehler beim Laden der Sessions!', 'error')
return redirect(url_for('admin.dashboard'))
finally:
cur.close()
conn.close()
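The rewritten `sessions()` view above joins license-server session rows to admin-DB lookups via two dicts, degrading missing ids to 'Unknown' instead of raising. A minimal sketch of that mapping (function name and truncated row shape are assumptions):

```python
def resolve_session_row(row, license_map, customer_map):
    # row is (id, license_id, session_token, ...); unknown license ids
    # fall back to 'Unknown' via dict.get, as in the loop above.
    license_id = row[1]
    return (
        row[0],                                   # id
        row[2],                                   # session_token
        license_map.get(license_id, 'Unknown'),   # license_key
        customer_map.get(license_id, 'Unknown'),  # customer name
    )
```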
@session_bp.route("/sessions/history")
@@ -78,19 +111,20 @@ def session_history():
# Base query
query = """
SELECT
s.id,
s.license_key,
s.username,
s.hardware_id,
s.started_at,
s.ended_at,
s.last_heartbeat,
s.is_active,
l.customer_name,
ls.id,
l.license_key,
ls.machine_name as username,
ls.hardware_id,
ls.started_at,
ls.ended_at,
ls.last_heartbeat,
CASE WHEN ls.ended_at IS NULL THEN true ELSE false END as is_active,
c.name as customer_name,
l.license_type,
l.is_test
FROM sessions s
LEFT JOIN licenses l ON s.license_key = l.license_key
FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
LEFT JOIN customers c ON l.customer_id = c.id
WHERE 1=1
"""
@@ -98,18 +132,18 @@ def session_history():
# Apply filters
if license_key:
query += " AND s.license_key = %s"
query += " AND l.license_key = %s"
params.append(license_key)
if username:
query += " AND s.username ILIKE %s"
query += " AND ls.machine_name ILIKE %s"
params.append(f'%{username}%')
# Time filter
query += " AND s.started_at >= CURRENT_TIMESTAMP - INTERVAL '%s days'"
query += " AND ls.started_at >= CURRENT_TIMESTAMP - INTERVAL '%s days'"
params.append(days)
query += " ORDER BY s.started_at DESC LIMIT 1000"
query += " ORDER BY ls.started_at DESC LIMIT 1000"
cur.execute(query, params)
@@ -144,11 +178,12 @@ def session_history():
# Get unique license keys for filter dropdown
cur.execute("""
SELECT DISTINCT s.license_key, l.customer_name
FROM sessions s
LEFT JOIN licenses l ON s.license_key = l.license_key
WHERE s.started_at >= CURRENT_TIMESTAMP - INTERVAL '30 days'
ORDER BY l.customer_name, s.license_key
SELECT DISTINCT l.license_key, c.name as customer_name
FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
LEFT JOIN customers c ON l.customer_id = c.id
WHERE ls.started_at >= CURRENT_TIMESTAMP - INTERVAL '30 days'
ORDER BY c.name, l.license_key
""")
available_licenses = []
@@ -180,44 +215,48 @@ def session_history():
@login_required
def terminate_session(session_id):
"""Terminates an active session"""
conn = get_connection()
cur = conn.cursor()
try:
# Get session info
cur.execute("""
SELECT license_key, username, hardware_id
FROM sessions
WHERE id = %s AND is_active = true
""", (session_id,))
session_info = None
session_info = cur.fetchone()
if not session_info:
flash('Session nicht gefunden oder bereits beendet!', 'error')
return redirect(url_for('sessions.sessions'))
# Get session info from license server DB
with get_license_db_cursor() as license_cur:
license_cur.execute("""
SELECT license_id, hardware_id, machine_name
FROM license_sessions
WHERE id = %s AND ended_at IS NULL
""", (session_id,))
result = license_cur.fetchone()
if not result:
flash('Session nicht gefunden oder bereits beendet!', 'error')
return redirect(url_for('sessions.sessions'))
license_id = result[0]
# Terminate session in license server DB
license_cur.execute("""
UPDATE license_sessions
SET ended_at = CURRENT_TIMESTAMP, end_reason = 'admin_terminated'
WHERE id = %s
""", (session_id,))
# Terminate session
cur.execute("""
UPDATE sessions
SET is_active = false, ended_at = CURRENT_TIMESTAMP
WHERE id = %s
""", (session_id,))
conn.commit()
# Get license key from admin DB for audit log
conn = get_connection()
cur = conn.cursor()
cur.execute("SELECT license_key FROM licenses WHERE id = %s", (license_id,))
row = cur.fetchone()
license_key = row[0] if row else 'Unknown'
cur.close()
conn.close()
# Audit log
log_audit('SESSION_TERMINATE', 'session', session_id,
additional_info=f"Session beendet für {session_info[1]} auf Lizenz {session_info[0]}")
additional_info=f"Session beendet für Lizenz {license_key}")
flash('Session erfolgreich beendet!', 'success')
except Exception as e:
conn.rollback()
logging.error(f"Fehler beim Beenden der Session: {str(e)}")
flash('Fehler beim Beenden der Session!', 'error')
finally:
cur.close()
conn.close()
return redirect(url_for('sessions.sessions'))
@@ -230,10 +269,11 @@ def terminate_all_sessions(license_key):
cur = conn.cursor()
try:
# Count is_active sessions
# Count active sessions
cur.execute("""
SELECT COUNT(*) FROM sessions
WHERE license_key = %s AND is_active = true
SELECT COUNT(*) FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
WHERE l.license_key = %s AND ls.ended_at IS NULL
""", (license_key,))
active_count = cur.fetchone()[0]
@@ -244,9 +284,11 @@ def terminate_all_sessions(license_key):
# Terminate all sessions
cur.execute("""
UPDATE sessions
SET is_active = false, ended_at = CURRENT_TIMESTAMP
WHERE license_key = %s AND is_active = true
UPDATE license_sessions
SET ended_at = CURRENT_TIMESTAMP, end_reason = 'admin_terminated_all'
WHERE license_id IN (
SELECT id FROM licenses WHERE license_key = %s
) AND ended_at IS NULL
""", (license_key,))
conn.commit()
@@ -280,8 +322,8 @@ def cleanup_sessions():
# Delete old inactive sessions
cur.execute("""
DELETE FROM sessions
WHERE is_active = false
DELETE FROM license_sessions
WHERE ended_at IS NOT NULL
AND ended_at < CURRENT_TIMESTAMP - INTERVAL '%s days'
RETURNING id
""", (days,))
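Several queries above compute inactivity and duration as `EXTRACT(EPOCH FROM (...))/60`. A Python analogue of that arithmetic, sketched with an illustrative name:

```python
from datetime import datetime

def minutes_inactive(last_heartbeat, now=None):
    # Same computation as EXTRACT(EPOCH FROM (NOW() - last_heartbeat))/60
    # in the session queries above: elapsed seconds divided by 60.
    now = now or datetime.now()
    return (now - last_heartbeat).total_seconds() / 60
```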
@@ -320,12 +362,13 @@ def session_statistics():
# Current statistics
cur.execute("""
SELECT
COUNT(DISTINCT s.license_key) as active_licenses,
COUNT(DISTINCT s.username) as unique_users,
COUNT(DISTINCT s.hardware_id) as unique_devices,
COUNT(DISTINCT l.license_key) as active_licenses,
COUNT(DISTINCT ls.machine_name) as unique_users,
COUNT(DISTINCT ls.hardware_id) as unique_devices,
COUNT(*) as total_active_sessions
FROM sessions s
WHERE s.is_active = true
FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
WHERE ls.ended_at IS NULL
""")
current_stats = cur.fetchone()
@@ -335,9 +378,9 @@ def session_statistics():
SELECT
l.license_type,
COUNT(*) as session_count
FROM sessions s
JOIN licenses l ON s.license_key = l.license_key
WHERE s.is_active = true
FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
WHERE ls.ended_at IS NULL
GROUP BY l.license_type
ORDER BY session_count DESC
""")
@@ -352,14 +395,15 @@ def session_statistics():
# Top 10 Lizenzen nach aktiven Sessions
cur.execute("""
SELECT
s.license_key,
l.customer_name,
l.license_key,
c.name as customer_name,
COUNT(*) as session_count,
l.device_limit
FROM sessions s
JOIN licenses l ON s.license_key = l.license_key
WHERE s.is_active = true
GROUP BY s.license_key, l.customer_name, l.device_limit
FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
JOIN customers c ON l.customer_id = c.id
WHERE ls.ended_at IS NULL
GROUP BY l.license_key, c.name, l.device_limit
ORDER BY session_count DESC
LIMIT 10
""")
@@ -376,13 +420,14 @@ def session_statistics():
# Session-Verlauf (letzte 7 Tage)
cur.execute("""
SELECT
DATE(started_at) as date,
DATE(ls.started_at) as date,
COUNT(*) as login_count,
COUNT(DISTINCT license_key) as unique_licenses,
COUNT(DISTINCT username) as unique_users
FROM sessions
WHERE started_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY DATE(started_at)
COUNT(DISTINCT l.license_key) as unique_licenses,
COUNT(DISTINCT ls.machine_name) as unique_users
FROM license_sessions ls
JOIN licenses l ON ls.license_id = l.id
WHERE ls.started_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY DATE(ls.started_at)
ORDER BY date
""")
@@ -399,9 +444,8 @@ def session_statistics():
cur.execute("""
SELECT
AVG(EXTRACT(EPOCH FROM (ended_at - started_at))/3600) as avg_duration_hours
FROM sessions
WHERE is_active = false
AND ended_at IS NOT NULL
FROM license_sessions
WHERE ended_at IS NOT NULL
AND ended_at - started_at < INTERVAL '24 hours'
AND started_at >= CURRENT_DATE - INTERVAL '30 days'
""")

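The rewritten queries in this file all share one pattern: the old `sessions` table with a `license_key` column and an `is_active` flag is replaced by `license_sessions`, where a session counts as active while `ended_at IS NULL` and the license key is reached via a join on `licenses`. A minimal self-contained sketch of that pattern (SQLite in-memory; schema and sample data are illustrative, not the production schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE licenses (id INTEGER PRIMARY KEY, license_key TEXT)")
cur.execute("""
    CREATE TABLE license_sessions (
        id INTEGER PRIMARY KEY,
        license_id INTEGER REFERENCES licenses(id),
        ended_at TIMESTAMP  -- NULL while the session is active
    )
""")
cur.execute("INSERT INTO licenses VALUES (1, 'AAAA-BBBB-CCCC-DDDD')")
cur.executemany(
    "INSERT INTO license_sessions (license_id, ended_at) VALUES (?, ?)",
    [(1, None), (1, None), (1, "2025-07-03 12:00:00")],
)

# Same shape as the rewritten count query in terminate_all_sessions()
cur.execute("""
    SELECT COUNT(*) FROM license_sessions ls
    JOIN licenses l ON ls.license_id = l.id
    WHERE l.license_key = ? AND ls.ended_at IS NULL
""", ("AAAA-BBBB-CCCC-DDDD",))
active_count = cur.fetchone()[0]
print(active_count)  # 2 of the 3 sessions are still active
```

Keeping ended sessions as rows (rather than toggling a flag or deleting them) is also what lets the statistics queries later in this file compute durations from `ended_at - started_at`.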

@@ -15,7 +15,7 @@ def scheduled_backup():
def cleanup_expired_sessions():
"""Clean up expired license sessions"""
"""Clean up expired license sessions - concurrent sessions aware"""
try:
conn = get_connection()
cur = conn.cursor()
@@ -29,11 +29,12 @@ def cleanup_expired_sessions():
result = cur.fetchone()
timeout_seconds = result[0] if result else 60
# Find expired sessions
# Find expired sessions that are still active
cur.execute("""
SELECT id, license_id, hardware_id, ip_address, client_version, started_at
SELECT id, license_id, hardware_fingerprint, ip_address, client_version, started_at, hardware_id, machine_name
FROM license_sessions
WHERE last_heartbeat < CURRENT_TIMESTAMP - INTERVAL '%s seconds'
WHERE ended_at IS NULL
AND last_heartbeat < CURRENT_TIMESTAMP - INTERVAL '%s seconds'
""", (timeout_seconds,))
expired_sessions = cur.fetchall()
@@ -41,19 +42,32 @@ def cleanup_expired_sessions():
if expired_sessions:
logging.info(f"Found {len(expired_sessions)} expired sessions to clean up")
# Count sessions by license before cleanup for logging
license_session_counts = {}
for session in expired_sessions:
license_id = session[1]
if license_id not in license_session_counts:
license_session_counts[license_id] = 0
license_session_counts[license_id] += 1
for session in expired_sessions:
# Log to history
cur.execute("""
INSERT INTO session_history
(license_id, hardware_id, ip_address, client_version, started_at, ended_at, end_reason)
VALUES (%s, %s, %s, %s, %s, CURRENT_TIMESTAMP, 'timeout')
""", (session[1], session[2], session[3], session[4], session[5]))
(license_id, hardware_id, hardware_fingerprint, machine_name, ip_address, client_version, started_at, ended_at, end_reason)
VALUES (%s, %s, %s, %s, %s, %s, %s, CURRENT_TIMESTAMP, 'timeout')
""", (session[1], session[6], session[2], session[7], session[3], session[4], session[5]))
# Delete session
cur.execute("DELETE FROM license_sessions WHERE id = %s", (session[0],))
# Mark session as ended instead of deleting
cur.execute("UPDATE license_sessions SET ended_at = CURRENT_TIMESTAMP, end_reason = 'timeout' WHERE id = %s", (session[0],))
conn.commit()
logging.info(f"Cleaned up {len(expired_sessions)} expired sessions")
# Log cleanup summary
logging.info(f"Cleaned up {len(expired_sessions)} expired sessions from {len(license_session_counts)} licenses")
for license_id, count in license_session_counts.items():
if count > 1:
logging.info(f" License ID {license_id}: {count} sessions cleaned up")
cur.close()
conn.close()

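The cleanup job above sweeps sessions whose `last_heartbeat` is older than the configured timeout and now marks them ended instead of deleting them. A hedged sketch of that sweep (SQLite in-memory; timestamps and schema are illustrative only):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE license_sessions (
        id INTEGER PRIMARY KEY,
        license_id INTEGER,
        last_heartbeat TEXT,
        ended_at TEXT,
        end_reason TEXT
    )
""")

now = datetime(2025, 7, 3, 12, 0, 0)
timeout_seconds = 60

def iso(dt):
    # ISO-8601 strings compare correctly as text in SQLite
    return dt.isoformat(sep=" ")

cur.executemany(
    "INSERT INTO license_sessions VALUES (?, ?, ?, ?, ?)",
    [
        (1, 1, iso(now - timedelta(seconds=30)), None, None),   # fresh heartbeat
        (2, 1, iso(now - timedelta(seconds=300)), None, None),  # stale
        (3, 2, iso(now - timedelta(seconds=600)), None, None),  # stale
    ],
)

# Mark stale sessions as ended instead of deleting them,
# mirroring the rewritten UPDATE in cleanup_expired_sessions()
cutoff = iso(now - timedelta(seconds=timeout_seconds))
cur.execute("""
    UPDATE license_sessions
    SET ended_at = ?, end_reason = 'timeout'
    WHERE ended_at IS NULL AND last_heartbeat < ?
""", (iso(now), cutoff))
print(cur.rowcount)  # number of stale sessions swept
```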

@@ -219,7 +219,7 @@ function createBackup(type) {
btn.disabled = true;
btn.innerHTML = '<span class="spinner-border spinner-border-sm"></span> Erstelle...';
fetch('/backups/backup/create', {
fetch('/admin/backup/create', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
@@ -249,7 +249,7 @@ function createBackup(type) {
}
function downloadFromGitHub(backupId) {
window.location.href = `/backups/backup/download/${backupId}?from_github=true`;
window.location.href = `/admin/backup/download/${backupId}?from_github=true`;
}
function showRestoreModal(backupId) {
@@ -261,7 +261,7 @@ function showRestoreModal(backupId) {
function confirmRestore() {
const encryptionKey = document.getElementById('encryptionKey').value;
fetch(`/backups/backup/restore/${selectedBackupId}`, {
fetch(`/admin/backup/restore/${selectedBackupId}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',


@@ -350,7 +350,7 @@
</ul>
<div class="d-flex align-items-center">
<div id="session-timer" class="timer-normal me-3">
⏱️ <span id="timer-display">5:00</span>
⏱️ <span id="timer-display">15:00</span>
</div>
<span class="text-white me-3">Angemeldet als: {{ username }}</span>
<a href="{{ url_for('auth.profile') }}" class="btn btn-outline-light btn-sm me-2">👤 Profil</a>
@@ -482,7 +482,7 @@
<script>
// Session-Timer Konfiguration
const SESSION_TIMEOUT = 5 * 60; // 5 Minuten in Sekunden
const SESSION_TIMEOUT = 15 * 60; // 15 Minuten in Sekunden
let timeRemaining = SESSION_TIMEOUT;
let timerInterval;
let warningShown = false;


@@ -174,6 +174,19 @@
Jede generierte Lizenz kann auf maximal dieser Anzahl von Geräten gleichzeitig aktiviert werden.
</small>
</div>
<div class="col-md-6">
<label for="concurrentSessions" class="form-label">
Max. gleichzeitige Sessions pro Lizenz
</label>
<select class="form-select" id="concurrentSessions" name="max_concurrent_sessions" required>
{% for i in range(1, 11) %}
<option value="{{ i }}" {% if i == 1 %}selected{% endif %}>{{ i }} {% if i == 1 %}Session{% else %}Sessions{% endif %}</option>
{% endfor %}
</select>
<small class="form-text text-muted">
Wie viele Geräte können gleichzeitig online sein. Muss kleiner oder gleich dem Gerätelimit sein.
</small>
</div>
</div>
</div>
</div>
@@ -246,6 +259,30 @@ document.getElementById('validFrom').addEventListener('change', calculateValidUn
document.getElementById('duration').addEventListener('input', calculateValidUntil);
document.getElementById('durationType').addEventListener('change', calculateValidUntil);
// Funktion zur Anpassung der max_concurrent_sessions Optionen
function updateConcurrentSessionsOptions() {
const deviceLimit = parseInt(document.getElementById('deviceLimit').value);
const concurrentSelect = document.getElementById('concurrentSessions');
const currentValue = parseInt(concurrentSelect.value);
// Clear current options
concurrentSelect.innerHTML = '';
// Add new options up to device limit
for (let i = 1; i <= Math.min(deviceLimit, 10); i++) {
const option = document.createElement('option');
option.value = i;
option.text = i + (i === 1 ? ' Session' : ' Sessions');
if (i === Math.min(currentValue, deviceLimit)) {
option.selected = true;
}
concurrentSelect.appendChild(option);
}
}
// Event Listener für Device Limit Änderungen
document.getElementById('deviceLimit').addEventListener('change', updateConcurrentSessionsOptions);
// Setze heutiges Datum als Standard
document.addEventListener('DOMContentLoaded', function() {
const today = new Date().toISOString().split('T')[0];
@@ -510,5 +547,51 @@ function showCustomerTypeIndicator(type) {
function hideCustomerTypeIndicator() {
document.getElementById('customerTypeIndicator').classList.add('d-none');
}
// Validation for concurrent sessions vs device limit
document.getElementById('deviceLimit').addEventListener('change', validateSessionLimit);
document.getElementById('concurrentSessions').addEventListener('change', validateSessionLimit);
function validateSessionLimit() {
const deviceLimit = parseInt(document.getElementById('deviceLimit').value);
const concurrentSessions = parseInt(document.getElementById('concurrentSessions').value);
const sessionsSelect = document.getElementById('concurrentSessions');
// Update options to not exceed device limit
sessionsSelect.innerHTML = '';
for (let i = 1; i <= Math.min(10, deviceLimit); i++) {
const option = document.createElement('option');
option.value = i;
option.textContent = i + (i === 1 ? ' Session' : ' Sessions');
if (i === Math.min(concurrentSessions, deviceLimit)) {
option.selected = true;
}
sessionsSelect.appendChild(option);
}
// Show warning if adjusted
if (concurrentSessions > deviceLimit) {
const toast = document.createElement('div');
toast.className = 'toast align-items-center text-white bg-warning border-0 position-fixed bottom-0 end-0 m-3';
toast.setAttribute('role', 'alert');
toast.innerHTML = `
<div class="d-flex">
<div class="toast-body">
Gleichzeitige Sessions wurden auf ${deviceLimit} angepasst (Max. Gerätelimit).
</div>
<button type="button" class="btn-close btn-close-white me-2 m-auto" data-bs-dismiss="toast"></button>
</div>
`;
document.body.appendChild(toast);
const bsToast = new bootstrap.Toast(toast);
bsToast.show();
setTimeout(() => toast.remove(), 5000);
}
}
// Initialize validation on page load
document.addEventListener('DOMContentLoaded', function() {
validateSessionLimit();
});
</script>
{% endblock %}
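The dropdown logic above enforces `max_concurrent_sessions <= min(device_limit, 10)` only in the browser; the same invariant presumably needs to hold server-side when the form is processed. A sketch of such a clamp (the function name is hypothetical and not part of this diff):

```python
def clamp_concurrent_sessions(requested: int, device_limit: int) -> int:
    """Mirror the form rule: 1 <= sessions <= min(device_limit, 10).

    Hypothetical helper -- the diff only ships the client-side check.
    """
    if device_limit < 1:
        raise ValueError("device_limit must be at least 1")
    return max(1, min(requested, device_limit, 10))

print(clamp_concurrent_sessions(5, 3))    # 3  (clamped to the device limit)
print(clamp_concurrent_sessions(2, 8))    # 2  (already valid)
print(clamp_concurrent_sessions(99, 50))  # 10 (hard cap from the dropdown)
```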


@@ -367,6 +367,7 @@ function updateLicenseView(customerId, licenses) {
<th>Gültig bis</th>
<th>Status</th>
<th>Server Status</th>
<th>Sessions</th>
<th>Ressourcen</th>
<th>Aktionen</th>
</tr>
@@ -448,6 +449,11 @@ function updateLicenseView(customerId, licenses) {
<td>${license.valid_until || '-'}</td>
<td><span class="badge ${statusClass}">${license.status}</span></td>
<td>${serverStatusHtml}</td>
<td>
<span class="badge bg-info">
${license.active_sessions || 0}/${license.max_concurrent_sessions || 1}
</span>
</td>
<td class="resources-cell">
${resourcesHtml || '<span class="text-muted">-</span>'}
</td>
@@ -1086,7 +1092,7 @@ function showDeviceManagement(licenseId) {
content += `
<tr>
<td>${device.device_name}</td>
<td><small class="text-muted">${device.hardware_id.substring(0, 12)}...</small></td>
<td><small class="text-muted">${device.hardware_fingerprint.substring(0, 12)}...</small></td>
<td>${device.operating_system}</td>
<td>${device.first_seen}</td>
<td>${device.last_seen}</td>


@@ -156,6 +156,48 @@
</div>
</div>
<!-- Session Utilization -->
<div class="row g-3 mb-4">
<div class="col-12">
<div class="card">
<div class="card-header">
<h5 class="mb-0">
<i class="bi bi-broadcast"></i> Session-Auslastung
<span class="badge bg-info float-end">{{ stats.session_stats.total_active_sessions or 0 }} aktive Sessions</span>
</h5>
</div>
<div class="card-body">
<div class="row">
<div class="col-md-4">
<div class="text-center">
<h3 class="text-primary">{{ stats.session_stats.total_active_sessions or 0 }}</h3>
<p class="text-muted mb-0">Aktive Sessions</p>
</div>
</div>
<div class="col-md-4">
<div class="text-center">
<h3 class="text-success">{{ stats.session_stats.total_max_sessions or 0 }}</h3>
<p class="text-muted mb-0">Maximale Sessions</p>
</div>
</div>
<div class="col-md-4">
<div class="text-center">
<h3 class="text-warning">{{ stats.session_stats.utilization_percent or 0 }}%</h3>
<p class="text-muted mb-0">Auslastung</p>
</div>
</div>
</div>
{% if stats.session_stats.licenses_at_limit > 0 %}
<div class="alert alert-warning mt-3 mb-0">
<i class="bi bi-exclamation-triangle"></i>
<strong>{{ stats.session_stats.licenses_at_limit }}</strong> Lizenz(en) haben ihr Session-Limit erreicht
</div>
{% endif %}
</div>
</div>
</div>
</div>
<!-- Service Health Status -->
<div class="row g-3 mb-4">
<div class="col-12">

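The template above reads `stats.session_stats.total_active_sessions`, `total_max_sessions`, `utilization_percent`, and `licenses_at_limit`. How those figures are computed is not part of this diff; a plausible sketch, assuming one dict per license with its active session count and configured maximum:

```python
def session_utilization(licenses):
    """Aggregate per-license session counts into the dashboard figures.

    Input shape is assumed, e.g.
    {"active_sessions": 2, "max_concurrent_sessions": 2} per license.
    """
    total_active = sum(l["active_sessions"] for l in licenses)
    total_max = sum(l["max_concurrent_sessions"] for l in licenses)
    return {
        "total_active_sessions": total_active,
        "total_max_sessions": total_max,
        "utilization_percent": round(100 * total_active / total_max, 1) if total_max else 0,
        "licenses_at_limit": sum(
            1 for l in licenses
            if l["active_sessions"] >= l["max_concurrent_sessions"]
        ),
    }

stats = session_utilization([
    {"active_sessions": 2, "max_concurrent_sessions": 2},
    {"active_sessions": 1, "max_concurrent_sessions": 3},
])
print(stats["utilization_percent"], stats["licenses_at_limit"])  # 60.0 1
```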

@@ -65,6 +65,22 @@
</div>
</div>
<div class="row mb-3">
<div class="col-md-6">
<label for="concurrentSessions" class="form-label">Max. gleichzeitige Sessions</label>
<select class="form-select" id="concurrentSessions" name="max_concurrent_sessions" required>
{% set device_limit = license.get('device_limit', 3) %}
{% set upper_limit = device_limit + 1 if device_limit < 11 else 11 %}
{% for i in range(1, upper_limit) %}
<option value="{{ i }}" {% if license.get('max_concurrent_sessions', 1) == i %}selected{% endif %}>
{{ i }} {% if i == 1 %}Session{% else %}Sessions{% endif %}
</option>
{% endfor %}
</select>
<small class="form-text text-muted">Wie viele Geräte können gleichzeitig online sein</small>
</div>
</div>
<div class="alert {% if license.is_fake %}alert-warning{% else %}alert-success{% endif %} mt-3" role="alert">
<i class="fas fa-info-circle"></i>
<strong>Status:</strong>


@@ -153,6 +153,19 @@
Anzahl der Geräte, auf denen die Lizenz gleichzeitig aktiviert sein kann.
</small>
</div>
<div class="col-md-6">
<label for="concurrentSessions" class="form-label">
Max. gleichzeitige Sessions
</label>
<select class="form-select" id="concurrentSessions" name="max_concurrent_sessions" required>
{% for i in range(1, 11) %}
<option value="{{ i }}" {% if i == 1 %}selected{% endif %}>{{ i }} {% if i == 1 %}Session{% else %}Sessions{% endif %}</option>
{% endfor %}
</select>
<small class="form-text text-muted">
Wie viele Geräte können gleichzeitig online sein. Muss kleiner oder gleich dem Gerätelimit sein.
</small>
</div>
</div>
</div>
</div>
@@ -574,5 +587,51 @@ function showCustomerTypeIndicator(type) {
function hideCustomerTypeIndicator() {
document.getElementById('customerTypeIndicator').classList.add('d-none');
}
// Validation for concurrent sessions vs device limit
document.getElementById('deviceLimit').addEventListener('change', validateSessionLimit);
document.getElementById('concurrentSessions').addEventListener('change', validateSessionLimit);
function validateSessionLimit() {
const deviceLimit = parseInt(document.getElementById('deviceLimit').value);
const concurrentSessions = parseInt(document.getElementById('concurrentSessions').value);
const sessionsSelect = document.getElementById('concurrentSessions');
// Update options to not exceed device limit
sessionsSelect.innerHTML = '';
for (let i = 1; i <= Math.min(10, deviceLimit); i++) {
const option = document.createElement('option');
option.value = i;
option.textContent = i + (i === 1 ? ' Session' : ' Sessions');
if (i === Math.min(concurrentSessions, deviceLimit)) {
option.selected = true;
}
sessionsSelect.appendChild(option);
}
// Show warning if adjusted
if (concurrentSessions > deviceLimit) {
const toast = document.createElement('div');
toast.className = 'toast align-items-center text-white bg-warning border-0 position-fixed bottom-0 end-0 m-3';
toast.setAttribute('role', 'alert');
toast.innerHTML = `
<div class="d-flex">
<div class="toast-body">
Gleichzeitige Sessions wurden auf ${deviceLimit} angepasst (Max. Gerätelimit).
</div>
<button type="button" class="btn-close btn-close-white me-2 m-auto" data-bs-dismiss="toast"></button>
</div>
`;
document.body.appendChild(toast);
const bsToast = new bootstrap.Toast(toast);
bsToast.show();
setTimeout(() => toast.remove(), 5000);
}
}
// Initialize validation on page load
document.addEventListener('DOMContentLoaded', function() {
validateSessionLimit();
});
</script>
{% endblock %}


@@ -181,6 +181,7 @@
{{ sortable_header('Gültig von', 'valid_from', sort, order) }}
{{ sortable_header('Gültig bis', 'valid_until', sort, order) }}
{{ sortable_header('Status', 'status', sort, order) }}
<th>Sessions</th>
{{ sortable_header('Aktiv', 'active', sort, order) }}
<th>Aktionen</th>
</tr>
@@ -225,6 +226,11 @@
<span class="status-aktiv">✅ Aktiv</span>
{% endif %}
</td>
<td>
<span class="badge bg-info">
{{ license.active_sessions or 0 }}/{{ license.max_concurrent_sessions or 1 }}
</span>
</td>
<td>
<div class="form-check form-switch form-switch-custom">
<input class="form-check-input" type="checkbox"


@@ -384,7 +384,7 @@
`<div class="d-flex justify-content-between border-bottom py-2">
<span>
<code>${v.license_key}</code> |
<span class="text-muted">${v.hardware_id}</span>
<span class="text-muted">${v.hardware_fingerprint}</span>
</span>
<span>
<span class="badge bg-secondary">${v.ip_address}</span>

Some files are not shown because too many files were changed in this diff.