Local changes before sync

This commit is contained in:
2025-06-28 20:41:24 +00:00
Parent 972401cce9
Commit 3a75523384
1499 changed files with 44121 additions and 18 deletions

1
.gitattributes vendored Normal file

@@ -0,0 +1 @@
server-backups/*.tar.gz filter=lfs diff=lfs merge=lfs -text

711
API_REFERENCE.md Normal file

@@ -0,0 +1,711 @@
# V2-Docker API Reference
## Authentication
### API Key Authentication
All License Server API endpoints require authentication using an API key. The API key must be included in the request headers.
**Header Format:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**API Key Management:**
- API keys can be managed through the Admin Panel under "Lizenzserver Administration" → "System-API-Key generieren"
- Keys follow the format: `AF-YYYY-[32 random characters]`
- Only one system API key is active at a time
- Regenerating the key will immediately invalidate the old key
- The initial API key is automatically generated on first startup
- To retrieve the initial API key from database: `SELECT api_key FROM system_api_key WHERE id = 1;`
**Error Response (401 Unauthorized):**
```json
{
"error": "Invalid or missing API key",
"code": "INVALID_API_KEY",
"status": 401
}
```
## License Server API
**Base URL:** `https://api-software-undso.intelsight.de`
### Public Endpoints
#### GET /
Root endpoint - Service status.
**Response:**
```json
{
"status": "ok",
"service": "V2 License Server",
"timestamp": "2025-06-19T10:30:00Z"
}
```
#### GET /health
Health check endpoint.
**Response:**
```json
{
"status": "healthy",
"timestamp": "2025-06-19T10:30:00Z"
}
```
#### GET /metrics
Prometheus metrics endpoint.
**Response:**
Prometheus metrics in the text exposition format (`CONTENT_TYPE_LATEST`).
### License API Endpoints
All license endpoints require API key authentication via `X-API-Key` header.
#### POST /api/license/activate
Activate a license on a new system.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"hardware_hash": "unique-hardware-identifier",
"machine_name": "DESKTOP-ABC123",
"app_version": "1.0.0"
}
```
**Response:**
```json
{
  "message": "License activated successfully",
  "activation": {
    "id": 123,
    "license_key": "XXXX-XXXX-XXXX-XXXX",
    "hardware_hash": "unique-hardware-identifier",
    "machine_name": "DESKTOP-ABC123",
    "activated_at": "2025-06-19T10:30:00Z",
    "last_heartbeat": "2025-06-19T10:30:00Z",
    "is_active": true
  }
}
```
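The `hardware_hash` field is opaque to the server; it only has to be unique per machine and stable across runs. A minimal sketch of one way a client might derive it and assemble the activation body (the hashing scheme here is an assumption for illustration, not necessarily what the shipped client does):

```python
import hashlib
import platform
import uuid

def make_hardware_hash() -> str:
    """Derive a stable machine identifier. Illustrative scheme only:
    hostname plus MAC-derived node id, hashed with SHA-256."""
    raw = f"{platform.node()}-{uuid.getnode()}"
    return hashlib.sha256(raw.encode()).hexdigest()

def build_activation_request(license_key: str, app_version: str) -> dict:
    """Assemble the JSON body for POST /api/license/activate."""
    return {
        "license_key": license_key,
        "hardware_hash": make_hardware_hash(),
        "machine_name": platform.node(),
        "app_version": app_version,
    }
```

The resulting dict is what gets serialized into the request body shown above.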
#### POST /api/license/verify
Verify an active license.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"hardware_hash": "unique-hardware-identifier",
"app_version": "1.0.0"
}
```
**Response:**
```json
{
  "valid": true,
  "message": "License is valid",
  "license": {
    "key": "XXXX-XXXX-XXXX-XXXX",
    "valid_until": "2026-01-01",
    "max_users": 10
  },
  "update_available": false,
  "latest_version": "1.0.0"
}
```
#### GET /api/license/info/{license_key}
Get license information.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**Response:**
```json
{
  "license": {
    "id": 123,
    "key": "XXXX-XXXX-XXXX-XXXX",
    "customer_name": "ACME Corp",
    "type": "perpetual",
    "valid_from": "2025-01-01",
    "valid_until": "2026-01-01",
    "max_activations": 5,
    "max_users": 10,
    "is_active": true
  },
  "activations": [
    {
      "id": 456,
      "hardware_hash": "unique-hardware-identifier",
      "machine_name": "DESKTOP-ABC123",
      "activated_at": "2025-06-19T10:00:00Z",
      "last_heartbeat": "2025-06-19T14:30:00Z",
      "is_active": true
    }
  ]
}
```
### Session Management API Endpoints
**Note:** Session endpoints require that the client application is configured in the `client_configs` table. The default client "Account Forger" is pre-configured.
#### POST /api/license/session/start
Start a new session for a license.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"license_key": "XXXX-XXXX-XXXX-XXXX",
"machine_id": "DESKTOP-ABC123",
"hardware_hash": "unique-hardware-identifier",
"version": "1.0.0"
}
```
**Response:**
- `200 OK`: Returns a `session_token` and optional update info
- `409 Conflict`: `"Es ist nur eine Sitzung erlaubt..."` ("Only one session is allowed..."; single-session enforcement)
#### POST /api/license/session/heartbeat
Keep session alive with heartbeat.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"session_token": "550e8400-e29b-41d4-a716-446655440000",
"license_key": "XXXX-XXXX-XXXX-XXXX"
}
```
**Response:** 200 OK with last_heartbeat timestamp
#### POST /api/license/session/end
End an active session.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Type: application/json
```
**Request:**
```json
{
"session_token": "550e8400-e29b-41d4-a716-446655440000"
}
```
**Response:** 200 OK with session duration and end reason
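The three session endpoints above form a start → heartbeat → end lifecycle. A sketch of a client-side wrapper, with the HTTP transport injected as a plain `post(path, body)` callable so the flow is independent of any particular HTTP library (the class and callable are illustrative, not part of the shipped client):

```python
from typing import Callable, Optional

class SessionClient:
    """Minimal sketch of the session lifecycle against the endpoints
    documented above. `post(path, body)` must return the parsed JSON
    response as a dict."""

    def __init__(self, license_key: str, hardware_hash: str,
                 post: Callable[[str, dict], dict]):
        self.license_key = license_key
        self.hardware_hash = hardware_hash
        self.post = post
        self.session_token: Optional[str] = None

    def start(self, machine_id: str, version: str) -> None:
        resp = self.post("/api/license/session/start", {
            "license_key": self.license_key,
            "machine_id": machine_id,
            "hardware_hash": self.hardware_hash,
            "version": version,
        })
        self.session_token = resp["session_token"]

    def heartbeat(self) -> dict:
        return self.post("/api/license/session/heartbeat", {
            "session_token": self.session_token,
            "license_key": self.license_key,
        })

    def end(self) -> dict:
        resp = self.post("/api/license/session/end",
                         {"session_token": self.session_token})
        self.session_token = None
        return resp
```

In a real client, `heartbeat()` would run on a timer while the application is open, and `end()` on shutdown.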
### Version API Endpoints
#### POST /api/version/check
Check for available updates.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**Request:**
```json
{
"current_version": "1.0.0",
"license_key": "XXXX-XXXX-XXXX-XXXX"
}
```
**Response:**
```json
{
"update_available": true,
"latest_version": "1.1.0",
"download_url": "https://example.com/download/v1.1.0",
"release_notes": "Bug fixes and performance improvements"
}
```
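If a client wants to compare versions itself rather than rely on the returned `update_available` flag, dotted version strings must be compared numerically, since plain string comparison would rank "1.9.0" above "1.10.0". A small helper, assuming purely numeric dotted versions (a format assumption; the API itself does not document its versioning scheme):

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like "1.2.10" into an integer
    tuple so comparisons are numeric, not lexicographic."""
    return tuple(int(part) for part in v.split("."))

def update_available(current: str, latest: str) -> bool:
    """True if `latest` is strictly newer than `current`."""
    return parse_version(latest) > parse_version(current)
```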
#### GET /api/version/latest
Get latest version information.
**Headers:**
```
X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
**Response:**
```json
{
"version": "1.1.0",
"release_date": "2025-06-20",
"download_url": "https://example.com/download/v1.1.0",
"release_notes": "Bug fixes and performance improvements"
}
```
## Admin Panel API
**Base URL:** `https://admin-panel-undso.intelsight.de`
### Customer API Endpoints
#### GET /api/customers
Search customers for Select2 dropdown.
**Query Parameters:**
- `q`: Search query
- `page`: Page number (default: 1)
**Response:**
```json
{
  "results": [
    {
      "id": 123,
      "text": "ACME Corp - admin@acme.com"
    }
  ],
  "pagination": {
    "more": false
  }
}
```
### License Management API
- `POST /api/license/{id}/toggle` - Toggle active status
- `POST /api/licenses/bulk-activate` - Activate multiple licenses (`license_ids` array)
- `POST /api/licenses/bulk-deactivate` - Deactivate multiple licenses
- `POST /api/licenses/bulk-delete` - Delete multiple licenses
- `POST /api/license/{id}/quick-edit` - Update validity/limits
- `GET /api/license/{id}/devices` - List registered devices
#### POST /api/license/{license_id}/quick-edit
Quick edit license properties.
**Request:**
```json
{
"valid_until": "2027-01-01",
"max_activations": 10,
"max_users": 50
}
```
**Response:**
```json
{
"success": true,
"message": "License updated successfully"
}
```
#### POST /api/generate-license-key
Generate a new license key.
**Response:**
```json
{
"license_key": "NEW1-NEW2-NEW3-NEW4"
}
```
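Keys follow the `XXXX-XXXX-XXXX-XXXX` shape shown throughout this reference. A sketch of generating one; the server's actual character set and uniqueness checks are not documented here, so the alphabet below is an assumption and this only illustrates the format:

```python
import secrets
import string

# Assumed character set: uppercase letters and digits.
ALPHABET = string.ascii_uppercase + string.digits

def generate_license_key() -> str:
    """Produce a key in the XXXX-XXXX-XXXX-XXXX format using a
    cryptographically secure random source."""
    groups = ("".join(secrets.choice(ALPHABET) for _ in range(4))
              for _ in range(4))
    return "-".join(groups)
```

A real implementation would additionally check the generated key against existing keys before storing it.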
### Device Management API
#### GET /api/license/{license_id}/devices
Get devices for a license.
**Response:**
```json
{
  "devices": [
    {
      "id": 123,
      "hardware_hash": "unique-hardware-identifier",
      "machine_name": "DESKTOP-ABC123",
      "activated_at": "2025-01-01T10:00:00Z",
      "last_heartbeat": "2025-06-19T14:30:00Z",
      "is_active": true,
      "app_version": "1.0.0"
    }
  ]
}
```
#### POST /api/license/{license_id}/register-device
Register a new device.
**Request:**
```json
{
"hardware_hash": "unique-hardware-identifier",
"machine_name": "DESKTOP-XYZ789",
"app_version": "1.0.0"
}
```
**Response:**
```json
{
"success": true,
"device_id": 456,
"message": "Device registered successfully"
}
```
#### POST /api/license/{license_id}/deactivate-device/{device_id}
Deactivate a device.
**Response:**
```json
{
"success": true,
"message": "Device deactivated successfully"
}
```
### Resource Management API
#### GET /api/license/{license_id}/resources
Get resources for a license.
**Response:**
```json
{
  "resources": [
    {
      "id": 789,
      "type": "server",
      "identifier": "SRV-001",
      "status": "allocated",
      "allocated_at": "2025-06-01T10:00:00Z"
    }
  ]
}
```
#### POST /api/resources/allocate
Allocate resources to a license.
**Request:**
```json
{
"license_id": 123,
"resource_ids": [789, 790]
}
```
**Response:**
```json
{
"success": true,
"allocated": 2,
"message": "2 resources allocated successfully"
}
```
#### GET /api/resources/check-availability
Check resource availability.
**Query Parameters:**
- `type`: Resource type
- `count`: Number of resources needed
**Response:**
```json
{
  "available": true,
  "count": 5,
  "resources": [
    {
      "id": 791,
      "type": "server",
      "identifier": "SRV-002"
    }
  ]
}
```
### Search API
#### GET /api/global-search
Global search across all entities.
**Query Parameters:**
- `q`: Search query
- `type`: Entity type filter (customer, license, device)
- `limit`: Maximum results (default: 20)
**Response:**
```json
{
  "results": [
    {
      "type": "customer",
      "id": 123,
      "title": "ACME Corp",
      "subtitle": "admin@acme.com",
      "url": "/customer/edit/123"
    },
    {
      "type": "license",
      "id": 456,
      "title": "XXXX-XXXX-XXXX-XXXX",
      "subtitle": "ACME Corp - Active",
      "url": "/license/edit/456"
    }
  ],
  "total": 15
}
```
### Lead Management API
#### GET /leads/api/institutions
Get all institutions with pagination.
**Query Parameters:**
- `page`: Page number (default: 1)
- `per_page`: Items per page (default: 20)
- `search`: Search query
**Response:**
```json
{
  "institutions": [
    {
      "id": 1,
      "name": "Tech University",
      "contact_count": 5,
      "created_at": "2025-06-19T10:00:00Z"
    }
  ],
  "total": 100,
  "page": 1,
  "per_page": 20
}
```
#### POST /leads/api/institutions
Create a new institution.
**Request:**
```json
{
"name": "New University"
}
```
**Response:**
```json
{
"id": 101,
"name": "New University",
"created_at": "2025-06-19T15:00:00Z"
}
```
#### GET /leads/api/contacts/{contact_id}
Get contact details.
**Response:**
```json
{
  "id": 1,
  "first_name": "John",
  "last_name": "Doe",
  "position": "IT Manager",
  "institution_id": 1,
  "details": [
    {
      "id": 1,
      "type": "email",
      "value": "john.doe@example.com",
      "label": "Work"
    },
    {
      "id": 2,
      "type": "phone",
      "value": "+49 123 456789",
      "label": "Mobile"
    }
  ],
  "notes": [
    {
      "id": 1,
      "content": "Initial contact",
      "version": 1,
      "created_at": "2025-06-19T10:00:00Z",
      "created_by": "admin"
    }
  ]
}
```
#### POST /leads/api/contacts/{contact_id}/details
Add contact detail (phone/email).
**Request:**
```json
{
"type": "email",
"value": "secondary@example.com",
"label": "Secondary"
}
```
**Response:**
```json
{
"id": 3,
"type": "email",
"value": "secondary@example.com",
"label": "Secondary"
}
```
### Resource Management API
#### POST /api/resources/allocate
Allocate resources to a license.
**Request:**
```json
{
"license_id": 123,
"resource_type": "domain",
"resource_ids": [45, 46, 47]
}
```
**Response:**
```json
{
"success": true,
"allocated": 3,
"message": "3 resources allocated successfully"
}
```
## Lead Management API
### GET /leads/api/stats
Get lead statistics.
**Response:**
```json
{
  "total_institutions": 150,
  "total_contacts": 450,
  "recent_activities": 25,
  "conversion_rate": 12.5,
  "by_type": {
    "university": 50,
    "company": 75,
    "government": 25
  }
}
```
### Lead Routes (HTML Pages)
- `GET /leads/` - Lead overview page
- `GET /leads/create` - Create lead form
- `POST /leads/create` - Save new lead
- `GET /leads/edit/{lead_id}` - Edit lead form
- `POST /leads/update/{lead_id}` - Update lead
- `POST /leads/delete/{lead_id}` - Delete lead
- `GET /leads/export` - Export leads
- `POST /leads/import` - Import leads
## Common Response Codes
- `200 OK`: Successful request
- `201 Created`: Resource created
- `400 Bad Request`: Invalid request data
- `401 Unauthorized`: Missing or invalid authentication
- `403 Forbidden`: Insufficient permissions
- `404 Not Found`: Resource not found
- `409 Conflict`: Resource conflict (e.g., duplicate)
- `429 Too Many Requests`: Rate limit exceeded
- `500 Internal Server Error`: Server error
## Rate Limiting
- API endpoints: 100 requests/minute
- Login attempts: 5 per minute
- Configurable via Admin Panel
## Error Response Format
All errors return JSON with `error`, `code`, and `status` fields.
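A client can lean on this uniform error shape and combine it with a backoff for the 429 rate limit described above. A sketch, with the HTTP call abstracted as a `send()` callable returning a `(status, body)` tuple (an assumption for illustration, so the retry logic can be shown without a live server):

```python
import time

def request_with_retry(send, max_attempts: int = 3, base_delay: float = 1.0):
    """Call `send()` and retry on 429 with exponential backoff.
    Raises on any other documented error, using the uniform
    `error`/`code`/`status` fields."""
    for attempt in range(max_attempts):
        status, body = send()
        if status == 429:
            time.sleep(base_delay * (2 ** attempt))
            continue
        if status >= 400:
            raise RuntimeError(
                f"{body['code']}: {body['error']} ({body['status']})")
        return body
    raise RuntimeError("rate limited: giving up after retries")
```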
## Client Integration
Example request with required headers:
```bash
curl -X POST https://api-software-undso.intelsight.de/api/license/activate \
  -H "X-API-Key: AF-2025-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" \
  -H "Content-Type: application/json" \
  -d '{
    "license_key": "XXXX-XXXX-XXXX-XXXX",
    "hardware_hash": "unique-hardware-id",
    "machine_name": "DESKTOP-123",
    "app_version": "1.0.0"
  }'
```
## Testing
### Test Credentials
- Admin Users:
  - Username: `rac00n` / Password: `1248163264`
  - Username: `w@rh@mm3r` / Password: `Warhammer123!`
- API Key: Generated in Admin Panel under "Lizenzserver Administration"
### Getting the Initial API Key
If you need to retrieve the API key directly from the database:
```bash
docker exec -it v2_postgres psql -U postgres -d v2_db -c "SELECT api_key FROM system_api_key WHERE id = 1;"
```
### Test Endpoints
- Admin Panel: `https://admin-panel-undso.intelsight.de/`
- License Server API: `https://api-software-undso.intelsight.de/`

3217
JOURNAL.md Normal file

File diff suppressed because it is too large.

23
SSL/cert.pem Normal file

@@ -0,0 +1,23 @@
-----BEGIN CERTIFICATE-----
MIID3TCCA2OgAwIBAgISBimcX2wwj3Z1U/Qlfu5y5keoMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTA2MjYxNjAwMjBaFw0yNTA5MjQxNjAwMTlaMBgxFjAUBgNVBAMTDWlu
dGVsc2lnaHQuZGUwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATEQD6vfDoXM7Yz
iT75OmB/kvxoEebMFRBCzpTOdUZpThlFmLijjCsYnxc8DeWDn8/eLltrBWhuM4Yx
gX8tseO0o4ICcTCCAm0wDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQWMBQGCCsGAQUF
BwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBSM5CYyn//CSmLp
JADwjccRtsnZFDAfBgNVHSMEGDAWgBSTJ0aYA6lRaI6Y1sRCSNsjv1iU0jAyBggr
BgEFBQcBAQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly9lNi5pLmxlbmNyLm9yZy8w
bgYDVR0RBGcwZYIfYWRtaW4tcGFuZWwtdW5kc28uaW50ZWxzaWdodC5kZYIgYXBp
LXNvZnR3YXJlLXVuZHNvLmludGVsc2lnaHQuZGWCDWludGVsc2lnaHQuZGWCEXd3
dy5pbnRlbHNpZ2h0LmRlMBMGA1UdIAQMMAowCAYGZ4EMAQIBMC0GA1UdHwQmMCQw
IqAgoB6GHGh0dHA6Ly9lNi5jLmxlbmNyLm9yZy80MS5jcmwwggEEBgorBgEEAdZ5
AgQCBIH1BIHyAPAAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAA
AZetLYOmAAAEAwBHMEUCIB8bQYn7h64sSmHZavNbIM6ScHDBxmMWN6WqjyaTz75I
AiEArz5mC+TaVMsofIIFkEj+dOMD1/oj6w10zgVunTPb01wAdgCkQsUGSWBhVI8P
1Oqc+3otJkVNh6l/L99FWfYnTzqEVAAAAZetLYRWAAAEAwBHMEUCIFVulS2bEmSQ
HYcE2UbsHhn7WJl8MeWZJSKGG1LbtnvyAiEAsLHL/VyIfXVhOmcMf1gmPL/eu7xj
W/2JuPHVWgjUDhQwCgYIKoZIzj0EAwMDaAAwZQIxANaSy/SOYXq9+oQJNhpXIlMJ
i0HBvwebvhNVkNGJN2QodV5gE2yi4s4q19XkpFO+fQIwCCqLSQvaC+AcOTFT9XL5
6hk8bFapLf/b2EFv3DE06qKIrDVPWhtYwyEYBRT4Ii4p
-----END CERTIFICATE-----

26
SSL/chain.pem Normal file

@@ -0,0 +1,26 @@
-----BEGIN CERTIFICATE-----
MIIEVzCCAj+gAwIBAgIRALBXPpFzlydw27SHyzpFKzgwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMjQwMzEzMDAwMDAw
WhcNMjcwMzEyMjM1OTU5WjAyMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNTGV0J3Mg
RW5jcnlwdDELMAkGA1UEAxMCRTYwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATZ8Z5G
h/ghcWCoJuuj+rnq2h25EqfUJtlRFLFhfHWWvyILOR/VvtEKRqotPEoJhC6+QJVV
6RlAN2Z17TJOdwRJ+HB7wxjnzvdxEP6sdNgA1O1tHHMWMxCcOrLqbGL0vbijgfgw
gfUwDgYDVR0PAQH/BAQDAgGGMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcD
ATASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBSTJ0aYA6lRaI6Y1sRCSNsj
v1iU0jAfBgNVHSMEGDAWgBR5tFnme7bl5AFzgAiIyBpY9umbbjAyBggrBgEFBQcB
AQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly94MS5pLmxlbmNyLm9yZy8wEwYDVR0g
BAwwCjAIBgZngQwBAgEwJwYDVR0fBCAwHjAcoBqgGIYWaHR0cDovL3gxLmMubGVu
Y3Iub3JnLzANBgkqhkiG9w0BAQsFAAOCAgEAfYt7SiA1sgWGCIpunk46r4AExIRc
MxkKgUhNlrrv1B21hOaXN/5miE+LOTbrcmU/M9yvC6MVY730GNFoL8IhJ8j8vrOL
pMY22OP6baS1k9YMrtDTlwJHoGby04ThTUeBDksS9RiuHvicZqBedQdIF65pZuhp
eDcGBcLiYasQr/EO5gxxtLyTmgsHSOVSBcFOn9lgv7LECPq9i7mfH3mpxgrRKSxH
pOoZ0KXMcB+hHuvlklHntvcI0mMMQ0mhYj6qtMFStkF1RpCG3IPdIwpVCQqu8GV7
s8ubknRzs+3C/Bm19RFOoiPpDkwvyNfvmQ14XkyqqKK5oZ8zhD32kFRQkxa8uZSu
h4aTImFxknu39waBxIRXE4jKxlAmQc4QjFZoq1KmQqQg0J/1JF8RlFvJas1VcjLv
YlvUB2t6npO6oQjB3l+PNf0DpQH7iUx3Wz5AjQCi6L25FjyE06q6BZ/QlmtYdl/8
ZYao4SRqPEs/6cAiF+Qf5zg2UkaWtDphl1LKMuTNLotvsX99HP69V2faNyegodQ0
LyTApr/vT01YPE46vNsDLgK+4cL6TrzC/a4WcmF5SRJ938zrv/duJHLXQIku5v0+
EwOy59Hdm0PT/Er/84dDV0CSjdR/2XuZM3kpysSKLgD1cKiDA+IRguODCxfO9cyY
Ig46v9mFmBvyH04=
-----END CERTIFICATE-----

49
SSL/fullchain.pem Normal file

@@ -0,0 +1,49 @@
-----BEGIN CERTIFICATE-----
MIID3TCCA2OgAwIBAgISBimcX2wwj3Z1U/Qlfu5y5keoMAoGCCqGSM49BAMDMDIx
CzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQDEwJF
NjAeFw0yNTA2MjYxNjAwMjBaFw0yNTA5MjQxNjAwMTlaMBgxFjAUBgNVBAMTDWlu
dGVsc2lnaHQuZGUwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAATEQD6vfDoXM7Yz
iT75OmB/kvxoEebMFRBCzpTOdUZpThlFmLijjCsYnxc8DeWDn8/eLltrBWhuM4Yx
gX8tseO0o4ICcTCCAm0wDgYDVR0PAQH/BAQDAgeAMB0GA1UdJQQWMBQGCCsGAQUF
BwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBSM5CYyn//CSmLp
JADwjccRtsnZFDAfBgNVHSMEGDAWgBSTJ0aYA6lRaI6Y1sRCSNsjv1iU0jAyBggr
BgEFBQcBAQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly9lNi5pLmxlbmNyLm9yZy8w
bgYDVR0RBGcwZYIfYWRtaW4tcGFuZWwtdW5kc28uaW50ZWxzaWdodC5kZYIgYXBp
LXNvZnR3YXJlLXVuZHNvLmludGVsc2lnaHQuZGWCDWludGVsc2lnaHQuZGWCEXd3
dy5pbnRlbHNpZ2h0LmRlMBMGA1UdIAQMMAowCAYGZ4EMAQIBMC0GA1UdHwQmMCQw
IqAgoB6GHGh0dHA6Ly9lNi5jLmxlbmNyLm9yZy80MS5jcmwwggEEBgorBgEEAdZ5
AgQCBIH1BIHyAPAAdgDM+w9qhXEJZf6Vm1PO6bJ8IumFXA2XjbapflTA/kwNsAAA
AZetLYOmAAAEAwBHMEUCIB8bQYn7h64sSmHZavNbIM6ScHDBxmMWN6WqjyaTz75I
AiEArz5mC+TaVMsofIIFkEj+dOMD1/oj6w10zgVunTPb01wAdgCkQsUGSWBhVI8P
1Oqc+3otJkVNh6l/L99FWfYnTzqEVAAAAZetLYRWAAAEAwBHMEUCIFVulS2bEmSQ
HYcE2UbsHhn7WJl8MeWZJSKGG1LbtnvyAiEAsLHL/VyIfXVhOmcMf1gmPL/eu7xj
W/2JuPHVWgjUDhQwCgYIKoZIzj0EAwMDaAAwZQIxANaSy/SOYXq9+oQJNhpXIlMJ
i0HBvwebvhNVkNGJN2QodV5gE2yi4s4q19XkpFO+fQIwCCqLSQvaC+AcOTFT9XL5
6hk8bFapLf/b2EFv3DE06qKIrDVPWhtYwyEYBRT4Ii4p
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIEVzCCAj+gAwIBAgIRALBXPpFzlydw27SHyzpFKzgwDQYJKoZIhvcNAQELBQAw
TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMjQwMzEzMDAwMDAw
WhcNMjcwMzEyMjM1OTU5WjAyMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNTGV0J3Mg
RW5jcnlwdDELMAkGA1UEAxMCRTYwdjAQBgcqhkjOPQIBBgUrgQQAIgNiAATZ8Z5G
h/ghcWCoJuuj+rnq2h25EqfUJtlRFLFhfHWWvyILOR/VvtEKRqotPEoJhC6+QJVV
6RlAN2Z17TJOdwRJ+HB7wxjnzvdxEP6sdNgA1O1tHHMWMxCcOrLqbGL0vbijgfgw
gfUwDgYDVR0PAQH/BAQDAgGGMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcD
ATASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBSTJ0aYA6lRaI6Y1sRCSNsj
v1iU0jAfBgNVHSMEGDAWgBR5tFnme7bl5AFzgAiIyBpY9umbbjAyBggrBgEFBQcB
AQQmMCQwIgYIKwYBBQUHMAKGFmh0dHA6Ly94MS5pLmxlbmNyLm9yZy8wEwYDVR0g
BAwwCjAIBgZngQwBAgEwJwYDVR0fBCAwHjAcoBqgGIYWaHR0cDovL3gxLmMubGVu
Y3Iub3JnLzANBgkqhkiG9w0BAQsFAAOCAgEAfYt7SiA1sgWGCIpunk46r4AExIRc
MxkKgUhNlrrv1B21hOaXN/5miE+LOTbrcmU/M9yvC6MVY730GNFoL8IhJ8j8vrOL
pMY22OP6baS1k9YMrtDTlwJHoGby04ThTUeBDksS9RiuHvicZqBedQdIF65pZuhp
eDcGBcLiYasQr/EO5gxxtLyTmgsHSOVSBcFOn9lgv7LECPq9i7mfH3mpxgrRKSxH
pOoZ0KXMcB+hHuvlklHntvcI0mMMQ0mhYj6qtMFStkF1RpCG3IPdIwpVCQqu8GV7
s8ubknRzs+3C/Bm19RFOoiPpDkwvyNfvmQ14XkyqqKK5oZ8zhD32kFRQkxa8uZSu
h4aTImFxknu39waBxIRXE4jKxlAmQc4QjFZoq1KmQqQg0J/1JF8RlFvJas1VcjLv
YlvUB2t6npO6oQjB3l+PNf0DpQH7iUx3Wz5AjQCi6L25FjyE06q6BZ/QlmtYdl/8
ZYao4SRqPEs/6cAiF+Qf5zg2UkaWtDphl1LKMuTNLotvsX99HP69V2faNyegodQ0
LyTApr/vT01YPE46vNsDLgK+4cL6TrzC/a4WcmF5SRJ938zrv/duJHLXQIku5v0+
EwOy59Hdm0PT/Er/84dDV0CSjdR/2XuZM3kpysSKLgD1cKiDA+IRguODCxfO9cyY
Ig46v9mFmBvyH04=
-----END CERTIFICATE-----

5
SSL/privkey.pem Normal file

@@ -0,0 +1,5 @@
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgi8/a6iwFCHSbBe/I
2Zo6exFpcLL4icRgotOF605ZrY6hRANCAATEQD6vfDoXM7YziT75OmB/kvxoEebM
FRBCzpTOdUZpThlFmLijjCsYnxc8DeWDn8/eLltrBWhuM4YxgX8tseO0
-----END PRIVATE KEY-----

69
backup_before_cleanup.sh Normal file

@@ -0,0 +1,69 @@
#!/bin/bash
# Backup script before cleaning up the commented-out routes
# Creates a complete backup of the current state
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="./backups/refactoring_${TIMESTAMP}"
echo "🔒 Creating backup before the refactoring cleanup..."
echo "   Timestamp: ${TIMESTAMP}"
# Create the backup directory
mkdir -p "${BACKUP_DIR}"
# 1. Code backup
echo "📁 Backing up code..."
cp -r v2_adminpanel "${BACKUP_DIR}/v2_adminpanel_backup"
# Back up app.py separately
cp v2_adminpanel/app.py "${BACKUP_DIR}/app.py.backup_${TIMESTAMP}"
# 2. Document the git status
echo "📝 Documenting git status..."
git status > "${BACKUP_DIR}/git_status.txt"
git log --oneline -10 > "${BACKUP_DIR}/git_log.txt"
git diff > "${BACKUP_DIR}/git_diff.txt"
# 3. Create a blueprint overview
echo "📊 Creating blueprint overview..."
cat > "${BACKUP_DIR}/blueprint_overview.txt" << EOF
Blueprint Migration Status - ${TIMESTAMP}
==========================================
Blueprints created and registered:
- auth_bp (9 routes) - Authentication
- admin_bp (10 routes) - Admin Dashboard
- license_bp (4 routes) - License Management
- customer_bp (7 routes) - Customer Management
- resource_bp (7 routes) - Resource Pool
- session_bp (6 routes) - Session Management
- batch_bp (4 routes) - Batch Operations
- api_bp (14 routes) - API Endpoints
- export_bp (5 routes) - Export Functions
Total: 66 routes in blueprints
Status:
- All routes in app.py are commented out
- Blueprints are active and fully functional
- No active @app.route left in app.py
Next steps:
1. Remove the commented-out routes
2. Clean up redundant functions
3. Implement URL prefixes
EOF
# 4. Create a route mapping
echo "🗺️ Creating route mapping..."
grep -n "# @app.route" v2_adminpanel/app.py > "${BACKUP_DIR}/commented_routes.txt"
# 5. Summary
echo ""
echo "✅ Backup created in: ${BACKUP_DIR}"
echo ""
echo "Contents:"
ls -la "${BACKUP_DIR}/"
echo ""
echo "🎯 Next step: the commented-out routes can now be removed safely"
echo "   Rollback with: cp ${BACKUP_DIR}/app.py.backup_${TIMESTAMP} v2_adminpanel/app.py"

1
backups/.backup_key Normal file

@@ -0,0 +1 @@
vJgDckVjr3cSictLNFLGl8QIfqSXVD5skPU7kVhkyfc=

255
cloud-init.yaml Normal file

@@ -0,0 +1,255 @@
#cloud-config
package_update: true
package_upgrade: true
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
  - ufw
  - fail2ban
  - git
write_files:
  - path: /root/install-docker.sh
    permissions: '0755'
    content: |
      #!/bin/bash
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
      echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
      apt-get update
      apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
      systemctl enable docker
      systemctl start docker
  - path: /etc/ssl/certs/fullchain.pem
    permissions: '0644'
    content: |
      -----BEGIN CERTIFICATE-----
      MIIFKDCCBBCgAwIBAgISA3yPyKBqrYewZDI8pFbjQgs5MA0GCSqGSIb3DQEBCwUA
      MDIxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQD
      EwJSMzAeFw0yNTA2MjYyMjQ5MDJaFw0yNTA5MjQyMjQ5MDFaMBkxFzAVBgNVBAMT
      DmludGVsc2lnaHQuZGUwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDC
      1HLwsBdUBayNJaJ7Wy1n8AeM6F7K0JAw6UQdW0sI8TNtOyZKaOrfTmKBgdxpBnFx
      nj7QiIVu8bUczZGcQcKoOLH6X5cJtOvUQRBGzYHlWhCGi7M3JAKjQoKyGiT2uRiZ
      P4JsJaVVOJyq1eO5c77TJa9jvAA0qfuWVTzLUDWM1oIJr8zyDHNTM7gK17c1p3XB
      F3gGDGCdIj5o1oXJxdNzDgLTqJeqSGKLfLwOTsFiCCjntyVjcQCHaceCdGx4tC+F
      Kcx/d5p+Jc6xj7pVvQoqP0Kg1YA6VkX9hLKUCiNlSHhQJbnj8rhfLPtMfHRoZjQT
      oazP3Sq6DLGdKJ7TdL2nAgMBAAGjggJNMIICSTAOBgNVHQ8BAf8EBAMCBaAwHQYD
      VR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0O
      BBYEFHl38d4egKf7gkUvW3XKKNOmhQtzMB8GA1UdIwQYMBaAFBQusxe3WFbLrlAJ
      QOYfr52LFMLGMFUGCCsGAQUFBwEBBEkwRzAhBggrBgEFBQcwAYYVaHR0cDovL3Iz
      Lm8ubGVuY3Iub3JnMCIGCCsGAQUFBzAChhZodHRwOi8vcjMuaS5sZW5jci5vcmcv
      MIGFBgNVHREEfjB8gg5pbnRlbHNpZ2h0LmRlgidhZG1pbi1wYW5lbC11bmRzby5p
      bnRlbHNpZ2h0LmRlgidwa2ktc29mdHdhcmUtdW5kc28uaW50ZWxzaWdodC5kZYIS
      d3d3LmludGVsc2lnaHQuZGWCHmNkOS03YTMyMS5pbnRlbHNpZ2h0LmRlMBMGA1Ud
      IAQMMAowCAYGZ4EMAQIBMIIBBAYKKwYBBAHWeQIEAgSB9QSB8gDwAHcAzxFPn/xF
      z4pBaLc8BWh7G7KQJ7WUYYJapBgTyBmOSwAAAZA2NCNCAAAEAwBIMEYCIQCb4Rfu
      RJTLkAqV8aG6HqQBFJBGqsLOd5a4cQQE8aAM0QIhAKRY5M8/HuDz8oSI3w0SyAKB
      IPZ1cOyEaR2BcLc8JqsEAHUA8aLLMkJi8F4QbRcE7GL7GQZQ7ypXK5Wtj5jqF1FC
      H0MAAAGQNjQjQwAABAMARjBEAiAdqzfZkNGBGWGQ8kfKQtE7iiAa6FNHnEhjW1Nu
      GlYAFgIgCjRD9awGfJ4lMM8e2TBaA5dKkSsEgWKtGKTjvxkz2VEwDQYJKoZIhvcN
      AQELBQADggEBAJX3KxSxdOBOiqW3pJTSEsABKh0h8B8kP5vUAXRzxVcGJY3aJb5Y
      DqcTI9ykBQyJM1mB1/VFWZKkINB4p5KqLoY2EBxRj2qXnAhHzNrEptYFk16VQJcc
      Xfhv6XKD9yPQTMsHBnfWGQxMYOZbLa5lZM0QLo7T+f8fBOl7u8CwRJZa7wA3Z3F3
      Kw0+0FHjBZOu9wt2U0B0BmUIe8GGNacTbP3JCUOQpMQJbhWnGJtVpEL8HT01qWcl
      oZA3nSQm9yD1G6l5aJyIDGdQ4C3/VJ0T3ZlQGXECnQWxCuU6v2lOQXvnQGcSvN+v
      kNiRMCT3tXgLhCcr/6daDKYNOJ3EAVIvNx0=
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIFFjCCAv6gAwIBAgIRAJErCErPDBinU/bWLiWnX1owDQYJKoZIhvcNAQELBQAw
      TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
      cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwHhcNMjAwOTA0MDAwMDAw
      WhcNMjUwOTE1MTYwMDAwWjAyMQswCQYDVQQGEwJVUzEWMBQGA1UEChMNTGV0J3Mg
      RW5jcnlwdDELMAkGA1UEAxMCUjMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEK
      AoIBAQC7AhUozPaglNMPEuyNVZLD+ILxmaZ6QoinXSaqtSu5xUyxr45r+XXIo9cP
      R5QUVTVXjJ6oojkZ9YI8QqlObvU7wy7bjcCwXPNZOOftz2nwWgsbvsCUJCWH+jdx
      sxPnHKzhm+/b5DtFUkWWqcFTzjTIUu61ru2P3mBw4qVUq7ZtDpelQDRrK9O8Zutm
      NHz6a4uPVymZ+DAXXbpyb/uBxa3Shlg9F8fnCbvxK/eG3MHacV3URuPMrSXBiLxg
      Z3Vms/EY96Jc5lP/Ooi2R6X/ExjqmAl3P51T+c8B5fWmcBcUr2Ok/5mzk53cU6cG
      /kiFHaFpriV1uxPMUgP17VGhi9sVAgMBAAGjggEIMIIBBDAOBgNVHQ8BAf8EBAMC
      AYYwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMBIGA1UdEwEB/wQIMAYB
      Af8CAQAwHQYDVR0OBBYEFBQusxe3WFbLrlAJQOYfr52LFMLGMB8GA1UdIwQYMBaA
      FHm0WeZ7tuXkAXOACIjIGlj26ZtuMDIGCCsGAQUFBwEBBCYwJDAiBggrBgEFBQcw
      AoYWaHR0cDovL3gxLmkubGVuY3Iub3JnLzAnBgNVHR8EIDAeMBygGqAYhhZodHRw
      Oi8veDEuYy5sZW5jci5vcmcvMCIGA1UdIAQbMBkwCAYGZ4EMAQIBMA0GCysGAQQB
      gt8TAQEBMA0GCSqGSIb3DQEBCwUAA4ICAQCFyk5HPqP3hUSFvNVneLKYY611TR6W
      PTNlclQtgaDqw+34IL9fzLdwALduO/ZelN7kIJ+m74uyA+eitRY8kc607TkC53wl
      ikfmZW4/RvTZ8M6UK+5UzhK8jCdLuMGYL6KvzXGRSgi3yLgjewQtCPkIVz6D2QQz
      CkcheAmCJ8MqyJu5zlzyZMjAvnnAT45tRAxekrsu94sQ4egdRCnbWSDtY7kh+BIm
      lJNXoB1lBMEKIq4QDUOXoRgffuDghje1WrG9ML+Hbisq/yFOGwXD9RiX8F6sw6W4
      avAuvDszue5L3sz85K+EC4Y/wFVDNvZo4TYXao6Z0f+lQKc0t8DQYzk1OXVu8rp2
      yJMC6alLbBfODALZvYH7n7do1AZls4I9d1P4jnkDrQoxB3UqQ9hVl3LEKQ73xF1O
      yK5GhDDX8oVfGKF5u+decIsH4YaTw7mP3GFxJSqv3+0lUFJoi5Lc5da149p90Ids
      hCExroL1+7mryIkXPeFM5TgO9r0rvZaBFOvV2z0gp35Z0+L4WPlbuEjN/lxPFin+
      HlUjr8gRsI3qfJOQFy/9rKIJR0Y/8Omwt/8oTWgy1mdeHmmjk7j1nYsvC9JSQ6Zv
      MldlTTKB3zhThV1+XWYp6rjd5JW1zbVWEkLNxE7GJThEUG3szgBVGP7pSWTUTsqX
      nLRbwHOoq7hHwg==
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      MIIFYDCCBEigAwIBAgIQQAF3ITfU6UK47naqPGQKtzANBgkqhkiG9w0BAQsFADA/
      MSQwIgYDVQQKExtEaWdpdGFsIFNpZ25hdHVyZSBUcnVzdCBDby4xFzAVBgNVBAMT
      DkRTVCBSb290IENBIFgzMB4XDTIxMDEyMDE5MTQwM1oXDTI0MDkzMDE4MTQwM1ow
      TzELMAkGA1UEBhMCVVMxKTAnBgNVBAoTIEludGVybmV0IFNlY3VyaXR5IFJlc2Vh
      cmNoIEdyb3VwMRUwEwYDVQQDEwxJU1JHIFJvb3QgWDEwggIiMA0GCSqGSIb3DQEB
      AQUAA4ICDwAwggIKAoICAQCt6CRz9BQ385ueK1coHIe+3LffOJCMbjzmV6B493XC
      ov71am72AE8o295ohmxEk7axY/0UEmu/H9LqMZshwLLezUmgD5HwmJAp32sIGkeG
      VPMDCa/Lr+TyTjnhOWgjf7lJJhiaYFBSqygRz0t0IQ1GRomrn1Ktu3R7DJK0bhrP
      4x6+wLpTABEZaHQKxZNljWhJXgxvTNKK6NXBmfAhYZ4+l4W0aMa8kU2Cz8lhCM6i
      JnyYcPc9w9YaYJ2Gy1t3wgezPpNTItzPRMpT7p/NnDhqI9/gJvdFfZxgdmdPnTBw
      Q5XgZbBB9X3YD8LhI8NsHL1A7a0u8UdL6fkv8R9p7RfC8IA3llXevPS11wUAZcBF
      QYJxk4qN9bDYcBdQ0OZ2dOVFBLdCFPuS+iqQBFH2N5fjb9LKgIFrdWJaXEGz70kD
      Dq6gIx1SBLyooZKwYvG3Di2E7GvcbnyLqHtCPF/Ky1r3eMZTLZ8PAJhyvggYgOn8
      aNT1+Fo/7+yzFKP8HUlTBRBqKu+8dacN2tGHKjWuiLkahY/xGpPwlKz1wP+4lBEB
      VHM9I1cLH+2d7fkBATMqQQMmIaulslYkCBVHeZCDleVQpkq7T2RgwADVb8J3stW3
      e0MZF9HckdZXQPKPYK29oJi7xr5nTMPQDz3FuNhqNYY7JLdWkoLuuONFDgrHLRmd
      TwIDAQABo4IBRjCCAUIwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYw
      SwYIKwYBBQUHAQEEPzA9MDsGCCsGAQUFBzAChi9odHRwOi8vYXBwcy5pZGVudHJ1
      c3QuY29tL3Jvb3RzL2RzdHJvb3RjYXgzLnA3YzAfBgNVHSMEGDAWgBTEp7Gkeyxx
      +tvhS5B1/8QVYIWJEDBUBgNVHSAETTBLMAgGBmeBDAECATA/BgsrBgEEAYLfEwEB
      ATAwMC4GCCsGAQUFBwIBFiJodHRwOi8vY3BzLnJvb3QteDEubGV0c2VuY3J5cHQu
      b3JnMDwGA1UdHwQ1MDMwMaAvoC2GK2h0dHA6Ly9jcmwuaWRlbnRydXN0LmNvbS9E
      U1RST09UQ0FYM0NSTC5jcmwwHQYDVR0OBBYEFHm0WeZ7tuXkAXOACIjIGlj26Ztu
      MA0GCSqGSIb3DQEBCwUAA4IBAQBg4WZmUUxiK3EiwSr1mSWPpnDHVD1GVVxbOyZC
      S8+Pf6vDf6tSgqYJ/mLDNtjfLwKy8RBcKwMxkBq5c1FqcTB4tL7IzCOLMCDH4XYP
      K0LQ1d5sQNaKZBiJOUPb7oqfwJQVjDuTXl3hcqBhyz2HDvAPkCIPfcIwyhVhucHH
      yN9mqPNgYWVGKF3cWQqEQ9ombqCr5ASCvSoEZL/YQM1Zv0j/RdZ5qf+ZwJttL3dP
      +t4cpNAl0z7ly6XF/FMwkRFanNg56TjB8aXq0mEJPGBWQgOw7hCYPKNaBaHRPQUH
      Lb6XBWI3p2gqQjFJ5KhSMN8mPgqhm8RlJmWWJUMlGsiVr3WE
      -----END CERTIFICATE-----
  - path: /etc/ssl/private/privkey.pem
    permissions: '0600'
    content: |
      -----BEGIN PRIVATE KEY-----
      MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDC1HLwsBdUBayN
      JaJ7Wy1n8AeM6F7K0JAw6UQdW0sI8TNtOyZKaOrfTmKBgdxpBnFxnj7QiIVu8bUc
      zZGcQcKoOLH6X5cJtOvUQRBGzYHlWhCGi7M3JAKjQoKyGiT2uRiZP4JsJaVVOJyq
      1eO5c77TJa9jvAA0qfuWVTzLUDWM1oIJr8zyDHNTM7gK17c1p3XBF3gGDGCdIj5o
      1oXJxdNzDgLTqJeqSGKLfLwOTsFiCCjntyVjcQCHaceCdGx4tC+FKcx/d5p+Jc6x
      j7pVvQoqP0Kg1YA6VkX9hLKUCiNlSHhQJbnj8rhfLPtMfHRoZjQToazP3Sq6DLGd
      KJ7TdL2nAgMBAAECggEAAKJosDxdA6AQ1CvwQp8N1JL9ZAVqYf4Y9c9n6s+HFOBX
      wPEsABHNdNAYQJnX5X8rcdXfQhwFKRBqR/0OKtaBEJ2yh9IzO6DKHsKcAsX2aEo8
      2b+DFCJz7Ty2R7LJBt2oKJxLaVCJlH7nP2VglLK3oAMv9R0+9y1u7bxp4B5Xqkzm
      LXnqkiN4MrnLJWLh2eIYcf0fJvL0xUmTQNXZa6PHzv8hfRcOkdJZGLFGRgABBXzi
      Ek9/fTNwH0Rg8e6eTZdPzXOgkyQdRsHLQQa3j6DHKJKzP8kI1MKJ2yQELm15LT+E
      0U3QIDgxcKHBzOoKJFE/MzL+NXQ9s+vdT3f1mzLJiQKBgQDgfwOQLm2lUaNcDNgf
      A+WLaL1a6ysEG2cDUiwsBSRRUH/5llMEbyFxdPw2sqdVsRkBBaHdJCINkDJGm/kI
      /xvJxD3KcBVLSdmHq/qO4pbGxBDRNvzrRO5Yoaiv5xDk2rQF3lm1m3vWdI6YFhq3
      j8qxE4/YjHNQOqfr7a0j+3j9dQKBgQDeBcQD2y7k7KAyRAv5Sh8AjbDSNjvFz3hE
      TnJcjeeuTfmKdOBCm5mDCH+vGhBczRoHO9vVnqxLO3dJOWHqf8z7BPTBU4Bpm6zt
      5CJWP5jCbQU8+S0g1vgdUBzRrXFE4I9ZxCvJ5k6mfzVOvPcb0OV2gJGcxPbg2xT5
      uTn7VRTq6wKBgQCGF5yE6DVdMoqh5kjQBjjIObKtXRtJpGxuJ2VDfNYP8Klu6zAZ
      zP3hKrUQO0IKJBxOwT/D8VZ4IKLK7y0q3Fb8+rsCxJzPM7J5UtKbQPPOdAbRFPCA
      J4fE/YJu4g/sUpTdxq3lVqJ9P4rJyg3JJfn8aRAMOuhhNu6VJ9BlBTe3rQKBgQCv
      OHXzS9VV9WMfhpN/UR4Q+LAqwQUKW0HFCkkYiDK/jJ2YNMU+m9e8JUrZOxZ9N1gF
      IHJyGppZTxI5y1swCRqfGf+JuR7TKzHD7RK0L7F1q8hJwFjJA4xflg0RRvk5hfQa
      WX3rA7SnC2T7b7DlxnVu+j2KNz0BnmKlhEFVOx7CnQKBgCdHRsDGXJGmGqhG1sH8
      PHdT1vA0iKLiouI+/WxtJwA2Y3FKcHjzJz+lX6ucsW5V+dKZuIWKDvuJQsJb1qJb
      yiuEZdWy5iLOON0m10AX3WyfxT8A5NWkCBVH6K6IYOiJcBFGVfGXpP3kc1g8NqKd
      K1DU5qILAZENMZLGKJfrwyxm
      -----END PRIVATE KEY-----
  - path: /root/deploy.sh
    permissions: '0755'
    content: |
      #!/bin/bash
      set -e
      # Clone repository
      cd /opt
      # IMPORTANT: Replace YOUR_GITHUB_TOKEN with a valid GitHub Personal Access Token with 'repo' permissions
      GITHUB_TOKEN="YOUR_GITHUB_TOKEN"
      git clone https://${GITHUB_TOKEN}@github.com/UserIsMH/v2-Docker.git
      cd v2-Docker
      # Remove token from git config
      git remote set-url origin https://github.com/UserIsMH/v2-Docker.git
      # Update nginx.conf with correct domains
      sed -i 's/admin-panel-undso\.z5m7q9dk3ah2v1plx6ju\.com/admin-panel-undso.intelsight.de/g' v2_nginx/nginx.conf
      sed -i 's/api-software-undso\.z5m7q9dk3ah2v1plx6ju\.com/api-software-undso.intelsight.de/g' v2_nginx/nginx.conf
      # Update .env file
      sed -i 's/API_DOMAIN=.*/API_DOMAIN=api-software-undso.intelsight.de/' v2/.env
      sed -i 's/ADMIN_PANEL_DOMAIN=.*/ADMIN_PANEL_DOMAIN=admin-panel-undso.intelsight.de/' v2/.env
      # Copy SSL certificates
      mkdir -p v2_nginx/ssl
      cp /etc/ssl/certs/fullchain.pem v2_nginx/ssl/
      cp /etc/ssl/private/privkey.pem v2_nginx/ssl/
      chmod 644 v2_nginx/ssl/fullchain.pem
      chmod 600 v2_nginx/ssl/privkey.pem
      # Generate DH parameters if they do not exist
      if [ ! -f v2_nginx/ssl/dhparam.pem ]; then
        openssl dhparam -out v2_nginx/ssl/dhparam.pem 2048
      fi
      # Start Docker services
      cd v2
      docker compose pull
      docker compose up -d
      # Wait for services to be ready
      sleep 30
      # Check if services are running
      docker compose ps
      # Enable auto-start
      cat > /etc/systemd/system/docker-compose-app.service <<EOF
      [Unit]
      Description=Docker Compose Application Service
      Requires=docker.service
      After=docker.service
      [Service]
      Type=oneshot
      RemainAfterExit=yes
      WorkingDirectory=/opt/v2-Docker/v2
      ExecStart=/usr/bin/docker compose up -d
      ExecStop=/usr/bin/docker compose down
      TimeoutStartSec=0
      [Install]
      WantedBy=multi-user.target
      EOF
      systemctl enable docker-compose-app
  - path: /etc/fail2ban/jail.local
    permissions: '0644'
    content: |
      [DEFAULT]
      bantime = 3600
      findtime = 600
      maxretry = 5
      [sshd]
      enabled = true
      port = ssh
      filter = sshd
      logpath = /var/log/auth.log
      maxretry = 3
swap:
  filename: /swapfile
  size: 2G
  maxsize: 2G
runcmd:
  - chmod 600 /etc/ssl/private/privkey.pem
  - /root/install-docker.sh
  - ufw allow 22/tcp
  - ufw allow 80/tcp
  - ufw allow 443/tcp
  - echo "y" | ufw enable
  - systemctl enable fail2ban
  - systemctl start fail2ban
  - /root/deploy.sh
  - echo "Deployment complete!" > /root/deployment.log
  - reboot
final_message: "The system is finally up, after $UPTIME seconds"

create_full_backup.sh (new executable file)

@@ -0,0 +1,118 @@
#!/bin/bash
# Full Server Backup Script for V2-Docker
# Creates comprehensive backup including configs, database, volumes, and git status
set -e # Exit on error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
BACKUP_BASE_DIR="/opt/v2-Docker/server-backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="$BACKUP_BASE_DIR/server_backup_$TIMESTAMP"
PROJECT_ROOT="/opt/v2-Docker"
# GitHub configuration
GITHUB_REMOTE="backup"
GITHUB_BRANCH="main"
GIT_PATH="/home/root/.local/bin"
echo -e "${GREEN}Starting V2-Docker Full Server Backup...${NC}"
echo "Backup directory: $BACKUP_DIR"
# Create backup directory structure
mkdir -p "$BACKUP_DIR"/{configs,volumes}
# 1. Backup configuration files
echo -e "${YELLOW}Backing up configuration files...${NC}"
cp "$PROJECT_ROOT/v2/docker-compose.yaml" "$BACKUP_DIR/configs/" 2>/dev/null || echo "docker-compose.yaml not found"
cp "$PROJECT_ROOT/v2/.env" "$BACKUP_DIR/configs/" 2>/dev/null || echo ".env not found"
cp "$PROJECT_ROOT/v2_nginx/nginx.conf" "$BACKUP_DIR/configs/" 2>/dev/null || echo "nginx.conf not found"
# Backup SSL certificates
if [ -d "$PROJECT_ROOT/v2_nginx/ssl" ]; then
mkdir -p "$BACKUP_DIR/configs/ssl"
cp -r "$PROJECT_ROOT/v2_nginx/ssl/"* "$BACKUP_DIR/configs/ssl/" 2>/dev/null || echo "SSL files not found"
fi
# 2. Capture Git status and recent commits
echo -e "${YELLOW}Capturing Git information...${NC}"
cd "$PROJECT_ROOT"
git status > "$BACKUP_DIR/git_status.txt" 2>&1
git log --oneline -50 > "$BACKUP_DIR/git_recent_commits.txt" 2>&1
# 3. Capture Docker status
echo -e "${YELLOW}Capturing Docker status...${NC}"
docker ps -a > "$BACKUP_DIR/docker_containers.txt" 2>&1
cd "$PROJECT_ROOT/v2" && docker-compose ps > "$BACKUP_DIR/docker_compose_status.txt" 2>&1 && cd "$PROJECT_ROOT"
# 4. Backup PostgreSQL database
echo -e "${YELLOW}Backing up PostgreSQL database...${NC}"
DB_CONTAINER="db" # corrected container name
DB_NAME="meinedatenbank" # corrected database name
DB_USER="adminuser" # corrected user
# Get DB password from .env or environment
if [ -f "$PROJECT_ROOT/v2/.env" ]; then
source "$PROJECT_ROOT/v2/.env"
fi
DB_PASS="${POSTGRES_PASSWORD:-supergeheimespasswort}"
# Create database dump
docker exec "$DB_CONTAINER" pg_dump -U "$DB_USER" -d "$DB_NAME" | gzip > "$BACKUP_DIR/database_backup.sql.gz"
# 5. Backup Docker volumes
echo -e "${YELLOW}Backing up Docker volumes...${NC}"
# PostgreSQL data volume
docker run --rm -v postgres_data:/data -v "$BACKUP_DIR/volumes":/backup alpine tar czf /backup/postgres_data.tar.gz -C /data .
# Create backup info file
cat > "$BACKUP_DIR/backup_info.txt" << EOF
V2-Docker Server Backup
Created: $(date)
Timestamp: $TIMESTAMP
Type: Full Server Backup
Contents:
- Configuration files (docker-compose, nginx, SSL)
- PostgreSQL database dump
- Docker volumes
- Git status and history
- Docker container status
EOF
# Calculate backup size
BACKUP_SIZE=$(du -sh "$BACKUP_DIR" | cut -f1)
echo -e "${GREEN}Backup created successfully!${NC}"
echo "Size: $BACKUP_SIZE"
# Push to GitHub if requested (default: yes)
if [ "${SKIP_GITHUB:-no}" != "yes" ]; then
echo -e "${YELLOW}Pushing backup to GitHub...${NC}"
# Create tar archive for GitHub
cd "$BACKUP_BASE_DIR"
TAR_FILE="server_backup_$TIMESTAMP.tar.gz"
tar czf "$TAR_FILE" "server_backup_$TIMESTAMP"
# Push to GitHub
cd "$PROJECT_ROOT"
PATH="$GIT_PATH:$PATH" git pull "$GITHUB_REMOTE" "$GITHUB_BRANCH" --rebase 2>/dev/null || true
PATH="$GIT_PATH:$PATH" git add "server-backups/$TAR_FILE"
PATH="$GIT_PATH:$PATH" git commit -m "Server backup $TIMESTAMP - Full system backup before changes"
PATH="$GIT_PATH:$PATH" git push "$GITHUB_REMOTE" "$GITHUB_BRANCH"
echo -e "${GREEN}Backup pushed to GitHub successfully!${NC}"
# Keep local copy for manual backups (for quick rollback)
# Automated backups will delete this later
echo -e "${YELLOW}Local backup kept at: $BACKUP_DIR${NC}"
else
echo -e "${YELLOW}Skipped GitHub push (SKIP_GITHUB=yes)${NC}"
fi
echo "Backup file: $BACKUP_DIR"
echo -e "${GREEN}Backup completed successfully!${NC}"

generate-secrets.py (new file)

@@ -0,0 +1,35 @@
#!/usr/bin/env python3
import secrets
import string
def generate_password(length=16):
"""Generate a secure random password"""
alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
return ''.join(secrets.choice(alphabet) for _ in range(length))
def generate_jwt_secret(length=64):
"""Generate a secure JWT secret"""
return secrets.token_urlsafe(length)
print("=== Generated Secure Secrets for Production ===")
print()
print("# PostgreSQL Database")
print(f"POSTGRES_PASSWORD={generate_password(20)}")
print()
print("# Admin Panel Users (save these securely!)")
print(f"ADMIN1_PASSWORD={generate_password(16)}")
print(f"ADMIN2_PASSWORD={generate_password(16)}")
print()
print("# JWT Secret")
print(f"JWT_SECRET={generate_jwt_secret()}")
print()
print("# Grafana")
print(f"GRAFANA_PASSWORD={generate_password(16)}")
print()
print("# For v2_lizenzserver/.env")
print(f"SECRET_KEY={secrets.token_hex(32)}")
print()
print("=== IMPORTANT ===")
print("1. Save these passwords securely")
print("2. Update both .env files with these values")
print("3. Never commit these to git")

lizenzserver/.env.example (new file)

@@ -0,0 +1,30 @@
# Database Configuration
DB_PASSWORD=secure_password_change_this
# Redis Configuration
REDIS_PASSWORD=redis_password_change_this
# RabbitMQ Configuration
RABBITMQ_USER=admin
RABBITMQ_PASS=admin_password_change_this
# JWT Configuration
JWT_SECRET=change_this_very_secret_key_in_production
# Admin Configuration
ADMIN_SECRET=change_this_admin_secret
ADMIN_API_KEY=admin-key-change-in-production
# Flask Environment
FLASK_ENV=production
# Rate Limiting (optional overrides)
# DEFAULT_RATE_LIMIT_PER_MINUTE=60
# DEFAULT_RATE_LIMIT_PER_HOUR=1000
# DEFAULT_RATE_LIMIT_PER_DAY=10000
# Service URLs (for external access)
# AUTH_SERVICE_URL=http://localhost:5001
# LICENSE_API_URL=http://localhost:5002
# ANALYTICS_SERVICE_URL=http://localhost:5003
# ADMIN_API_URL=http://localhost:5004


@@ -0,0 +1,31 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . /app/
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 5004
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:5004/health || exit 1
# Run the application
CMD ["python", "services/admin_api/app.py"]


@@ -0,0 +1,31 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . /app/
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 5003
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:5003/health || exit 1
# Run the application
CMD ["python", "services/analytics/app.py"]

lizenzserver/Dockerfile.auth (new file)

@@ -0,0 +1,31 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . /app/
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 5001
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:5001/health || exit 1
# Run the application
CMD ["python", "services/auth/app.py"]


@@ -0,0 +1,31 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
postgresql-client \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . /app/
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 5002
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:5002/health || exit 1
# Run the application
CMD ["python", "services/license_api/app.py"]

lizenzserver/Makefile (new file)

@@ -0,0 +1,86 @@
.PHONY: help build up down restart logs ps clean test
# Default target
help:
@echo "License Server Management Commands:"
@echo " make build - Build all Docker images"
@echo " make up - Start all services"
@echo " make down - Stop all services"
@echo " make restart - Restart all services"
@echo " make logs - View logs from all services"
@echo " make ps - List running containers"
@echo " make clean - Remove containers and volumes"
@echo " make test - Run tests"
@echo " make init-db - Initialize database schema"
# Build all Docker images
build:
docker-compose build
# Start all services
up:
docker-compose up -d
@echo "Waiting for services to be healthy..."
@sleep 10
@echo "Services are running!"
@echo "Auth Service: http://localhost:5001"
@echo "License API: http://localhost:5002"
@echo "Analytics: http://localhost:5003"
@echo "Admin API: http://localhost:5004"
@echo "RabbitMQ Management: http://localhost:15672"
# Stop all services
down:
docker-compose down
# Restart all services
restart: down up
# View logs
logs:
docker-compose logs -f
# List containers
ps:
docker-compose ps
# Clean up everything
clean:
docker-compose down -v
docker system prune -f
# Run tests
test:
@echo "Running API tests..."
@python tests/test_api.py
# Initialize database
init-db:
@echo "Initializing database schema..."
docker-compose exec postgres psql -U license_admin -d licenses -f /docker-entrypoint-initdb.d/init.sql
# Service-specific commands
logs-auth:
docker-compose logs -f auth_service
logs-license:
docker-compose logs -f license_api
logs-analytics:
docker-compose logs -f analytics_service
logs-admin:
docker-compose logs -f admin_api
# Development commands
dev:
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
shell-auth:
docker-compose exec auth_service /bin/bash
shell-license:
docker-compose exec license_api /bin/bash
shell-db:
docker-compose exec postgres psql -U license_admin -d licenses

lizenzserver/config.py (new file)

@@ -0,0 +1,89 @@
import os
from datetime import timedelta
class Config:
"""Base configuration with sensible defaults"""
# Database
DATABASE_URL = os.getenv('DATABASE_URL', 'postgresql://admin:adminpass@localhost:5432/v2')
# Redis
REDIS_URL = os.getenv('REDIS_URL', 'redis://localhost:6379')
# RabbitMQ
RABBITMQ_URL = os.getenv('RABBITMQ_URL', 'amqp://guest:guest@localhost:5672')
# JWT
JWT_SECRET = os.getenv('JWT_SECRET', 'change-this-in-production')
JWT_ALGORITHM = 'HS256'
JWT_ACCESS_TOKEN_EXPIRES = timedelta(hours=1)
JWT_REFRESH_TOKEN_EXPIRES = timedelta(days=30)
# API Rate Limiting
DEFAULT_RATE_LIMIT_PER_MINUTE = 60
DEFAULT_RATE_LIMIT_PER_HOUR = 1000
DEFAULT_RATE_LIMIT_PER_DAY = 10000
# Offline tokens
MAX_OFFLINE_TOKEN_DURATION_HOURS = 72
DEFAULT_OFFLINE_TOKEN_DURATION_HOURS = 24
# Heartbeat settings
HEARTBEAT_INTERVAL_SECONDS = 300 # 5 minutes
HEARTBEAT_TIMEOUT_SECONDS = 900 # 15 minutes
# Session settings
MAX_CONCURRENT_SESSIONS = 1
SESSION_TIMEOUT_MINUTES = 30
# Cache TTL
CACHE_TTL_VALIDATION = 300 # 5 minutes
CACHE_TTL_LICENSE_STATUS = 60 # 1 minute
CACHE_TTL_DEVICE_LIST = 300 # 5 minutes
# Anomaly detection thresholds
ANOMALY_RAPID_HARDWARE_CHANGE_MINUTES = 10
ANOMALY_MULTIPLE_IPS_THRESHOLD = 5
ANOMALY_GEO_DISTANCE_KM = 1000
# Logging
LOG_LEVEL = os.getenv('LOG_LEVEL', 'INFO')
LOG_FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    # Service ports: each container exports its own PORT, so all four constants
    # read the same variable; the distinct defaults apply only when PORT is unset.
    AUTH_SERVICE_PORT = int(os.getenv('PORT', 5001))
    LICENSE_API_PORT = int(os.getenv('PORT', 5002))
    ANALYTICS_SERVICE_PORT = int(os.getenv('PORT', 5003))
    ADMIN_API_PORT = int(os.getenv('PORT', 5004))
class DevelopmentConfig(Config):
"""Development configuration"""
DEBUG = True
TESTING = False
class ProductionConfig(Config):
    """Production configuration"""
    DEBUG = False
    TESTING = False
    # JWT_SECRET is required in production. Reading os.environ['JWT_SECRET']
    # directly here would raise KeyError at import time even outside production,
    # so fall back to the base default and rely on deployment to set the variable.
    JWT_SECRET = os.getenv('JWT_SECRET', Config.JWT_SECRET)
class TestingConfig(Config):
"""Testing configuration"""
DEBUG = True
TESTING = True
DATABASE_URL = 'postgresql://admin:adminpass@localhost:5432/v2_test'
# Configuration dictionary
config = {
'development': DevelopmentConfig,
'production': ProductionConfig,
'testing': TestingConfig,
'default': DevelopmentConfig
}
def get_config():
"""Get configuration based on environment"""
env = os.getenv('FLASK_ENV', 'development')
return config.get(env, config['default'])
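The environment-driven selection in `get_config` can be exercised in isolation. A minimal self-contained sketch of the same pattern (the class and function names here are illustrative, not part of the module):

```python
import os

class Base:
    DEBUG = False

class Dev(Base):
    DEBUG = True

class Prod(Base):
    DEBUG = False

_registry = {'development': Dev, 'production': Prod, 'default': Dev}

def pick_config(env_value=None):
    """Mirror get_config: read FLASK_ENV, fall back to the default entry."""
    env = env_value or os.getenv('FLASK_ENV', 'development')
    return _registry.get(env, _registry['default'])

print(pick_config('production').DEBUG)   # False
print(pick_config('nonsense') is Dev)    # True: unknown values fall back
```

Note that unknown or missing `FLASK_ENV` values silently select the development configuration, which is why production deployments should set the variable explicitly.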


@@ -0,0 +1,123 @@
version: '3.8'
services:
license-auth:
build: ./services/auth
container_name: license-auth
environment:
- JWT_SECRET=${JWT_SECRET:-your-secret-key-change-in-production}
- DATABASE_URL=postgresql://admin:adminpass@postgres:5432/v2
- REDIS_URL=redis://redis:6379
- PORT=5001
ports:
- "5001:5001"
depends_on:
- postgres
- redis
networks:
- v2_network
restart: unless-stopped
license-api:
build: ./services/license_api
container_name: license-api
environment:
- DATABASE_URL=postgresql://admin:adminpass@postgres:5432/v2
- REDIS_URL=redis://redis:6379
- RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672
- JWT_SECRET=${JWT_SECRET:-your-secret-key-change-in-production}
- PORT=5002
ports:
- "5002:5002"
depends_on:
- postgres
- redis
- rabbitmq
networks:
- v2_network
restart: unless-stopped
license-analytics:
build: ./services/analytics
container_name: license-analytics
environment:
- DATABASE_URL=postgresql://admin:adminpass@postgres:5432/v2
- REDIS_URL=redis://redis:6379
- RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672
- PORT=5003
ports:
- "5003:5003"
depends_on:
- postgres
- redis
- rabbitmq
networks:
- v2_network
restart: unless-stopped
license-admin-api:
build: ./services/admin_api
container_name: license-admin-api
environment:
- DATABASE_URL=postgresql://admin:adminpass@postgres:5432/v2
- REDIS_URL=redis://redis:6379
- RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672
- JWT_SECRET=${JWT_SECRET:-your-secret-key-change-in-production}
- PORT=5004
ports:
- "5004:5004"
depends_on:
- postgres
- redis
- rabbitmq
networks:
- v2_network
restart: unless-stopped
postgres:
image: postgres:15-alpine
container_name: license-postgres
environment:
- POSTGRES_DB=v2
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=adminpass
volumes:
- postgres_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- v2_network
restart: unless-stopped
redis:
image: redis:7-alpine
container_name: license-redis
command: redis-server --appendonly yes
volumes:
- redis_data:/data
networks:
- v2_network
restart: unless-stopped
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: license-rabbitmq
environment:
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
ports:
- "5672:5672"
- "15672:15672"
volumes:
- rabbitmq_data:/var/lib/rabbitmq
networks:
- v2_network
restart: unless-stopped
volumes:
postgres_data:
redis_data:
rabbitmq_data:
networks:
v2_network:
external: true


@@ -0,0 +1,191 @@
version: '3.8'
services:
# PostgreSQL Database
postgres:
image: postgres:15-alpine
container_name: license_postgres
environment:
POSTGRES_DB: licenses
POSTGRES_USER: license_admin
POSTGRES_PASSWORD: ${DB_PASSWORD:-secure_password}
volumes:
- postgres_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U license_admin -d licenses"]
interval: 10s
timeout: 5s
retries: 5
# Redis Cache
redis:
image: redis:7-alpine
container_name: license_redis
command: redis-server --requirepass ${REDIS_PASSWORD:-redis_password}
ports:
- "6379:6379"
volumes:
- redis_data:/data
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 5s
retries: 5
# RabbitMQ Message Broker
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: license_rabbitmq
environment:
RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER:-admin}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASS:-admin_password}
ports:
- "5672:5672"
- "15672:15672" # Management UI
volumes:
- rabbitmq_data:/var/lib/rabbitmq
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "ping"]
interval: 10s
timeout: 5s
retries: 5
# Auth Service
auth_service:
build:
context: .
dockerfile: Dockerfile.auth
container_name: license_auth
environment:
DATABASE_URL: postgresql://license_admin:${DB_PASSWORD:-secure_password}@postgres:5432/licenses
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password}@redis:6379
RABBITMQ_URL: amqp://${RABBITMQ_USER:-admin}:${RABBITMQ_PASS:-admin_password}@rabbitmq:5672
JWT_SECRET: ${JWT_SECRET:-change_this_in_production}
ADMIN_SECRET: ${ADMIN_SECRET:-change_this_admin_secret}
FLASK_ENV: ${FLASK_ENV:-production}
PORT: 5001
ports:
- "5001:5001"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
interval: 30s
timeout: 10s
retries: 3
# License API Service
license_api:
build:
context: .
dockerfile: Dockerfile.license
container_name: license_api
environment:
DATABASE_URL: postgresql://license_admin:${DB_PASSWORD:-secure_password}@postgres:5432/licenses
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password}@redis:6379
RABBITMQ_URL: amqp://${RABBITMQ_USER:-admin}:${RABBITMQ_PASS:-admin_password}@rabbitmq:5672
JWT_SECRET: ${JWT_SECRET:-change_this_in_production}
FLASK_ENV: ${FLASK_ENV:-production}
PORT: 5002
ports:
- "5002:5002"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5002/health"]
interval: 30s
timeout: 10s
retries: 3
# Analytics Service
analytics_service:
build:
context: .
dockerfile: Dockerfile.analytics
container_name: license_analytics
environment:
DATABASE_URL: postgresql://license_admin:${DB_PASSWORD:-secure_password}@postgres:5432/licenses
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password}@redis:6379
RABBITMQ_URL: amqp://${RABBITMQ_USER:-admin}:${RABBITMQ_PASS:-admin_password}@rabbitmq:5672
FLASK_ENV: ${FLASK_ENV:-production}
PORT: 5003
ports:
- "5003:5003"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5003/health"]
interval: 30s
timeout: 10s
retries: 3
# Admin API Service
admin_api:
build:
context: .
dockerfile: Dockerfile.admin
container_name: license_admin_api
environment:
DATABASE_URL: postgresql://license_admin:${DB_PASSWORD:-secure_password}@postgres:5432/licenses
REDIS_URL: redis://:${REDIS_PASSWORD:-redis_password}@redis:6379
RABBITMQ_URL: amqp://${RABBITMQ_USER:-admin}:${RABBITMQ_PASS:-admin_password}@rabbitmq:5672
ADMIN_API_KEY: ${ADMIN_API_KEY:-admin-key-change-in-production}
FLASK_ENV: ${FLASK_ENV:-production}
PORT: 5004
ports:
- "5004:5004"
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5004/health"]
interval: 30s
timeout: 10s
retries: 3
# Nginx Reverse Proxy
nginx:
image: nginx:alpine
container_name: license_nginx
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
ports:
- "80:80"
- "443:443"
depends_on:
- auth_service
- license_api
- analytics_service
- admin_api
healthcheck:
test: ["CMD", "nginx", "-t"]
interval: 30s
timeout: 10s
retries: 3
volumes:
postgres_data:
redis_data:
rabbitmq_data:


@@ -0,0 +1 @@
# Events Module


@@ -0,0 +1,191 @@
import json
import logging
from typing import Dict, Any, Callable, List
from datetime import datetime
import pika
from pika.exceptions import AMQPConnectionError
import threading
from collections import defaultdict
logger = logging.getLogger(__name__)
class Event:
"""Base event class"""
def __init__(self, event_type: str, data: Dict[str, Any], source: str = "unknown"):
self.id = self._generate_id()
self.type = event_type
self.data = data
self.source = source
self.timestamp = datetime.utcnow().isoformat()
def _generate_id(self) -> str:
import uuid
return str(uuid.uuid4())
def to_dict(self) -> Dict[str, Any]:
return {
"id": self.id,
"type": self.type,
"data": self.data,
"source": self.source,
"timestamp": self.timestamp
}
def to_json(self) -> str:
return json.dumps(self.to_dict())
class EventBus:
"""Event bus for pub/sub pattern with RabbitMQ backend"""
def __init__(self, rabbitmq_url: str):
self.rabbitmq_url = rabbitmq_url
self.connection = None
self.channel = None
self.exchange_name = "license_events"
self.local_handlers: Dict[str, List[Callable]] = defaultdict(list)
self._connect()
def _connect(self):
"""Establish connection to RabbitMQ"""
try:
parameters = pika.URLParameters(self.rabbitmq_url)
self.connection = pika.BlockingConnection(parameters)
self.channel = self.connection.channel()
# Declare exchange
self.channel.exchange_declare(
exchange=self.exchange_name,
exchange_type='topic',
durable=True
)
logger.info("Connected to RabbitMQ")
except AMQPConnectionError as e:
logger.error(f"Failed to connect to RabbitMQ: {e}")
# Fallback to local-only event handling
self.connection = None
self.channel = None
def publish(self, event: Event):
"""Publish an event"""
try:
# Publish to RabbitMQ if connected
if self.channel and not self.channel.is_closed:
self.channel.basic_publish(
exchange=self.exchange_name,
routing_key=event.type,
body=event.to_json(),
properties=pika.BasicProperties(
delivery_mode=2, # Make message persistent
content_type='application/json'
)
)
logger.debug(f"Published event: {event.type}")
# Also handle local subscribers
self._handle_local_event(event)
except Exception as e:
logger.error(f"Error publishing event: {e}")
# Ensure local handlers still get called
self._handle_local_event(event)
def subscribe(self, event_type: str, handler: Callable):
"""Subscribe to an event type locally"""
self.local_handlers[event_type].append(handler)
logger.debug(f"Subscribed to {event_type}")
def subscribe_queue(self, event_types: List[str], queue_name: str, handler: Callable):
"""Subscribe to events via RabbitMQ queue"""
if not self.channel:
logger.warning("RabbitMQ not connected, falling back to local subscription")
for event_type in event_types:
self.subscribe(event_type, handler)
return
try:
# Declare queue
self.channel.queue_declare(queue=queue_name, durable=True)
# Bind queue to exchange for each event type
for event_type in event_types:
self.channel.queue_bind(
exchange=self.exchange_name,
queue=queue_name,
routing_key=event_type
)
# Set up consumer
def callback(ch, method, properties, body):
try:
event_data = json.loads(body)
event = Event(
event_type=event_data['type'],
data=event_data['data'],
source=event_data['source']
)
handler(event)
ch.basic_ack(delivery_tag=method.delivery_tag)
except Exception as e:
logger.error(f"Error handling event: {e}")
ch.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
self.channel.basic_consume(queue=queue_name, on_message_callback=callback)
# Start consuming in a separate thread
consumer_thread = threading.Thread(target=self.channel.start_consuming)
consumer_thread.daemon = True
consumer_thread.start()
logger.info(f"Started consuming from queue: {queue_name}")
except Exception as e:
logger.error(f"Error setting up queue subscription: {e}")
def _handle_local_event(self, event: Event):
"""Handle event with local subscribers"""
handlers = self.local_handlers.get(event.type, [])
for handler in handlers:
try:
handler(event)
except Exception as e:
logger.error(f"Error in event handler: {e}")
def close(self):
"""Close RabbitMQ connection"""
if self.connection and not self.connection.is_closed:
self.connection.close()
logger.info("Closed RabbitMQ connection")
# Event types
class EventTypes:
"""Centralized event type definitions"""
# License events
LICENSE_VALIDATED = "license.validated"
LICENSE_VALIDATION_FAILED = "license.validation.failed"
LICENSE_ACTIVATED = "license.activated"
LICENSE_DEACTIVATED = "license.deactivated"
LICENSE_TRANSFERRED = "license.transferred"
LICENSE_EXPIRED = "license.expired"
LICENSE_CREATED = "license.created"
LICENSE_UPDATED = "license.updated"
# Device events
DEVICE_ADDED = "device.added"
DEVICE_REMOVED = "device.removed"
DEVICE_BLOCKED = "device.blocked"
DEVICE_DEACTIVATED = "device.deactivated"
# Anomaly events
ANOMALY_DETECTED = "anomaly.detected"
ANOMALY_RESOLVED = "anomaly.resolved"
# Session events
SESSION_STARTED = "session.started"
SESSION_ENDED = "session.ended"
SESSION_EXPIRED = "session.expired"
# System events
RATE_LIMIT_EXCEEDED = "system.rate_limit_exceeded"
API_ERROR = "system.api_error"
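For reference, the local-only fallback path in `EventBus` (used when the AMQP connection fails) boils down to the following pattern. `LocalBus` is an illustrative name for this sketch, not part of the module:

```python
# Standalone sketch of EventBus's local-handler fallback: when RabbitMQ is
# unreachable, publish() still dispatches events to in-process subscribers.
from collections import defaultdict

class LocalBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, data):
        for handler in self.handlers.get(event_type, []):
            try:
                handler(data)  # one failing handler must not stop the rest
            except Exception as exc:
                print(f"handler error: {exc}")

bus = LocalBus()
seen = []
bus.subscribe("license.validated", seen.append)
bus.publish("license.validated", {"license_id": "abc"})
print(seen)  # [{'license_id': 'abc'}]
```

As in the real `_handle_local_event`, exceptions are caught per handler so one broken subscriber cannot suppress delivery to the others.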

lizenzserver/init.sql (new file)

@@ -0,0 +1,177 @@
-- License Server Database Schema
-- Following best practices: snake_case for DB fields, clear naming conventions
-- Enable UUID extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- License tokens for offline validation
CREATE TABLE IF NOT EXISTS license_tokens (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id UUID REFERENCES licenses(id) ON DELETE CASCADE,
token VARCHAR(512) NOT NULL UNIQUE,
hardware_id VARCHAR(255) NOT NULL,
valid_until TIMESTAMP NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_validated TIMESTAMP,
validation_count INTEGER DEFAULT 0
);
CREATE INDEX idx_token ON license_tokens(token);
CREATE INDEX idx_hardware ON license_tokens(hardware_id);
CREATE INDEX idx_valid_until ON license_tokens(valid_until);
-- Heartbeat tracking with partitioning support
CREATE TABLE IF NOT EXISTS license_heartbeats (
id BIGSERIAL,
license_id UUID REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id VARCHAR(255) NOT NULL,
ip_address INET,
user_agent VARCHAR(500),
app_version VARCHAR(50),
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
session_data JSONB,
PRIMARY KEY (id, timestamp)
) PARTITION BY RANGE (timestamp);
-- Create partitions for the current and next month
CREATE TABLE license_heartbeats_2025_01 PARTITION OF license_heartbeats
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE license_heartbeats_2025_02 PARTITION OF license_heartbeats
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
CREATE INDEX idx_heartbeat_license_time ON license_heartbeats(license_id, timestamp DESC);
CREATE INDEX idx_heartbeat_hardware_time ON license_heartbeats(hardware_id, timestamp DESC);
-- Activation events tracking
CREATE TABLE IF NOT EXISTS activation_events (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id UUID REFERENCES licenses(id) ON DELETE CASCADE,
event_type VARCHAR(50) NOT NULL CHECK (event_type IN ('activation', 'deactivation', 'reactivation', 'transfer')),
hardware_id VARCHAR(255),
previous_hardware_id VARCHAR(255),
ip_address INET,
user_agent VARCHAR(500),
success BOOLEAN DEFAULT true,
error_message TEXT,
metadata JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_license_events ON activation_events(license_id, created_at DESC);
CREATE INDEX idx_event_type ON activation_events(event_type, created_at DESC);
-- API rate limiting
CREATE TABLE IF NOT EXISTS api_rate_limits (
id SERIAL PRIMARY KEY,
api_key VARCHAR(255) NOT NULL UNIQUE,
requests_per_minute INTEGER DEFAULT 60,
requests_per_hour INTEGER DEFAULT 1000,
requests_per_day INTEGER DEFAULT 10000,
burst_size INTEGER DEFAULT 100,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Anomaly detection
CREATE TABLE IF NOT EXISTS anomaly_detections (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id UUID REFERENCES licenses(id),
anomaly_type VARCHAR(100) NOT NULL CHECK (anomaly_type IN ('multiple_ips', 'rapid_hardware_change', 'suspicious_pattern', 'concurrent_use', 'geo_anomaly')),
severity VARCHAR(20) NOT NULL CHECK (severity IN ('low', 'medium', 'high', 'critical')),
details JSONB NOT NULL,
detected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
resolved BOOLEAN DEFAULT false,
resolved_at TIMESTAMP,
resolved_by VARCHAR(255),
action_taken TEXT
);
CREATE INDEX idx_unresolved ON anomaly_detections(resolved, severity, detected_at DESC);
CREATE INDEX idx_license_anomalies ON anomaly_detections(license_id, detected_at DESC);
-- API clients for authentication
CREATE TABLE IF NOT EXISTS api_clients (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
client_name VARCHAR(255) NOT NULL,
api_key VARCHAR(255) NOT NULL UNIQUE,
secret_key VARCHAR(255) NOT NULL,
is_active BOOLEAN DEFAULT true,
allowed_endpoints TEXT[],
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Feature flags for gradual rollout
CREATE TABLE IF NOT EXISTS feature_flags (
id SERIAL PRIMARY KEY,
feature_name VARCHAR(100) NOT NULL UNIQUE,
is_enabled BOOLEAN DEFAULT false,
rollout_percentage INTEGER DEFAULT 0 CHECK (rollout_percentage >= 0 AND rollout_percentage <= 100),
whitelist_license_ids UUID[],
blacklist_license_ids UUID[],
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Insert default feature flags
INSERT INTO feature_flags (feature_name, is_enabled, rollout_percentage) VALUES
('anomaly_detection', true, 100),
('offline_tokens', true, 100),
('advanced_analytics', false, 0),
('geo_restriction', false, 0)
ON CONFLICT (feature_name) DO NOTHING;
-- Session management for concurrent use tracking
CREATE TABLE IF NOT EXISTS active_sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id UUID REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id VARCHAR(255) NOT NULL,
session_token VARCHAR(512) NOT NULL UNIQUE,
ip_address INET,
started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
expires_at TIMESTAMP NOT NULL
);
CREATE INDEX idx_session_license ON active_sessions(license_id);
CREATE INDEX idx_session_expires ON active_sessions(expires_at);
-- Update trigger for updated_at columns
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ language 'plpgsql';
CREATE TRIGGER update_api_rate_limits_updated_at BEFORE UPDATE ON api_rate_limits
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
CREATE TRIGGER update_api_clients_updated_at BEFORE UPDATE ON api_clients
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
CREATE TRIGGER update_feature_flags_updated_at BEFORE UPDATE ON feature_flags
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
-- Function to automatically create monthly partitions for heartbeats
CREATE OR REPLACE FUNCTION create_monthly_partition()
RETURNS void AS $$
DECLARE
start_date date;
end_date date;
partition_name text;
BEGIN
start_date := date_trunc('month', CURRENT_DATE + interval '1 month');
end_date := start_date + interval '1 month';
partition_name := 'license_heartbeats_' || to_char(start_date, 'YYYY_MM');
EXECUTE format('CREATE TABLE IF NOT EXISTS %I PARTITION OF license_heartbeats FOR VALUES FROM (%L) TO (%L)',
partition_name, start_date, end_date);
END;
$$ LANGUAGE plpgsql;
-- Create a scheduled job to create partitions (requires pg_cron extension)
-- This is a placeholder - actual scheduling depends on your PostgreSQL setup
-- SELECT cron.schedule('create-partitions', '0 0 1 * *', 'SELECT create_monthly_partition();');
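The partition function above derives the table name from the first day of the next month via `to_char(start_date, 'YYYY_MM')`. The same naming logic can be sketched in isolation (the function name `partition_name_for` is illustrative, not part of the schema):

```python
from datetime import date

def partition_name_for(day: date) -> str:
    # Mirrors 'license_heartbeats_' || to_char(start_date, 'YYYY_MM')
    return f"license_heartbeats_{day.year:04d}_{day.month:02d}"

print(partition_name_for(date(2025, 7, 1)))  # license_heartbeats_2025_07
```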


@@ -0,0 +1 @@
# Middleware Module


@@ -0,0 +1,158 @@
import time
from functools import wraps
from flask import request, jsonify
import redis
from typing import Optional, Tuple
import logging
logger = logging.getLogger(__name__)
class RateLimiter:
"""Rate limiting middleware using Redis"""
def __init__(self, redis_url: str):
self.redis_client = None
try:
self.redis_client = redis.from_url(redis_url, decode_responses=True)
self.redis_client.ping()
logger.info("Connected to Redis for rate limiting")
except Exception as e:
logger.warning(f"Redis not available for rate limiting: {e}")
def limit(self, requests_per_minute: int = 60, requests_per_hour: int = 1000):
"""Decorator for rate limiting endpoints"""
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if not self.redis_client:
# Redis not available, skip rate limiting
return f(*args, **kwargs)
# Get client identifier (API key or IP)
client_id = self._get_client_id()
# Check rate limits
is_allowed, retry_after = self._check_rate_limit(
client_id,
requests_per_minute,
requests_per_hour
)
if not is_allowed:
response = jsonify({
"error": "Rate limit exceeded",
"retry_after": retry_after
})
response.status_code = 429
response.headers['Retry-After'] = str(retry_after)
response.headers['X-RateLimit-Limit'] = str(requests_per_minute)
return response
# Add rate limit headers
response = f(*args, **kwargs)
if hasattr(response, 'headers'):
response.headers['X-RateLimit-Limit'] = str(requests_per_minute)
response.headers['X-RateLimit-Remaining'] = str(
self._get_remaining_requests(client_id, requests_per_minute)
)
return response
return decorated_function
return decorator
def _get_client_id(self) -> str:
"""Get client identifier from request"""
# First try API key
api_key = request.headers.get('X-API-Key')
if api_key:
return f"api_key:{api_key}"
# Then try auth token
auth_header = request.headers.get('Authorization')
if auth_header and auth_header.startswith('Bearer '):
return f"token:{auth_header[7:39]}"  # first 32 chars of the token
# Fallback to IP
if request.headers.get('X-Forwarded-For'):
ip = request.headers.get('X-Forwarded-For').split(',')[0]
else:
ip = request.remote_addr
return f"ip:{ip}"
def _check_rate_limit(self, client_id: str,
requests_per_minute: int,
requests_per_hour: int) -> Tuple[bool, Optional[int]]:
"""Check if request is within rate limits"""
now = int(time.time())
# Check minute limit
minute_key = f"rate_limit:minute:{client_id}:{now // 60}"
minute_count = self.redis_client.incr(minute_key)
self.redis_client.expire(minute_key, 60)
if minute_count > requests_per_minute:
retry_after = 60 - (now % 60)
return False, retry_after
# Check hour limit
hour_key = f"rate_limit:hour:{client_id}:{now // 3600}"
hour_count = self.redis_client.incr(hour_key)
self.redis_client.expire(hour_key, 3600)
if hour_count > requests_per_hour:
retry_after = 3600 - (now % 3600)
return False, retry_after
return True, None
def _get_remaining_requests(self, client_id: str, limit: int) -> int:
"""Get remaining requests in current minute"""
now = int(time.time())
minute_key = f"rate_limit:minute:{client_id}:{now // 60}"
try:
current_count = int(self.redis_client.get(minute_key) or 0)
return max(0, limit - current_count)
except Exception:
return limit
class APIKeyRateLimiter(RateLimiter):
"""Extended rate limiter with API key specific limits"""
def __init__(self, redis_url: str, db_repo):
super().__init__(redis_url)
self.db_repo = db_repo
def limit_by_api_key(self):
"""Rate limit based on API key configuration"""
def decorator(f):
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
# Use default limits for non-API key requests
return self.limit()(f)(*args, **kwargs)
# Get API key configuration from database
query = """
SELECT rate_limit_per_minute, rate_limit_per_hour
FROM api_clients
WHERE api_key = %s AND is_active = true
"""
client = self.db_repo.execute_one(query, (api_key,))
if not client:
return jsonify({"error": "Invalid API key"}), 401
# Use custom limits or defaults
rpm = client.get('rate_limit_per_minute', 60)
rph = client.get('rate_limit_per_hour', 1000)
return self.limit(rpm, rph)(f)(*args, **kwargs)
return decorated_function
return decorator
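The limiter uses fixed windows keyed by `now // 60` (and `now // 3600` for the hourly limit), and `Retry-After` counts the seconds until the current window rolls over. That arithmetic can be checked standalone (the helper `minute_window` is illustrative, not part of the module):

```python
def minute_window(now: int, client_id: str) -> tuple[str, int]:
    # Same key scheme and Retry-After computation as _check_rate_limit above
    key = f"rate_limit:minute:{client_id}:{now // 60}"
    retry_after = 60 - (now % 60)  # seconds until the window rolls over
    return key, retry_after

print(minute_window(125, "ip:10.0.0.1"))
# ('rate_limit:minute:ip:10.0.0.1:2', 55)
```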


@@ -0,0 +1,127 @@
from datetime import datetime
from typing import Optional, List, Dict, Any
from dataclasses import dataclass, field
from enum import Enum
class EventType(Enum):
"""License event types"""
ACTIVATION = "activation"
DEACTIVATION = "deactivation"
REACTIVATION = "reactivation"
TRANSFER = "transfer"
class AnomalyType(Enum):
"""Anomaly detection types"""
MULTIPLE_IPS = "multiple_ips"
RAPID_HARDWARE_CHANGE = "rapid_hardware_change"
SUSPICIOUS_PATTERN = "suspicious_pattern"
CONCURRENT_USE = "concurrent_use"
GEO_ANOMALY = "geo_anomaly"
class Severity(Enum):
"""Anomaly severity levels"""
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
@dataclass
class License:
"""License domain model"""
id: str
license_key: str
customer_id: str
max_devices: int
is_active: bool
is_test: bool
created_at: datetime
updated_at: datetime
expires_at: Optional[datetime] = None
features: List[str] = field(default_factory=list)
metadata: Dict[str, Any] = field(default_factory=dict)
@dataclass
class LicenseToken:
"""Offline validation token"""
id: str
license_id: str
token: str
hardware_id: str
valid_until: datetime
created_at: datetime
last_validated: Optional[datetime] = None
validation_count: int = 0
@dataclass
class Heartbeat:
"""License heartbeat"""
id: int
license_id: str
hardware_id: str
ip_address: Optional[str]
user_agent: Optional[str]
app_version: Optional[str]
timestamp: datetime
session_data: Dict[str, Any] = field(default_factory=dict)
@dataclass
class ActivationEvent:
    """License activation event"""
    id: str
    license_id: str
    event_type: EventType
    hardware_id: Optional[str]
    previous_hardware_id: Optional[str]
    ip_address: Optional[str]
    user_agent: Optional[str]
    success: bool
    error_message: Optional[str]
    created_at: datetime
    metadata: Dict[str, Any] = field(default_factory=dict)
@dataclass
class AnomalyDetection:
"""Detected anomaly"""
id: str
license_id: str
anomaly_type: AnomalyType
severity: Severity
details: Dict[str, Any]
detected_at: datetime
resolved: bool = False
resolved_at: Optional[datetime] = None
resolved_by: Optional[str] = None
action_taken: Optional[str] = None
@dataclass
class Session:
"""Active session"""
id: str
license_id: str
hardware_id: str
session_token: str
ip_address: Optional[str]
started_at: datetime
last_seen: datetime
expires_at: datetime
@dataclass
class ValidationRequest:
"""License validation request"""
license_key: str
hardware_id: str
app_version: Optional[str] = None
ip_address: Optional[str] = None
user_agent: Optional[str] = None
@dataclass
class ValidationResponse:
"""License validation response"""
valid: bool
license_id: Optional[str] = None
token: Optional[str] = None
expires_at: Optional[datetime] = None
features: List[str] = field(default_factory=list)
limits: Dict[str, Any] = field(default_factory=dict)
error: Optional[str] = None
error_code: Optional[str] = None

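In these dataclasses, fields without defaults must precede fields with defaults (such as `metadata` with its `default_factory`), or Python raises a `TypeError` at class definition time. A minimal illustrative type shows the required ordering:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict

@dataclass
class Event:
    # Required fields first; defaulted fields (like metadata) must come last
    license_id: str
    created_at: datetime
    metadata: Dict[str, Any] = field(default_factory=dict)

e = Event("lic-1", datetime(2025, 1, 1))
print(e.metadata)  # {}
```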
lizenzserver/nginx.conf (167 lines, new file)

@@ -0,0 +1,167 @@
events {
worker_connections 1024;
}
http {
# Basic settings
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Logging
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_types text/plain text/css text/xml application/json application/javascript application/xml+rss;
# Rate limiting zones
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $http_x_api_key zone=key_limit:10m rate=100r/s;
# Upstream services
upstream auth_service {
server auth_service:5001;
}
upstream license_api {
server license_api:5002;
}
upstream analytics_service {
server analytics_service:5003;
}
upstream admin_api {
server admin_api:5004;
}
# Main server block
server {
listen 80;
server_name localhost;
# Security headers
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# API versioning and routing
location /api/v1/auth/ {
limit_req zone=api_limit burst=20 nodelay;
proxy_pass http://auth_service/api/v1/auth/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# CORS headers
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,X-API-Key' always;
if ($request_method = 'OPTIONS') {
return 204;
}
}
location /api/v1/license/ {
limit_req zone=key_limit burst=50 nodelay;
proxy_pass http://license_api/api/v1/license/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# CORS headers
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,X-API-Key' always;
if ($request_method = 'OPTIONS') {
return 204;
}
}
location /api/v1/analytics/ {
limit_req zone=key_limit burst=30 nodelay;
proxy_pass http://analytics_service/api/v1/analytics/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# CORS headers
add_header 'Access-Control-Allow-Origin' '*' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,X-API-Key' always;
if ($request_method = 'OPTIONS') {
return 204;
}
}
location /api/v1/admin/ {
limit_req zone=key_limit burst=30 nodelay;
# Additional security for admin endpoints
# In production, add IP whitelisting here
proxy_pass http://admin_api/api/v1/admin/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# CORS headers (more restrictive for admin)
add_header 'Access-Control-Allow-Origin' '$http_origin' always;
add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, PATCH, DELETE, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,X-Admin-API-Key' always;
add_header 'Access-Control-Allow-Credentials' 'true' always;
if ($request_method = 'OPTIONS') {
return 204;
}
}
# Health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
# Root redirect
location / {
return 301 /api/v1/;
}
# API documentation
location /api/v1/ {
return 200 '{"message": "License Server API v1", "documentation": "/api/v1/docs"}';
add_header Content-Type application/json;
}
}
# HTTPS server block (for production)
# server {
# listen 443 ssl http2;
# server_name your-domain.com;
#
# ssl_certificate /etc/nginx/ssl/cert.pem;
# ssl_certificate_key /etc/nginx/ssl/key.pem;
# ssl_protocols TLSv1.2 TLSv1.3;
# ssl_ciphers HIGH:!aNULL:!MD5;
#
# # Same location blocks as above
# }
}


@@ -0,0 +1,94 @@
from abc import ABC, abstractmethod
from typing import Optional, List, Dict, Any
import psycopg2
from psycopg2.extras import RealDictCursor
from contextlib import contextmanager
import logging
logger = logging.getLogger(__name__)
class BaseRepository(ABC):
"""Base repository with common database operations"""
def __init__(self, db_url: str):
self.db_url = db_url
@contextmanager
def get_db_connection(self):
"""Get database connection with automatic cleanup"""
conn = None
try:
conn = psycopg2.connect(self.db_url)
yield conn
except Exception as e:
if conn:
conn.rollback()
logger.error(f"Database error: {e}")
raise
finally:
if conn:
conn.close()
@contextmanager
def get_db_cursor(self, conn):
"""Get database cursor with dict results"""
cursor = None
try:
cursor = conn.cursor(cursor_factory=RealDictCursor)
yield cursor
finally:
if cursor:
cursor.close()
def execute_query(self, query: str, params: tuple = None) -> List[Dict[str, Any]]:
"""Execute SELECT query and return results"""
with self.get_db_connection() as conn:
with self.get_db_cursor(conn) as cursor:
cursor.execute(query, params)
return cursor.fetchall()
def execute_one(self, query: str, params: tuple = None) -> Optional[Dict[str, Any]]:
"""Execute query and return single result"""
with self.get_db_connection() as conn:
with self.get_db_cursor(conn) as cursor:
cursor.execute(query, params)
return cursor.fetchone()
    def execute_insert(self, query: str, params: tuple = None) -> Optional[str]:
        """Execute INSERT query and return ID"""
        with self.get_db_connection() as conn:
            with self.get_db_cursor(conn) as cursor:
                # Only append RETURNING if the caller's query doesn't already include one
                if "RETURNING" not in query.upper():
                    query += " RETURNING id"
                cursor.execute(query, params)
                result = cursor.fetchone()
                conn.commit()
                return result['id'] if result else None
def execute_update(self, query: str, params: tuple = None) -> int:
"""Execute UPDATE query and return affected rows"""
with self.get_db_connection() as conn:
with self.get_db_cursor(conn) as cursor:
cursor.execute(query, params)
affected = cursor.rowcount
conn.commit()
return affected
def execute_delete(self, query: str, params: tuple = None) -> int:
"""Execute DELETE query and return affected rows"""
with self.get_db_connection() as conn:
with self.get_db_cursor(conn) as cursor:
cursor.execute(query, params)
affected = cursor.rowcount
conn.commit()
return affected
def execute_batch(self, queries: List[tuple]) -> None:
"""Execute multiple queries in a transaction"""
with self.get_db_connection() as conn:
with self.get_db_cursor(conn) as cursor:
try:
for query, params in queries:
cursor.execute(query, params)
conn.commit()
except Exception as e:
conn.rollback()
raise
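The connection helper rolls back on any error and always closes the connection. Its control flow can be exercised with a stand-in connection object, no real database required (the `FakeConn` type and standalone `get_db_connection` are illustrative):

```python
from contextlib import contextmanager

class FakeConn:
    def __init__(self):
        self.rolled_back = False
        self.closed = False
    def rollback(self): self.rolled_back = True
    def close(self): self.closed = True

@contextmanager
def get_db_connection(conn):
    # Same shape as BaseRepository.get_db_connection above
    try:
        yield conn
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

conn = FakeConn()
try:
    with get_db_connection(conn):
        raise RuntimeError("query failed")
except RuntimeError:
    pass
print(conn.rolled_back, conn.closed)  # True True
```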


@@ -0,0 +1,178 @@
import redis
import json
import logging
from typing import Optional, Any, Dict, List
from datetime import timedelta
logger = logging.getLogger(__name__)
class CacheRepository:
"""Redis cache repository"""
def __init__(self, redis_url: str):
self.redis_url = redis_url
self._connect()
def _connect(self):
"""Connect to Redis"""
try:
self.redis = redis.from_url(self.redis_url, decode_responses=True)
self.redis.ping()
logger.info("Connected to Redis")
except Exception as e:
logger.error(f"Failed to connect to Redis: {e}")
self.redis = None
def _make_key(self, prefix: str, *args) -> str:
"""Create cache key"""
parts = [prefix] + [str(arg) for arg in args]
return ":".join(parts)
def get(self, key: str) -> Optional[Any]:
"""Get value from cache"""
if not self.redis:
return None
try:
value = self.redis.get(key)
if value:
return json.loads(value)
return None
except Exception as e:
logger.error(f"Cache get error: {e}")
return None
def set(self, key: str, value: Any, ttl: int = 300) -> bool:
"""Set value in cache with TTL in seconds"""
if not self.redis:
return False
try:
json_value = json.dumps(value)
return self.redis.setex(key, ttl, json_value)
except Exception as e:
logger.error(f"Cache set error: {e}")
return False
def delete(self, key: str) -> bool:
"""Delete key from cache"""
if not self.redis:
return False
try:
return bool(self.redis.delete(key))
except Exception as e:
logger.error(f"Cache delete error: {e}")
return False
def delete_pattern(self, pattern: str) -> int:
"""Delete all keys matching pattern"""
if not self.redis:
return 0
try:
keys = self.redis.keys(pattern)
if keys:
return self.redis.delete(*keys)
return 0
except Exception as e:
logger.error(f"Cache delete pattern error: {e}")
return 0
# License-specific cache methods
def get_license_validation(self, license_key: str, hardware_id: str) -> Optional[Dict[str, Any]]:
"""Get cached license validation result"""
key = self._make_key("license:validation", license_key, hardware_id)
return self.get(key)
def set_license_validation(self, license_key: str, hardware_id: str,
result: Dict[str, Any], ttl: int = 300) -> bool:
"""Cache license validation result"""
key = self._make_key("license:validation", license_key, hardware_id)
return self.set(key, result, ttl)
def get_license_status(self, license_id: str) -> Optional[Dict[str, Any]]:
"""Get cached license status"""
key = self._make_key("license:status", license_id)
return self.get(key)
def set_license_status(self, license_id: str, status: Dict[str, Any],
ttl: int = 60) -> bool:
"""Cache license status"""
key = self._make_key("license:status", license_id)
return self.set(key, status, ttl)
def get_device_list(self, license_id: str) -> Optional[List[Dict[str, Any]]]:
"""Get cached device list"""
key = self._make_key("license:devices", license_id)
return self.get(key)
def set_device_list(self, license_id: str, devices: List[Dict[str, Any]],
ttl: int = 300) -> bool:
"""Cache device list"""
key = self._make_key("license:devices", license_id)
return self.set(key, devices, ttl)
    def invalidate_license_cache(self, license_id: str, license_key: str = None) -> None:
        """Invalidate all cache entries for a license"""
        patterns = [
            f"license:status:{license_id}",
            f"license:devices:{license_id}"
        ]
        if license_key:
            # Validation entries are keyed by license_key, not license_id
            patterns.append(f"license:validation:{license_key}:*")
        for pattern in patterns:
            self.delete_pattern(pattern)
# Rate limiting methods
def check_rate_limit(self, key: str, limit: int, window: int) -> tuple[bool, int]:
"""Check if rate limit is exceeded
Returns: (is_allowed, current_count)
"""
if not self.redis:
return True, 0
try:
pipe = self.redis.pipeline()
now = int(time.time())
window_start = now - window
# Remove old entries
pipe.zremrangebyscore(key, 0, window_start)
# Count requests in current window
pipe.zcard(key)
# Add current request
pipe.zadd(key, {str(now): now})
# Set expiry
pipe.expire(key, window + 1)
results = pipe.execute()
current_count = results[1]
return current_count < limit, current_count + 1
except Exception as e:
logger.error(f"Rate limit check error: {e}")
return True, 0
def increment_counter(self, key: str, window: int = 3600) -> int:
"""Increment counter with expiry"""
if not self.redis:
return 0
try:
pipe = self.redis.pipeline()
pipe.incr(key)
pipe.expire(key, window)
results = pipe.execute()
return results[0]
except Exception as e:
logger.error(f"Counter increment error: {e}")
return 0
import time  # required by check_rate_limit; keep with the other module-level imports
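`_make_key` joins the prefix and its arguments with colons, so a validation entry for a license key and hardware pair lands under a predictable Redis key. A standalone sketch of the same scheme (the function name `make_key` is illustrative):

```python
def make_key(prefix: str, *args) -> str:
    # Mirrors CacheRepository._make_key above
    return ":".join([prefix] + [str(a) for a in args])

print(make_key("license:validation", "LIC-ABC", "hw-01"))
# license:validation:LIC-ABC:hw-01
```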


@@ -0,0 +1,228 @@
from typing import Optional, List, Dict, Any
from datetime import datetime, timedelta
from .base import BaseRepository
from ..models import License, LicenseToken, ActivationEvent, EventType
import logging
logger = logging.getLogger(__name__)
class LicenseRepository(BaseRepository):
"""Repository for license-related database operations"""
def get_license_by_key(self, license_key: str) -> Optional[Dict[str, Any]]:
"""Get license by key"""
query = """
SELECT l.*, c.name as customer_name, c.email as customer_email
FROM licenses l
JOIN customers c ON l.customer_id = c.id
WHERE l.license_key = %s
"""
return self.execute_one(query, (license_key,))
def get_license_by_id(self, license_id: str) -> Optional[Dict[str, Any]]:
"""Get license by ID"""
query = """
SELECT l.*, c.name as customer_name, c.email as customer_email
FROM licenses l
JOIN customers c ON l.customer_id = c.id
WHERE l.id = %s
"""
return self.execute_one(query, (license_id,))
def get_active_devices(self, license_id: str) -> List[Dict[str, Any]]:
"""Get active devices for a license"""
query = """
SELECT DISTINCT ON (hardware_id)
hardware_id,
ip_address,
user_agent,
app_version,
timestamp as last_seen
FROM license_heartbeats
WHERE license_id = %s
AND timestamp > NOW() - INTERVAL '15 minutes'
ORDER BY hardware_id, timestamp DESC
"""
return self.execute_query(query, (license_id,))
def get_device_count(self, license_id: str) -> int:
"""Get count of active devices"""
query = """
SELECT COUNT(DISTINCT hardware_id) as device_count
FROM license_heartbeats
WHERE license_id = %s
AND timestamp > NOW() - INTERVAL '15 minutes'
"""
result = self.execute_one(query, (license_id,))
return result['device_count'] if result else 0
def create_license_token(self, license_id: str, hardware_id: str,
valid_hours: int = 24) -> Optional[str]:
"""Create offline validation token"""
import secrets
token = secrets.token_urlsafe(64)
valid_until = datetime.utcnow() + timedelta(hours=valid_hours)
query = """
INSERT INTO license_tokens (license_id, token, hardware_id, valid_until)
VALUES (%s, %s, %s, %s)
RETURNING id
"""
result = self.execute_insert(query, (license_id, token, hardware_id, valid_until))
return token if result else None
def validate_token(self, token: str) -> Optional[Dict[str, Any]]:
"""Validate offline token"""
query = """
SELECT lt.*, l.license_key, l.is_active, l.expires_at
FROM license_tokens lt
JOIN licenses l ON lt.license_id = l.id
WHERE lt.token = %s
AND lt.valid_until > NOW()
AND l.is_active = true
"""
result = self.execute_one(query, (token,))
if result:
# Update validation count and timestamp
update_query = """
UPDATE license_tokens
SET validation_count = validation_count + 1,
last_validated = NOW()
WHERE token = %s
"""
self.execute_update(update_query, (token,))
return result
def record_heartbeat(self, license_id: str, hardware_id: str,
ip_address: str = None, user_agent: str = None,
app_version: str = None, session_data: Dict = None) -> None:
"""Record license heartbeat"""
query = """
INSERT INTO license_heartbeats
(license_id, hardware_id, ip_address, user_agent, app_version, session_data)
VALUES (%s, %s, %s, %s, %s, %s)
"""
import json
session_json = json.dumps(session_data) if session_data else None
self.execute_insert(query, (
license_id, hardware_id, ip_address,
user_agent, app_version, session_json
))
def record_activation_event(self, license_id: str, event_type: EventType,
hardware_id: str = None, previous_hardware_id: str = None,
ip_address: str = None, user_agent: str = None,
success: bool = True, error_message: str = None,
metadata: Dict = None) -> str:
"""Record activation event"""
query = """
INSERT INTO activation_events
(license_id, event_type, hardware_id, previous_hardware_id,
ip_address, user_agent, success, error_message, metadata)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)
RETURNING id
"""
import json
metadata_json = json.dumps(metadata) if metadata else None
return self.execute_insert(query, (
license_id, event_type.value, hardware_id, previous_hardware_id,
ip_address, user_agent, success, error_message, metadata_json
))
    def get_recent_activations(self, license_id: str, hours: int = 24) -> List[Dict[str, Any]]:
        """Get recent activation events"""
        # A placeholder cannot be interpolated inside an INTERVAL literal; use make_interval
        query = """
            SELECT * FROM activation_events
            WHERE license_id = %s
            AND created_at > NOW() - make_interval(hours => %s)
            ORDER BY created_at DESC
        """
        return self.execute_query(query, (license_id, hours))
def check_hardware_id_exists(self, license_id: str, hardware_id: str) -> bool:
"""Check if hardware ID is already registered"""
query = """
SELECT 1 FROM activation_events
WHERE license_id = %s
AND hardware_id = %s
AND event_type IN ('activation', 'reactivation')
AND success = true
LIMIT 1
"""
result = self.execute_one(query, (license_id, hardware_id))
return result is not None
def deactivate_device(self, license_id: str, hardware_id: str) -> bool:
"""Deactivate a device"""
# Record deactivation event
self.record_activation_event(
license_id=license_id,
event_type=EventType.DEACTIVATION,
hardware_id=hardware_id,
success=True
)
# Remove any active tokens for this device
query = """
DELETE FROM license_tokens
WHERE license_id = %s AND hardware_id = %s
"""
affected = self.execute_delete(query, (license_id, hardware_id))
return affected > 0
def transfer_license(self, license_id: str, from_hardware_id: str,
to_hardware_id: str, ip_address: str = None) -> bool:
"""Transfer license from one device to another"""
try:
# Deactivate old device
self.deactivate_device(license_id, from_hardware_id)
# Record transfer event
self.record_activation_event(
license_id=license_id,
event_type=EventType.TRANSFER,
hardware_id=to_hardware_id,
previous_hardware_id=from_hardware_id,
ip_address=ip_address,
success=True
)
return True
except Exception as e:
logger.error(f"License transfer failed: {e}")
return False
def get_license_usage_stats(self, license_id: str, days: int = 30) -> Dict[str, Any]:
"""Get usage statistics for a license"""
query = """
WITH daily_stats AS (
SELECT
DATE(timestamp) as date,
COUNT(*) as validations,
COUNT(DISTINCT hardware_id) as unique_devices,
COUNT(DISTINCT ip_address) as unique_ips
FROM license_heartbeats
WHERE license_id = %s
                AND timestamp > NOW() - make_interval(days => %s)
GROUP BY DATE(timestamp)
)
SELECT
COUNT(*) as total_days,
SUM(validations) as total_validations,
AVG(validations) as avg_daily_validations,
MAX(unique_devices) as max_devices,
MAX(unique_ips) as max_ips
FROM daily_stats
"""
return self.execute_one(query, (license_id, days)) or {}
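`create_license_token` issues a URL-safe random token via `secrets.token_urlsafe(64)`. Its shape can be checked standalone: 64 random bytes base64url-encode to 86 characters once padding is stripped.

```python
import secrets

# Same call as in create_license_token above
token = secrets.token_urlsafe(64)
print(len(token))  # 86
```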


@@ -0,0 +1,31 @@
# Flask and extensions
Flask==3.0.0
Flask-CORS==4.0.0
flask-limiter==3.5.0
# Database
psycopg2-binary==2.9.9
SQLAlchemy==2.0.23
# Redis
redis==5.0.1
# RabbitMQ
pika==1.3.2
# JWT
PyJWT==2.8.0
# Validation
marshmallow==3.20.1
# Monitoring
prometheus-flask-exporter==0.23.0
# Utilities
python-dateutil==2.8.2
pytz==2023.3
requests==2.31.0
# Development
python-dotenv==1.0.0


@@ -0,0 +1 @@
# Admin API Service


@@ -0,0 +1,666 @@
import os
import sys
from flask import Flask, request, jsonify
from flask_cors import CORS
import logging
from functools import wraps
from marshmallow import Schema, fields, ValidationError
from datetime import datetime, timedelta
import secrets
import json
# Add parent directory to path for imports
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from config import get_config
from repositories.license_repo import LicenseRepository
from repositories.cache_repo import CacheRepository
from events.event_bus import EventBus, Event, EventTypes
from models import EventType, AnomalyType, Severity
# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Initialize Flask app
app = Flask(__name__)
config = get_config()
app.config.from_object(config)
CORS(app)
# Initialize dependencies
license_repo = LicenseRepository(config.DATABASE_URL)
cache_repo = CacheRepository(config.REDIS_URL)
event_bus = EventBus(config.RABBITMQ_URL)
# Validation schemas
class CreateLicenseSchema(Schema):
customer_id = fields.Str(required=True)
max_devices = fields.Int(missing=1, validate=lambda x: x > 0)
expires_in_days = fields.Int(allow_none=True)
features = fields.List(fields.Str(), missing=[])
is_test = fields.Bool(missing=False)
metadata = fields.Dict(missing={})
class UpdateLicenseSchema(Schema):
max_devices = fields.Int(validate=lambda x: x > 0)
is_active = fields.Bool()
expires_at = fields.DateTime()
features = fields.List(fields.Str())
metadata = fields.Dict()
class DeactivateDeviceSchema(Schema):
hardware_id = fields.Str(required=True)
reason = fields.Str()
class TransferLicenseSchema(Schema):
from_hardware_id = fields.Str(required=True)
to_hardware_id = fields.Str(required=True)
class SearchLicensesSchema(Schema):
customer_id = fields.Str()
is_active = fields.Bool()
is_test = fields.Bool()
created_after = fields.DateTime()
created_before = fields.DateTime()
expires_after = fields.DateTime()
expires_before = fields.DateTime()
page = fields.Int(missing=1, validate=lambda x: x > 0)
per_page = fields.Int(missing=50, validate=lambda x: 0 < x <= 100)
def require_admin_auth(f):
"""Decorator to require admin authentication"""
@wraps(f)
def decorated_function(*args, **kwargs):
# Check for admin API key
api_key = request.headers.get('X-Admin-API-Key')
if not api_key:
return jsonify({"error": "Missing admin API key"}), 401
# In production, validate against database
# For now, check environment variable
if api_key != os.getenv('ADMIN_API_KEY', 'admin-key-change-in-production'):
return jsonify({"error": "Invalid admin API key"}), 401
return f(*args, **kwargs)
return decorated_function
@app.route('/health', methods=['GET'])
def health_check():
"""Health check endpoint"""
return jsonify({
"status": "healthy",
"service": "admin-api",
"timestamp": datetime.utcnow().isoformat()
})
@app.route('/api/v1/admin/licenses', methods=['POST'])
@require_admin_auth
def create_license():
"""Create new license"""
schema = CreateLicenseSchema()
try:
data = schema.load(request.get_json())
except ValidationError as e:
return jsonify({"error": "Invalid request", "details": e.messages}), 400
# Generate license key
license_key = f"LIC-{secrets.token_urlsafe(16).upper()}"
# Calculate expiration
expires_at = None
if data.get('expires_in_days'):
expires_at = datetime.utcnow() + timedelta(days=data['expires_in_days'])
# Create license in database
query = """
INSERT INTO licenses
(license_key, customer_id, max_devices, is_active, is_test, expires_at, features, metadata)
VALUES (%s, %s, %s, true, %s, %s, %s, %s)
RETURNING id
"""
import json
license_id = license_repo.execute_insert(query, (
license_key,
data['customer_id'],
data['max_devices'],
data['is_test'],
expires_at,
json.dumps(data['features']),
json.dumps(data['metadata'])
))
if not license_id:
return jsonify({"error": "Failed to create license"}), 500
# Publish event
event_bus.publish(Event(
EventTypes.LICENSE_CREATED,
{
"license_id": license_id,
"customer_id": data['customer_id'],
"license_key": license_key
},
"admin-api"
))
return jsonify({
"id": license_id,
"license_key": license_key,
"customer_id": data['customer_id'],
"max_devices": data['max_devices'],
"is_test": data['is_test'],
"expires_at": expires_at.isoformat() if expires_at else None,
"features": data['features']
}), 201
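License keys generated by this route are `LIC-` plus an upper-cased URL-safe token: 16 random bytes encode to 22 characters, giving a 26-character key. A standalone check of that format:

```python
import secrets

# Same format as create_license above
license_key = f"LIC-{secrets.token_urlsafe(16).upper()}"
print(license_key[:4], len(license_key))  # LIC- 26
```

Note that `token_urlsafe` may emit `-` and `_`, which survive `.upper()`, so keys are not purely alphanumeric.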
@app.route('/api/v1/admin/licenses/<license_id>', methods=['GET'])
@require_admin_auth
def get_license(license_id):
"""Get license details"""
license = license_repo.get_license_by_id(license_id)
if not license:
return jsonify({"error": "License not found"}), 404
# Get additional statistics
active_devices = license_repo.get_active_devices(license_id)
usage_stats = license_repo.get_license_usage_stats(license_id)
recent_events = license_repo.get_recent_activations(license_id)
# Format response
license['active_devices'] = active_devices
license['usage_stats'] = usage_stats
license['recent_events'] = recent_events
return jsonify(license)
@app.route('/api/v1/admin/licenses/<license_id>', methods=['PATCH'])
@require_admin_auth
def update_license(license_id):
"""Update license"""
schema = UpdateLicenseSchema()
try:
data = schema.load(request.get_json())
except ValidationError as e:
return jsonify({"error": "Invalid request", "details": e.messages}), 400
# Build update query dynamically
updates = []
params = []
if 'max_devices' in data:
updates.append("max_devices = %s")
params.append(data['max_devices'])
if 'is_active' in data:
updates.append("is_active = %s")
params.append(data['is_active'])
if 'expires_at' in data:
updates.append("expires_at = %s")
params.append(data['expires_at'])
if 'features' in data:
updates.append("features = %s")
params.append(json.dumps(data['features']))
if 'metadata' in data:
updates.append("metadata = %s")
params.append(json.dumps(data['metadata']))
if not updates:
return jsonify({"error": "No fields to update"}), 400
# Add updated_at
updates.append("updated_at = NOW()")
# Add license_id to params
params.append(license_id)
query = f"""
UPDATE licenses
SET {', '.join(updates)}
WHERE id = %s
RETURNING *
"""
result = license_repo.execute_one(query, params)
if not result:
return jsonify({"error": "License not found"}), 404
# Invalidate cache
cache_repo.invalidate_license_cache(license_id)
# Publish event
event_bus.publish(Event(
EventTypes.LICENSE_UPDATED,
{
"license_id": license_id,
"changes": list(data.keys())
},
"admin-api"
))
return jsonify(result)
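The PATCH handler above assembles its UPDATE statement from whichever fields were supplied. The clause-building step can be sketched in isolation (`build_update_clause` is a hypothetical helper; field names mirror the handler):

```python
import json

# Fields the handler accepts, in the order it checks them.
UPDATABLE_FIELDS = ['max_devices', 'is_active', 'expires_at', 'features', 'metadata']
JSON_FIELDS = {'features', 'metadata'}  # serialized before binding

def build_update_clause(data):
    """Build the SET fragment and parameter list for a dynamic UPDATE."""
    updates, params = [], []
    for field in UPDATABLE_FIELDS:
        if field in data:
            updates.append(f"{field} = %s")
            value = data[field]
            params.append(json.dumps(value) if field in JSON_FIELDS else value)
    if not updates:
        return None, []  # caller responds 400: nothing to update
    updates.append("updated_at = NOW()")
    return ", ".join(updates), params

clause, params = build_update_clause({'max_devices': 5, 'is_active': False})
print(clause)  # max_devices = %s, is_active = %s, updated_at = NOW()
print(params)  # [5, False]
```

Only column names from a fixed whitelist are interpolated into the SQL text; all values travel as `%s` parameters, which is what keeps the dynamic query safe.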
@app.route('/api/v1/admin/licenses/<license_id>', methods=['DELETE'])
@require_admin_auth
def delete_license(license_id):
"""Soft delete license (deactivate)"""
query = """
UPDATE licenses
SET is_active = false, updated_at = NOW()
WHERE id = %s
RETURNING id
"""
result = license_repo.execute_one(query, (license_id,))
if not result:
return jsonify({"error": "License not found"}), 404
# Invalidate cache
cache_repo.invalidate_license_cache(license_id)
# Publish event
event_bus.publish(Event(
EventTypes.LICENSE_DEACTIVATED,
{"license_id": license_id},
"admin-api"
))
return jsonify({"success": True, "message": "License deactivated"})
@app.route('/api/v1/admin/licenses/<license_id>/devices', methods=['GET'])
@require_admin_auth
def get_license_devices(license_id):
"""Get all devices for a license"""
# Get active devices
active_devices = license_repo.get_active_devices(license_id)
# Get all registered devices from activation events
query = """
SELECT DISTINCT ON (hardware_id)
hardware_id,
event_type,
ip_address,
user_agent,
created_at as registered_at,
metadata
FROM activation_events
WHERE license_id = %s
AND event_type IN ('activation', 'reactivation', 'transfer')
AND success = true
ORDER BY hardware_id, created_at DESC
"""
all_devices = license_repo.execute_query(query, (license_id,))
# Mark active devices
active_hw_ids = {d['hardware_id'] for d in active_devices}
for device in all_devices:
device['is_active'] = device['hardware_id'] in active_hw_ids
if device['is_active']:
# Add last_seen from active_devices
active_device = next((d for d in active_devices if d['hardware_id'] == device['hardware_id']), None)
if active_device:
device['last_seen'] = active_device['last_seen']
return jsonify({
"license_id": license_id,
"total_devices": len(all_devices),
"active_devices": len(active_devices),
"devices": all_devices
})
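The device listing merges two sources: every device ever activated, and the currently active set. The merge step can be sketched as follows (record shapes are assumptions based on the queries above); a dict lookup replaces the linear `next(...)` scan, which matters once a license has many devices:

```python
def mark_active_devices(all_devices, active_devices):
    """Flag each known device as active and copy last_seen from the active set."""
    last_seen_by_hw = {d['hardware_id']: d.get('last_seen') for d in active_devices}
    for device in all_devices:
        hw = device['hardware_id']
        device['is_active'] = hw in last_seen_by_hw
        if device['is_active']:
            device['last_seen'] = last_seen_by_hw[hw]
    return all_devices

devices = mark_active_devices(
    [{'hardware_id': 'hw-1'}, {'hardware_id': 'hw-2'}],
    [{'hardware_id': 'hw-2', 'last_seen': '2025-06-19T10:00:00Z'}],
)
```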
@app.route('/api/v1/admin/licenses/<license_id>/devices/deactivate', methods=['POST'])
@require_admin_auth
def deactivate_device(license_id):
"""Deactivate a device"""
schema = DeactivateDeviceSchema()
try:
data = schema.load(request.get_json())
except ValidationError as e:
return jsonify({"error": "Invalid request", "details": e.messages}), 400
success = license_repo.deactivate_device(license_id, data['hardware_id'])
if not success:
return jsonify({"error": "Failed to deactivate device"}), 500
# Invalidate cache
cache_repo.invalidate_license_cache(license_id)
# Publish event
event_bus.publish(Event(
EventTypes.DEVICE_DEACTIVATED,
{
"license_id": license_id,
"hardware_id": data['hardware_id'],
"reason": data.get('reason', 'Admin action')
},
"admin-api"
))
return jsonify({"success": True, "message": "Device deactivated"})
@app.route('/api/v1/admin/licenses/<license_id>/transfer', methods=['POST'])
@require_admin_auth
def transfer_license(license_id):
"""Transfer license between devices"""
schema = TransferLicenseSchema()
try:
data = schema.load(request.get_json())
except ValidationError as e:
return jsonify({"error": "Invalid request", "details": e.messages}), 400
# Get client IP
# X-Forwarded-For may contain a comma-separated chain; take the first (client) hop
ip_address = request.headers.get('X-Forwarded-For', request.remote_addr).split(',')[0].strip()
success = license_repo.transfer_license(
license_id,
data['from_hardware_id'],
data['to_hardware_id'],
ip_address
)
if not success:
return jsonify({"error": "Failed to transfer license"}), 500
# Invalidate cache
cache_repo.invalidate_license_cache(license_id)
# Publish event
event_bus.publish(Event(
EventTypes.LICENSE_TRANSFERRED,
{
"license_id": license_id,
"from_hardware_id": data['from_hardware_id'],
"to_hardware_id": data['to_hardware_id']
},
"admin-api"
))
return jsonify({"success": True, "message": "License transferred successfully"})
@app.route('/api/v1/admin/licenses', methods=['GET'])
@require_admin_auth
def search_licenses():
"""Search and list licenses"""
schema = SearchLicensesSchema()
try:
filters = schema.load(request.args)
except ValidationError as e:
return jsonify({"error": "Invalid request", "details": e.messages}), 400
# Build query
where_clauses = []
params = []
if filters.get('customer_id'):
where_clauses.append("customer_id = %s")
params.append(filters['customer_id'])
if 'is_active' in filters:
where_clauses.append("is_active = %s")
params.append(filters['is_active'])
if 'is_test' in filters:
where_clauses.append("is_test = %s")
params.append(filters['is_test'])
if filters.get('created_after'):
where_clauses.append("created_at >= %s")
params.append(filters['created_after'])
if filters.get('created_before'):
where_clauses.append("created_at <= %s")
params.append(filters['created_before'])
if filters.get('expires_after'):
where_clauses.append("expires_at >= %s")
params.append(filters['expires_after'])
if filters.get('expires_before'):
where_clauses.append("expires_at <= %s")
params.append(filters['expires_before'])
where_sql = " AND ".join(where_clauses) if where_clauses else "1=1"
# Count total
count_query = f"SELECT COUNT(*) as total FROM licenses WHERE {where_sql}"
total_result = license_repo.execute_one(count_query, params)
total = total_result['total'] if total_result else 0
# Get paginated results
page = filters['page']
per_page = filters['per_page']
offset = (page - 1) * per_page
query = f"""
SELECT l.*, c.name as customer_name, c.email as customer_email
FROM licenses l
JOIN customers c ON l.customer_id = c.id
WHERE {where_sql}
ORDER BY l.created_at DESC
LIMIT %s OFFSET %s
"""
params.extend([per_page, offset])
licenses = license_repo.execute_query(query, params)
return jsonify({
"licenses": licenses,
"pagination": {
"total": total,
"page": page,
"per_page": per_page,
"pages": (total + per_page - 1) // per_page
}
})
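The search endpoint pages results with an OFFSET/LIMIT pair and reports the page count via ceiling division. A minimal sketch of that arithmetic (`paginate` is a hypothetical helper):

```python
def paginate(total, page, per_page):
    """Compute the SQL offset and total page count for 1-indexed pages."""
    offset = (page - 1) * per_page
    pages = (total + per_page - 1) // per_page  # ceiling division
    return offset, pages

offset, pages = paginate(total=95, page=3, per_page=20)
print(offset, pages)  # 40 5
```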
@app.route('/api/v1/admin/licenses/<license_id>/events', methods=['GET'])
@require_admin_auth
def get_license_events(license_id):
"""Get all events for a license"""
hours = request.args.get('hours', 24, type=int)
events = license_repo.get_recent_activations(license_id, hours)
return jsonify({
"license_id": license_id,
"hours": hours,
"total_events": len(events),
"events": events
})
@app.route('/api/v1/admin/licenses/<license_id>/usage', methods=['GET'])
@require_admin_auth
def get_license_usage(license_id):
"""Get usage statistics for a license"""
days = request.args.get('days', 30, type=int)
stats = license_repo.get_license_usage_stats(license_id, days)
# Get daily breakdown
query = """
SELECT
DATE(timestamp) as date,
COUNT(*) as validations,
COUNT(DISTINCT hardware_id) as unique_devices,
COUNT(DISTINCT ip_address) as unique_ips
FROM license_heartbeats
WHERE license_id = %s
AND timestamp > NOW() - INTERVAL '%s days'
GROUP BY DATE(timestamp)
ORDER BY date DESC
"""
daily_stats = license_repo.execute_query(query, (license_id, days))
return jsonify({
"license_id": license_id,
"days": days,
"summary": stats,
"daily": daily_stats
})
@app.route('/api/v1/admin/licenses/<license_id>/anomalies', methods=['GET'])
@require_admin_auth
def get_license_anomalies(license_id):
"""Get detected anomalies for a license"""
query = """
SELECT * FROM anomaly_detections
WHERE license_id = %s
ORDER BY detected_at DESC
LIMIT 100
"""
anomalies = license_repo.execute_query(query, (license_id,))
return jsonify({
"license_id": license_id,
"total_anomalies": len(anomalies),
"anomalies": anomalies
})
@app.route('/api/v1/admin/licenses/<license_id>/anomalies/<anomaly_id>/resolve', methods=['POST'])
@require_admin_auth
def resolve_anomaly(license_id, anomaly_id):
"""Mark anomaly as resolved"""
data = request.get_json() or {}
action_taken = data.get('action_taken', 'Resolved by admin')
query = """
UPDATE anomaly_detections
SET resolved = true,
resolved_at = NOW(),
resolved_by = 'admin',
action_taken = %s
WHERE id = %s AND license_id = %s
RETURNING id
"""
result = license_repo.execute_one(query, (action_taken, anomaly_id, license_id))
if not result:
return jsonify({"error": "Anomaly not found"}), 404
return jsonify({"success": True, "message": "Anomaly resolved"})
@app.route('/api/v1/admin/licenses/bulk-create', methods=['POST'])
@require_admin_auth
def bulk_create_licenses():
"""Create multiple licenses at once"""
data = request.get_json()
if not data or 'licenses' not in data:
return jsonify({"error": "Missing licenses array"}), 400
schema = CreateLicenseSchema()
created_licenses = []
errors = []
for idx, license_data in enumerate(data['licenses']):
try:
validated_data = schema.load(license_data)
# Generate license key
license_key = f"LIC-{secrets.token_urlsafe(16).upper()}"
# Calculate expiration
expires_at = None
if validated_data.get('expires_in_days'):
expires_at = datetime.utcnow() + timedelta(days=validated_data['expires_in_days'])
# Create license
query = """
INSERT INTO licenses
(license_key, customer_id, max_devices, is_active, is_test, expires_at, features, metadata)
VALUES (%s, %s, %s, true, %s, %s, %s, %s)
RETURNING id
"""
import json
license_id = license_repo.execute_insert(query, (
license_key,
validated_data['customer_id'],
validated_data['max_devices'],
validated_data['is_test'],
expires_at,
json.dumps(validated_data['features']),
json.dumps(validated_data['metadata'])
))
if license_id:
created_licenses.append({
"id": license_id,
"license_key": license_key,
"customer_id": validated_data['customer_id']
})
except Exception as e:
errors.append({
"index": idx,
"error": str(e)
})
return jsonify({
"created": len(created_licenses),
"failed": len(errors),
"licenses": created_licenses,
"errors": errors
}), 201 if created_licenses else 400
@app.route('/api/v1/admin/statistics', methods=['GET'])
@require_admin_auth
def get_statistics():
"""Get overall license statistics"""
query = """
WITH stats AS (
SELECT
COUNT(*) as total_licenses,
COUNT(*) FILTER (WHERE is_active = true) as active_licenses,
COUNT(*) FILTER (WHERE is_test = true) as test_licenses,
COUNT(*) FILTER (WHERE expires_at < NOW()) as expired_licenses,
COUNT(DISTINCT customer_id) as total_customers
FROM licenses
),
device_stats AS (
SELECT COUNT(DISTINCT hardware_id) as total_devices
FROM license_heartbeats
WHERE timestamp > NOW() - INTERVAL '15 minutes'
),
validation_stats AS (
SELECT
COUNT(*) as validations_today,
COUNT(DISTINCT license_id) as licenses_used_today
FROM license_heartbeats
WHERE timestamp > CURRENT_DATE
)
SELECT * FROM stats, device_stats, validation_stats
"""
stats = license_repo.execute_one(query)
return jsonify(stats or {})
@app.errorhandler(404)
def not_found(error):
return jsonify({"error": "Not found"}), 404
@app.errorhandler(500)
def internal_error(error):
logger.error(f"Internal error: {error}")
return jsonify({"error": "Internal server error"}), 500
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5004, debug=True)


@@ -0,0 +1 @@
# Analytics Service


@@ -0,0 +1,478 @@
import os
import sys
from flask import Flask, request, jsonify
from flask_cors import CORS
import logging
from functools import wraps
from datetime import datetime, timedelta
import asyncio
from concurrent.futures import ThreadPoolExecutor
# Add parent directory to path for imports
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from config import get_config
from repositories.license_repo import LicenseRepository
from repositories.cache_repo import CacheRepository
from events.event_bus import EventBus, Event, EventTypes
from models import AnomalyType, Severity
# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Initialize Flask app
app = Flask(__name__)
config = get_config()
app.config.from_object(config)
CORS(app)
# Initialize dependencies
license_repo = LicenseRepository(config.DATABASE_URL)
cache_repo = CacheRepository(config.REDIS_URL)
event_bus = EventBus(config.RABBITMQ_URL)
# Thread pool for async operations
executor = ThreadPoolExecutor(max_workers=10)
def require_auth(f):
"""Decorator to require authentication"""
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({"error": "Missing API key"}), 401
# Simple validation for now
if not api_key.startswith('sk_'):
return jsonify({"error": "Invalid API key"}), 401
return f(*args, **kwargs)
return decorated_function
@app.route('/health', methods=['GET'])
def health_check():
"""Health check endpoint"""
return jsonify({
"status": "healthy",
"service": "analytics",
"timestamp": datetime.utcnow().isoformat()
})
@app.route('/api/v1/analytics/licenses/<license_id>/patterns', methods=['GET'])
@require_auth
def analyze_license_patterns(license_id):
"""Analyze usage patterns for a license"""
days = request.args.get('days', 30, type=int)
# Get usage data
query = """
WITH hourly_usage AS (
SELECT
DATE_TRUNC('hour', timestamp) as hour,
COUNT(*) as validations,
COUNT(DISTINCT hardware_id) as devices,
COUNT(DISTINCT ip_address) as ips
FROM license_heartbeats
WHERE license_id = %s
AND timestamp > NOW() - INTERVAL '%s days'
GROUP BY DATE_TRUNC('hour', timestamp)
),
daily_patterns AS (
SELECT
EXTRACT(DOW FROM hour) as day_of_week,
EXTRACT(HOUR FROM hour) as hour_of_day,
AVG(validations) as avg_validations,
MAX(devices) as max_devices
FROM hourly_usage
GROUP BY day_of_week, hour_of_day
)
SELECT * FROM daily_patterns
ORDER BY day_of_week, hour_of_day
"""
patterns = license_repo.execute_query(query, (license_id, days))
# Detect anomalies
anomalies = detect_usage_anomalies(license_id, patterns)
return jsonify({
"license_id": license_id,
"days_analyzed": days,
"patterns": patterns,
"anomalies": anomalies
})
@app.route('/api/v1/analytics/licenses/<license_id>/anomalies/detect', methods=['POST'])
@require_auth
def detect_anomalies(license_id):
"""Manually trigger anomaly detection for a license"""
# Run multiple anomaly detection checks
anomalies = []
# Check for multiple IPs
ip_anomalies = check_multiple_ips(license_id)
anomalies.extend(ip_anomalies)
# Check for rapid hardware changes
hw_anomalies = check_rapid_hardware_changes(license_id)
anomalies.extend(hw_anomalies)
# Check for concurrent usage
concurrent_anomalies = check_concurrent_usage(license_id)
anomalies.extend(concurrent_anomalies)
# Check for geographic anomalies
geo_anomalies = check_geographic_anomalies(license_id)
anomalies.extend(geo_anomalies)
# Store detected anomalies
for anomaly in anomalies:
store_anomaly(license_id, anomaly)
return jsonify({
"license_id": license_id,
"anomalies_detected": len(anomalies),
"anomalies": anomalies
})
@app.route('/api/v1/analytics/licenses/<license_id>/risk-score', methods=['GET'])
@require_auth
def get_risk_score(license_id):
"""Calculate risk score for a license"""
# Get recent anomalies
query = """
SELECT anomaly_type, severity, detected_at
FROM anomaly_detections
WHERE license_id = %s
AND detected_at > NOW() - INTERVAL '30 days'
AND resolved = false
"""
anomalies = license_repo.execute_query(query, (license_id,))
# Calculate risk score
risk_score = 0
severity_weights = {
'low': 10,
'medium': 25,
'high': 50,
'critical': 100
}
for anomaly in anomalies:
weight = severity_weights.get(anomaly['severity'], 0)
# Recent anomalies have higher weight
days_old = (datetime.utcnow() - anomaly['detected_at']).days
recency_factor = max(0.5, 1 - (days_old / 30))
risk_score += weight * recency_factor
# Normalize to 0-100
risk_score = min(100, risk_score)
# Determine risk level
if risk_score < 20:
risk_level = "low"
elif risk_score < 50:
risk_level = "medium"
elif risk_score < 80:
risk_level = "high"
else:
risk_level = "critical"
return jsonify({
"license_id": license_id,
"risk_score": round(risk_score, 2),
"risk_level": risk_level,
"active_anomalies": len(anomalies),
"factors": anomalies
})
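The scoring above weights each unresolved anomaly by severity and discounts it linearly with age over the 30-day window (never below half weight), then caps the sum at 100. Extracted as a pure function for clarity:

```python
from datetime import datetime, timedelta

SEVERITY_WEIGHTS = {'low': 10, 'medium': 25, 'high': 50, 'critical': 100}

def risk_score(anomalies, now=None):
    """Sum severity weights, discounted by recency, capped at 100."""
    now = now or datetime.utcnow()
    score = 0.0
    for a in anomalies:
        weight = SEVERITY_WEIGHTS.get(a['severity'], 0)
        days_old = (now - a['detected_at']).days
        recency_factor = max(0.5, 1 - days_old / 30)
        score += weight * recency_factor
    return min(100.0, score)

now = datetime(2025, 6, 19)
score = risk_score(
    [{'severity': 'high', 'detected_at': now - timedelta(days=15)},   # 50 * 0.5
     {'severity': 'low', 'detected_at': now}],                        # 10 * 1.0
    now=now,
)
print(score)  # 35.0
```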
@app.route('/api/v1/analytics/reports/usage', methods=['GET'])
@require_auth
def generate_usage_report():
"""Generate usage report for all licenses"""
days = request.args.get('days', 30, type=int)
query = """
WITH license_stats AS (
SELECT
l.id,
l.license_key,
l.customer_id,
c.name as customer_name,
l.max_devices,
l.is_test,
l.expires_at,
COUNT(DISTINCT lh.hardware_id) as active_devices,
COUNT(lh.*) as total_validations,
MAX(lh.timestamp) as last_validation
FROM licenses l
LEFT JOIN customers c ON l.customer_id = c.id
LEFT JOIN license_heartbeats lh ON l.id = lh.license_id
AND lh.timestamp > NOW() - INTERVAL '%s days'
WHERE l.is_active = true
GROUP BY l.id, l.license_key, l.customer_id, c.name, l.max_devices, l.is_test, l.expires_at
)
SELECT
*,
CASE
WHEN total_validations = 0 THEN 'inactive'
WHEN active_devices > max_devices THEN 'over_limit'
WHEN expires_at < NOW() THEN 'expired'
ELSE 'active'
END as status,
ROUND((active_devices::numeric / NULLIF(max_devices, 0)) * 100, 2) as device_utilization
FROM license_stats
ORDER BY total_validations DESC
"""
report = license_repo.execute_query(query, (days,))
# Summary statistics
summary = {
"total_licenses": len(report),
"active_licenses": len([r for r in report if r['status'] == 'active']),
"inactive_licenses": len([r for r in report if r['status'] == 'inactive']),
"over_limit_licenses": len([r for r in report if r['status'] == 'over_limit']),
"expired_licenses": len([r for r in report if r['status'] == 'expired']),
"total_validations": sum(r['total_validations'] for r in report),
"average_device_utilization": sum(r['device_utilization'] or 0 for r in report) / len(report) if report else 0
}
return jsonify({
"period_days": days,
"generated_at": datetime.utcnow().isoformat(),
"summary": summary,
"licenses": report
})
@app.route('/api/v1/analytics/reports/revenue', methods=['GET'])
@require_auth
def generate_revenue_report():
"""Generate revenue analytics report"""
# This would need pricing information in the database
# For now, return a placeholder
return jsonify({
"message": "Revenue reporting requires pricing data integration",
"placeholder": True
})
def detect_usage_anomalies(license_id, patterns):
"""Detect anomalies in usage patterns"""
anomalies = []
if not patterns:
return anomalies
# Calculate statistics
validations = [p['avg_validations'] for p in patterns]
if validations:
avg_validations = sum(validations) / len(validations)
max_validations = max(validations)
# Detect spikes
for pattern in patterns:
if pattern['avg_validations'] > avg_validations * 3:
anomalies.append({
"type": AnomalyType.SUSPICIOUS_PATTERN.value,
"severity": Severity.MEDIUM.value,
"details": {
"day": pattern['day_of_week'],
"hour": pattern['hour_of_day'],
"validations": pattern['avg_validations'],
"average": avg_validations
}
})
return anomalies
def check_multiple_ips(license_id):
"""Check for multiple IP addresses"""
query = """
SELECT
COUNT(DISTINCT ip_address) as ip_count,
array_agg(DISTINCT ip_address) as ips
FROM license_heartbeats
WHERE license_id = %s
AND timestamp > NOW() - INTERVAL '1 hour'
"""
result = license_repo.execute_one(query, (license_id,))
anomalies = []
if result and result['ip_count'] > config.ANOMALY_MULTIPLE_IPS_THRESHOLD:
anomalies.append({
"type": AnomalyType.MULTIPLE_IPS.value,
"severity": Severity.HIGH.value,
"details": {
"ip_count": result['ip_count'],
"ips": result['ips'][:10], # Limit to 10 IPs
"threshold": config.ANOMALY_MULTIPLE_IPS_THRESHOLD
}
})
return anomalies
def check_rapid_hardware_changes(license_id):
"""Check for rapid hardware ID changes"""
query = """
SELECT
hardware_id,
created_at
FROM activation_events
WHERE license_id = %s
AND event_type IN ('activation', 'transfer')
AND created_at > NOW() - INTERVAL '1 hour'
AND success = true
ORDER BY created_at DESC
"""
events = license_repo.execute_query(query, (license_id,))
anomalies = []
if len(events) > 1:
# Check time between changes
for i in range(len(events) - 1):
time_diff = (events[i]['created_at'] - events[i+1]['created_at']).total_seconds() / 60
if time_diff < config.ANOMALY_RAPID_HARDWARE_CHANGE_MINUTES:
anomalies.append({
"type": AnomalyType.RAPID_HARDWARE_CHANGE.value,
"severity": Severity.HIGH.value,
"details": {
"hardware_ids": [events[i]['hardware_id'], events[i+1]['hardware_id']],
"time_difference_minutes": round(time_diff, 2),
"threshold_minutes": config.ANOMALY_RAPID_HARDWARE_CHANGE_MINUTES
}
})
return anomalies
def check_concurrent_usage(license_id):
"""Check for concurrent usage from different devices"""
query = """
WITH concurrent_sessions AS (
SELECT
h1.hardware_id as hw1,
h2.hardware_id as hw2,
h1.timestamp as time1,
h2.timestamp as time2
FROM license_heartbeats h1
JOIN license_heartbeats h2 ON h1.license_id = h2.license_id
WHERE h1.license_id = %s
AND h2.license_id = %s
AND h1.hardware_id != h2.hardware_id
AND h1.timestamp > NOW() - INTERVAL '15 minutes'
AND h2.timestamp > NOW() - INTERVAL '15 minutes'
AND ABS(EXTRACT(EPOCH FROM h1.timestamp - h2.timestamp)) < 300
)
SELECT COUNT(*) as concurrent_count
FROM concurrent_sessions
"""
result = license_repo.execute_one(query, (license_id, license_id))
anomalies = []
if result and result['concurrent_count'] > 0:
anomalies.append({
"type": AnomalyType.CONCURRENT_USE.value,
"severity": Severity.CRITICAL.value,
"details": {
"concurrent_sessions": result['concurrent_count'],
"timeframe_minutes": 5
}
})
return anomalies
def check_geographic_anomalies(license_id):
"""Check for geographic anomalies (requires IP geolocation)"""
# This would require IP geolocation service integration
# For now, return empty list
return []
def store_anomaly(license_id, anomaly):
"""Store detected anomaly in database"""
query = """
INSERT INTO anomaly_detections
(license_id, anomaly_type, severity, details)
VALUES (%s, %s, %s, %s)
ON CONFLICT (license_id, anomaly_type, details) DO NOTHING
"""
import json
license_repo.execute_insert(query, (
license_id,
anomaly['type'],
anomaly['severity'],
json.dumps(anomaly['details'])
))
# Publish event
event_bus.publish(Event(
EventTypes.ANOMALY_DETECTED,
{
"license_id": license_id,
"anomaly": anomaly
},
"analytics"
))
@app.route('/api/v1/analytics/dashboard', methods=['GET'])
@require_auth
def get_dashboard_data():
"""Get analytics dashboard data"""
query = """
WITH current_stats AS (
SELECT
COUNT(DISTINCT license_id) as active_licenses,
COUNT(DISTINCT hardware_id) as active_devices,
COUNT(*) as validations_today
FROM license_heartbeats
WHERE timestamp > CURRENT_DATE
),
anomaly_stats AS (
SELECT
COUNT(*) as total_anomalies,
COUNT(*) FILTER (WHERE severity = 'critical') as critical_anomalies,
COUNT(*) FILTER (WHERE resolved = false) as unresolved_anomalies
FROM anomaly_detections
WHERE detected_at > CURRENT_DATE - INTERVAL '7 days'
),
trend_data AS (
SELECT
DATE(timestamp) as date,
COUNT(*) as validations,
COUNT(DISTINCT license_id) as licenses,
COUNT(DISTINCT hardware_id) as devices
FROM license_heartbeats
WHERE timestamp > CURRENT_DATE - INTERVAL '7 days'
GROUP BY DATE(timestamp)
ORDER BY date
)
SELECT
cs.*,
ans.*,
(SELECT json_agg(td.*) FROM trend_data td) as trends
FROM current_stats cs, anomaly_stats ans
"""
dashboard_data = license_repo.execute_one(query)
return jsonify(dashboard_data or {})
@app.errorhandler(404)
def not_found(error):
return jsonify({"error": "Not found"}), 404
@app.errorhandler(500)
def internal_error(error):
logger.error(f"Internal error: {error}")
return jsonify({"error": "Internal server error"}), 500
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5003, debug=True)


@@ -0,0 +1,25 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 5001
# Run with gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:5001", "--workers", "4", "--timeout", "120", "app:app"]


@@ -0,0 +1,279 @@
import os
import sys
from flask import Flask, request, jsonify
from flask_cors import CORS
import jwt
from datetime import datetime, timedelta
import logging
from functools import wraps
from prometheus_flask_exporter import PrometheusMetrics
# Add parent directory to path for imports
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from config import get_config
from repositories.base import BaseRepository
# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Initialize Flask app
app = Flask(__name__)
config = get_config()
app.config.from_object(config)
CORS(app)
# Initialize Prometheus metrics
metrics = PrometheusMetrics(app)
metrics.info('auth_service_info', 'Auth Service Information', version='1.0.0')
# Initialize repository
db_repo = BaseRepository(config.DATABASE_URL)
def create_token(payload: dict, expires_delta: timedelta) -> str:
"""Create JWT token"""
to_encode = payload.copy()
expire = datetime.utcnow() + expires_delta
to_encode.update({"exp": expire, "iat": datetime.utcnow()})
return jwt.encode(
to_encode,
config.JWT_SECRET,
algorithm=config.JWT_ALGORITHM
)
def decode_token(token: str) -> dict:
"""Decode and validate JWT token"""
try:
payload = jwt.decode(
token,
config.JWT_SECRET,
algorithms=[config.JWT_ALGORITHM]
)
return payload
except jwt.ExpiredSignatureError:
raise ValueError("Token has expired")
except jwt.InvalidTokenError:
raise ValueError("Invalid token")
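`create_token` and `decode_token` delegate to PyJWT; the HS256 token format they produce can be sketched with the standard library alone. This is a simplified illustration of the signing scheme, not a replacement for the `jwt` package (it omits header validation, clock skew handling, and most claims checks):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def encode_hs256(payload: dict, secret: str) -> str:
    header = _b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def decode_hs256(token: str, secret: str) -> dict:
    header, body, sig = token.split('.')
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("Invalid token")
    payload = json.loads(base64.urlsafe_b64decode(body + '=' * (-len(body) % 4)))
    if 'exp' in payload and payload['exp'] < time.time():
        raise ValueError("Token has expired")
    return payload

token = encode_hs256({'sub': 'license-123', 'exp': time.time() + 60}, 'dev-secret-key')
claims = decode_hs256(token, 'dev-secret-key')
```

Note `hmac.compare_digest`: a constant-time comparison, so signature checks do not leak timing information.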
def require_api_key(f):
"""Decorator to require API key"""
@wraps(f)
def decorated_function(*args, **kwargs):
api_key = request.headers.get('X-API-Key')
if not api_key:
return jsonify({"error": "Missing API key"}), 401
# Validate API key
query = """
SELECT id, client_name, allowed_endpoints
FROM api_clients
WHERE api_key = %s AND is_active = true
"""
client = db_repo.execute_one(query, (api_key,))
if not client:
return jsonify({"error": "Invalid API key"}), 401
# Check if endpoint is allowed
endpoint = request.endpoint
allowed = client.get('allowed_endpoints', [])
if allowed and endpoint not in allowed:
return jsonify({"error": "Endpoint not allowed"}), 403
# Add client info to request
request.api_client = client
return f(*args, **kwargs)
return decorated_function
@app.route('/health', methods=['GET'])
def health_check():
"""Health check endpoint"""
return jsonify({
"status": "healthy",
"service": "auth",
"timestamp": datetime.utcnow().isoformat()
})
@app.route('/api/v1/auth/token', methods=['POST'])
@require_api_key
def create_access_token():
"""Create access token for license validation"""
data = request.get_json()
if not data or 'license_id' not in data:
return jsonify({"error": "Missing license_id"}), 400
license_id = data['license_id']
hardware_id = data.get('hardware_id')
# Verify license exists and is active
query = """
SELECT id, is_active, max_devices
FROM licenses
WHERE id = %s
"""
license = db_repo.execute_one(query, (license_id,))
if not license:
return jsonify({"error": "License not found"}), 404
if not license['is_active']:
return jsonify({"error": "License is not active"}), 403
# Create token payload
payload = {
"sub": license_id,
"hwid": hardware_id,
"client_id": request.api_client['id'],
"type": "access"
}
# Add features and limits based on license
payload["features"] = data.get('features', [])
payload["limits"] = {
"api_calls": config.DEFAULT_RATE_LIMIT_PER_HOUR,
"concurrent_sessions": config.MAX_CONCURRENT_SESSIONS
}
# Create tokens
access_token = create_token(payload, config.JWT_ACCESS_TOKEN_EXPIRES)
# Create refresh token
refresh_payload = {
"sub": license_id,
"client_id": request.api_client['id'],
"type": "refresh"
}
refresh_token = create_token(refresh_payload, config.JWT_REFRESH_TOKEN_EXPIRES)
return jsonify({
"access_token": access_token,
"refresh_token": refresh_token,
"token_type": "Bearer",
"expires_in": int(config.JWT_ACCESS_TOKEN_EXPIRES.total_seconds())
})
@app.route('/api/v1/auth/refresh', methods=['POST'])
def refresh_access_token():
"""Refresh access token"""
data = request.get_json()
if not data or 'refresh_token' not in data:
return jsonify({"error": "Missing refresh_token"}), 400
try:
# Decode refresh token
payload = decode_token(data['refresh_token'])
if payload.get('type') != 'refresh':
return jsonify({"error": "Invalid token type"}), 400
license_id = payload['sub']
# Verify license still active
query = "SELECT is_active FROM licenses WHERE id = %s"
license = db_repo.execute_one(query, (license_id,))
if not license or not license['is_active']:
return jsonify({"error": "License is not active"}), 403
# Create new access token
access_payload = {
"sub": license_id,
"client_id": payload['client_id'],
"type": "access"
}
access_token = create_token(access_payload, config.JWT_ACCESS_TOKEN_EXPIRES)
return jsonify({
"access_token": access_token,
"token_type": "Bearer",
"expires_in": int(config.JWT_ACCESS_TOKEN_EXPIRES.total_seconds())
})
except ValueError as e:
return jsonify({"error": str(e)}), 401
@app.route('/api/v1/auth/verify', methods=['POST'])
def verify_token():
"""Verify token validity"""
auth_header = request.headers.get('Authorization')
if not auth_header or not auth_header.startswith('Bearer '):
return jsonify({"error": "Missing or invalid authorization header"}), 401
token = auth_header.split(' ')[1]
try:
payload = decode_token(token)
return jsonify({
"valid": True,
"license_id": payload['sub'],
"expires_at": datetime.fromtimestamp(payload['exp']).isoformat()
})
except ValueError as e:
return jsonify({
"valid": False,
"error": str(e)
}), 401
@app.route('/api/v1/auth/api-key', methods=['POST'])
def create_api_key():
"""Create new API key (admin only)"""
# This endpoint should be protected by admin authentication
# For now, we'll use a simple secret header
admin_secret = request.headers.get('X-Admin-Secret')
if admin_secret != os.getenv('ADMIN_SECRET', 'change-this-admin-secret'):
return jsonify({"error": "Unauthorized"}), 401
data = request.get_json()
if not data or 'client_name' not in data:
return jsonify({"error": "Missing client_name"}), 400
import secrets
api_key = f"sk_{secrets.token_urlsafe(32)}"
secret_key = secrets.token_urlsafe(64)
query = """
INSERT INTO api_clients (client_name, api_key, secret_key, allowed_endpoints)
VALUES (%s, %s, %s, %s)
RETURNING id
"""
allowed_endpoints = data.get('allowed_endpoints', [])
client_id = db_repo.execute_insert(
query,
(data['client_name'], api_key, secret_key, allowed_endpoints)
)
if not client_id:
return jsonify({"error": "Failed to create API key"}), 500
return jsonify({
"client_id": client_id,
"api_key": api_key,
"secret_key": secret_key,
"client_name": data['client_name']
}), 201
@app.errorhandler(404)
def not_found(error):
return jsonify({"error": "Not found"}), 404
@app.errorhandler(500)
def internal_error(error):
logger.error(f"Internal error: {error}")
return jsonify({"error": "Internal server error"}), 500
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5001, debug=True)


@@ -0,0 +1,15 @@
import os
from datetime import timedelta
class Config:
    """Configuration loaded from environment variables.

    Exposes uppercase attributes so the object works both with
    app.config.from_object() and with direct attribute access
    (config.JWT_SECRET, config.JWT_ACCESS_TOKEN_EXPIRES, ...),
    which is how the services above consume it; a plain dict
    satisfies neither.
    """
    DATABASE_URL = os.getenv('DATABASE_URL', 'postgresql://postgres:password@postgres:5432/v2_adminpanel')
    REDIS_URL = os.getenv('REDIS_URL', 'redis://redis:6379/1')
    JWT_SECRET = os.getenv('JWT_SECRET', 'dev-secret-key')
    JWT_ALGORITHM = 'HS256'
    JWT_ACCESS_TOKEN_EXPIRES = timedelta(minutes=int(os.getenv('ACCESS_TOKEN_EXPIRE_MINUTES', '30')))
    JWT_REFRESH_TOKEN_EXPIRES = timedelta(days=int(os.getenv('REFRESH_TOKEN_EXPIRE_DAYS', '7')))
    FLASK_ENV = os.getenv('FLASK_ENV', 'production')
    LOG_LEVEL = os.getenv('LOG_LEVEL', 'INFO')

def get_config():
    """Return the configuration object shared by the services."""
    return Config()


@@ -0,0 +1,9 @@
flask==3.0.0
flask-cors==4.0.0
pyjwt==2.8.0
psycopg2-binary==2.9.9
redis==5.0.1
python-dotenv==1.0.0
gunicorn==21.2.0
marshmallow==3.20.1
prometheus-flask-exporter==0.23.0


@@ -0,0 +1,25 @@
FROM python:3.11-slim
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
gcc \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Create non-root user
RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
USER appuser
# Expose port
EXPOSE 5002
# Run with gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:5002", "--workers", "4", "--timeout", "120", "app:app"]


@@ -0,0 +1,409 @@
import os
import sys
from flask import Flask, request, jsonify
from flask_cors import CORS
import jwt
from datetime import datetime, timedelta
import logging
from functools import wraps
from marshmallow import Schema, fields, ValidationError

# Add parent directory to path for imports
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))

from config import get_config
from repositories.license_repo import LicenseRepository
from repositories.cache_repo import CacheRepository
from events.event_bus import EventBus, Event, EventTypes
from models import EventType, ValidationRequest, ValidationResponse

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Initialize Flask app
app = Flask(__name__)
config = get_config()
app.config.from_object(config)
CORS(app)

# Initialize dependencies
license_repo = LicenseRepository(config.DATABASE_URL)
cache_repo = CacheRepository(config.REDIS_URL)
event_bus = EventBus(config.RABBITMQ_URL)


# Validation schemas
class ValidateSchema(Schema):
    license_key = fields.Str(required=True)
    hardware_id = fields.Str(required=True)
    app_version = fields.Str()


class ActivateSchema(Schema):
    license_key = fields.Str(required=True)
    hardware_id = fields.Str(required=True)
    device_name = fields.Str()
    os_info = fields.Dict()


class HeartbeatSchema(Schema):
    session_data = fields.Dict()


class OfflineTokenSchema(Schema):
    # load_default replaces the deprecated `missing` parameter in marshmallow 3.13+
    duration_hours = fields.Int(load_default=24, validate=lambda x: 0 < x <= 72)
def require_api_key(f):
    """Decorator to require API key"""
    @wraps(f)
    def decorated_function(*args, **kwargs):
        api_key = request.headers.get('X-API-Key')
        if not api_key:
            return jsonify({"error": "Missing API key"}), 401
        # For now, accept any API key starting with 'sk_'
        # In production, validate against database
        if not api_key.startswith('sk_'):
            return jsonify({"error": "Invalid API key"}), 401
        return f(*args, **kwargs)
    return decorated_function


def require_auth_token(f):
    """Decorator to require JWT token"""
    @wraps(f)
    def decorated_function(*args, **kwargs):
        auth_header = request.headers.get('Authorization')
        if not auth_header or not auth_header.startswith('Bearer '):
            return jsonify({"error": "Missing or invalid authorization header"}), 401
        token = auth_header.split(' ')[1]
        try:
            payload = jwt.decode(
                token,
                config.JWT_SECRET,
                algorithms=[config.JWT_ALGORITHM]
            )
            request.token_payload = payload
            return f(*args, **kwargs)
        except jwt.ExpiredSignatureError:
            return jsonify({"error": "Token has expired"}), 401
        except jwt.InvalidTokenError:
            return jsonify({"error": "Invalid token"}), 401
    return decorated_function


def get_client_ip():
    """Get client IP address"""
    if request.headers.get('X-Forwarded-For'):
        return request.headers.get('X-Forwarded-For').split(',')[0]
    return request.remote_addr
@app.route('/health', methods=['GET'])
def health_check():
    """Health check endpoint"""
    return jsonify({
        "status": "healthy",
        "service": "license-api",
        "timestamp": datetime.utcnow().isoformat()
    })


@app.route('/api/v1/license/validate', methods=['POST'])
@require_api_key
def validate_license():
    """Validate license key with hardware ID"""
    schema = ValidateSchema()
    try:
        data = schema.load(request.get_json())
    except ValidationError as e:
        return jsonify({"error": "Invalid request", "details": e.messages}), 400

    license_key = data['license_key']
    hardware_id = data['hardware_id']
    app_version = data.get('app_version')

    # Check cache first
    cached_result = cache_repo.get_license_validation(license_key, hardware_id)
    if cached_result:
        logger.info(f"Cache hit for license validation: {license_key[:8]}...")
        return jsonify(cached_result)

    # Get license from database
    license = license_repo.get_license_by_key(license_key)
    if not license:
        event_bus.publish(Event(
            EventTypes.LICENSE_VALIDATION_FAILED,
            {"license_key": license_key, "reason": "not_found"},
            "license-api"
        ))
        return jsonify({
            "valid": False,
            "error": "License not found",
            "error_code": "LICENSE_NOT_FOUND"
        }), 404

    # Check if license is active
    if not license['is_active']:
        event_bus.publish(Event(
            EventTypes.LICENSE_VALIDATION_FAILED,
            {"license_id": license['id'], "reason": "inactive"},
            "license-api"
        ))
        return jsonify({
            "valid": False,
            "error": "License is not active",
            "error_code": "LICENSE_INACTIVE"
        }), 403

    # Check expiration
    if license['expires_at'] and datetime.utcnow() > license['expires_at']:
        event_bus.publish(Event(
            EventTypes.LICENSE_EXPIRED,
            {"license_id": license['id']},
            "license-api"
        ))
        return jsonify({
            "valid": False,
            "error": "License has expired",
            "error_code": "LICENSE_EXPIRED"
        }), 403

    # Check device limit
    device_count = license_repo.get_device_count(license['id'])
    if device_count >= license['max_devices']:
        # Check if this device is already registered
        if not license_repo.check_hardware_id_exists(license['id'], hardware_id):
            return jsonify({
                "valid": False,
                "error": "Device limit exceeded",
                "error_code": "DEVICE_LIMIT_EXCEEDED",
                "current_devices": device_count,
                "max_devices": license['max_devices']
            }), 403

    # Record heartbeat
    license_repo.record_heartbeat(
        license_id=license['id'],
        hardware_id=hardware_id,
        ip_address=get_client_ip(),
        user_agent=request.headers.get('User-Agent'),
        app_version=app_version
    )

    # Create response
    response = {
        "valid": True,
        "license_id": license['id'],
        "expires_at": license['expires_at'].isoformat() if license['expires_at'] else None,
        "features": license.get('features', []),
        "limits": {
            "max_devices": license['max_devices'],
            "current_devices": device_count
        }
    }

    # Cache the result
    cache_repo.set_license_validation(
        license_key,
        hardware_id,
        response,
        config.CACHE_TTL_VALIDATION
    )

    # Publish success event
    event_bus.publish(Event(
        EventTypes.LICENSE_VALIDATED,
        {
            "license_id": license['id'],
            "hardware_id": hardware_id,
            "ip_address": get_client_ip()
        },
        "license-api"
    ))

    return jsonify(response)
@app.route('/api/v1/license/activate', methods=['POST'])
@require_api_key
def activate_license():
    """Activate license on a new device"""
    schema = ActivateSchema()
    try:
        data = schema.load(request.get_json())
    except ValidationError as e:
        return jsonify({"error": "Invalid request", "details": e.messages}), 400

    license_key = data['license_key']
    hardware_id = data['hardware_id']
    device_name = data.get('device_name')
    os_info = data.get('os_info', {})

    # Get license
    license = license_repo.get_license_by_key(license_key)
    if not license:
        return jsonify({
            "error": "License not found",
            "error_code": "LICENSE_NOT_FOUND"
        }), 404

    if not license['is_active']:
        return jsonify({
            "error": "License is not active",
            "error_code": "LICENSE_INACTIVE"
        }), 403

    # Check if already activated on this device
    if license_repo.check_hardware_id_exists(license['id'], hardware_id):
        return jsonify({
            "error": "License already activated on this device",
            "error_code": "ALREADY_ACTIVATED"
        }), 400

    # Check device limit
    device_count = license_repo.get_device_count(license['id'])
    if device_count >= license['max_devices']:
        return jsonify({
            "error": "Device limit exceeded",
            "error_code": "DEVICE_LIMIT_EXCEEDED",
            "current_devices": device_count,
            "max_devices": license['max_devices']
        }), 403

    # Record activation
    license_repo.record_activation_event(
        license_id=license['id'],
        event_type=EventType.ACTIVATION,
        hardware_id=hardware_id,
        ip_address=get_client_ip(),
        user_agent=request.headers.get('User-Agent'),
        success=True,
        metadata={
            "device_name": device_name,
            "os_info": os_info
        }
    )

    # Invalidate cache
    cache_repo.invalidate_license_cache(license['id'])

    # Publish event
    event_bus.publish(Event(
        EventTypes.LICENSE_ACTIVATED,
        {
            "license_id": license['id'],
            "hardware_id": hardware_id,
            "device_name": device_name
        },
        "license-api"
    ))

    return jsonify({
        "success": True,
        "license_id": license['id'],
        "message": "License activated successfully"
    }), 201
@app.route('/api/v1/license/heartbeat', methods=['POST'])
@require_auth_token
def heartbeat():
    """Record license heartbeat"""
    schema = HeartbeatSchema()
    try:
        data = schema.load(request.get_json() or {})
    except ValidationError as e:
        return jsonify({"error": "Invalid request", "details": e.messages}), 400

    license_id = request.token_payload['sub']
    hardware_id = request.token_payload.get('hwid')

    # Record heartbeat
    license_repo.record_heartbeat(
        license_id=license_id,
        hardware_id=hardware_id,
        ip_address=get_client_ip(),
        user_agent=request.headers.get('User-Agent'),
        session_data=data.get('session_data', {})
    )

    return jsonify({
        "success": True,
        "timestamp": datetime.utcnow().isoformat()
    })


@app.route('/api/v1/license/offline-token', methods=['POST'])
@require_auth_token
def create_offline_token():
    """Create offline validation token"""
    schema = OfflineTokenSchema()
    try:
        data = schema.load(request.get_json() or {})
    except ValidationError as e:
        return jsonify({"error": "Invalid request", "details": e.messages}), 400

    license_id = request.token_payload['sub']
    hardware_id = request.token_payload.get('hwid')
    duration_hours = data['duration_hours']

    if not hardware_id:
        return jsonify({"error": "Hardware ID required"}), 400

    # Create offline token
    token = license_repo.create_license_token(
        license_id=license_id,
        hardware_id=hardware_id,
        valid_hours=duration_hours
    )
    if not token:
        return jsonify({"error": "Failed to create token"}), 500

    valid_until = datetime.utcnow() + timedelta(hours=duration_hours)
    return jsonify({
        "token": token,
        "valid_until": valid_until.isoformat(),
        "duration_hours": duration_hours
    })
@app.route('/api/v1/license/validate-offline', methods=['POST'])
def validate_offline_token():
    """Validate offline token"""
    data = request.get_json()
    if not data or 'token' not in data:
        return jsonify({"error": "Missing token"}), 400

    # Validate token
    result = license_repo.validate_token(data['token'])
    if not result:
        return jsonify({
            "valid": False,
            "error": "Invalid or expired token"
        }), 401

    return jsonify({
        "valid": True,
        "license_id": result['license_id'],
        "hardware_id": result['hardware_id'],
        "expires_at": result['valid_until'].isoformat()
    })


@app.errorhandler(404)
def not_found(error):
    return jsonify({"error": "Not found"}), 404


@app.errorhandler(500)
def internal_error(error):
    logger.error(f"Internal error: {error}")
    return jsonify({"error": "Internal server error"}), 500


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5002, debug=True)
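The offline-token endpoint bounds `duration_hours` to at most 72 and derives `valid_until` from the current UTC time. That arithmetic can be sketched on its own (a standalone helper for illustration, not the service's actual code):

```python
from datetime import datetime, timedelta

def offline_token_window(duration_hours: int = 24):
    # Mirrors OfflineTokenSchema: default 24 h, valid range (0, 72]
    if not 0 < duration_hours <= 72:
        raise ValueError("duration_hours must be in (0, 72]")
    issued_at = datetime.utcnow()
    return issued_at, issued_at + timedelta(hours=duration_hours)

issued, until = offline_token_window(48)
print(until - issued == timedelta(hours=48))  # True
```

A client should treat `valid_until` from the response as authoritative rather than recomputing it locally, since server and client clocks may drift.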


@@ -0,0 +1,10 @@
flask==3.0.0
flask-cors==4.0.0
pyjwt==2.8.0
psycopg2-binary==2.9.9
redis==5.0.1
pika==1.3.2
python-dotenv==1.0.0
gunicorn==21.2.0
marshmallow==3.20.1
requests==2.31.0


@@ -0,0 +1,28 @@
-- Add GitHub backup tracking columns to backup_history table
-- These columns track GitHub upload status and local file management
-- Add column to track if backup was uploaded to GitHub
ALTER TABLE backup_history
ADD COLUMN IF NOT EXISTS github_uploaded BOOLEAN DEFAULT FALSE;
-- Add column to track if local file was deleted after GitHub upload
ALTER TABLE backup_history
ADD COLUMN IF NOT EXISTS local_deleted BOOLEAN DEFAULT FALSE;
-- Add column to store GitHub path/filename
ALTER TABLE backup_history
ADD COLUMN IF NOT EXISTS github_path TEXT;
-- Add column to identify server backups vs database backups
ALTER TABLE backup_history
ADD COLUMN IF NOT EXISTS is_server_backup BOOLEAN DEFAULT FALSE;
-- Create index for faster queries on GitHub status
CREATE INDEX IF NOT EXISTS idx_backup_history_github_status
ON backup_history(github_uploaded, local_deleted);
-- Add comments for documentation
COMMENT ON COLUMN backup_history.github_uploaded IS 'Whether backup was uploaded to GitHub repository';
COMMENT ON COLUMN backup_history.local_deleted IS 'Whether local backup file was deleted after GitHub upload';
COMMENT ON COLUMN backup_history.github_path IS 'Path/filename of backup in GitHub repository';
COMMENT ON COLUMN backup_history.is_server_backup IS 'True for full server backups, False for database-only backups';
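The new columns enable a cleanup pass: find backups that are already on GitHub but still occupy local disk. A sketch of that query, using `sqlite3` as a stand-in for PostgreSQL (table and column names as in the migration; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE backup_history (
    id INTEGER PRIMARY KEY,
    github_uploaded BOOLEAN DEFAULT 0,
    local_deleted BOOLEAN DEFAULT 0,
    github_path TEXT,
    is_server_backup BOOLEAN DEFAULT 0)""")
con.executemany(
    "INSERT INTO backup_history (github_uploaded, local_deleted, github_path) VALUES (?, ?, ?)",
    [(1, 0, "server-backups/a.tar.gz"),   # uploaded, local copy still present
     (1, 1, "server-backups/b.tar.gz"),   # uploaded and already cleaned up
     (0, 0, None)],                       # not uploaded yet
)
# Candidates for local deletion: uploaded to GitHub, local file not yet removed
rows = con.execute(
    "SELECT github_path FROM backup_history WHERE github_uploaded AND NOT local_deleted"
).fetchall()
print(rows)  # [('server-backups/a.tar.gz',)]
```

The partial index `idx_backup_history_github_status` created above covers exactly this `(github_uploaded, local_deleted)` filter.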

restore_full_backup.sh (executable file)

@@ -0,0 +1,158 @@
#!/bin/bash
# Full Server Restore Script for V2-Docker
# Restores server from a backup created by create_full_backup.sh
set -e # Exit on error
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Check if backup path is provided
if [ $# -eq 0 ]; then
echo -e "${RED}Error: No backup specified${NC}"
echo "Usage: $0 <backup_path_or_name>"
echo ""
echo "Examples:"
echo " $0 server_backup_20250628_171705"
echo " $0 /opt/v2-Docker/server-backups/server_backup_20250628_171705"
echo " $0 server_backup_20250628_171705.tar.gz"
exit 1
fi
BACKUP_INPUT="$1"
PROJECT_ROOT="/opt/v2-Docker"
BACKUP_BASE_DIR="$PROJECT_ROOT/server-backups"
# Determine backup directory
if [ -d "$BACKUP_INPUT" ]; then
# Full path provided
BACKUP_DIR="$BACKUP_INPUT"
elif [ -d "$BACKUP_BASE_DIR/$BACKUP_INPUT" ]; then
# Backup name provided
BACKUP_DIR="$BACKUP_BASE_DIR/$BACKUP_INPUT"
elif [ -f "$BACKUP_BASE_DIR/$BACKUP_INPUT" ] && [[ "$BACKUP_INPUT" == *.tar.gz ]]; then
# Tar file provided - extract first
echo -e "${YELLOW}Extracting backup archive...${NC}"
TEMP_DIR=$(mktemp -d)
tar xzf "$BACKUP_BASE_DIR/$BACKUP_INPUT" -C "$TEMP_DIR"
BACKUP_DIR=$(find "$TEMP_DIR" -maxdepth 1 -type d -name "server_backup_*" | head -1)
if [ -z "$BACKUP_DIR" ]; then
echo -e "${RED}Error: Could not find backup directory in archive${NC}"
rm -rf "$TEMP_DIR"
exit 1
fi
else
echo -e "${RED}Error: Backup not found: $BACKUP_INPUT${NC}"
exit 1
fi
echo -e "${GREEN}Starting V2-Docker Full Server Restore...${NC}"
echo "Restoring from: $BACKUP_DIR"
# Verify backup directory structure
if [ ! -f "$BACKUP_DIR/database_backup.sql.gz" ]; then
echo -e "${RED}Error: Invalid backup - database_backup.sql.gz not found${NC}"
exit 1
fi
# Warning prompt
echo -e "${YELLOW}WARNING: This will restore the entire server state!${NC}"
echo "This includes:"
echo " - Configuration files"
echo " - SSL certificates"
echo " - Database (all data will be replaced)"
echo " - Docker volumes"
echo ""
read -p "Are you sure you want to continue? (yes/no): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
echo "Restore cancelled."
exit 0
fi
# 1. Stop all services
echo -e "${YELLOW}Stopping Docker services...${NC}"
cd "$PROJECT_ROOT"
docker-compose down
# 2. Restore configuration files
echo -e "${YELLOW}Restoring configuration files...${NC}"
if [ -f "$BACKUP_DIR/configs/docker-compose.yaml" ]; then
cp "$BACKUP_DIR/configs/docker-compose.yaml" "$PROJECT_ROOT/"
fi
if [ -f "$BACKUP_DIR/configs/.env" ]; then
cp "$BACKUP_DIR/configs/.env" "$PROJECT_ROOT/"
fi
if [ -f "$BACKUP_DIR/configs/nginx.conf" ]; then
mkdir -p "$PROJECT_ROOT/v2_nginx"
cp "$BACKUP_DIR/configs/nginx.conf" "$PROJECT_ROOT/v2_nginx/"
fi
# Restore SSL certificates
if [ -d "$BACKUP_DIR/configs/ssl" ]; then
mkdir -p "$PROJECT_ROOT/v2_nginx/ssl"
cp -r "$BACKUP_DIR/configs/ssl/"* "$PROJECT_ROOT/v2_nginx/ssl/"
fi
# 3. Restore Docker volumes
echo -e "${YELLOW}Restoring Docker volumes...${NC}"
# Remove old volume
docker volume rm v2_postgres_data 2>/dev/null || true
# Create new volume
docker volume create v2_postgres_data
# Restore volume data
docker run --rm -v v2_postgres_data:/data -v "$BACKUP_DIR/volumes":/backup alpine tar xzf /backup/v2_postgres_data.tar.gz -C /data
# 4. Start database service only
echo -e "${YELLOW}Starting database service...${NC}"
docker-compose up -d postgres   # service is named "postgres" in docker-compose.yaml
# 5. Restore database
echo -e "${YELLOW}Restoring database...${NC}"
DB_CONTAINER="db"   # container_name from docker-compose.yaml
# Get DB credentials from the restored .env (fall back to defaults)
if [ -f "$PROJECT_ROOT/.env" ]; then
    source "$PROJECT_ROOT/.env"
fi
DB_NAME="${POSTGRES_DB:-v2_license_db}"
DB_USER="${POSTGRES_USER:-v2_user}"
DB_PASS="${POSTGRES_PASSWORD:-${DB_PASS:-v2_password}}"
# Wait for database to accept connections
echo "Waiting for database to be ready..."
until docker exec "$DB_CONTAINER" pg_isready -U "$DB_USER" >/dev/null 2>&1; do
    sleep 2
done
# Drop and recreate database (connect to the maintenance DB, not $DB_NAME)
docker exec "$DB_CONTAINER" psql -U "$DB_USER" -d postgres -c "DROP DATABASE IF EXISTS $DB_NAME;"
docker exec "$DB_CONTAINER" psql -U "$DB_USER" -d postgres -c "CREATE DATABASE $DB_NAME;"
# Restore database from backup
gunzip -c "$BACKUP_DIR/database_backup.sql.gz" | docker exec -i "$DB_CONTAINER" psql -U "$DB_USER" -d "$DB_NAME"
# 6. Start all services
echo -e "${YELLOW}Starting all services...${NC}"
docker-compose up -d
# Wait for services to be ready
echo "Waiting for services to start..."
sleep 15
# 7. Verify services are running
echo -e "${YELLOW}Verifying services...${NC}"
docker-compose ps
# Clean up temporary directory if used
if [ -n "$TEMP_DIR" ] && [ -d "$TEMP_DIR" ]; then
rm -rf "$TEMP_DIR"
fi
echo -e "${GREEN}Restore completed successfully!${NC}"
echo ""
echo "Next steps:"
echo "1. Verify the application is working at https://admin-panel-undso.intelsight.de"
echo "2. Check logs: docker-compose logs -f"
echo "3. If there are issues, check the backup info: cat $BACKUP_DIR/backup_info.txt"
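The script's first validation step, checking that `database_backup.sql.gz` exists inside the backup, can also be run against the `.tar.gz` archive before extracting anything. A hedged Python sketch (the helper name is made up; the member name matches the layout the script expects):

```python
import io
import tarfile

def archive_has_db_dump(tar_bytes: bytes) -> bool:
    # Pre-flight check: does the archive contain a database dump?
    with tarfile.open(fileobj=io.BytesIO(tar_bytes), mode="r:gz") as tar:
        return any(m.name.endswith("database_backup.sql.gz") for m in tar.getmembers())

# Build a tiny in-memory archive to demonstrate
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo("server_backup_20250101_000000/database_backup.sql.gz")
    info.size = 0
    tar.addfile(info, io.BytesIO(b""))

print(archive_has_db_dump(buf.getvalue()))  # True
```

Running such a check before `docker-compose down` avoids taking the services offline for an archive that would fail validation anyway.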

scripts/reset-to-dhcp.ps1 (regular file)

@@ -0,0 +1,38 @@
# PowerShell script to reset to DHCP
# MUST BE RUN AS ADMINISTRATOR!
Write-Host "=== Resetting to DHCP (automatic IP) ===" -ForegroundColor Green

# Find active WLAN adapters
$adapters = Get-NetAdapter | Where-Object {$_.Status -eq 'Up' -and ($_.Name -like '*WLAN*' -or $_.Name -like '*Wi-Fi*')}
if ($adapters.Count -eq 0) {
    Write-Host "No active WLAN adapter found!" -ForegroundColor Red
    exit
}

# Take the first active WLAN adapter
$adapter = $adapters[0]
Write-Host "`nResetting adapter to DHCP: $($adapter.Name)" -ForegroundColor Cyan

# Remove static IP
Write-Host "`nRemoving static IP configuration..." -ForegroundColor Yellow
Remove-NetIPAddress -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 -Confirm:$false -ErrorAction SilentlyContinue
Remove-NetRoute -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 -Confirm:$false -ErrorAction SilentlyContinue

# Enable DHCP
Write-Host "`nEnabling DHCP..." -ForegroundColor Green
Set-NetIPInterface -InterfaceIndex $adapter.InterfaceIndex -Dhcp Enabled

# Set DNS to automatic
Write-Host "`nSetting DNS to automatic..." -ForegroundColor Green
Set-DnsClientServerAddress -InterfaceIndex $adapter.InterfaceIndex -ResetServerAddresses

Write-Host "`n✅ Done! The adapter is now using DHCP (automatic IP assignment) again" -ForegroundColor Green

# Wait briefly
Start-Sleep -Seconds 3

# Show the new IP
Write-Host "`nNew IP address:" -ForegroundColor Yellow
Get-NetIPAddress -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 | Format-Table IPAddress, PrefixLength

scripts/set-static-ip.ps1 (regular file)

@@ -0,0 +1,54 @@
# PowerShell script for static IP configuration
# MUST BE RUN AS ADMINISTRATOR!
Write-Host "=== Setting up static IP 192.168.178.88 ===" -ForegroundColor Green

# Find active WLAN adapters
$adapters = Get-NetAdapter | Where-Object {$_.Status -eq 'Up' -and ($_.Name -like '*WLAN*' -or $_.Name -like '*Wi-Fi*')}
if ($adapters.Count -eq 0) {
    Write-Host "No active WLAN adapter found!" -ForegroundColor Red
    exit
}

Write-Host "`nWLAN adapters found:" -ForegroundColor Yellow
$adapters | Format-Table Name, Status, InterfaceIndex

# Take the first active WLAN adapter
$adapter = $adapters[0]
Write-Host "`nConfiguring adapter: $($adapter.Name)" -ForegroundColor Cyan

# Show the current configuration
Write-Host "`nCurrent IP configuration:" -ForegroundColor Yellow
Get-NetIPAddress -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 | Format-Table IPAddress, PrefixLength

# Remove the old IP configuration
Write-Host "`nRemoving old IP configuration..." -ForegroundColor Yellow
Remove-NetIPAddress -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 -Confirm:$false -ErrorAction SilentlyContinue
Remove-NetRoute -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 -Confirm:$false -ErrorAction SilentlyContinue

# Set the new static IP
Write-Host "`nSetting new static IP: 192.168.178.88" -ForegroundColor Green
New-NetIPAddress -InterfaceIndex $adapter.InterfaceIndex -IPAddress "192.168.178.88" -PrefixLength 24 -DefaultGateway "192.168.178.1" -AddressFamily IPv4

# Set DNS servers (FRITZ!Box and Google)
Write-Host "`nSetting DNS servers..." -ForegroundColor Green
Set-DnsClientServerAddress -InterfaceIndex $adapter.InterfaceIndex -ServerAddresses "192.168.178.1", "8.8.8.8"

# Show the new configuration
Write-Host "`nNew IP configuration:" -ForegroundColor Green
Get-NetIPAddress -InterfaceIndex $adapter.InterfaceIndex -AddressFamily IPv4 | Format-Table IPAddress, PrefixLength
Get-NetRoute -InterfaceIndex $adapter.InterfaceIndex -DestinationPrefix "0.0.0.0/0" | Format-Table DestinationPrefix, NextHop

Write-Host "`n✅ Done! Your IP is now: 192.168.178.88" -ForegroundColor Green
Write-Host "The FRITZ!Box port forwarding should work now!" -ForegroundColor Green

# Test connectivity ($? only reflects whether the last command ran, so capture the result)
Write-Host "`nTesting internet connection..." -ForegroundColor Yellow
$connected = Test-NetConnection google.com -Port 80 -InformationLevel Quiet
if ($connected) {
    Write-Host "✅ Internet connection works!" -ForegroundColor Green
} else {
    Write-Host "❌ No internet connection - check the settings!" -ForegroundColor Red
}

scripts/setup-firewall.ps1 (regular file)

@@ -0,0 +1,16 @@
# PowerShell script for Windows Firewall configuration
# Run as administrator!
Write-Host "Configuring Windows Firewall for Docker..." -ForegroundColor Green

# Firewall rules for HTTP and HTTPS
New-NetFirewallRule -DisplayName "Docker HTTP" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow -ErrorAction SilentlyContinue
New-NetFirewallRule -DisplayName "Docker HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow -ErrorAction SilentlyContinue
Write-Host "Firewall rules created." -ForegroundColor Green

# Restart the Docker service (optional)
Write-Host "Restarting Docker service..." -ForegroundColor Yellow
Restart-Service docker
Write-Host "Done! Ports 80 and 443 are now open." -ForegroundColor Green


@@ -0,0 +1,70 @@
# PostgreSQL database
POSTGRES_DB=meinedatenbank
POSTGRES_USER=adminuser
POSTGRES_PASSWORD=supergeheimespasswort

# Admin panel credentials
ADMIN1_USERNAME=rac00n
ADMIN1_PASSWORD=1248163264
ADMIN2_USERNAME=w@rh@mm3r
ADMIN2_PASSWORD=Warhammer123!

# License server API key for authentication

# Domains (can be read by the app, e.g. for links or CORS)
API_DOMAIN=api-software-undso.intelsight.de
ADMIN_PANEL_DOMAIN=admin-panel-undso.intelsight.de

# ===================== OPTIONAL VARIABLES =====================
# JWT for API auth (IMPORTANT: for secure token signing!)
JWT_SECRET=xY9ZmK2pL7nQ4wF6jH8vB3tG5aZ1dE7fR9hT2kM4nP6qS8uW0xC3yA5bD7eF9gH2jK4

# E-mail configuration (e.g. for expiry warnings)
# MAIL_SERVER=smtp.meinedomain.de
# MAIL_PORT=587
# MAIL_USERNAME=deinemail
# MAIL_PASSWORD=geheim
# MAIL_FROM=no-reply@meinedomain.de

# Logging
# LOG_LEVEL=info

# Allowed CORS domains (for the web frontend)
# ALLOWED_ORIGINS=https://admin.meinedomain.de

# ===================== VERSION =====================
# Current software version, maintained on the server side
# Used by the license server to compare against the customer's version
LATEST_CLIENT_VERSION=1.0.0

# ===================== BACKUP CONFIGURATION =====================
# E-mail for backup notifications
EMAIL_ENABLED=false
# Backup encryption (optional; generated automatically if left empty)
# BACKUP_ENCRYPTION_KEY=

# ===================== CAPTCHA CONFIGURATION =====================
# Google reCAPTCHA v2 keys (https://www.google.com/recaptcha/admin)
# Commented out for the PoC phase - CAPTCHA is skipped when keys are missing
# RECAPTCHA_SITE_KEY=your-site-key-here
# RECAPTCHA_SECRET_KEY=your-secret-key-here

# ===================== MONITORING CONFIGURATION =====================
# Grafana admin credentials
GRAFANA_USER=admin
GRAFANA_PASSWORD=admin
# SMTP settings for Alertmanager (optional)
# SMTP_USERNAME=your-email@gmail.com
# SMTP_PASSWORD=your-app-password
# Webhook URLs for critical alerts (optional)
# WEBHOOK_CRITICAL=https://your-webhook-url/critical
# WEBHOOK_SECURITY=https://your-webhook-url/security


@@ -0,0 +1,164 @@
services:
  postgres:
    build:
      context: ../v2_postgres
    container_name: db
    restart: always
    env_file: .env
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_INITDB_ARGS: '--encoding=UTF8 --locale=de_DE.UTF-8'
      POSTGRES_COLLATE: 'de_DE.UTF-8'
      POSTGRES_CTYPE: 'de_DE.UTF-8'
      TZ: Europe/Berlin
      PGTZ: Europe/Berlin
    volumes:
      # Persistent database storage on the Windows host
      - postgres_data:/var/lib/postgresql/data
      # Init script for tables
      - ../v2_adminpanel/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - internal_net
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4g

  license-server:
    build:
      context: ../v2_lizenzserver
    container_name: license-server
    restart: always
    # Port mapping removed - only reachable through Nginx
    env_file: .env
    environment:
      TZ: Europe/Berlin
    depends_on:
      - postgres
    networks:
      - internal_net
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4g
#  auth-service:
#    build:
#      context: ../lizenzserver/services/auth
#    container_name: auth-service
#    restart: always
#    # Port 5001 - internal only
#    env_file: .env
#    environment:
#      TZ: Europe/Berlin
#      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
#      REDIS_URL: redis://redis:6379/1
#      JWT_SECRET: ${JWT_SECRET}
#      FLASK_ENV: production
#    depends_on:
#      - postgres
#      - redis
#    networks:
#      - internal_net
#    deploy:
#      resources:
#        limits:
#          cpus: '1'
#          memory: 1g

#  analytics-service:
#    build:
#      context: ../lizenzserver/services/analytics
#    container_name: analytics-service
#    restart: always
#    # Port 5003 - internal only
#    env_file: .env
#    environment:
#      TZ: Europe/Berlin
#      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
#      REDIS_URL: redis://redis:6379/2
#      JWT_SECRET: ${JWT_SECRET}
#      FLASK_ENV: production
#    depends_on:
#      - postgres
#      - redis
#      - rabbitmq
#    networks:
#      - internal_net
#    deploy:
#      resources:
#        limits:
#          cpus: '1'
#          memory: 2g

#  admin-api-service:
#    build:
#      context: ../lizenzserver/services/admin_api
#    container_name: admin-api-service
#    restart: always
#    # Port 5004 - internal only
#    env_file: .env
#    environment:
#      TZ: Europe/Berlin
#      DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
#      REDIS_URL: redis://redis:6379/3
#      JWT_SECRET: ${JWT_SECRET}
#      FLASK_ENV: production
#    depends_on:
#      - postgres
#      - redis
#      - rabbitmq
#    networks:
#      - internal_net
#    deploy:
#      resources:
#        limits:
#          cpus: '1'
#          memory: 2g
  admin-panel:
    build:
      context: ../v2_adminpanel
    container_name: admin-panel
    restart: always
    # Port mapping removed - only reachable through nginx
    env_file: .env
    environment:
      TZ: Europe/Berlin
    depends_on:
      - postgres
    networks:
      - internal_net
    volumes:
      # Backup directory
      - ../backups:/app/backups
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 4g

  nginx:
    build:
      context: ../v2_nginx
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    environment:
      TZ: Europe/Berlin
    depends_on:
      - admin-panel
      - license-server
    networks:
      - internal_net

networks:
  internal_net:
    driver: bridge

volumes:
  postgres_data:


@@ -0,0 +1,122 @@
events {
    worker_connections 1024;
}

http {
    # Enable nginx status page for monitoring
    server {
        listen 8080;
        server_name localhost;

        location /nginx_status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            allow 172.16.0.0/12;  # Docker networks
            deny all;
        }
    }

    # Modern SSL settings for maximum security
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
    ssl_prefer_server_ciphers off;

    # SSL session settings
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    # DH parameters for Perfect Forward Secrecy
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    # Admin Panel
    server {
        listen 80;
        server_name admin-panel-undso.intelsight.de;
        # Redirect HTTP to HTTPS
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name admin-panel-undso.intelsight.de;

        # SSL certificates (real certificates)
        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;

        # Security headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;

        # Proxy settings
        location / {
            proxy_pass http://admin-panel:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # WebSocket support (if needed)
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }

        # Auth Service API (internal only) - temporarily disabled
        # location /api/v1/auth/ {
        #     proxy_pass http://auth-service:5001/api/v1/auth/;
        #     proxy_set_header Host $host;
        #     proxy_set_header X-Real-IP $remote_addr;
        #     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #     proxy_set_header X-Forwarded-Proto $scheme;
        #     proxy_set_header Authorization $http_authorization;
        # }
    }
    # API server (for later)
    server {
        listen 80;
        server_name api-software-undso.intelsight.de;
        return 301 https://$server_name$request_uri;
    }

    server {
        listen 443 ssl;
        server_name api-software-undso.intelsight.de;

        ssl_certificate /etc/nginx/ssl/fullchain.pem;
        ssl_certificate_key /etc/nginx/ssl/privkey.pem;

        # Security headers
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;

        location / {
            proxy_pass http://license-server:8443;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            # WebSocket support (if needed)
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}


@@ -0,0 +1,10 @@
# Ignore all SSL certificates
*.pem
*.crt
*.key
*.p12
*.pfx
# But keep the README
!README.md
!.gitignore


@@ -0,0 +1,29 @@
# SSL Certificate Directory
This directory should contain the following files for SSL to work:
1. **fullchain.pem** - The full certificate chain
2. **privkey.pem** - The private key (keep this secure!)
3. **dhparam.pem** - Diffie-Hellman parameters for enhanced security
## For intelsight.de deployment:
Copy your SSL certificates here:
```bash
cp /path/to/fullchain.pem ./
cp /path/to/privkey.pem ./
```
Generate dhparam.pem if it does not exist:
```bash
openssl dhparam -out dhparam.pem 2048
```
## File Permissions:
```bash
chmod 644 fullchain.pem
chmod 600 privkey.pem
chmod 644 dhparam.pem
```
**IMPORTANT**: Never commit actual SSL certificates to the repository!
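The three required files and the permission rules above can be verified mechanically. A hedged sketch (the helper `check_ssl_dir` is illustrative, not part of the repo):

```python
import os
import stat
import tempfile

REQUIRED = ["fullchain.pem", "privkey.pem", "dhparam.pem"]

def check_ssl_dir(path):
    """Return a list of problems with an SSL certificate directory."""
    problems = []
    for name in REQUIRED:
        full = os.path.join(path, name)
        if not os.path.exists(full):
            problems.append(f"missing: {name}")
        elif name == "privkey.pem" and os.stat(full).st_mode & (stat.S_IRGRP | stat.S_IROTH):
            # privkey.pem should be 600: no group/world read access
            problems.append("privkey.pem is group/world readable")
    return problems

# Demo against an empty temp directory: every file is reported missing
empty_dir = tempfile.mkdtemp()
print(check_ssl_dir(empty_dir))
# ['missing: fullchain.pem', 'missing: privkey.pem', 'missing: dhparam.pem']
```

Such a check could run at container start so a misconfigured certificate directory fails fast instead of surfacing as an opaque nginx SSL error.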


@@ -0,0 +1,5 @@
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
admin-panel v2-admin-panel "python app.py" admin-panel 21 hours ago Up 21 hours 5000/tcp
db v2-postgres "docker-entrypoint.s…" postgres 21 hours ago Up 21 hours 5432/tcp
license-server v2-license-server "uvicorn app.main:ap…" license-server 21 hours ago Up 21 hours 8443/tcp
nginx-proxy v2-nginx "/docker-entrypoint.…" nginx 21 hours ago Up 21 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp


@@ -0,0 +1,5 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e19a609cc5c v2-nginx "/docker-entrypoint.…" 21 hours ago Up 21 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp nginx-proxy
60acd5642854 v2-admin-panel "python app.py" 21 hours ago Up 21 hours 5000/tcp admin-panel
d2aa58e670bc v2-license-server "uvicorn app.main:ap…" 21 hours ago Up 21 hours 8443/tcp license-server
6f40b240e975 v2-postgres "docker-entrypoint.s…" 21 hours ago Up 21 hours 5432/tcp db


@@ -0,0 +1,10 @@
bad7324 Backup after importing licenses and resources (77 licenses, 31 resources)
b28b60e backups only
f105039 Backup after restoring customer data from an old backup
a77c34c Backup after migrating users to the database
85c7499 Add full server backup with Git LFS
8aa79c6 Merge branch 'main' of https://github.com/UserIsMH/v2-Docker
4ab51a7 Hetzner deploy version (hopefully)
35fd8fd Update SYSTEM_DOCUMENTATION.md
5b71a1d Naming consistency + license expiry
cdf81e2 Dashboard adjusted


@@ -0,0 +1,10 @@
V2-Docker Server Backup
Created: Sat Jun 28 08:39:06 PM UTC 2025
Timestamp: 20250628_203904
Type: Full Server Backup
Contents:
- Configuration files (docker-compose, nginx, SSL)
- PostgreSQL database dump
- Docker volumes
- Git status and history
- Docker container status


@@ -0,0 +1,70 @@
# PostgreSQL database
POSTGRES_DB=meinedatenbank
POSTGRES_USER=adminuser
POSTGRES_PASSWORD=supergeheimespasswort
# Admin panel credentials
ADMIN1_USERNAME=rac00n
ADMIN1_PASSWORD=1248163264
ADMIN2_USERNAME=w@rh@mm3r
ADMIN2_PASSWORD=Warhammer123!
# License server API key for authentication
# Domains (can be read by the app, e.g. for links or CORS)
API_DOMAIN=api-software-undso.intelsight.de
ADMIN_PANEL_DOMAIN=admin-panel-undso.intelsight.de
# ===================== OPTIONAL VARIABLES =====================
# JWT for API auth (IMPORTANT: for secure token signing!)
JWT_SECRET=xY9ZmK2pL7nQ4wF6jH8vB3tG5aZ1dE7fR9hT2kM4nP6qS8uW0xC3yA5bD7eF9gH2jK4
# E-mail configuration (e.g. for expiry warnings)
# MAIL_SERVER=smtp.meinedomain.de
# MAIL_PORT=587
# MAIL_USERNAME=deinemail
# MAIL_PASSWORD=geheim
# MAIL_FROM=no-reply@meinedomain.de
# Logging
# LOG_LEVEL=info
# Allowed CORS domains (for web frontend)
# ALLOWED_ORIGINS=https://admin.meinedomain.de
# ===================== VERSION =====================
# Current software version, maintained server-side
# Used by the license server to compare against the client version
LATEST_CLIENT_VERSION=1.0.0
# ===================== BACKUP CONFIGURATION =====================
# E-mail for backup notifications
EMAIL_ENABLED=false
# Backup encryption (optional, generated automatically if empty)
# BACKUP_ENCRYPTION_KEY=
# ===================== CAPTCHA CONFIGURATION =====================
# Google reCAPTCHA v2 keys (https://www.google.com/recaptcha/admin)
# Commented out for the PoC phase - CAPTCHA is skipped if keys are missing
# RECAPTCHA_SITE_KEY=your-site-key-here
# RECAPTCHA_SECRET_KEY=your-secret-key-here
# ===================== MONITORING CONFIGURATION =====================
# Grafana admin credentials
GRAFANA_USER=admin
GRAFANA_PASSWORD=admin
# SMTP settings for Alertmanager (optional)
# SMTP_USERNAME=your-email@gmail.com
# SMTP_PASSWORD=your-app-password
# Webhook URLs for critical alerts (optional)
# WEBHOOK_CRITICAL=https://your-webhook-url/critical
# WEBHOOK_SECURITY=https://your-webhook-url/security
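LATEST_CLIENT_VERSION is compared against the version a client reports. The license server code is not part of this file, so the following is only a sketch of one plausible comparison, assuming plain dotted numeric versions:

```python
def is_outdated(client_version, latest_version):
    """Compare dotted numeric versions, e.g. '1.0.0' vs '1.2.0'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(client_version) < as_tuple(latest_version)

print(is_outdated("0.9.5", "1.0.0"))  # True
print(is_outdated("1.0.0", "1.0.0"))  # False
```

Tuple comparison handles multi-digit components correctly ("1.10.0" is newer than "1.9.0"), which naive string comparison would get wrong.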


@@ -0,0 +1,164 @@
services:
postgres:
build:
context: ../v2_postgres
container_name: db
restart: always
env_file: .env
environment:
POSTGRES_HOST: postgres
POSTGRES_INITDB_ARGS: '--encoding=UTF8 --locale=de_DE.UTF-8'
POSTGRES_COLLATE: 'de_DE.UTF-8'
POSTGRES_CTYPE: 'de_DE.UTF-8'
TZ: Europe/Berlin
PGTZ: Europe/Berlin
volumes:
# Persistent storage of the database on the Windows host
- postgres_data:/var/lib/postgresql/data
# Init script for tables
- ../v2_adminpanel/init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- internal_net
deploy:
resources:
limits:
cpus: '2'
memory: 4g
license-server:
build:
context: ../v2_lizenzserver
container_name: license-server
restart: always
# Port mapping removed - only reachable via nginx
env_file: .env
environment:
TZ: Europe/Berlin
depends_on:
- postgres
networks:
- internal_net
deploy:
resources:
limits:
cpus: '2'
memory: 4g
# auth-service:
# build:
# context: ../lizenzserver/services/auth
# container_name: auth-service
# restart: always
# # Port 5001 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/1
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 1g
# analytics-service:
# build:
# context: ../lizenzserver/services/analytics
# container_name: analytics-service
# restart: always
# # Port 5003 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/2
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# - rabbitmq
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 2g
# admin-api-service:
# build:
# context: ../lizenzserver/services/admin_api
# container_name: admin-api-service
# restart: always
# # Port 5004 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/3
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# - rabbitmq
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 2g
admin-panel:
build:
context: ../v2_adminpanel
container_name: admin-panel
restart: always
# Port mapping removed - only reachable via nginx
env_file: .env
environment:
TZ: Europe/Berlin
depends_on:
- postgres
networks:
- internal_net
volumes:
# Backup directory
- ../backups:/app/backups
deploy:
resources:
limits:
cpus: '2'
memory: 4g
nginx:
build:
context: ../v2_nginx
container_name: nginx-proxy
restart: always
ports:
- "80:80"
- "443:443"
environment:
TZ: Europe/Berlin
depends_on:
- admin-panel
- license-server
networks:
- internal_net
networks:
internal_net:
driver: bridge
volumes:
postgres_data:


@@ -0,0 +1,122 @@
events {
worker_connections 1024;
}
http {
# Enable nginx status page for monitoring
server {
listen 8080;
server_name localhost;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
allow 172.16.0.0/12; # Docker networks
deny all;
}
}
# Modern SSL settings for maximum security
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
# SSL session settings
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# DH parameters for perfect forward secrecy
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
# Admin Panel
server {
listen 80;
server_name admin-panel-undso.intelsight.de;
# Redirect HTTP to HTTPS
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name admin-panel-undso.intelsight.de;
# SSL certificates (real certificates)
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
# Security Headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Proxy settings
location / {
proxy_pass http://admin-panel:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (if needed)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Auth Service API (internal only) - temporarily disabled
# location /api/v1/auth/ {
# proxy_pass http://auth-service:5001/api/v1/auth/;
# proxy_set_header Host $host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto $scheme;
# proxy_set_header Authorization $http_authorization;
# }
}
# API server (for later use)
server {
listen 80;
server_name api-software-undso.intelsight.de;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name api-software-undso.intelsight.de;
ssl_certificate /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;
# Security Headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
location / {
proxy_pass http://license-server:8443;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket support (if needed)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
}


@@ -0,0 +1,8 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA3UNy/iKdzjC78mqJ39+w9uotmnI9yglBXI7N/+t42KSX19TCsE5I
Dw+bToiUJHAqu+BG2ZNZhvB4+NStFVkPAnEm1I4UOXR9skWgOqwhqotPUpHduOLC
wooKpMUe26dGszM/tQduYoupzfwbVU8ENamLKXOqrzz/CBmo8r1uvPNAM0AljVSO
mOCMIu8C0KBm5u6I1USjp2xNi8xTeasBsLc1iRbxKLKNLNQW4dL9fO7NQIDPghOi
5YTMiNoO14TsCrzzPIF4AFWnBW2XTGwYlx5CuAR/ZUmbzdEVD7ACka2MP6PSnjLK
SIjlM7dTQQHASm81JazbNFqYBBk69/GuZwIBAg==
-----END DH PARAMETERS-----


@@ -0,0 +1 @@
./create_full_backup.sh: line 51: docker-compose: command not found


@@ -0,0 +1,5 @@
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e19a609cc5c v2-nginx "/docker-entrypoint.…" 25 hours ago Up About an hour 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp nginx-proxy
60acd5642854 v2-admin-panel "python app.py" 25 hours ago Up About an hour 5000/tcp admin-panel
d2aa58e670bc v2-license-server "uvicorn app.main:ap…" 25 hours ago Up About an hour 8443/tcp license-server
6f40b240e975 v2-postgres "docker-entrypoint.s…" 25 hours ago Up About an hour 5432/tcp db


@@ -0,0 +1,50 @@
553c376 Test backup
98bee9c Backup before extending the admin panel backup system
bad7324 Backup after importing licenses and resources (77 licenses, 31 resources)
b28b60e backups only
f105039 Backup after restoring customer data from an old backup
a77c34c Backup after migrating users to the database
85c7499 Add full server backup with Git LFS
8aa79c6 Merge branch 'main' of https://github.com/UserIsMH/v2-Docker
4ab51a7 Hetzner deploy version (hopefully)
35fd8fd Update SYSTEM_DOCUMENTATION.md
5b71a1d Naming consistency + license expiry
cdf81e2 Dashboard adjusted
4a13946 Lead management usability upgrade
45e236f Lead management - interim state
8cb483a Documentation brought up to date
4b093fa Log user fix
b9b943e Export button works now
74391e6 License overview DB data problem fix
9982f14 License overview fix
ce03b90 License overview improved
1146406 Bug fix - API
4ed8889 API key fix - no longer multiple keys
889a7b4 Documentation update
1b5b7d0 API key config is finished
b420452 License server API odds and ends
6d1577c Create TODO_LIZENZSERVER_CONFIG.md
20be02d CLAUDE.md as guideline
75c2f0d Monitoring fix
0a994fa Error handling
08e4e93 Stricter distinction between test and real licenses
fdf74c1 Monitoring adjustment
3d02c7a Service status in the dashboard
e2b5247 System status - license server fix
1e6012a Removed unneeded Redis and RabbitMQ
e6799c6 Cleaned up Grafana and the like
3d899b1 Changed "Test" to "Fake" because of a naming problem
fec588b License deletion protection
1451a23 All licenses in the navbar
627c6c3 Dashboard shows real data
fff82f4 "Session" renamed to "Active usage" in the dashboard
ae30b74 License server (backend) - created
afa2b52 Customers & licenses fix
b822504 Contacts - editing phone numbers and e-mail addresses is in
9e5843a Contacts overview
0e79e5e Cleaned up all .md files
f73c64a Notes can now be edited
72e328a Leads are integrated
c349469 Status updated
f82131b Provisionally finished server
c30d974 Interim state - without Prometheus


@@ -0,0 +1,39 @@
On branch main
Changes not staged for commit:
(use "git add/rm <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
deleted: server-backups/server_backup_20250628_145911.tar.gz
deleted: server-backups/server_backup_20250628_153152.tar.gz
deleted: server-backups/server_backup_20250628_160032.tar.gz
deleted: server-backups/server_backup_20250628_165902.tar.gz
deleted: server-backups/server_backup_20250628_171741.tar.gz
deleted: server-backups/server_backup_20250628_190433.tar.gz
Untracked files:
(use "git add <file>..." to include in what will be committed)
.gitattributes
API_REFERENCE.md
JOURNAL.md
SSL/
backup_before_cleanup.sh
backups/
cloud-init.yaml
create_full_backup.sh
generate-secrets.py
lizenzserver/
migrations/
restore_full_backup.sh
scripts/
server-backups/server_backup_20250628_171705/
server-backups/server_backup_20250628_203904/
setup_backup_cron.sh
v2/
v2_adminpanel/
v2_lizenzserver/
v2_nginx/
v2_postgreSQL/
v2_postgres/
v2_testing/
verify-deployment.sh
no changes added to commit (use "git add" and/or "git commit -a")

setup_backup_cron.sh (executable file)

@@ -0,0 +1,21 @@
#!/bin/bash
# Setup daily backup cron job at 3:00 AM
CRON_JOB="0 3 * * * cd /opt/v2-Docker && /usr/bin/python3 /opt/v2-Docker/v2_adminpanel/scheduled_backup.py >> /opt/v2-Docker/logs/cron_backup.log 2>&1"
# Check if cron job already exists
if crontab -l 2>/dev/null | grep -q "scheduled_backup.py"; then
echo "Backup cron job already exists"
else
# Add the cron job
(crontab -l 2>/dev/null; echo "$CRON_JOB") | crontab -
echo "Backup cron job added successfully"
fi
# Create logs directory if it doesn't exist
mkdir -p /opt/v2-Docker/logs
# Show current crontab
echo "Current crontab:"
crontab -l
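The CRON_JOB string above uses the standard five-field cron syntax (minute, hour, day of month, month, day of week) followed by the command. A small sketch that splits such an entry to confirm it fires daily at 03:00 (the helper is illustrative, not part of the script):

```python
def parse_cron_schedule(entry):
    """Split a crontab line into its five schedule fields and the command."""
    fields = entry.split(None, 5)
    minute, hour, dom, month, dow = fields[:5]
    command = fields[5] if len(fields) > 5 else ""
    return {"minute": minute, "hour": hour, "day_of_month": dom,
            "month": month, "day_of_week": dow, "command": command}

sched = parse_cron_schedule("0 3 * * * cd /opt/v2-Docker && /usr/bin/python3 backup.py")
print(sched["hour"], sched["minute"])  # 3 0
```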

v2/.env (new file)

@@ -0,0 +1,70 @@
# PostgreSQL database
POSTGRES_DB=meinedatenbank
POSTGRES_USER=adminuser
POSTGRES_PASSWORD=supergeheimespasswort
# Admin panel credentials
ADMIN1_USERNAME=rac00n
ADMIN1_PASSWORD=1248163264
ADMIN2_USERNAME=w@rh@mm3r
ADMIN2_PASSWORD=Warhammer123!
# License server API key for authentication
# Domains (can be read by the app, e.g. for links or CORS)
API_DOMAIN=api-software-undso.intelsight.de
ADMIN_PANEL_DOMAIN=admin-panel-undso.intelsight.de
# ===================== OPTIONAL VARIABLES =====================
# JWT for API auth (IMPORTANT: for secure token signing!)
JWT_SECRET=xY9ZmK2pL7nQ4wF6jH8vB3tG5aZ1dE7fR9hT2kM4nP6qS8uW0xC3yA5bD7eF9gH2jK4
# E-mail configuration (e.g. for expiry warnings)
# MAIL_SERVER=smtp.meinedomain.de
# MAIL_PORT=587
# MAIL_USERNAME=deinemail
# MAIL_PASSWORD=geheim
# MAIL_FROM=no-reply@meinedomain.de
# Logging
# LOG_LEVEL=info
# Allowed CORS domains (for web frontend)
# ALLOWED_ORIGINS=https://admin.meinedomain.de
# ===================== VERSION =====================
# Current software version, maintained server-side
# Used by the license server to compare against the client version
LATEST_CLIENT_VERSION=1.0.0
# ===================== BACKUP CONFIGURATION =====================
# E-mail for backup notifications
EMAIL_ENABLED=false
# Backup encryption (optional, generated automatically if empty)
# BACKUP_ENCRYPTION_KEY=
# ===================== CAPTCHA CONFIGURATION =====================
# Google reCAPTCHA v2 keys (https://www.google.com/recaptcha/admin)
# Commented out for the PoC phase - CAPTCHA is skipped if keys are missing
# RECAPTCHA_SITE_KEY=your-site-key-here
# RECAPTCHA_SECRET_KEY=your-secret-key-here
# ===================== MONITORING CONFIGURATION =====================
# Grafana admin credentials
GRAFANA_USER=admin
GRAFANA_PASSWORD=admin
# SMTP settings for Alertmanager (optional)
# SMTP_USERNAME=your-email@gmail.com
# SMTP_PASSWORD=your-app-password
# Webhook URLs for critical alerts (optional)
# WEBHOOK_CRITICAL=https://your-webhook-url/critical
# WEBHOOK_SECURITY=https://your-webhook-url/security

v2/.env.production.template (new file)

@@ -0,0 +1,56 @@
# PostgreSQL-Datenbank
POSTGRES_DB=meinedatenbank
POSTGRES_USER=adminuser
# IMPORTANT: Generate a strong password using generate-secrets.py
POSTGRES_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
# Admin-Panel Zugangsdaten
ADMIN1_USERNAME=rac00n
ADMIN1_PASSWORD=1248163264
ADMIN2_USERNAME=w@rh@mm3r
ADMIN2_PASSWORD=Warhammer123!
# Domains
API_DOMAIN=api-software-undso.intelsight.de
ADMIN_PANEL_DOMAIN=admin-panel-undso.intelsight.de
# JWT für API-Auth (WICHTIG: Für sichere Token-Verschlüsselung!)
# IMPORTANT: Generate using generate-secrets.py
JWT_SECRET=CHANGE_THIS_GENERATE_SECURE_SECRET
# E-Mail Konfiguration (optional)
# MAIL_SERVER=smtp.meinedomain.de
# MAIL_PORT=587
# MAIL_USERNAME=deinemail
# MAIL_PASSWORD=geheim
# MAIL_FROM=no-reply@intelsight.de
# Logging
LOG_LEVEL=info
# Erlaubte CORS-Domains (für Web-Frontend)
ALLOWED_ORIGINS=https://admin-panel-undso.intelsight.de
# VERSION
LATEST_CLIENT_VERSION=1.0.0
# BACKUP KONFIGURATION
EMAIL_ENABLED=false
# CAPTCHA KONFIGURATION (optional für PoC)
# RECAPTCHA_SITE_KEY=your-site-key-here
# RECAPTCHA_SECRET_KEY=your-secret-key-here
# MONITORING KONFIGURATION
GRAFANA_USER=admin
# IMPORTANT: Generate a strong password using generate-secrets.py
GRAFANA_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
# SMTP Settings for Alertmanager (optional)
# SMTP_USERNAME=your-email@gmail.com
# SMTP_PASSWORD=your-app-password
# Webhook URLs for critical alerts (optional)
# WEBHOOK_CRITICAL=https://your-webhook-url/critical
# WEBHOOK_SECURITY=https://your-webhook-url/security
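Several placeholders above (POSTGRES_PASSWORD, JWT_SECRET, GRAFANA_PASSWORD) are meant to be filled via generate-secrets.py, which is not shown in this commit. A minimal stdlib sketch of how such secrets can be generated:

```python
import secrets
import string

def generate_secret(length=64):
    """Generate a random secret from letters and digits using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(generate_secret()))  # 64
```

The `secrets` module draws from the OS entropy source, unlike `random`, which is not suitable for credentials.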

v2/docker-compose.yaml (new file)

@@ -0,0 +1,164 @@
services:
postgres:
build:
context: ../v2_postgres
container_name: db
restart: always
env_file: .env
environment:
POSTGRES_HOST: postgres
POSTGRES_INITDB_ARGS: '--encoding=UTF8 --locale=de_DE.UTF-8'
POSTGRES_COLLATE: 'de_DE.UTF-8'
POSTGRES_CTYPE: 'de_DE.UTF-8'
TZ: Europe/Berlin
PGTZ: Europe/Berlin
volumes:
# Persistent storage of the database on the Windows host
- postgres_data:/var/lib/postgresql/data
# Init script for tables
- ../v2_adminpanel/init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- internal_net
deploy:
resources:
limits:
cpus: '2'
memory: 4g
license-server:
build:
context: ../v2_lizenzserver
container_name: license-server
restart: always
# Port mapping removed - only reachable via nginx
env_file: .env
environment:
TZ: Europe/Berlin
depends_on:
- postgres
networks:
- internal_net
deploy:
resources:
limits:
cpus: '2'
memory: 4g
# auth-service:
# build:
# context: ../lizenzserver/services/auth
# container_name: auth-service
# restart: always
# # Port 5001 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/1
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 1g
# analytics-service:
# build:
# context: ../lizenzserver/services/analytics
# container_name: analytics-service
# restart: always
# # Port 5003 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/2
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# - rabbitmq
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 2g
# admin-api-service:
# build:
# context: ../lizenzserver/services/admin_api
# container_name: admin-api-service
# restart: always
# # Port 5004 - internal only
# env_file: .env
# environment:
# TZ: Europe/Berlin
# DATABASE_URL: postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/v2_adminpanel
# REDIS_URL: redis://redis:6379/3
# JWT_SECRET: ${JWT_SECRET}
# FLASK_ENV: production
# depends_on:
# - postgres
# - redis
# - rabbitmq
# networks:
# - internal_net
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 2g
admin-panel:
build:
context: ../v2_adminpanel
container_name: admin-panel
restart: always
# Port mapping removed - only reachable via nginx
env_file: .env
environment:
TZ: Europe/Berlin
depends_on:
- postgres
networks:
- internal_net
volumes:
# Backup directory
- ../backups:/app/backups
deploy:
resources:
limits:
cpus: '2'
memory: 4g
nginx:
build:
context: ../v2_nginx
container_name: nginx-proxy
restart: always
ports:
- "80:80"
- "443:443"
environment:
TZ: Europe/Berlin
depends_on:
- admin-panel
- license-server
networks:
- internal_net
networks:
internal_net:
driver: bridge
volumes:
postgres_data:

v2_adminpanel/Dockerfile (new file)

@@ -0,0 +1,33 @@
FROM python:3.11-slim
# Set locale for German language and UTF-8
ENV LANG=de_DE.UTF-8
ENV LC_ALL=de_DE.UTF-8
ENV PYTHONIOENCODING=utf-8
# Set timezone to Europe/Berlin
ENV TZ=Europe/Berlin
WORKDIR /app
# Install system dependencies incl. PostgreSQL client tools
RUN apt-get update && apt-get install -y \
locales \
postgresql-client \
tzdata \
&& sed -i '/de_DE.UTF-8/s/^# //g' /etc/locale.gen \
&& locale-gen \
&& update-locale LANG=de_DE.UTF-8 \
&& ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime \
&& echo "Europe/Berlin" > /etc/timezone \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]

Binary file not shown.

v2_adminpanel/app.py (new file)

@@ -0,0 +1,168 @@
import os
import sys
import logging
from datetime import datetime
# Add current directory to Python path to ensure modules can be imported
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from flask import Flask, render_template, session
from flask_session import Session
from werkzeug.middleware.proxy_fix import ProxyFix
from apscheduler.schedulers.background import BackgroundScheduler
from prometheus_flask_exporter import PrometheusMetrics
# Import our configuration and utilities
import config
from utils.backup import create_backup
# Import error handling system
from core.error_handlers import init_error_handlers
from core.logging_config import setup_logging
from core.monitoring import init_monitoring
from middleware.error_middleware import ErrorHandlingMiddleware
app = Flask(__name__)
# Initialize Prometheus metrics
metrics = PrometheusMetrics(app)
metrics.info('admin_panel_info', 'Admin Panel Information', version='1.0.0')
# Load configuration from config module
app.config['SECRET_KEY'] = config.SECRET_KEY
app.config['SESSION_TYPE'] = config.SESSION_TYPE
app.config['JSON_AS_ASCII'] = config.JSON_AS_ASCII
app.config['JSONIFY_MIMETYPE'] = config.JSONIFY_MIMETYPE
app.config['PERMANENT_SESSION_LIFETIME'] = config.PERMANENT_SESSION_LIFETIME
app.config['SESSION_COOKIE_HTTPONLY'] = config.SESSION_COOKIE_HTTPONLY
app.config['SESSION_COOKIE_SECURE'] = config.SESSION_COOKIE_SECURE
app.config['SESSION_COOKIE_SAMESITE'] = config.SESSION_COOKIE_SAMESITE
app.config['SESSION_COOKIE_NAME'] = config.SESSION_COOKIE_NAME
app.config['SESSION_REFRESH_EACH_REQUEST'] = config.SESSION_REFRESH_EACH_REQUEST
Session(app)
# ProxyFix for correct client IP addresses behind nginx
app.wsgi_app = ProxyFix(
app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1
)
# Configuration is now loaded from config module
# Initialize error handling system
setup_logging(app)
init_error_handlers(app)
init_monitoring(app)
ErrorHandlingMiddleware(app)
# Configure logging
logging.basicConfig(level=logging.INFO)
# Initialize scheduler from scheduler module
from scheduler import init_scheduler
scheduler = init_scheduler()
# Import and register blueprints
try:
from routes.auth_routes import auth_bp
from routes.admin_routes import admin_bp
from routes.api_routes import api_bp
from routes.batch_routes import batch_bp
from routes.customer_routes import customer_bp
from routes.export_routes import export_bp
from routes.license_routes import license_bp
from routes.resource_routes import resource_bp
from routes.session_routes import session_bp
from routes.monitoring_routes import monitoring_bp
from leads import leads_bp
print("All blueprints imported successfully!")
except Exception as e:
print(f"Blueprint import error: {str(e)}")
import traceback
traceback.print_exc()
# Register all blueprints
app.register_blueprint(auth_bp)
app.register_blueprint(admin_bp)
app.register_blueprint(api_bp)
app.register_blueprint(batch_bp)
app.register_blueprint(customer_bp)
app.register_blueprint(export_bp)
app.register_blueprint(license_bp)
app.register_blueprint(resource_bp)
app.register_blueprint(session_bp)
app.register_blueprint(monitoring_bp)
app.register_blueprint(leads_bp, url_prefix='/leads')
# Template filters
@app.template_filter('nl2br')
def nl2br_filter(s):
    """Convert newlines to <br> tags (escape the input first so Jinja2 autoescaping keeps the <br> markup intact)"""
    from markupsafe import Markup, escape  # markupsafe is a Flask dependency
    return escape(s).replace('\n', Markup('<br>\n')) if s else ''
# Debug routes to test
@app.route('/test-customers-licenses')
def test_route():
return "Test route works! If you see this, routing is working."
@app.route('/direct-customers-licenses')
def direct_customers_licenses():
"""Direct route without blueprint"""
try:
return render_template("customers_licenses.html", customers=[])
except Exception as e:
return f"Error: {str(e)}"
@app.route('/debug-routes')
def debug_routes():
"""Show all registered routes"""
routes = []
for rule in app.url_map.iter_rules():
routes.append(f"{rule.endpoint}: {rule.rule}")
return "<br>".join(sorted(routes))
# Scheduled backup job is now handled by scheduler module
# Error handlers are now managed by the error handling system in core/error_handlers.py
# Context processors
@app.context_processor
def inject_global_vars():
"""Inject global variables into all templates"""
return {
'current_year': datetime.now().year,
'app_version': '2.0.0',
'is_logged_in': session.get('logged_in', False),
'username': session.get('username', '')
}
# Simple test route that should always work
@app.route('/simple-test')
def simple_test():
return "Simple test works!"
@app.route('/test-db')
def test_db():
"""Test database connection"""
try:
import psycopg2
conn = psycopg2.connect(
host=os.getenv("POSTGRES_HOST", "postgres"),
port=os.getenv("POSTGRES_PORT", "5432"),
dbname=os.getenv("POSTGRES_DB"),
user=os.getenv("POSTGRES_USER"),
password=os.getenv("POSTGRES_PASSWORD")
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM customers")
count = cur.fetchone()[0]
cur.close()
conn.close()
return f"Database works! Customers count: {count}"
except Exception as e:
import traceback
return f"Database error: {str(e)}<br><pre>{traceback.format_exc()}</pre>"
if __name__ == "__main__":
app.run(host="0.0.0.0", port=5000)
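The nl2br filter above must escape its input before inserting markup; otherwise Jinja2's autoescaping either escapes the `<br>` tags or, combined with `|safe`, opens an XSS hole. A stdlib-only sketch of the escape-then-convert pattern (`html.escape` stands in for `markupsafe.escape` here):

```python
import html

def nl2br(s):
    """Escape user input first, then convert newlines to <br> tags."""
    return html.escape(s).replace("\n", "<br>\n") if s else ""

print(nl2br("line1\nline2"))
print(nl2br("<script>"))  # &lt;script&gt;
```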


@@ -0,0 +1,52 @@
#!/usr/bin/env python3
"""
Apply Lead Management Tables Migration
"""
import psycopg2
import os
from db import get_db_connection
def apply_migration():
"""Apply the lead tables migration"""
try:
# Read migration SQL
migration_file = os.path.join(os.path.dirname(__file__),
'migrations', 'create_lead_tables.sql')
with open(migration_file, 'r') as f:
migration_sql = f.read()
# Connect and execute
with get_db_connection() as conn:
cur = conn.cursor()
print("Applying lead management tables migration...")
cur.execute(migration_sql)
# Verify tables were created
cur.execute("""
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name LIKE 'lead_%'
ORDER BY table_name
""")
tables = cur.fetchall()
print(f"\nCreated {len(tables)} tables:")
for table in tables:
print(f" - {table[0]}")
cur.close()
print("\n✅ Migration completed successfully!")
except FileNotFoundError:
print(f"❌ Migration file not found: {migration_file}")
except psycopg2.Error as e:
print(f"❌ Database error: {e}")
except Exception as e:
print(f"❌ Unexpected error: {e}")
if __name__ == "__main__":
apply_migration()


@@ -0,0 +1,92 @@
#!/usr/bin/env python3
"""
Apply the license_heartbeats table migration
"""
import os
import psycopg2
import logging
from datetime import datetime
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def get_db_connection():
"""Get database connection"""
return psycopg2.connect(
host=os.environ.get('POSTGRES_HOST', 'postgres'),
database=os.environ.get('POSTGRES_DB', 'v2_adminpanel'),
user=os.environ.get('POSTGRES_USER', 'postgres'),
password=os.environ.get('POSTGRES_PASSWORD', 'postgres')
)
def apply_migration():
"""Apply the license_heartbeats migration"""
conn = None
try:
logger.info("Connecting to database...")
conn = get_db_connection()
cur = conn.cursor()
# Read migration file
migration_file = os.path.join(os.path.dirname(__file__), 'migrations', 'create_license_heartbeats_table.sql')
logger.info(f"Reading migration file: {migration_file}")
with open(migration_file, 'r') as f:
migration_sql = f.read()
# Execute migration
logger.info("Executing migration...")
cur.execute(migration_sql)
# Verify table was created
cur.execute("""
SELECT EXISTS (
SELECT 1
FROM information_schema.tables
WHERE table_name = 'license_heartbeats'
)
""")
if cur.fetchone()[0]:
logger.info("✓ license_heartbeats table created successfully!")
# Check partitions
cur.execute("""
SELECT tablename
FROM pg_tables
WHERE tablename LIKE 'license_heartbeats_%'
ORDER BY tablename
""")
partitions = cur.fetchall()
logger.info(f"✓ Created {len(partitions)} partitions:")
for partition in partitions:
logger.info(f" - {partition[0]}")
else:
logger.error("✗ Failed to create license_heartbeats table")
return False
conn.commit()
logger.info("✓ Migration completed successfully!")
return True
except Exception as e:
logger.error(f"✗ Migration failed: {str(e)}")
if conn:
conn.rollback()
return False
    finally:
        if conn:
            conn.close()  # closing the connection also closes its cursor; avoids NameError if cursor creation failed
if __name__ == "__main__":
logger.info("=== Applying license_heartbeats migration ===")
logger.info(f"Timestamp: {datetime.now()}")
if apply_migration():
logger.info("=== Migration successful! ===")
else:
logger.error("=== Migration failed! ===")
exit(1)


@@ -0,0 +1,122 @@
#!/usr/bin/env python3
"""
Apply partition migration for license_heartbeats table.
This script creates missing partitions for the current and future months.
"""
import psycopg2
import os
import sys
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
def get_db_connection():
"""Get database connection"""
return psycopg2.connect(
host=os.environ.get('POSTGRES_HOST', 'postgres'),
database=os.environ.get('POSTGRES_DB', 'v2_adminpanel'),
user=os.environ.get('POSTGRES_USER', 'postgres'),
password=os.environ.get('POSTGRES_PASSWORD', 'postgres')
)
def create_partition(cursor, year, month):
"""Create a partition for the given year and month"""
partition_name = f"license_heartbeats_{year}_{month:02d}"
start_date = f"{year}-{month:02d}-01"
# Calculate end date (first day of next month)
if month == 12:
end_date = f"{year + 1}-01-01"
else:
end_date = f"{year}-{month + 1:02d}-01"
# Check if partition already exists
cursor.execute("""
SELECT EXISTS (
SELECT 1
FROM pg_tables
WHERE tablename = %s
)
""", (partition_name,))
exists = cursor.fetchone()[0]
if not exists:
try:
cursor.execute(f"""
CREATE TABLE {partition_name} PARTITION OF license_heartbeats
FOR VALUES FROM ('{start_date}') TO ('{end_date}')
""")
print(f"✓ Created partition {partition_name}")
return True
except Exception as e:
print(f"✗ Error creating partition {partition_name}: {e}")
return False
else:
print(f"- Partition {partition_name} already exists")
return False
def main():
"""Main function"""
print("Applying license_heartbeats partition migration...")
print("-" * 50)
try:
# Connect to database
conn = get_db_connection()
cursor = conn.cursor()
# Check if license_heartbeats table exists
cursor.execute("""
SELECT EXISTS (
SELECT 1
FROM information_schema.tables
WHERE table_name = 'license_heartbeats'
)
""")
if not cursor.fetchone()[0]:
print("✗ Error: license_heartbeats table does not exist!")
print(" Please run the init.sql script first.")
return 1
# Get current date
current_date = datetime.now()
partitions_created = 0
# Create partitions for the current month and the six that follow (7 in total)
for i in range(7):
target_date = current_date + relativedelta(months=i)
if create_partition(cursor, target_date.year, target_date.month):
partitions_created += 1
# Commit changes
conn.commit()
print("-" * 50)
print(f"✓ Migration complete. Created {partitions_created} new partitions.")
# List all partitions
cursor.execute("""
SELECT tablename
FROM pg_tables
WHERE tablename LIKE 'license_heartbeats_%'
ORDER BY tablename
""")
partitions = cursor.fetchall()
print(f"\nTotal partitions: {len(partitions)}")
for partition in partitions:
print(f" - {partition[0]}")
cursor.close()
conn.close()
return 0
except Exception as e:
print(f"✗ Error: {e}")
return 1
if __name__ == "__main__":
sys.exit(main())
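The December rollover in `create_partition` is the easiest part to get wrong. Isolated as a pure helper (the name `partition_bounds` is hypothetical, not part of the migration script), the boundary arithmetic can be unit-tested without a database:

```python
def partition_bounds(year: int, month: int):
    """Return (partition_name, start_date, end_date) for one monthly partition.

    Mirrors the naming and date logic of create_partition above: the end date
    is the first day of the following month, rolling into the next year after
    December.
    """
    name = f"license_heartbeats_{year}_{month:02d}"
    start = f"{year}-{month:02d}-01"
    if month == 12:
        end = f"{year + 1}-01-01"
    else:
        end = f"{year}-{month + 1:02d}-01"
    return name, start, end

print(partition_bounds(2025, 12))
```

Because the range is half-open (`FOR VALUES FROM (start) TO (end)`), consecutive partitions share a boundary without overlapping.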


@@ -0,0 +1 @@
# Auth module initialization


@@ -0,0 +1,44 @@
from functools import wraps
from flask import session, redirect, url_for, flash, request
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
import logging
from utils.audit import log_audit
logger = logging.getLogger(__name__)
def login_required(f):
@wraps(f)
def decorated_function(*args, **kwargs):
if 'logged_in' not in session:
return redirect(url_for('auth.login'))
# Check if session has expired
if 'last_activity' in session:
last_activity = datetime.fromisoformat(session['last_activity'])
time_since_activity = datetime.now(ZoneInfo("Europe/Berlin")).replace(tzinfo=None) - last_activity
# Debug logging
logger.info(f"Session check for {session.get('username', 'unknown')}: "
f"Last activity: {last_activity}, "
f"Time since: {time_since_activity.total_seconds()} seconds")
if time_since_activity > timedelta(minutes=5):
# Session expired - Logout
username = session.get('username', 'unbekannt')
logger.info(f"Session timeout for user {username} - auto logout")
# Audit log for automatic logout (before session.clear()!)
try:
log_audit('AUTO_LOGOUT', 'session',
additional_info={'reason': 'Session timeout (5 minutes)', 'username': username})
except Exception:
pass  # audit logging must never block the forced logout
session.clear()
flash('Ihre Sitzung ist abgelaufen. Bitte melden Sie sich erneut an.', 'warning')
return redirect(url_for('auth.login'))
# Activity is NOT automatically updated
# Only on explicit user actions (done by heartbeat)
return f(*args, **kwargs)
return decorated_function


@@ -0,0 +1,11 @@
import bcrypt
def hash_password(password):
"""Hash a password using bcrypt"""
return bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
def verify_password(password, hashed):
"""Verify a password against its hash"""
return bcrypt.checkpw(password.encode('utf-8'), hashed.encode('utf-8'))


@@ -0,0 +1,124 @@
import random
import logging
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
from flask import request
from db import execute_query, get_db_connection, get_db_cursor
from config import FAIL_MESSAGES, MAX_LOGIN_ATTEMPTS, BLOCK_DURATION_HOURS, EMAIL_ENABLED
from utils.audit import log_audit
from utils.network import get_client_ip
logger = logging.getLogger(__name__)
def check_ip_blocked(ip_address):
"""Check if an IP address is blocked"""
result = execute_query(
"""
SELECT blocked_until FROM login_attempts
WHERE ip_address = %s AND blocked_until IS NOT NULL
""",
(ip_address,),
fetch_one=True
)
if result and result[0]:
if result[0] > datetime.now(ZoneInfo("Europe/Berlin")).replace(tzinfo=None):
return True, result[0]
return False, None
def record_failed_attempt(ip_address, username):
"""Record a failed login attempt"""
# Random error message
error_message = random.choice(FAIL_MESSAGES)
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
try:
# Check if IP already exists
cur.execute("""
SELECT attempt_count FROM login_attempts
WHERE ip_address = %s
""", (ip_address,))
result = cur.fetchone()
if result:
# Update existing entry
new_count = result[0] + 1
blocked_until = None
if new_count >= MAX_LOGIN_ATTEMPTS:
blocked_until = datetime.now(ZoneInfo("Europe/Berlin")).replace(tzinfo=None) + timedelta(hours=BLOCK_DURATION_HOURS)
# Email notification (if enabled)
if EMAIL_ENABLED:
send_security_alert_email(ip_address, username, new_count)
cur.execute("""
UPDATE login_attempts
SET attempt_count = %s,
last_attempt = CURRENT_TIMESTAMP,
blocked_until = %s,
last_username_tried = %s,
last_error_message = %s
WHERE ip_address = %s
""", (new_count, blocked_until, username, error_message, ip_address))
else:
# Create new entry
cur.execute("""
INSERT INTO login_attempts
(ip_address, attempt_count, last_username_tried, last_error_message)
VALUES (%s, 1, %s, %s)
""", (ip_address, username, error_message))
conn.commit()
# Audit log
log_audit('LOGIN_FAILED', 'user',
additional_info=f"IP: {ip_address}, User: {username}, Message: {error_message}")
except Exception as e:
logger.error(f"Rate limiting error: {e}")
conn.rollback()
return error_message
def reset_login_attempts(ip_address):
"""Reset login attempts for an IP"""
execute_query(
"DELETE FROM login_attempts WHERE ip_address = %s",
(ip_address,)
)
def get_login_attempts(ip_address):
"""Get the number of login attempts for an IP"""
result = execute_query(
"SELECT attempt_count FROM login_attempts WHERE ip_address = %s",
(ip_address,),
fetch_one=True
)
return result[0] if result else 0
def send_security_alert_email(ip_address, username, attempt_count):
"""Send a security alert email"""
subject = f"⚠️ SICHERHEITSWARNUNG: {attempt_count} fehlgeschlagene Login-Versuche"
body = f"""
WARNUNG: Mehrere fehlgeschlagene Login-Versuche erkannt!
IP-Adresse: {ip_address}
Versuchter Benutzername: {username}
Anzahl Versuche: {attempt_count}
Zeit: {datetime.now(ZoneInfo("Europe/Berlin")).strftime('%Y-%m-%d %H:%M:%S')}
Die IP-Adresse wurde für 24 Stunden gesperrt.
Dies ist eine automatische Nachricht vom v2-Docker Admin Panel.
"""
# TODO: Email sending implementation when SMTP is configured
logger.warning(f"Sicherheitswarnung: {attempt_count} fehlgeschlagene Versuche von IP {ip_address}")
print(f"E-Mail würde gesendet: {subject}")
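Stripped of the database plumbing, the counting logic in `record_failed_attempt` reduces to a small pure function. This sketch (the helper name `next_state` is hypothetical) uses the same `MAX_LOGIN_ATTEMPTS` and `BLOCK_DURATION_HOURS` thresholds as the config:

```python
from datetime import datetime, timedelta

MAX_LOGIN_ATTEMPTS = 5
BLOCK_DURATION_HOURS = 24

def next_state(attempt_count: int, now: datetime):
    """Return (new_count, blocked_until) after one more failed attempt.

    blocked_until stays None until the attempt counter reaches the limit;
    from then on the IP is blocked for BLOCK_DURATION_HOURS.
    """
    new_count = attempt_count + 1
    blocked_until = None
    if new_count >= MAX_LOGIN_ATTEMPTS:
        blocked_until = now + timedelta(hours=BLOCK_DURATION_HOURS)
    return new_count, blocked_until

now = datetime(2025, 6, 19, 12, 0, 0)
print(next_state(3, now))  # fourth failure: still below the threshold
print(next_state(4, now))  # fifth failure: blocked for 24 hours
```

Note that `check_ipblocked`-style reads only honor `blocked_until`; the counter itself is never aged out except by `reset_login_attempts` on a successful login.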


@@ -0,0 +1,57 @@
import pyotp
import qrcode
import random
import string
import hashlib
from io import BytesIO
import base64
def generate_totp_secret():
"""Generate a new TOTP secret"""
return pyotp.random_base32()
def generate_qr_code(username, totp_secret):
"""Generate QR code for TOTP setup"""
totp_uri = pyotp.totp.TOTP(totp_secret).provisioning_uri(
name=username,
issuer_name='V2 Admin Panel'
)
qr = qrcode.QRCode(version=1, box_size=10, border=5)
qr.add_data(totp_uri)
qr.make(fit=True)
img = qr.make_image(fill_color="black", back_color="white")
buf = BytesIO()
img.save(buf, format='PNG')
buf.seek(0)
return base64.b64encode(buf.getvalue()).decode()
def verify_totp(totp_secret, token):
"""Verify a TOTP token"""
totp = pyotp.TOTP(totp_secret)
return totp.verify(token, valid_window=1)
def generate_backup_codes(count=8):
"""Generate backup codes for 2FA recovery"""
import secrets  # CSPRNG: random.choices is predictable and unsafe for recovery codes
codes = []
for _ in range(count):
code = ''.join(secrets.choice(string.ascii_uppercase + string.digits) for _ in range(8))
codes.append(code)
return codes
def hash_backup_code(code):
"""Hash a backup code for storage"""
return hashlib.sha256(code.encode()).hexdigest()
def verify_backup_code(code, hashed_codes):
"""Verify a backup code against stored hashes"""
code_hash = hashlib.sha256(code.encode()).hexdigest()
return code_hash in hashed_codes
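pyotp hides the underlying algorithm behind `TOTP.verify`. For reference, TOTP is simply RFC 4226 HOTP applied to a time-derived counter (`unix_time // 30`). A stdlib-only sketch, checked against the RFC 4226 Appendix D test vectors:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, dynamic truncation."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # low nibble of last byte picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 Appendix D test vectors for the ASCII secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

`verify_totp`'s `valid_window=1` above means pyotp also accepts the codes for the adjacent time steps (counter ± 1), which tolerates modest clock skew between server and authenticator app.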

v2_adminpanel/config.py (new file, 70 lines)

@@ -0,0 +1,70 @@
import os
from datetime import timedelta
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
# Flask Configuration
# A fixed SECRET_KEY from the environment keeps sessions valid across restarts;
# the os.urandom(24) fallback invalidates all sessions on every restart.
SECRET_KEY = os.getenv("SECRET_KEY") or os.urandom(24)
SESSION_TYPE = 'filesystem'
JSON_AS_ASCII = False
JSONIFY_MIMETYPE = 'application/json; charset=utf-8'
PERMANENT_SESSION_LIFETIME = timedelta(minutes=5)
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = os.getenv("SESSION_COOKIE_SECURE", "true").lower() == "true" # Default True for HTTPS
SESSION_COOKIE_SAMESITE = 'Lax'
SESSION_COOKIE_NAME = 'admin_session'
SESSION_REFRESH_EACH_REQUEST = False
# Database Configuration
DATABASE_CONFIG = {
'host': os.getenv("POSTGRES_HOST", "postgres"),
'port': os.getenv("POSTGRES_PORT", "5432"),
'dbname': os.getenv("POSTGRES_DB"),
'user': os.getenv("POSTGRES_USER"),
'password': os.getenv("POSTGRES_PASSWORD"),
'options': '-c client_encoding=UTF8'
}
# Backup Configuration
BACKUP_DIR = Path("/app/backups")
BACKUP_DIR.mkdir(exist_ok=True)
BACKUP_ENCRYPTION_KEY = os.getenv("BACKUP_ENCRYPTION_KEY")
# Rate Limiting Configuration
FAIL_MESSAGES = [
"NOPE!",
"ACCESS DENIED, TRY HARDER",
"WRONG! 🚫",
"COMPUTER SAYS NO",
"YOU FAILED"
]
MAX_LOGIN_ATTEMPTS = 5
BLOCK_DURATION_HOURS = 24
CAPTCHA_AFTER_ATTEMPTS = 2
# reCAPTCHA Configuration
RECAPTCHA_SITE_KEY = os.getenv('RECAPTCHA_SITE_KEY')
RECAPTCHA_SECRET_KEY = os.getenv('RECAPTCHA_SECRET_KEY')
# Email Configuration
EMAIL_ENABLED = os.getenv("EMAIL_ENABLED", "false").lower() == "true"
# Admin Users (for backward compatibility)
ADMIN_USERS = {
os.getenv("ADMIN1_USERNAME"): os.getenv("ADMIN1_PASSWORD"),
os.getenv("ADMIN2_USERNAME"): os.getenv("ADMIN2_PASSWORD")
}
# Scheduler Configuration
SCHEDULER_CONFIG = {
'backup_hour': 3,
'backup_minute': 0
}
# Logging Configuration
LOGGING_CONFIG = {
'level': 'INFO',
'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
}
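The `SESSION_COOKIE_SECURE` and `EMAIL_ENABLED` lines both use the same env-to-bool pattern, which has a gotcha worth noting: only the literal string `"true"` (case-insensitive) enables the flag, so values like `"1"` or `"yes"` silently count as false. A minimal sketch of that pattern (the helper name `env_bool` is illustrative, not part of config.py):

```python
import os

def env_bool(name: str, default: str = "false") -> bool:
    """Mirror the config's pattern: only the literal string 'true' enables a flag."""
    return os.getenv(name, default).lower() == "true"

os.environ["DEMO_FLAG"] = "TRUE"
print(env_bool("DEMO_FLAG"))  # True - the comparison is case-insensitive
os.environ["DEMO_FLAG"] = "1"
print(env_bool("DEMO_FLAG"))  # False - "1" is not accepted by this pattern
```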


@@ -0,0 +1,273 @@
import logging
import traceback
from functools import wraps
from typing import Optional, Dict, Any, Callable, Union
from flask import (
Flask, request, jsonify, render_template, flash, redirect,
url_for, current_app, g
)
from werkzeug.exceptions import HTTPException
import psycopg2
from .exceptions import (
BaseApplicationException, DatabaseException, ValidationException,
AuthenticationException, ResourceException, QueryError,
ConnectionError, TransactionError
)
logger = logging.getLogger(__name__)
def init_error_handlers(app: Flask) -> None:
@app.before_request
def before_request():
# Take the caller-supplied X-Request-ID, or mint a fresh UUID directly
# instead of constructing a throwaway exception just for its request_id
import uuid
g.request_id = request.headers.get('X-Request-ID', str(uuid.uuid4()))
@app.errorhandler(BaseApplicationException)
def handle_application_error(error: BaseApplicationException):
return _handle_error(error)
@app.errorhandler(HTTPException)
def handle_http_error(error: HTTPException):
return _handle_error(error)
@app.errorhandler(psycopg2.Error)
def handle_database_error(error: psycopg2.Error):
db_exception = _convert_psycopg2_error(error)
return _handle_error(db_exception)
@app.errorhandler(Exception)
def handle_unexpected_error(error: Exception):
logger.error(
f"Unexpected error: {str(error)}",
exc_info=True,
extra={'request_id': getattr(g, 'request_id', 'unknown')}
)
if current_app.debug:
raise
generic_error = BaseApplicationException(
message="An unexpected error occurred",
code="INTERNAL_ERROR",
status_code=500,
user_message="Ein unerwarteter Fehler ist aufgetreten"
)
return _handle_error(generic_error)
def _handle_error(error: Union[BaseApplicationException, HTTPException, Exception]) -> tuple:
if isinstance(error, HTTPException):
status_code = error.code
error_dict = {
'error': {
'code': error.name.upper().replace(' ', '_'),
'message': error.description or str(error),
'request_id': getattr(g, 'request_id', 'unknown')
}
}
user_message = error.description or str(error)
elif isinstance(error, BaseApplicationException):
status_code = error.status_code
error_dict = error.to_dict(include_details=current_app.debug)
error_dict['error']['request_id'] = getattr(g, 'request_id', error.request_id)
user_message = error.user_message
logger.error(
f"{error.__class__.__name__}: {error.message}",
extra={
'error_code': error.code,
'details': error.details,
'request_id': error_dict['error']['request_id']
}
)
else:
status_code = 500
error_dict = {
'error': {
'code': 'INTERNAL_ERROR',
'message': 'An internal error occurred',
'request_id': getattr(g, 'request_id', 'unknown')
}
}
user_message = "Ein interner Fehler ist aufgetreten"
if _is_json_request():
return jsonify(error_dict), status_code
else:
if status_code == 404:
return render_template('404.html'), 404
elif status_code >= 500:
return render_template('500.html', error=user_message), status_code
else:
flash(user_message, 'error')
return render_template('error.html',
error=user_message,
error_code=error_dict['error']['code'],
request_id=error_dict['error']['request_id']), status_code
def _convert_psycopg2_error(error: psycopg2.Error) -> DatabaseException:
error_code = getattr(error, 'pgcode', None)
error_message = str(error).split('\n')[0]
if isinstance(error, psycopg2.OperationalError):
return ConnectionError(
message=f"Database connection failed: {error_message}",
host=None
)
elif isinstance(error, psycopg2.IntegrityError):
if error_code == '23505':
return ValidationException(
message="Duplicate entry violation",
details={'constraint': error_message},
user_message="Dieser Eintrag existiert bereits"
)
elif error_code == '23503':
return ValidationException(
message="Foreign key violation",
details={'constraint': error_message},
user_message="Referenzierte Daten existieren nicht"
)
else:
return ValidationException(
message="Data integrity violation",
details={'error_code': error_code},
user_message="Datenintegritätsfehler"
)
elif isinstance(error, psycopg2.DataError):
return ValidationException(
message="Invalid data format",
details={'error': error_message},
user_message="Ungültiges Datenformat"
)
else:
return QueryError(
message=error_message,
query="[query hidden for security]",
error_code=error_code
)
def _is_json_request() -> bool:
return (request.is_json or
request.path.startswith('/api/') or
request.accept_mimetypes.best == 'application/json')
def handle_errors(
catch: tuple = (Exception,),
message: str = "Operation failed",
user_message: Optional[str] = None,
redirect_to: Optional[str] = None
) -> Callable:
def decorator(func: Callable) -> Callable:
@wraps(func)
def wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except catch as e:
if isinstance(e, BaseApplicationException):
raise
logger.error(
f"Error in {func.__name__}: {str(e)}",
exc_info=True,
extra={'request_id': getattr(g, 'request_id', 'unknown')}
)
if _is_json_request():
return jsonify({
'error': {
'code': 'OPERATION_FAILED',
'message': user_message or message,
'request_id': getattr(g, 'request_id', 'unknown')
}
}), 500
else:
flash(user_message or message, 'error')
if redirect_to:
return redirect(url_for(redirect_to))
return redirect(request.referrer or url_for('admin.dashboard'))
return wrapper
return decorator
def validate_request(
required_fields: Optional[Dict[str, type]] = None,
optional_fields: Optional[Dict[str, type]] = None
) -> Callable:
def decorator(func: Callable) -> Callable:
@wraps(func)
def wrapper(*args, **kwargs):
data = request.get_json() if request.is_json else request.form
if required_fields:
for field, expected_type in required_fields.items():
if field not in data:
raise ValidationException(
message=f"Missing required field: {field}",
field=field,
user_message=f"Pflichtfeld fehlt: {field}"
)
try:
if expected_type != str:
if expected_type == int:
int(data[field])
elif expected_type == float:
float(data[field])
elif expected_type == bool:
if isinstance(data[field], str):
if data[field].lower() not in ['true', 'false', '1', '0']:
raise ValueError
except (ValueError, TypeError):
raise ValidationException(
message=f"Invalid type for field {field}",
field=field,
value=data[field],
details={'expected_type': expected_type.__name__},
user_message=f"Ungültiger Typ für Feld {field}"
)
return func(*args, **kwargs)
return wrapper
return decorator
class ErrorContext:
def __init__(
self,
operation: str,
resource_type: Optional[str] = None,
resource_id: Optional[Any] = None
):
self.operation = operation
self.resource_type = resource_type
self.resource_id = resource_id
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
if exc_val is None:
return False
if isinstance(exc_val, BaseApplicationException):
return False
logger.error(
f"Error during {self.operation}",
exc_info=True,
extra={
'operation': self.operation,
'resource_type': self.resource_type,
'resource_id': self.resource_id,
'request_id': getattr(g, 'request_id', 'unknown')
}
)
return False
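Because form values always arrive as strings, `validate_request` checks coercibility rather than actual type. That rule can be factored into a standalone predicate (the name `coercible` is hypothetical) and exercised directly:

```python
def coercible(value, expected_type) -> bool:
    """Mirror validate_request's check: can this (string) value pass as expected_type?"""
    if expected_type is str:
        return True                       # strings always pass
    try:
        if expected_type is int:
            int(value)
        elif expected_type is float:
            float(value)
        elif expected_type is bool:
            # booleans must be one of the four accepted string spellings
            if isinstance(value, str) and value.lower() not in ("true", "false", "1", "0"):
                raise ValueError(value)
        return True
    except (ValueError, TypeError):
        return False

print(coercible("42", int))      # True
print(coercible("abc", int))     # False
print(coercible("maybe", bool))  # False
```

One consequence of coercion-only checking: `coercible("3", float)` passes, so a handler that needs a strict integer-vs-float distinction must re-validate itself.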


@@ -0,0 +1,356 @@
import uuid
from typing import Optional, Dict, Any
from datetime import datetime
class BaseApplicationException(Exception):
def __init__(
self,
message: str,
code: str,
status_code: int = 500,
details: Optional[Dict[str, Any]] = None,
user_message: Optional[str] = None
):
super().__init__(message)
self.message = message
self.code = code
self.status_code = status_code
self.details = details or {}
self.user_message = user_message or message
self.timestamp = datetime.utcnow()
self.request_id = str(uuid.uuid4())
def to_dict(self, include_details: bool = False) -> Dict[str, Any]:
result = {
'error': {
'code': self.code,
'message': self.user_message,
'timestamp': self.timestamp.isoformat(),
'request_id': self.request_id
}
}
if include_details and self.details:
result['error']['details'] = self.details
return result
class ValidationException(BaseApplicationException):
def __init__(
self,
message: str,
field: Optional[str] = None,
value: Any = None,
details: Optional[Dict[str, Any]] = None,
user_message: Optional[str] = None
):
details = details or {}
if field:
details['field'] = field
if value is not None:
details['value'] = str(value)
super().__init__(
message=message,
code='VALIDATION_ERROR',
status_code=400,
details=details,
user_message=user_message or "Ungültige Eingabe"
)
class InputValidationError(ValidationException):
def __init__(
self,
field: str,
message: str,
value: Any = None,
expected_type: Optional[str] = None
):
details = {'expected_type': expected_type} if expected_type else None
super().__init__(
message=f"Invalid input for field '{field}': {message}",
field=field,
value=value,
details=details,
user_message=f"Ungültiger Wert für Feld '{field}'"
)
class BusinessRuleViolation(ValidationException):
def __init__(
self,
rule: str,
message: str,
context: Optional[Dict[str, Any]] = None
):
super().__init__(
message=message,
details={'rule': rule, 'context': context or {}},
user_message="Geschäftsregel verletzt"
)
class DataIntegrityError(ValidationException):
def __init__(
self,
entity: str,
constraint: str,
message: str,
details: Optional[Dict[str, Any]] = None
):
details = details or {}
details.update({'entity': entity, 'constraint': constraint})
super().__init__(
message=message,
details=details,
user_message="Datenintegritätsfehler"
)
class AuthenticationException(BaseApplicationException):
def __init__(
self,
message: str,
details: Optional[Dict[str, Any]] = None,
user_message: Optional[str] = None
):
super().__init__(
message=message,
code='AUTHENTICATION_ERROR',
status_code=401,
details=details,
user_message=user_message or "Authentifizierung fehlgeschlagen"
)
class InvalidCredentialsError(AuthenticationException):
def __init__(self, username: Optional[str] = None):
details = {'username': username} if username else None
super().__init__(
message="Invalid username or password",
details=details,
user_message="Ungültiger Benutzername oder Passwort"
)
class SessionExpiredError(AuthenticationException):
def __init__(self, session_id: Optional[str] = None):
details = {'session_id': session_id} if session_id else None
super().__init__(
message="Session has expired",
details=details,
user_message="Ihre Sitzung ist abgelaufen"
)
class InsufficientPermissionsError(AuthenticationException):
def __init__(
self,
required_permission: str,
user_permissions: Optional[list] = None
):
super().__init__(
message=f"User lacks required permission: {required_permission}",
details={
'required': required_permission,
'user_permissions': user_permissions or []
},
user_message="Unzureichende Berechtigungen für diese Aktion"
)
self.status_code = 403
class DatabaseException(BaseApplicationException):
def __init__(
self,
message: str,
query: Optional[str] = None,
details: Optional[Dict[str, Any]] = None,
user_message: Optional[str] = None
):
details = details or {}
if query:
details['query_hash'] = str(hash(query))
super().__init__(
message=message,
code='DATABASE_ERROR',
status_code=500,
details=details,
user_message=user_message or "Datenbankfehler aufgetreten"
)
class ConnectionError(DatabaseException):
def __init__(self, message: str, host: Optional[str] = None):
details = {'host': host} if host else None
super().__init__(
message=message,
details=details,
user_message="Datenbankverbindung fehlgeschlagen"
)
class QueryError(DatabaseException):
def __init__(
self,
message: str,
query: str,
error_code: Optional[str] = None
):
super().__init__(
message=message,
query=query,
details={'error_code': error_code} if error_code else None,
user_message="Datenbankabfrage fehlgeschlagen"
)
class TransactionError(DatabaseException):
def __init__(self, message: str, operation: str):
super().__init__(
message=message,
details={'operation': operation},
user_message="Transaktion fehlgeschlagen"
)
class ExternalServiceException(BaseApplicationException):
def __init__(
self,
service_name: str,
message: str,
details: Optional[Dict[str, Any]] = None,
user_message: Optional[str] = None
):
details = details or {}
details['service'] = service_name
super().__init__(
message=message,
code='EXTERNAL_SERVICE_ERROR',
status_code=502,
details=details,
user_message=user_message or f"Fehler beim Zugriff auf {service_name}"
)
class APIError(ExternalServiceException):
def __init__(
self,
service_name: str,
endpoint: str,
status_code: int,
message: str
):
super().__init__(
service_name=service_name,
message=message,
details={
'endpoint': endpoint,
'response_status': status_code
},
user_message=f"API-Fehler bei {service_name}"
)
class TimeoutError(ExternalServiceException):
def __init__(
self,
service_name: str,
timeout_seconds: int,
operation: str
):
super().__init__(
service_name=service_name,
message=f"Timeout after {timeout_seconds}s while {operation}",
details={
'timeout_seconds': timeout_seconds,
'operation': operation
},
user_message=f"Zeitüberschreitung bei {service_name}"
)
class ResourceException(BaseApplicationException):
def __init__(
self,
message: str,
resource_type: str,
resource_id: Any = None,
details: Optional[Dict[str, Any]] = None,
user_message: Optional[str] = None
):
details = details or {}
details.update({
'resource_type': resource_type,
'resource_id': str(resource_id) if resource_id else None
})
super().__init__(
message=message,
code='RESOURCE_ERROR',
status_code=404,
details=details,
user_message=user_message or "Ressourcenfehler"
)
class ResourceNotFoundError(ResourceException):
def __init__(
self,
resource_type: str,
resource_id: Any = None,
search_criteria: Optional[Dict[str, Any]] = None
):
details = {'search_criteria': search_criteria} if search_criteria else None
super().__init__(
message=f"{resource_type} not found",
resource_type=resource_type,
resource_id=resource_id,
details=details,
user_message=f"{resource_type} nicht gefunden"
)
self.status_code = 404
class ResourceConflictError(ResourceException):
def __init__(
self,
resource_type: str,
resource_id: Any,
conflict_reason: str
):
super().__init__(
message=f"Conflict with {resource_type}: {conflict_reason}",
resource_type=resource_type,
resource_id=resource_id,
details={'conflict_reason': conflict_reason},
user_message=f"Konflikt mit {resource_type}"
)
self.status_code = 409
class ResourceLimitExceeded(ResourceException):
def __init__(
self,
resource_type: str,
limit: int,
current: int,
requested: Optional[int] = None
):
details = {
'limit': limit,
'current': current,
'requested': requested
}
super().__init__(
message=f"{resource_type} limit exceeded: {current}/{limit}",
resource_type=resource_type,
details=details,
user_message=f"Limit für {resource_type} überschritten"
)
self.status_code = 429
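What an API client actually receives is the `to_dict()` envelope, with `details` included only in debug mode. A minimal stand-in class (hypothetical, reduced from `BaseApplicationException` above) shows the wire format:

```python
import uuid
from datetime import datetime, timezone

class SketchException(Exception):
    """Minimal stand-in for BaseApplicationException, just enough for to_dict()."""
    def __init__(self, message, code, status_code=500, details=None, user_message=None):
        super().__init__(message)
        self.code = code
        self.status_code = status_code
        self.details = details or {}
        self.user_message = user_message or message
        self.timestamp = datetime.now(timezone.utc)
        self.request_id = str(uuid.uuid4())

    def to_dict(self, include_details=False):
        result = {"error": {"code": self.code,
                            "message": self.user_message,
                            "timestamp": self.timestamp.isoformat(),
                            "request_id": self.request_id}}
        if include_details and self.details:
            result["error"]["details"] = self.details
        return result

err = SketchException("Duplicate entry", "VALIDATION_ERROR", 400,
                      details={"field": "email"}, user_message="Ungültige Eingabe")
payload = err.to_dict(include_details=True)
print(payload["error"]["code"], payload["error"]["details"])
```

Note the split between `message` (internal, English) and `user_message` (shown to users, German): only the latter ever leaves the server.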


@@ -0,0 +1,190 @@
import logging
import logging.handlers
import json
import sys
import os
from datetime import datetime
from typing import Dict, Any
from flask import g, request, has_request_context
class StructuredFormatter(logging.Formatter):
def format(self, record):
log_data = {
'timestamp': datetime.utcnow().isoformat(),
'level': record.levelname,
'logger': record.name,
'message': record.getMessage(),
'module': record.module,
'function': record.funcName,
'line': record.lineno
}
if has_request_context():
log_data['request'] = {
'method': request.method,
'path': request.path,
'remote_addr': request.remote_addr,
'user_agent': request.user_agent.string,
'request_id': getattr(g, 'request_id', 'unknown')
}
if hasattr(record, 'request_id'):
log_data['request_id'] = record.request_id
if hasattr(record, 'error_code'):
log_data['error_code'] = record.error_code
if hasattr(record, 'details') and record.details:
log_data['details'] = self._sanitize_details(record.details)
if record.exc_info:
log_data['exception'] = {
'type': record.exc_info[0].__name__,
'message': str(record.exc_info[1]),
'traceback': self.formatException(record.exc_info)
}
return json.dumps(log_data, ensure_ascii=False)
def _sanitize_details(self, details: Dict[str, Any]) -> Dict[str, Any]:
sensitive_fields = {
'password', 'secret', 'token', 'api_key', 'authorization',
'credit_card', 'ssn', 'pin'
}
sanitized = {}
for key, value in details.items():
if any(field in key.lower() for field in sensitive_fields):
sanitized[key] = '[REDACTED]'
elif isinstance(value, dict):
sanitized[key] = self._sanitize_details(value)
else:
sanitized[key] = value
return sanitized
class ErrorLevelFilter(logging.Filter):
def __init__(self, min_level=logging.ERROR):
self.min_level = min_level
def filter(self, record):
return record.levelno >= self.min_level
def setup_logging(app):
log_level = os.getenv('LOG_LEVEL', 'INFO').upper()
log_dir = os.getenv('LOG_DIR', 'logs')
os.makedirs(log_dir, exist_ok=True)
root_logger = logging.getLogger()
root_logger.setLevel(getattr(logging, log_level))
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
if app.debug:
console_formatter = logging.Formatter(
'%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
else:
console_formatter = StructuredFormatter()
console_handler.setFormatter(console_formatter)
root_logger.addHandler(console_handler)
app_log_handler = logging.handlers.RotatingFileHandler(
os.path.join(log_dir, 'app.log'),
maxBytes=10 * 1024 * 1024,
backupCount=10
)
app_log_handler.setLevel(logging.DEBUG)
app_log_handler.setFormatter(StructuredFormatter())
root_logger.addHandler(app_log_handler)
error_log_handler = logging.handlers.RotatingFileHandler(
os.path.join(log_dir, 'errors.log'),
maxBytes=10 * 1024 * 1024,
backupCount=10
)
error_log_handler.setLevel(logging.ERROR)
error_log_handler.setFormatter(StructuredFormatter())
error_log_handler.addFilter(ErrorLevelFilter())
root_logger.addHandler(error_log_handler)
security_logger = logging.getLogger('security')
security_handler = logging.handlers.RotatingFileHandler(
os.path.join(log_dir, 'security.log'),
maxBytes=10 * 1024 * 1024,
backupCount=20
)
security_handler.setFormatter(StructuredFormatter())
security_logger.addHandler(security_handler)
security_logger.setLevel(logging.INFO)
werkzeug_logger = logging.getLogger('werkzeug')
werkzeug_logger.setLevel(logging.WARNING)
@app.before_request
def log_request_info():
logger = logging.getLogger('request')
logger.info(
'Request started',
extra={
'request_id': getattr(g, 'request_id', 'unknown'),
'details': {
'method': request.method,
'path': request.path,
'query_params': dict(request.args),
'content_length': request.content_length
}
}
)
@app.after_request
def log_response_info(response):
logger = logging.getLogger('request')
logger.info(
'Request completed',
extra={
'request_id': getattr(g, 'request_id', 'unknown'),
'details': {
'status_code': response.status_code,
'content_length': response.content_length or 0
}
}
)
return response
def get_logger(name: str) -> logging.Logger:
return logging.getLogger(name)
def log_error(logger: logging.Logger, message: str, error: Exception = None, **kwargs):
extra = kwargs.copy()
if error:
extra['error_type'] = type(error).__name__
extra['error_message'] = str(error)
if hasattr(error, 'code'):
extra['error_code'] = error.code
if hasattr(error, 'details'):
extra['details'] = error.details
logger.error(message, exc_info=error, extra=extra)
def log_security_event(event_type: str, message: str, **details):
logger = logging.getLogger('security')
logger.warning(
f"Security Event: {event_type} - {message}",
extra={
'security_event': event_type,
'details': details,
'request_id': getattr(g, 'request_id', 'unknown') if has_request_context() else None
}
)
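The redaction rule in `_sanitize_details` is a case-insensitive substring match on key names, applied recursively. Extracted as a standalone sketch (the function name `sanitize` is illustrative); note that hyphenated header-style keys such as `X-API-Key` would slip through, since the sensitive list uses underscores:

```python
SENSITIVE_FIELDS = {"password", "secret", "token", "api_key", "authorization",
                    "credit_card", "ssn", "pin"}

def sanitize(details: dict) -> dict:
    """Recursively redact values whose key contains a sensitive substring."""
    out = {}
    for key, value in details.items():
        if any(field in key.lower() for field in SENSITIVE_FIELDS):
            out[key] = "[REDACTED]"      # redact the whole value, keep the key
        elif isinstance(value, dict):
            out[key] = sanitize(value)   # recurse into nested detail dicts
        else:
            out[key] = value
    return out

print(sanitize({"user": "alice", "api_key": "AF-2025-abc", "meta": {"password": "x"}}))
```

Lists of dicts are passed through untouched by this logic, so sensitive values nested inside list items would also survive redaction.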


@@ -0,0 +1,246 @@
import time
import functools
from typing import Dict, Any, Optional, List
from collections import defaultdict, deque
from datetime import datetime, timedelta
from threading import Lock
import logging
from prometheus_client import Counter, Histogram, Gauge, generate_latest
from flask import g, request, Response
from .exceptions import BaseApplicationException
from .logging_config import log_security_event
logger = logging.getLogger(__name__)
class ErrorMetrics:
def __init__(self):
self.error_counter = Counter(
'app_errors_total',
'Total number of errors',
['error_code', 'status_code', 'endpoint']
)
self.error_rate = Gauge(
'app_error_rate',
'Error rate per minute',
['error_code']
)
self.request_duration = Histogram(
'app_request_duration_seconds',
'Request duration in seconds',
['method', 'endpoint', 'status_code']
)
self.validation_errors = Counter(
'app_validation_errors_total',
'Total validation errors',
['field', 'endpoint']
)
self.auth_failures = Counter(
'app_auth_failures_total',
'Total authentication failures',
['reason', 'endpoint']
)
self.db_errors = Counter(
'app_database_errors_total',
'Total database errors',
['error_type', 'operation']
)
self._error_history = defaultdict(lambda: deque(maxlen=60))
self._lock = Lock()
def record_error(self, error: BaseApplicationException, endpoint: str = None):
endpoint = endpoint or request.endpoint or 'unknown'
self.error_counter.labels(
error_code=error.code,
status_code=error.status_code,
endpoint=endpoint
).inc()
with self._lock:
self._error_history[error.code].append(datetime.utcnow())
self._update_error_rates()
if error.code == 'VALIDATION_ERROR' and 'field' in error.details:
self.validation_errors.labels(
field=error.details['field'],
endpoint=endpoint
).inc()
elif error.code == 'AUTHENTICATION_ERROR':
reason = error.__class__.__name__
self.auth_failures.labels(
reason=reason,
endpoint=endpoint
).inc()
elif error.code == 'DATABASE_ERROR':
error_type = error.__class__.__name__
operation = error.details.get('operation', 'unknown')
self.db_errors.labels(
error_type=error_type,
operation=operation
).inc()
def _update_error_rates(self):
now = datetime.utcnow()
one_minute_ago = now - timedelta(minutes=1)
for error_code, timestamps in self._error_history.items():
recent_count = sum(1 for ts in timestamps if ts >= one_minute_ago)
self.error_rate.labels(error_code=error_code).set(recent_count)
class AlertManager:
def __init__(self):
self.alerts = []
self.alert_thresholds = {
'error_rate': 10,
'auth_failure_rate': 5,
'db_error_rate': 3,
'response_time_95th': 2.0
}
self._lock = Lock()
def check_alerts(self, metrics: ErrorMetrics):
new_alerts = []
for error_code, rate in self._get_current_error_rates(metrics).items():
if rate > self.alert_thresholds['error_rate']:
new_alerts.append({
'type': 'high_error_rate',
'severity': 'critical',
'error_code': error_code,
'rate': rate,
'threshold': self.alert_thresholds['error_rate'],
'message': f'High error rate for {error_code}: {rate}/min',
'timestamp': datetime.utcnow()
})
auth_failure_rate = self._get_auth_failure_rate(metrics)
if auth_failure_rate > self.alert_thresholds['auth_failure_rate']:
new_alerts.append({
'type': 'auth_failures',
'severity': 'warning',
'rate': auth_failure_rate,
'threshold': self.alert_thresholds['auth_failure_rate'],
'message': f'High authentication failure rate: {auth_failure_rate}/min',
'timestamp': datetime.utcnow()
})
log_security_event(
'HIGH_AUTH_FAILURE_RATE',
'Authentication failure rate exceeded threshold',
rate=auth_failure_rate,
threshold=self.alert_thresholds['auth_failure_rate']
)
with self._lock:
self.alerts.extend(new_alerts)
self.alerts = [a for a in self.alerts
if a['timestamp'] > datetime.utcnow() - timedelta(hours=24)]
return new_alerts
def _get_current_error_rates(self, metrics: ErrorMetrics) -> Dict[str, float]:
rates = {}
with metrics._lock:
now = datetime.utcnow()
one_minute_ago = now - timedelta(minutes=1)
for error_code, timestamps in metrics._error_history.items():
rates[error_code] = sum(1 for ts in timestamps if ts >= one_minute_ago)
return rates
def _get_auth_failure_rate(self, metrics: ErrorMetrics) -> float:
# _child_samples() is a private API and not available on a labelled Counter
# parent; aggregate the per-label totals via the public collect() interface
return sum(
sample.value
for metric in metrics.auth_failures.collect()
for sample in metric.samples
if sample.name.endswith('_total')
) / 60.0
def get_active_alerts(self) -> List[Dict[str, Any]]:
with self._lock:
return list(self.alerts)
error_metrics = ErrorMetrics()
alert_manager = AlertManager()
def init_monitoring(app):
@app.before_request
def before_request():
g.start_time = time.time()
@app.after_request
def after_request(response):
if hasattr(g, 'start_time'):
duration = time.time() - g.start_time
error_metrics.request_duration.labels(
method=request.method,
endpoint=request.endpoint or 'unknown',
status_code=response.status_code
).observe(duration)
return response
@app.route('/metrics')
def metrics():
alert_manager.check_alerts(error_metrics)
return Response(generate_latest(), mimetype='text/plain; version=0.0.4; charset=utf-8')
@app.route('/api/alerts')
def get_alerts():
alerts = alert_manager.get_active_alerts()
return {
'alerts': alerts,
'total': len(alerts),
'critical': len([a for a in alerts if a['severity'] == 'critical']),
'warning': len([a for a in alerts if a['severity'] == 'warning'])
}
def monitor_performance(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
start_time = time.time()
try:
result = func(*args, **kwargs)
return result
finally:
duration = time.time() - start_time
if duration > 1.0:
logger.warning(
f"Slow function execution: {func.__name__}",
extra={
'function': func.__name__,
'duration': duration,
'request_id': getattr(g, 'request_id', 'unknown')
}
)
return wrapper
def track_error(error: BaseApplicationException):
error_metrics.record_error(error)
if error.status_code >= 500:
logger.error(
f"Critical error occurred: {error.code}",
extra={
'error_code': error.code,
'message': error.message,
'details': error.details,
'request_id': error.request_id
}
)
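The `_update_error_rates` method above keeps a bounded deque of timestamps per error code and counts the ones that fall within the last minute. A standalone sketch of the same sliding-window idea (the class and names are illustrative, not part of the module):

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

class SlidingErrorWindow:
    """Per-error-code event counter over a one-minute sliding window."""

    def __init__(self, maxlen: int = 60):
        # bounded deque: memory stays constant even under error storms
        self._history = defaultdict(lambda: deque(maxlen=maxlen))

    def record(self, error_code: str, when: datetime = None) -> None:
        self._history[error_code].append(when or datetime.utcnow())

    def rate(self, error_code: str, now: datetime = None) -> int:
        now = now or datetime.utcnow()
        cutoff = now - timedelta(minutes=1)
        return sum(1 for ts in self._history[error_code] if ts >= cutoff)

now = datetime.utcnow()
w = SlidingErrorWindow()
w.record('DATABASE_ERROR', now - timedelta(seconds=90))  # outside the window
w.record('DATABASE_ERROR', now - timedelta(seconds=30))
w.record('DATABASE_ERROR', now - timedelta(seconds=5))
print(w.rate('DATABASE_ERROR', now))  # → 2
```

The `maxlen` bound trades exactness for safety: during an error storm only the newest 60 timestamps per code are kept, which is enough to saturate the per-minute alert threshold anyway.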


@@ -0,0 +1,435 @@
import re
from typing import Any, Optional, List, Dict, Callable, Union
from datetime import datetime, date
from functools import wraps
import ipaddress
from flask import request
from .exceptions import InputValidationError, ValidationException
class ValidationRules:
EMAIL_PATTERN = re.compile(
r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
)
PHONE_PATTERN = re.compile(
r'^[\+]?[(]?[0-9]{1,4}[)]?[-\s\.]?[(]?[0-9]{1,4}[)]?[-\s\.]?[0-9]{1,10}$'
)
LICENSE_KEY_PATTERN = re.compile(
r'^[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}$'
)
SAFE_STRING_PATTERN = re.compile(
r'^[a-zA-Z0-9\s\-\_\.\,\!\?\@\#\$\%\&\*\(\)\[\]\{\}\:\;\'\"\+\=\/\\]+$'
)
USERNAME_PATTERN = re.compile(
r'^[a-zA-Z0-9_\-\.]{3,50}$'
)
PASSWORD_MIN_LENGTH = 8
PASSWORD_REQUIRE_UPPER = True
PASSWORD_REQUIRE_LOWER = True
PASSWORD_REQUIRE_DIGIT = True
PASSWORD_REQUIRE_SPECIAL = True
class Validators:
@staticmethod
def required(value: Any, field_name: str = "field") -> Any:
if value is None or (isinstance(value, str) and not value.strip()):
raise InputValidationError(
field=field_name,
message="This field is required",
value=value
)
return value
@staticmethod
def email(value: str, field_name: str = "email") -> str:
value = Validators.required(value, field_name).strip()
if not ValidationRules.EMAIL_PATTERN.match(value):
raise InputValidationError(
field=field_name,
message="Invalid email format",
value=value,
expected_type="email"
)
return value.lower()
@staticmethod
def phone(value: str, field_name: str = "phone") -> str:
value = Validators.required(value, field_name).strip()
cleaned = re.sub(r'[\s\-\(\)]', '', value)
if not ValidationRules.PHONE_PATTERN.match(value):
raise InputValidationError(
field=field_name,
message="Invalid phone number format",
value=value,
expected_type="phone"
)
return cleaned
@staticmethod
def license_key(value: str, field_name: str = "license_key") -> str:
value = Validators.required(value, field_name).strip().upper()
if not ValidationRules.LICENSE_KEY_PATTERN.match(value):
raise InputValidationError(
field=field_name,
message="Invalid license key format (expected: XXXX-XXXX-XXXX-XXXX)",
value=value,
expected_type="license_key"
)
return value
@staticmethod
def integer(
value: Union[str, int],
field_name: str = "field",
min_value: Optional[int] = None,
max_value: Optional[int] = None
) -> int:
try:
int_value = int(value)
except (ValueError, TypeError):
raise InputValidationError(
field=field_name,
message="Must be a valid integer",
value=value,
expected_type="integer"
)
if min_value is not None and int_value < min_value:
raise InputValidationError(
field=field_name,
message=f"Must be at least {min_value}",
value=int_value
)
if max_value is not None and int_value > max_value:
raise InputValidationError(
field=field_name,
message=f"Must be at most {max_value}",
value=int_value
)
return int_value
@staticmethod
def float_number(
value: Union[str, float],
field_name: str = "field",
min_value: Optional[float] = None,
max_value: Optional[float] = None
) -> float:
try:
float_value = float(value)
except (ValueError, TypeError):
raise InputValidationError(
field=field_name,
message="Must be a valid number",
value=value,
expected_type="float"
)
if min_value is not None and float_value < min_value:
raise InputValidationError(
field=field_name,
message=f"Must be at least {min_value}",
value=float_value
)
if max_value is not None and float_value > max_value:
raise InputValidationError(
field=field_name,
message=f"Must be at most {max_value}",
value=float_value
)
return float_value
@staticmethod
def boolean(value: Union[str, bool], field_name: str = "field") -> bool:
if isinstance(value, bool):
return value
if isinstance(value, str):
value_lower = value.lower()
if value_lower in ['true', '1', 'yes', 'on']:
return True
elif value_lower in ['false', '0', 'no', 'off']:
return False
raise InputValidationError(
field=field_name,
message="Must be a valid boolean",
value=value,
expected_type="boolean"
)
@staticmethod
def string(
value: str,
field_name: str = "field",
min_length: Optional[int] = None,
max_length: Optional[int] = None,
pattern: Optional[re.Pattern] = None,
safe_only: bool = False
) -> str:
value = Validators.required(value, field_name).strip()
if min_length is not None and len(value) < min_length:
raise InputValidationError(
field=field_name,
message=f"Must be at least {min_length} characters",
value=value
)
if max_length is not None and len(value) > max_length:
raise InputValidationError(
field=field_name,
message=f"Must be at most {max_length} characters",
value=value
)
if safe_only and not ValidationRules.SAFE_STRING_PATTERN.match(value):
raise InputValidationError(
field=field_name,
message="Contains invalid characters",
value=value
)
if pattern and not pattern.match(value):
raise InputValidationError(
field=field_name,
message="Does not match required format",
value=value
)
return value
@staticmethod
def username(value: str, field_name: str = "username") -> str:
value = Validators.required(value, field_name).strip()
if not ValidationRules.USERNAME_PATTERN.match(value):
raise InputValidationError(
field=field_name,
message="Username must be 3-50 characters and contain only letters, numbers, _, -, or .",
value=value,
expected_type="username"
)
return value
@staticmethod
def password(value: str, field_name: str = "password") -> str:
value = Validators.required(value, field_name)
errors = []
if len(value) < ValidationRules.PASSWORD_MIN_LENGTH:
errors.append(f"at least {ValidationRules.PASSWORD_MIN_LENGTH} characters")
if ValidationRules.PASSWORD_REQUIRE_UPPER and not re.search(r'[A-Z]', value):
errors.append("at least one uppercase letter")
if ValidationRules.PASSWORD_REQUIRE_LOWER and not re.search(r'[a-z]', value):
errors.append("at least one lowercase letter")
if ValidationRules.PASSWORD_REQUIRE_DIGIT and not re.search(r'\d', value):
errors.append("at least one digit")
if ValidationRules.PASSWORD_REQUIRE_SPECIAL and not re.search(r'[!@#$%^&*(),.?":{}|<>]', value):
errors.append("at least one special character")
if errors:
raise InputValidationError(
field=field_name,
message=f"Password must contain {', '.join(errors)}",
value="[hidden]"
)
return value
@staticmethod
def date_string(
value: str,
field_name: str = "date",
format: str = "%Y-%m-%d",
min_date: Optional[date] = None,
max_date: Optional[date] = None
) -> date:
value = Validators.required(value, field_name).strip()
try:
date_value = datetime.strptime(value, format).date()
except ValueError:
raise InputValidationError(
field=field_name,
message=f"Invalid date format (expected: {format})",
value=value,
expected_type="date"
)
if min_date and date_value < min_date:
raise InputValidationError(
field=field_name,
message=f"Date must be after {min_date}",
value=value
)
if max_date and date_value > max_date:
raise InputValidationError(
field=field_name,
message=f"Date must be before {max_date}",
value=value
)
return date_value
@staticmethod
def ip_address(
value: str,
field_name: str = "ip_address",
version: Optional[int] = None
) -> str:
value = Validators.required(value, field_name).strip()
try:
ip = ipaddress.ip_address(value)
if version and ip.version != version:
raise ValueError
except ValueError:
version_str = f"IPv{version}" if version else "IP"
raise InputValidationError(
field=field_name,
message=f"Invalid {version_str} address",
value=value,
expected_type="ip_address"
)
return str(ip)
@staticmethod
def url(
value: str,
field_name: str = "url",
require_https: bool = False
) -> str:
value = Validators.required(value, field_name).strip()
url_pattern = re.compile(
r'^https?://'
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|'
r'localhost|'
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
r'(?::\d+)?'
r'(?:/?|[/?]\S+)$', re.IGNORECASE
)
if not url_pattern.match(value):
raise InputValidationError(
field=field_name,
message="Invalid URL format",
value=value,
expected_type="url"
)
if require_https and not value.startswith('https://'):
raise InputValidationError(
field=field_name,
message="URL must use HTTPS",
value=value
)
return value
@staticmethod
def enum(
value: Any,
field_name: str,
allowed_values: List[Any]
) -> Any:
if value not in allowed_values:
raise InputValidationError(
field=field_name,
message=f"Must be one of: {', '.join(map(str, allowed_values))}",
value=value
)
return value
def validate(rules: Dict[str, Dict[str, Any]]) -> Callable:
def decorator(func: Callable) -> Callable:
@wraps(func)
def wrapper(*args, **kwargs):
data = (request.get_json(silent=True) or {}) if request.is_json else request.form
validated_data = {}
for field_name, field_rules in rules.items():
value = data.get(field_name)
if 'required' in field_rules and field_rules['required']:
value = Validators.required(value, field_name)
elif value is None or value == '':
if 'default' in field_rules:
validated_data[field_name] = field_rules['default']
continue
validator_name = field_rules.get('type', 'string')
validator_func = getattr(Validators, validator_name, None)
if not validator_func:
raise ValueError(f"Unknown validator type: {validator_name}")
validator_params = {
k: v for k, v in field_rules.items()
if k not in ['type', 'required', 'default']
}
validator_params['field_name'] = field_name
validated_data[field_name] = validator_func(value, **validator_params)
request.validated_data = validated_data
return func(*args, **kwargs)
return wrapper
return decorator
def sanitize_html(value: str) -> str:
dangerous_tags = re.compile(
r'<(script|iframe|object|embed|form|input|button|textarea|select|link|meta|style).*?>.*?</\1>',
re.IGNORECASE | re.DOTALL
)
dangerous_attrs = re.compile(
r'\s*(on\w+|style|javascript:)[\s]*=[\s]*["\']?[^"\'>\s]+',
re.IGNORECASE
)
value = dangerous_tags.sub('', value)
value = dangerous_attrs.sub('', value)
return value
def sanitize_sql_identifier(value: str) -> str:
if not re.match(r'^[a-zA-Z_][a-zA-Z0-9_]*$', value):
raise ValidationException(
message="Invalid SQL identifier",
details={'value': value},
user_message="Ungültiger Bezeichner"
)
return value
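The `license_key` validator trims and upper-cases the input before pattern-checking it, so keys typed in lower case still pass. A minimal standalone version of that normalization (regex copied from `ValidationRules`; the helper name is illustrative):

```python
import re

LICENSE_KEY_PATTERN = re.compile(r'^[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}-[A-Z0-9]{4}$')

def normalize_license_key(value: str) -> str:
    """Trim and upper-case first, then validate -- so ' ab12-... ' is accepted."""
    value = value.strip().upper()
    if not LICENSE_KEY_PATTERN.match(value):
        raise ValueError("Invalid license key format (expected: XXXX-XXXX-XXXX-XXXX)")
    return value

print(normalize_license_key(" ab12-cd34-ef56-gh78 "))  # → AB12-CD34-EF56-GH78
```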

v2_adminpanel/db.py (normal file)

@@ -0,0 +1,84 @@
import psycopg2
from psycopg2.extras import Json, RealDictCursor
from contextlib import contextmanager
from config import DATABASE_CONFIG
def get_connection():
"""Create and return a new database connection"""
conn = psycopg2.connect(**DATABASE_CONFIG)
conn.set_client_encoding('UTF8')
return conn
@contextmanager
def get_db_connection():
"""Context manager for database connections"""
conn = get_connection()
try:
yield conn
conn.commit()
except Exception:
conn.rollback()
raise
finally:
conn.close()
@contextmanager
def get_db_cursor(conn=None):
"""Context manager for database cursors"""
if conn is None:
with get_db_connection() as connection:
cur = connection.cursor()
try:
yield cur
finally:
cur.close()
else:
cur = conn.cursor()
try:
yield cur
finally:
cur.close()
@contextmanager
def get_dict_cursor(conn=None):
"""Context manager for dictionary cursors"""
if conn is None:
with get_db_connection() as connection:
cur = connection.cursor(cursor_factory=RealDictCursor)
try:
yield cur
finally:
cur.close()
else:
cur = conn.cursor(cursor_factory=RealDictCursor)
try:
yield cur
finally:
cur.close()
def execute_query(query, params=None, fetch_one=False, fetch_all=False, as_dict=False):
"""Execute a query and optionally fetch results"""
with get_db_connection() as conn:
cursor_func = get_dict_cursor if as_dict else get_db_cursor
with cursor_func(conn) as cur:
cur.execute(query, params)
if fetch_one:
return cur.fetchone()
elif fetch_all:
return cur.fetchall()
else:
return cur.rowcount
def execute_many(query, params_list):
"""Execute a query multiple times with different parameters"""
with get_db_connection() as conn:
with get_db_cursor(conn) as cur:
cur.executemany(query, params_list)
return cur.rowcount
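`get_db_connection` commits on clean exit and rolls back on any exception. The same transaction-scoping pattern, demonstrated with sqlite3 so it runs without a PostgreSQL instance (a sketch of the pattern, not the repo's code):

```python
import os
import sqlite3
import tempfile
from contextlib import contextmanager

@contextmanager
def db_connection(path):
    """Commit on success, roll back on any exception -- mirrors get_db_connection()."""
    conn = sqlite3.connect(path)
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

path = os.path.join(tempfile.mkdtemp(), "demo.db")
with db_connection(path) as conn:
    conn.execute("CREATE TABLE t (x INTEGER)")

try:
    with db_connection(path) as conn:
        conn.execute("INSERT INTO t VALUES (1)")
        raise RuntimeError("boom")  # forces the rollback path
except RuntimeError:
    pass

with db_connection(path) as conn:
    print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 0
```

Because the exception is re-raised after the rollback, callers still see the failure; only the partial writes are discarded.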

v2_adminpanel/init.sql (normal file)

@@ -0,0 +1,704 @@
-- Ensure UTF-8 encoding for German special characters
SET client_encoding = 'UTF8';
-- Set timezone to Europe/Berlin
SET timezone = 'Europe/Berlin';
CREATE TABLE IF NOT EXISTS customers (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT,
is_fake BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
CONSTRAINT unique_email UNIQUE (email)
);
CREATE TABLE IF NOT EXISTS licenses (
id SERIAL PRIMARY KEY,
license_key TEXT UNIQUE NOT NULL,
customer_id INTEGER REFERENCES customers(id),
license_type TEXT NOT NULL,
valid_from DATE NOT NULL,
valid_until DATE NOT NULL,
is_active BOOLEAN DEFAULT TRUE,
is_fake BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS sessions (
id SERIAL PRIMARY KEY,
license_id INTEGER REFERENCES licenses(id),
license_key VARCHAR(60), -- Denormalized for performance
session_id TEXT UNIQUE NOT NULL,
username VARCHAR(50),
computer_name VARCHAR(100),
hardware_id VARCHAR(100),
ip_address TEXT,
user_agent TEXT,
app_version VARCHAR(20),
login_time TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP, -- Alias for started_at
last_activity TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP, -- Alias for last_heartbeat
logout_time TIMESTAMP WITH TIME ZONE, -- Alias for ended_at
started_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
last_heartbeat TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
ended_at TIMESTAMP WITH TIME ZONE,
is_active BOOLEAN DEFAULT TRUE,
active BOOLEAN DEFAULT TRUE -- Alias for is_active
);
-- Audit log table for change records
CREATE TABLE IF NOT EXISTS audit_log (
id SERIAL PRIMARY KEY,
timestamp TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
username TEXT NOT NULL,
action TEXT NOT NULL,
entity_type TEXT NOT NULL,
entity_id INTEGER,
old_values JSONB,
new_values JSONB,
ip_address TEXT,
user_agent TEXT,
additional_info TEXT
);
-- Indexes for better query performance
CREATE INDEX idx_audit_log_timestamp ON audit_log(timestamp DESC);
CREATE INDEX idx_audit_log_username ON audit_log(username);
CREATE INDEX idx_audit_log_entity ON audit_log(entity_type, entity_id);
-- Backup history table
CREATE TABLE IF NOT EXISTS backup_history (
id SERIAL PRIMARY KEY,
filename TEXT NOT NULL,
filepath TEXT NOT NULL,
filesize BIGINT,
backup_type TEXT NOT NULL, -- 'manual' or 'scheduled'
status TEXT NOT NULL, -- 'success', 'failed', 'in_progress'
error_message TEXT,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
created_by TEXT NOT NULL,
tables_count INTEGER,
records_count INTEGER,
duration_seconds NUMERIC,
is_encrypted BOOLEAN DEFAULT TRUE
);
-- Indexes for better performance
CREATE INDEX idx_backup_history_created_at ON backup_history(created_at DESC);
CREATE INDEX idx_backup_history_status ON backup_history(status);
-- Login attempts table for rate limiting
CREATE TABLE IF NOT EXISTS login_attempts (
ip_address VARCHAR(45) PRIMARY KEY,
attempt_count INTEGER DEFAULT 0,
first_attempt TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
last_attempt TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
blocked_until TIMESTAMP WITH TIME ZONE NULL,
last_username_tried TEXT,
last_error_message TEXT
);
-- Indexes for fast lookups
CREATE INDEX idx_login_attempts_blocked_until ON login_attempts(blocked_until);
CREATE INDEX idx_login_attempts_last_attempt ON login_attempts(last_attempt DESC);
-- Migration: add created_at to licenses if not already present
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'created_at') THEN
ALTER TABLE licenses ADD COLUMN created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP;
-- Set created_at for existing rows to the valid_from date
UPDATE licenses SET created_at = valid_from WHERE created_at IS NULL;
END IF;
END $$;
-- ===================== RESOURCE POOL SYSTEM =====================
-- Main table for the resource pool
CREATE TABLE IF NOT EXISTS resource_pools (
id SERIAL PRIMARY KEY,
resource_type VARCHAR(20) NOT NULL CHECK (resource_type IN ('domain', 'ipv4', 'phone')),
resource_value VARCHAR(255) NOT NULL,
status VARCHAR(20) DEFAULT 'available' CHECK (status IN ('available', 'allocated', 'quarantine')),
allocated_to_license INTEGER REFERENCES licenses(id) ON DELETE SET NULL,
status_changed_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
status_changed_by VARCHAR(50),
quarantine_reason VARCHAR(100) CHECK (quarantine_reason IS NULL OR quarantine_reason IN ('abuse', 'defect', 'maintenance', 'blacklisted', 'expired', 'review')),
quarantine_until TIMESTAMP WITH TIME ZONE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
notes TEXT,
is_fake BOOLEAN DEFAULT FALSE,
UNIQUE(resource_type, resource_value)
);
-- Resource history for full traceability
CREATE TABLE IF NOT EXISTS resource_history (
id SERIAL PRIMARY KEY,
resource_id INTEGER REFERENCES resource_pools(id) ON DELETE CASCADE,
license_id INTEGER REFERENCES licenses(id) ON DELETE SET NULL,
action VARCHAR(50) NOT NULL CHECK (action IN ('allocated', 'deallocated', 'quarantined', 'released', 'created', 'deleted')),
action_by VARCHAR(50) NOT NULL,
action_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
details JSONB,
ip_address TEXT
);
-- Resource metrics for performance tracking and ROI
CREATE TABLE IF NOT EXISTS resource_metrics (
id SERIAL PRIMARY KEY,
resource_id INTEGER REFERENCES resource_pools(id) ON DELETE CASCADE,
metric_date DATE NOT NULL,
usage_count INTEGER DEFAULT 0,
performance_score DECIMAL(5,2) DEFAULT 0.00,
cost DECIMAL(10,2) DEFAULT 0.00,
revenue DECIMAL(10,2) DEFAULT 0.00,
issues_count INTEGER DEFAULT 0,
availability_percent DECIMAL(5,2) DEFAULT 100.00,
UNIQUE(resource_id, metric_date)
);
-- Mapping table between licenses and resources
CREATE TABLE IF NOT EXISTS license_resources (
id SERIAL PRIMARY KEY,
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
resource_id INTEGER REFERENCES resource_pools(id) ON DELETE CASCADE,
assigned_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
assigned_by VARCHAR(50),
is_active BOOLEAN DEFAULT TRUE,
UNIQUE(license_id, resource_id)
);
-- Extend the licenses table with resource counts
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'domain_count') THEN
ALTER TABLE licenses
ADD COLUMN domain_count INTEGER DEFAULT 1 CHECK (domain_count >= 0 AND domain_count <= 10),
ADD COLUMN ipv4_count INTEGER DEFAULT 1 CHECK (ipv4_count >= 0 AND ipv4_count <= 10),
ADD COLUMN phone_count INTEGER DEFAULT 1 CHECK (phone_count >= 0 AND phone_count <= 10);
END IF;
END $$;
-- Extend the licenses table with device_limit
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'device_limit') THEN
ALTER TABLE licenses
ADD COLUMN device_limit INTEGER DEFAULT 3 CHECK (device_limit >= 1 AND device_limit <= 10);
END IF;
END $$;
-- Table for device registrations
CREATE TABLE IF NOT EXISTS device_registrations (
id SERIAL PRIMARY KEY,
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id TEXT NOT NULL,
device_name TEXT,
operating_system TEXT,
first_seen TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
last_seen TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
is_active BOOLEAN DEFAULT TRUE,
deactivated_at TIMESTAMP WITH TIME ZONE,
deactivated_by TEXT,
ip_address TEXT,
user_agent TEXT,
UNIQUE(license_id, hardware_id)
);
-- Indexes for device_registrations
CREATE INDEX IF NOT EXISTS idx_device_license ON device_registrations(license_id);
CREATE INDEX IF NOT EXISTS idx_device_hardware ON device_registrations(hardware_id);
CREATE INDEX IF NOT EXISTS idx_device_active ON device_registrations(license_id, is_active) WHERE is_active = TRUE;
-- Performance indexes
CREATE INDEX IF NOT EXISTS idx_resource_status ON resource_pools(status);
CREATE INDEX IF NOT EXISTS idx_resource_type_status ON resource_pools(resource_type, status);
CREATE INDEX IF NOT EXISTS idx_resource_allocated ON resource_pools(allocated_to_license) WHERE allocated_to_license IS NOT NULL;
CREATE INDEX IF NOT EXISTS idx_resource_quarantine ON resource_pools(quarantine_until) WHERE status = 'quarantine';
CREATE INDEX IF NOT EXISTS idx_resource_history_date ON resource_history(action_at DESC);
CREATE INDEX IF NOT EXISTS idx_resource_history_resource ON resource_history(resource_id);
CREATE INDEX IF NOT EXISTS idx_resource_metrics_date ON resource_metrics(metric_date DESC);
CREATE INDEX IF NOT EXISTS idx_license_resources_active ON license_resources(license_id) WHERE is_active = TRUE;
-- Users table for authentication with password and 2FA support
CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
username VARCHAR(50) UNIQUE NOT NULL,
password_hash VARCHAR(255) NOT NULL,
email VARCHAR(100),
totp_secret VARCHAR(32),
totp_enabled BOOLEAN DEFAULT FALSE,
backup_codes TEXT, -- JSON array of hashed backup codes
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
last_password_change TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
password_reset_token VARCHAR(64),
password_reset_expires TIMESTAMP WITH TIME ZONE,
failed_2fa_attempts INTEGER DEFAULT 0,
last_failed_2fa TIMESTAMP WITH TIME ZONE
);
-- Index for faster login lookups
CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
CREATE INDEX IF NOT EXISTS idx_users_reset_token ON users(password_reset_token) WHERE password_reset_token IS NOT NULL;
-- Migration: Add is_fake column to licenses if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'is_fake') THEN
ALTER TABLE licenses ADD COLUMN is_fake BOOLEAN DEFAULT FALSE;
-- Mark all existing licenses as fake data
UPDATE licenses SET is_fake = TRUE;
-- Add index for better performance when filtering fake data
CREATE INDEX idx_licenses_is_fake ON licenses(is_fake);
END IF;
END $$;
-- Migration: Add is_fake column to customers if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'customers' AND column_name = 'is_fake') THEN
ALTER TABLE customers ADD COLUMN is_fake BOOLEAN DEFAULT FALSE;
-- Mark all existing customers as fake data
UPDATE customers SET is_fake = TRUE;
-- Add index for better performance
CREATE INDEX idx_customers_is_fake ON customers(is_fake);
END IF;
END $$;
-- Migration: Add is_fake column to resource_pools if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'resource_pools' AND column_name = 'is_fake') THEN
ALTER TABLE resource_pools ADD COLUMN is_fake BOOLEAN DEFAULT FALSE;
-- Mark all existing resources as fake data
UPDATE resource_pools SET is_fake = TRUE;
-- Add index for better performance
CREATE INDEX idx_resource_pools_is_fake ON resource_pools(is_fake);
END IF;
END $$;
-- Migration: Add missing columns to sessions table
DO $$
BEGIN
-- Add license_key column
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'license_key') THEN
ALTER TABLE sessions ADD COLUMN license_key VARCHAR(60);
END IF;
-- Add username column
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'username') THEN
ALTER TABLE sessions ADD COLUMN username VARCHAR(50);
END IF;
-- Add computer_name column
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'computer_name') THEN
ALTER TABLE sessions ADD COLUMN computer_name VARCHAR(100);
END IF;
-- Add hardware_id column
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'hardware_id') THEN
ALTER TABLE sessions ADD COLUMN hardware_id VARCHAR(100);
END IF;
-- Add app_version column
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'app_version') THEN
ALTER TABLE sessions ADD COLUMN app_version VARCHAR(20);
END IF;
-- Add login_time as alias for started_at
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'login_time') THEN
ALTER TABLE sessions ADD COLUMN login_time TIMESTAMP WITH TIME ZONE;
UPDATE sessions SET login_time = started_at;
END IF;
-- Add last_activity as alias for last_heartbeat
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'last_activity') THEN
ALTER TABLE sessions ADD COLUMN last_activity TIMESTAMP WITH TIME ZONE;
UPDATE sessions SET last_activity = last_heartbeat;
END IF;
-- Add logout_time as alias for ended_at
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'logout_time') THEN
ALTER TABLE sessions ADD COLUMN logout_time TIMESTAMP WITH TIME ZONE;
UPDATE sessions SET logout_time = ended_at;
END IF;
-- Add active as alias for is_active
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'sessions' AND column_name = 'active') THEN
ALTER TABLE sessions ADD COLUMN active BOOLEAN DEFAULT TRUE;
UPDATE sessions SET active = is_active;
END IF;
END $$;
-- ===================== LICENSE SERVER TABLES =====================
-- Following best practices: snake_case for DB fields, clear naming conventions
-- Enable UUID extension (uuid-ossp provides uuid_generate_v4(); the gen_random_uuid()
-- used below is built in since PostgreSQL 13, or available via pgcrypto before that)
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- License tokens for offline validation
CREATE TABLE IF NOT EXISTS license_tokens (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
token VARCHAR(512) NOT NULL UNIQUE,
hardware_id VARCHAR(255) NOT NULL,
valid_until TIMESTAMP NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_validated TIMESTAMP,
validation_count INTEGER DEFAULT 0
);
CREATE INDEX idx_token ON license_tokens(token);
CREATE INDEX idx_hardware ON license_tokens(hardware_id);
CREATE INDEX idx_valid_until ON license_tokens(valid_until);
-- Heartbeat tracking with partitioning support
CREATE TABLE IF NOT EXISTS license_heartbeats (
id BIGSERIAL,
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id VARCHAR(255) NOT NULL,
ip_address INET,
user_agent VARCHAR(500),
app_version VARCHAR(50),
timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
session_data JSONB,
PRIMARY KEY (id, timestamp)
) PARTITION BY RANGE (timestamp);
-- Create partitions for the current and next month
CREATE TABLE IF NOT EXISTS license_heartbeats_2025_01 PARTITION OF license_heartbeats
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE IF NOT EXISTS license_heartbeats_2025_02 PARTITION OF license_heartbeats
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
-- Add June 2025 partition for current month
CREATE TABLE IF NOT EXISTS license_heartbeats_2025_06 PARTITION OF license_heartbeats
FOR VALUES FROM ('2025-06-01') TO ('2025-07-01');
CREATE INDEX idx_heartbeat_license_time ON license_heartbeats(license_id, timestamp DESC);
CREATE INDEX idx_heartbeat_hardware_time ON license_heartbeats(hardware_id, timestamp DESC);
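Range partitions like the ones above must exist before rows for that month arrive, so new partitions have to be created ahead of time. A tiny helper (hypothetical, not shipped with the schema) can emit DDL in the same shape for any month:

```python
from datetime import date

def partition_ddl(year: int, month: int) -> str:
    """Generate the monthly partition DDL for license_heartbeats."""
    start = date(year, month, 1)
    # first day of the following month is the exclusive upper bound
    end = date(year + 1, 1, 1) if month == 12 else date(year, month + 1, 1)
    return (
        f"CREATE TABLE IF NOT EXISTS license_heartbeats_{start:%Y_%m} "
        f"PARTITION OF license_heartbeats\n"
        f"    FOR VALUES FROM ('{start}') TO ('{end}');"
    )

print(partition_ddl(2025, 6))
```

Running this from a monthly cron job (or a pg_partman setup) avoids insert failures when a heartbeat lands in a month with no partition.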
-- Activation events tracking
CREATE TABLE IF NOT EXISTS activation_events (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
event_type VARCHAR(50) NOT NULL CHECK (event_type IN ('activation', 'deactivation', 'reactivation', 'transfer')),
hardware_id VARCHAR(255),
previous_hardware_id VARCHAR(255),
ip_address INET,
user_agent VARCHAR(500),
success BOOLEAN DEFAULT true,
error_message TEXT,
metadata JSONB,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE INDEX idx_license_events ON activation_events(license_id, created_at DESC);
CREATE INDEX idx_event_type ON activation_events(event_type, created_at DESC);
-- API rate limiting
CREATE TABLE IF NOT EXISTS api_rate_limits (
id SERIAL PRIMARY KEY,
api_key VARCHAR(255) NOT NULL UNIQUE,
requests_per_minute INTEGER DEFAULT 60,
requests_per_hour INTEGER DEFAULT 1000,
requests_per_day INTEGER DEFAULT 10000,
burst_size INTEGER DEFAULT 100,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Anomaly detection
CREATE TABLE IF NOT EXISTS anomaly_detections (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id INTEGER REFERENCES licenses(id),
anomaly_type VARCHAR(100) NOT NULL CHECK (anomaly_type IN ('multiple_ips', 'rapid_hardware_change', 'suspicious_pattern', 'concurrent_use', 'geo_anomaly')),
severity VARCHAR(20) NOT NULL CHECK (severity IN ('low', 'medium', 'high', 'critical')),
details JSONB NOT NULL,
detected_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
resolved BOOLEAN DEFAULT false,
resolved_at TIMESTAMP,
resolved_by VARCHAR(255),
action_taken TEXT
);
CREATE INDEX IF NOT EXISTS idx_unresolved ON anomaly_detections(resolved, severity, detected_at DESC);
CREATE INDEX IF NOT EXISTS idx_license_anomalies ON anomaly_detections(license_id, detected_at DESC);
-- API clients for authentication
CREATE TABLE IF NOT EXISTS api_clients (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
client_name VARCHAR(255) NOT NULL,
api_key VARCHAR(255) NOT NULL UNIQUE,
secret_key VARCHAR(255) NOT NULL,
is_active BOOLEAN DEFAULT true,
allowed_endpoints TEXT[],
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Feature flags for gradual rollout
CREATE TABLE IF NOT EXISTS feature_flags (
id SERIAL PRIMARY KEY,
feature_name VARCHAR(100) NOT NULL UNIQUE,
is_enabled BOOLEAN DEFAULT false,
rollout_percentage INTEGER DEFAULT 0 CHECK (rollout_percentage >= 0 AND rollout_percentage <= 100),
whitelist_license_ids INTEGER[],
blacklist_license_ids INTEGER[],
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Insert default feature flags
INSERT INTO feature_flags (feature_name, is_enabled, rollout_percentage) VALUES
('anomaly_detection', true, 100),
('offline_tokens', true, 100),
('advanced_analytics', false, 0),
('geo_restriction', false, 0)
ON CONFLICT (feature_name) DO NOTHING;
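-- Example (illustrative, not executed): one possible way to evaluate a flag
-- for a given license. Whitelist entries always qualify; otherwise a license
-- must not be blacklisted and must fall under the rollout percentage via a
-- deterministic modulo on its id. The application may implement a different
-- precedence; license id 42 is a placeholder.
-- SELECT f.is_enabled
--        AND (l.id = ANY(COALESCE(f.whitelist_license_ids, '{}'))
--             OR (NOT l.id = ANY(COALESCE(f.blacklist_license_ids, '{}'))
--                 AND (l.id % 100) < f.rollout_percentage)) AS flag_active
--   FROM feature_flags f, licenses l
--  WHERE f.feature_name = 'advanced_analytics'
--    AND l.id = 42;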
-- Session management for concurrent use tracking
CREATE TABLE IF NOT EXISTS active_sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id VARCHAR(255) NOT NULL,
session_token VARCHAR(512) NOT NULL UNIQUE,
ip_address INET,
started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_seen TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
expires_at TIMESTAMP NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_session_license ON active_sessions(license_id);
CREATE INDEX IF NOT EXISTS idx_session_expires ON active_sessions(expires_at);
-- Update trigger for updated_at columns
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = CURRENT_TIMESTAMP;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS update_api_rate_limits_updated_at ON api_rate_limits;
CREATE TRIGGER update_api_rate_limits_updated_at BEFORE UPDATE ON api_rate_limits
    FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
DROP TRIGGER IF EXISTS update_api_clients_updated_at ON api_clients;
CREATE TRIGGER update_api_clients_updated_at BEFORE UPDATE ON api_clients
    FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
DROP TRIGGER IF EXISTS update_feature_flags_updated_at ON feature_flags;
CREATE TRIGGER update_feature_flags_updated_at BEFORE UPDATE ON feature_flags
    FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
-- Function to automatically create monthly partitions for heartbeats
CREATE OR REPLACE FUNCTION create_monthly_partition()
RETURNS void AS $$
DECLARE
start_date date;
end_date date;
partition_name text;
BEGIN
start_date := date_trunc('month', CURRENT_DATE + interval '1 month');
end_date := start_date + interval '1 month';
partition_name := 'license_heartbeats_' || to_char(start_date, 'YYYY_MM');
EXECUTE format('CREATE TABLE IF NOT EXISTS %I PARTITION OF license_heartbeats FOR VALUES FROM (%L) TO (%L)',
partition_name, start_date, end_date);
END;
$$ LANGUAGE plpgsql;
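-- create_monthly_partition() only creates next month's partition, so it must
-- be invoked periodically. A sketch, assuming the pg_cron extension is
-- installed (any external scheduler works just as well):
-- SELECT cron.schedule('create-heartbeat-partition', '0 0 25 * *',
--                      'SELECT create_monthly_partition()');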
-- Migration: Add max_devices column to licenses if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'max_devices') THEN
ALTER TABLE licenses ADD COLUMN max_devices INTEGER DEFAULT 3 CHECK (max_devices >= 1);
END IF;
END $$;
-- Migration: Add expires_at column to licenses if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'expires_at') THEN
ALTER TABLE licenses ADD COLUMN expires_at TIMESTAMP;
-- Set expires_at based on valid_until for existing licenses
UPDATE licenses SET expires_at = valid_until::timestamp WHERE expires_at IS NULL;
END IF;
END $$;
-- Migration: Add features column to licenses if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'features') THEN
ALTER TABLE licenses ADD COLUMN features TEXT[] DEFAULT '{}';
END IF;
END $$;
-- Migration: Add updated_at column to licenses if it doesn't exist
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'licenses' AND column_name = 'updated_at') THEN
ALTER TABLE licenses ADD COLUMN updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP;
CREATE TRIGGER update_licenses_updated_at BEFORE UPDATE ON licenses
FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();
END IF;
END $$;
-- Migration: Add device_type column to device_registrations table
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM information_schema.columns
WHERE table_name = 'device_registrations' AND column_name = 'device_type') THEN
ALTER TABLE device_registrations ADD COLUMN device_type VARCHAR(50) DEFAULT 'unknown';
-- Update existing records to have a device_type based on operating system
UPDATE device_registrations
SET device_type = CASE
WHEN operating_system ILIKE '%windows%' THEN 'desktop'
WHEN operating_system ILIKE '%mac%' THEN 'desktop'
WHEN operating_system ILIKE '%linux%' THEN 'desktop'
WHEN operating_system ILIKE '%android%' THEN 'mobile'
WHEN operating_system ILIKE '%ios%' THEN 'mobile'
ELSE 'unknown'
END
WHERE device_type IS NULL OR device_type = 'unknown';
END IF;
END $$;
-- Client configuration table for Account Forger
CREATE TABLE IF NOT EXISTS client_configs (
id SERIAL PRIMARY KEY,
client_name VARCHAR(100) NOT NULL DEFAULT 'Account Forger',
api_key VARCHAR(255) NOT NULL,
heartbeat_interval INTEGER DEFAULT 30, -- seconds
session_timeout INTEGER DEFAULT 60, -- seconds (2x heartbeat)
current_version VARCHAR(20) NOT NULL,
minimum_version VARCHAR(20) NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- License sessions for single-session enforcement
CREATE TABLE IF NOT EXISTS license_sessions (
id SERIAL PRIMARY KEY,
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id VARCHAR(255) NOT NULL,
ip_address INET,
client_version VARCHAR(20),
session_token VARCHAR(255) UNIQUE NOT NULL,
started_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
last_heartbeat TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
UNIQUE(license_id) -- Only one active session per license
);
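-- Example (illustrative, not executed): because of UNIQUE(license_id), a new
-- login can atomically replace a stale session with an upsert instead of a
-- delete-then-insert race. All values below are placeholders.
-- INSERT INTO license_sessions (license_id, hardware_id, ip_address,
--                               client_version, session_token)
-- VALUES (42, 'HW-ABC', '203.0.113.10', '1.0.0', 'new-session-token')
-- ON CONFLICT (license_id) DO UPDATE
--    SET hardware_id    = EXCLUDED.hardware_id,
--        ip_address     = EXCLUDED.ip_address,
--        client_version = EXCLUDED.client_version,
--        session_token  = EXCLUDED.session_token,
--        started_at     = CURRENT_TIMESTAMP,
--        last_heartbeat = CURRENT_TIMESTAMP;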
-- Session history for debugging
CREATE TABLE IF NOT EXISTS session_history (
id SERIAL PRIMARY KEY,
license_id INTEGER REFERENCES licenses(id) ON DELETE CASCADE,
hardware_id VARCHAR(255) NOT NULL,
ip_address INET,
client_version VARCHAR(20),
started_at TIMESTAMP,
ended_at TIMESTAMP,
end_reason VARCHAR(50) -- 'normal', 'timeout', 'forced', 'replaced'
);
-- Create indexes for performance
CREATE INDEX IF NOT EXISTS idx_license_sessions_license_id ON license_sessions(license_id);
CREATE INDEX IF NOT EXISTS idx_license_sessions_last_heartbeat ON license_sessions(last_heartbeat);
CREATE INDEX IF NOT EXISTS idx_session_history_license_id ON session_history(license_id);
CREATE INDEX IF NOT EXISTS idx_session_history_ended_at ON session_history(ended_at);
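-- Example (illustrative, not executed): expiring sessions whose last heartbeat
-- is older than session_timeout (default 60 seconds in client_configs) and
-- archiving them to session_history in a single statement:
-- WITH expired AS (
--     DELETE FROM license_sessions
--     WHERE last_heartbeat < CURRENT_TIMESTAMP - INTERVAL '60 seconds'
--     RETURNING license_id, hardware_id, ip_address, client_version, started_at
-- )
-- INSERT INTO session_history (license_id, hardware_id, ip_address,
--                              client_version, started_at, ended_at, end_reason)
-- SELECT license_id, hardware_id, ip_address, client_version, started_at,
--        CURRENT_TIMESTAMP, 'timeout'
--   FROM expired;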
-- Insert default client configuration if not exists
INSERT INTO client_configs (client_name, api_key, current_version, minimum_version)
VALUES ('Account Forger', 'AF-' || gen_random_uuid()::text, '1.0.0', '1.0.0')
ON CONFLICT DO NOTHING;
-- ===================== SYSTEM API KEY TABLE =====================
-- Single API key for system-wide authentication
CREATE TABLE IF NOT EXISTS system_api_key (
id INTEGER PRIMARY KEY DEFAULT 1 CHECK (id = 1), -- Ensures single row
api_key VARCHAR(255) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
regenerated_at TIMESTAMP WITH TIME ZONE,
last_used_at TIMESTAMP WITH TIME ZONE,
usage_count INTEGER DEFAULT 0,
created_by VARCHAR(50),
regenerated_by VARCHAR(50)
);
-- Function to generate API key with AF-YYYY- prefix
CREATE OR REPLACE FUNCTION generate_api_key() RETURNS VARCHAR AS $$
DECLARE
year_part VARCHAR(4);
random_part VARCHAR(32);
BEGIN
year_part := to_char(CURRENT_DATE, 'YYYY');
random_part := upper(substring(md5(random()::text || clock_timestamp()::text) from 1 for 32));
RETURN 'AF-' || year_part || '-' || random_part;
END;
$$ LANGUAGE plpgsql;
-- Initialize with a default API key if none exists
INSERT INTO system_api_key (api_key, created_by)
SELECT generate_api_key(), 'system'
WHERE NOT EXISTS (SELECT 1 FROM system_api_key);
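-- Example (illustrative, not executed): retrieving or rotating the single
-- system key. Rotation invalidates the old key immediately; 'admin' is a
-- placeholder for the acting user.
-- SELECT api_key FROM system_api_key WHERE id = 1;
-- UPDATE system_api_key
--    SET api_key        = generate_api_key(),
--        regenerated_at = CURRENT_TIMESTAMP,
--        regenerated_by = 'admin'
--  WHERE id = 1;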
-- Audit trigger for API key changes
CREATE OR REPLACE FUNCTION audit_api_key_changes() RETURNS TRIGGER AS $$
BEGIN
IF TG_OP = 'UPDATE' AND OLD.api_key != NEW.api_key THEN
INSERT INTO audit_log (
timestamp,
username,
action,
entity_type,
entity_id,
old_values,
new_values,
additional_info
) VALUES (
CURRENT_TIMESTAMP,
COALESCE(NEW.regenerated_by, 'system'),
'api_key_regenerated',
'system_api_key',
NEW.id,
jsonb_build_object('api_key', LEFT(OLD.api_key, 8) || '...'),
jsonb_build_object('api_key', LEFT(NEW.api_key, 8) || '...'),
'API Key regenerated'
);
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS audit_system_api_key_changes ON system_api_key;
CREATE TRIGGER audit_system_api_key_changes
    AFTER UPDATE ON system_api_key
    FOR EACH ROW EXECUTE FUNCTION audit_api_key_changes();
