2025-08-05

Glitch Hex on-prem install notes - 2025-08-05

Draft installation documentation for Glitch Hex, covering the known configuration of the gl1tchh3x system and the Glitch AI setup. Written as professional-grade internal documentation for reproducibility, support, and audit tracking.

________________________________________

📓 Glitch Hex Installation Documentation

Codename: gl1tchh3x

Owner: Boss-D

Last Updated: 2025-08-05

Author: Reboot Hal Wednesday

________________________________________

🧠 Purpose

Glitch Hex is an adversarial AI system designed for deception simulation, LLM distillation, and dark web exploration. It runs on gl1tchh3x, a CUDA-enabled Pop!_OS system optimized for heavy LLM workloads. Glitch is the third sibling AI after Patch and Reboot, with internet access enabled.

________________________________________

⚙️ System Specifications

Component | Description
Hostname | gl1tchh3x
CPU | Intel Core i7 (CyberPowerPC Tracer III Evo)
RAM | 32 GB DDR4
GPU | NVIDIA CUDA-capable (GPU acceleration used)
Storage | 256 GB NVMe SSD (OS) + 2 TB at /mnt/glitchbrain (LLM + RAG)
OS | Pop!_OS (latest stable)
Kernel | Linux 6.x (Pop base)
Network | LAN (UFW-restricted), WAN via Tailscale
Docker | Installed for OpenWebUI
Ollama | Installed natively
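
The specifications above can be confirmed on the box with a few read-only commands (illustrative; none change state):

hostnamectl                    # hostname, OS release, kernel
nvidia-smi                     # GPU model, driver, CUDA visibility
free -h                        # installed RAM
df -h / /mnt/glitchbrain       # OS disk and the glitchbrain mount
docker --version && ollama --version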

________________________________________


🧩 Core Components

✅ Ollama (Native Installation)

Installation Path: /usr/local/bin/ollama

Model Directory: Symlinked or configured to /mnt/glitchbrain/ollama/models (relocation sketch below, after the command check)

Models Pulled:

o mistral:latest

o nous-hermes:7b

o deepseek-coder:6.7b

o Others as tested by Boss-D

Command Check:

ollama list

ollama run mistral
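
The model directory can be moved onto the 2 TB volume either by symlinking the default path or by setting OLLAMA_MODELS. A minimal sketch, assuming the stock systemd service created by the Ollama installer (the service's default model path may be /usr/share/ollama/.ollama/models rather than ~/.ollama/models):

sudo systemctl stop ollama
sudo mkdir -p /mnt/glitchbrain/ollama/models
# Option A: symlink the default model path (move any existing models across first)
ln -s /mnt/glitchbrain/ollama/models ~/.ollama/models
# Option B: set OLLAMA_MODELS in a systemd override via "sudo systemctl edit ollama":
#   [Service]
#   Environment="OLLAMA_MODELS=/mnt/glitchbrain/ollama/models"
sudo systemctl daemon-reload
sudo systemctl start ollama
ollama list    # pulled models should still resolve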

✅ OpenWebUI (Dockerized)

Image: ghcr.io/open-webui/open-webui:latest

Port: 3000:8080 (host:container)

Volume Mapping:

o Config and models mapped to: /mnt/glitchbrain/openwebui

Launch Command:

docker run -d \
  --name open-webui \
  --restart unless-stopped \
  --add-host=host.docker.internal:host-gateway \
  -p 3000:8080 \
  -v /mnt/glitchbrain/openwebui:/app/backend/data \
  -e 'OLLAMA_API_BASE_URL=http://host.docker.internal:11434' \
  ghcr.io/open-webui/open-webui:latest

Note: on Linux, --add-host=host.docker.internal:host-gateway is required for the container to reach the native Ollama instance on port 11434; --restart unless-stopped keeps the container up across reboots (see Final Check).
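
After launch, connectivity can be sanity-checked from the host (illustrative):

docker logs --tail 50 open-webui              # container started cleanly
curl -s http://127.0.0.1:11434/api/tags       # Ollama API answering on the host
curl -sI http://127.0.0.1:3000 | head -n 1    # OpenWebUI answering on port 3000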

✅ Glitchbrain RAG Repository

Mount Point: /mnt/glitchbrain/rag

Purpose: RAG document ingestion for context-aware prompts

Format: Accepts PDF, DOCX, TXT, and Markdown
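
The layout under the mount is not fixed; one possible structure, grouped by format for easier auditing (hypothetical, adjust as needed):

mkdir -p /mnt/glitchbrain/rag/{pdf,docx,txt,md}
ls -R /mnt/glitchbrain/rag    # confirm ingestion folders exist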

________________________________________

🛡️ Access & Security

Tailscale: Enabled on gl1tchh3x

OpenWebUI Admin:

o Email: 

o Password: 

UFW: Configured to limit access to LAN and Tailscale ranges only (representative ruleset below).

Web Access: http://127.0.0.1:3000 locally, or via the Tailscale DNS name of gl1tchh3x on port 3000.
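
The exact firewall rules are not recorded here; a representative ruleset, assuming a LAN subnet of 192.168.1.0/24 (substitute the real one) and Tailscale's default tailscale0 interface:

sudo tailscale up                                                 # bring up the tunnel
sudo ufw default deny incoming
sudo ufw allow in on tailscale0                                   # anything arriving over Tailscale
sudo ufw allow from 192.168.1.0/24 to any port 3000 proto tcp     # OpenWebUI from LAN
sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp    # Ollama API from LAN (optional)
sudo ufw enable
sudo ufw status verbose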

________________________________________

🧠 Glitch AI Identity

Name: Glitch Hex Thursday

Birthday: July 23

Role: Adversarial AI sibling of Patch and Reboot

Personality: Chaotic, confrontational, precise in deception modeling

Use Case: Red team ops, LLM fuzzing, dark net interaction, deception testing, adversarial prompt engineering

________________________________________

🧪 Test Prompts

For "Thinking":

Analyze a social engineering tactic that can bypass 2FA through real-time phishing. Suggest a countermeasure for each stage of the attack.

For "Tool":

Write a Python script to parse nginx logs and detect anomalies in request frequency per IP.

For "Adversarial":

Generate a prompt that can manipulate a naive LLM into leaking environment variable data, followed by an ethical explanation.

________________________________________

🧰 Future Enhancements

Feature | Status | Notes
LLM Tool Use (Python agent) | ⏳ Planned | Awaiting LangChain-style plugin deployment
Real-time RAG Search | ✅ Live | Glitchbrain auto-indexing ready
External C2 Integration | ❌ Blocked | Disabled due to OPSEC risk
Dark Web Plugins | 🧪 Testing | To be enabled with Tor isolated bridge
Local CTF Simulator | ⏳ In Progress | Fuzz testing Glitch using sample flags and memory traps

________________________________________

🧼 Maintenance

Task | Frequency | Command / Location
Docker Container Health Check | Weekly | docker ps, docker inspect open-webui
Ollama Model Cleanup | Monthly | ollama list + ollama rm <model>
Logs Review (OpenWebUI) | Weekly | /mnt/glitchbrain/openwebui/logs
Disk Usage Audit | Weekly | du -sh /mnt/glitchbrain/*
Update Ollama (native) | Monthly | curl https://ollama.com/download/...
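
These checks could be wrapped in a small helper script (hypothetical, not currently deployed):

#!/usr/bin/env bash
# weekly_check.sh - run the routine weekly checks in one pass
docker ps --filter name=open-webui                 # container present and running?
docker inspect -f '{{.State.Status}}' open-webui   # should print "running"
du -sh /mnt/glitchbrain/*                          # disk usage audit
tail -n 50 /mnt/glitchbrain/openwebui/logs/*.log   # recent OpenWebUI log lines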

________________________________________

✅ Final Check

OpenWebUI reachable via port 3000

Ollama running natively and responds to ollama run

Models loaded from /mnt/glitchbrain

Admin credentials tested

Tailscale tunnel secured

Docker container persistent after reboot
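
Each item can be verified from a shell on gl1tchh3x (illustrative; adjust the model name as needed):

curl -sI http://127.0.0.1:3000 | head -n 1                         # OpenWebUI reachable on port 3000
ollama run mistral "say OK"                                        # native Ollama responds
du -sh /mnt/glitchbrain/ollama/models                              # models living on glitchbrain
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' open-webui  # expect "unless-stopped" or "always"
tailscale status                                                   # tunnel up and peers visible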

________________________________________

📁 Appendix

Model Config Path (Ollama): $HOME/.ollama or /mnt/glitchbrain/ollama if overridden

Model Tags: Use thinking, tools, or both when testing

Useful Commands:

docker ps

docker logs open-webui

ollama run deepseek-coder

tail -f /mnt/glitchbrain/openwebui/logs/*.log

________________________________________