Infrastructure-as-Code Platform: Exhaustive Technical Omnibus

Document Type: Long-form technical reference derived from exhaustive code analysis
Generated: 2025-12-27
Source: 8 parallel exploratory agents analyzing 150+ Python files


Table of Contents

  1. Executive Summary
  2. Platform Capability Inventory
  3. Core Architecture Foundation
  4. Kubernetes Resource Abstraction Layer
  5. Component Architecture
  6. Networking Architecture
  7. Virtual Machine Orchestration
  8. Distributed Storage
  9. Stack Configuration Patterns
  10. Metadata and Transformations
  11. Developer Use Scenarios
  12. Project Outcomes and Implications
  13. Complete File Inventory

1. Executive Summary

This document presents an exhaustive analysis of a production-grade Pulumi Python infrastructure platform designed for Kubernetes-first deployments. The platform implements sophisticated patterns for virtual machine orchestration, distributed storage, advanced networking, and developer portal provisioning across bare-metal and cloud environments.

Core Philosophy:

  • Zero hardcoding, 100% dynamic discovery
  • Fat Models, Thin Controllers pattern throughout
  • Fail-fast validation with Pydantic
  • Convention over configuration
  • Explicit provider management (no ambient credentials)

Key Statistics:

  • 150+ Python files
  • 100+ Pydantic models
  • 32 Kubernetes resource types
  • 24+ Kubernetes components
  • 25+ stack configurations
  • 8 VM profiles
  • 6 network attachment definition types

2. Platform Capability Inventory

2.1 Cloud Provider Modules

| Module | Status | Capabilities |
| --- | --- | --- |
| aws | Stub (scaffolded) | VPC, EKS, S3, RDS, GuardDuty, Security Hub, CloudTrail |
| k8s | Production | Full Kubernetes orchestration (see sections below) |

AWS Module Capabilities (Defined in Schema)

AWS Module (src/aws/)
├── Networking
│   ├── VPC with configurable CIDR (default: 10.0.0.0/16)
│   └── Multi-AZ support (1-6 availability zones)
├── Compute
│   ├── EKS cluster with version pinning
│   └── Configurable node instance types
├── Storage
│   ├── S3 with versioning support
│   └── S3 server-side encryption
├── Database
│   ├── RDS with engine version configuration
│   └── Instance class selection
└── Security
    ├── GuardDuty threat detection
    ├── Security Hub
    └── CloudTrail audit logging

2.2 Kubernetes Components Inventory

| Component | Category | Key Features |
| --- | --- | --- |
| cilium | CNI/Networking | Gateway API, L2 announcements, Hubble UI, eBPF |
| cert_manager | Security | Self-signed CA, ACME/Let's Encrypt, DNS-01 solvers |
| multus | Networking | Pod multi-homing, thick plugin, Talos patches |
| cluster_network_addons_operator | Networking | macvtap, OVS, linux-bridge, kube-mac-pool |
| envoy_gateway | Edge Gateway | Multi-protocol (HTTP/HTTPS/TCP/UDP), Gateway API |
| external_dns | DNS | Cloudflare, Route53, automated DNS records |
| authelia | Authentication | OIDC, 2FA, session management |
| dex | Identity | OIDC identity broker, LDAP/SAML connectors |
| rook_ceph_operator | Storage | CSI drivers, device discovery |
| rook_ceph_cluster | Storage | MON/MGR/OSD, RBD, CephFS, RGW |
| external_snapshotter | Storage | VolumeSnapshot CRDs, controller HA |
| hostpath_provisioner_operator | Storage | Local path provisioning |
| kubevirt | Virtualization | VM orchestration, live migration, snapshots |
| containerized_data_importer | Virtualization | Disk import (registry, HTTP, S3) |
| kubevirt_manager | Virtualization | Web UI for VM management |
| virtual_machine | Virtualization | VM profiles, cloud-init, performance tuning |
| cloudnative_pg | Database | PostgreSQL operator, HA clusters |
| postgres_database | Database | Individual database provisioning |
| redis_operator | Database | Redis cluster/sentinel |
| prometheus | Observability | Monitoring stack, Grafana |
| zot_registry | Registry | OCI v1.1.1 container registry |
| backstage | Developer Portal | Service catalog, TechDocs |
| forgejo_server | Git | Self-hosted Git forge |
| forgejo_runner | CI/CD | Actions runner for Forgejo |
| namespace | Infrastructure | Namespace with quotas, policies |

2.3 Kubernetes Resource Types Inventory

| Resource | Location | Purpose |
| --- | --- | --- |
| namespace | resources/namespace/ | Namespace with finalizers, quotas |
| deployment | resources/deployment/ | Workload deployment |
| service | resources/service/ | Service exposure |
| config_map | resources/config_map/ | Configuration data |
| secret | resources/secret/ | Sensitive data (TLS, docker-config, opaque) |
| service_account | resources/service_account/ | Pod identity |
| role | resources/role/ | Namespaced RBAC |
| role_binding | resources/role_binding/ | Role attachment |
| cluster_role | resources/cluster_role/ | Cluster-wide RBAC |
| cluster_role_binding | resources/cluster_role_binding/ | Cluster role attachment |
| resource_quota | resources/resource_quota/ | Resource limits |
| limit_range | resources/limit_range/ | Default limits |
| network_policy | resources/network_policy/ | Pod network isolation |
| storage_class | resources/storage_class/ | Dynamic provisioning |
| persistent_volume_claim | resources/persistent_volume_claim/ | Storage claims |
| volume_snapshot_class | resources/volume_snapshot_class/ | Snapshot configuration |
| helm_chart | resources/helm_chart/ | Helm v4 chart deployment |
| config_file | resources/config_file/ | YAML manifest deployment |
| custom_resource | resources/custom_resource/ | Generic CRD instances |
| daemon_set | resources/daemon_set/ | Node-level workloads |
| cilium_httproute | resources/cilium_httproute/ | Gateway API HTTPRoute |
| cilium_ingress | resources/cilium_ingress/ | Kubernetes Ingress |
| envoy_httproute | resources/envoy_httproute/ | Envoy Gateway HTTPRoute |
| envoy_tcproute | resources/envoy_tcproute/ | Envoy Gateway TCPRoute |
| envoy_reference_grant | resources/envoy_reference_grant/ | Cross-namespace references |
| network_attachment_definition | resources/network_attachment_definition/ | Multus NADs |
| golden_image | resources/golden_image/ | VM golden images |
| kubevirt_image | resources/kubevirt_image/ | KubeVirt disk images |
| postgresdb | resources/postgresdb/ | CloudNativePG clusters |
| redis_cluster | resources/redis_cluster/ | Redis clusters |
| redis_sentinel | resources/redis_sentinel/ | Redis sentinels |
| validating_webhook_configuration | resources/validating_webhook_configuration/ | Admission webhooks |
| mutating_webhook_configuration | resources/mutating_webhook_configuration/ | Mutating webhooks |

2.4 Network Attachment Definition Types

| Type | CNI Plugin | Use Case | Key Config |
| --- | --- | --- | --- |
| bridge | bridge | Standard L2 | bridge, vlan, mtu |
| macvtap | macvtap | High-performance VMs | master, mode, device plugin |
| linux_bridge | cnv-bridge | CNAO-managed | bridge, vlan |
| ipvlan | ipvlan | Low-overhead L2/L3 | master, mode (l2/l3/l3s) |
| ovs | ovs | Software-defined | bridge, vlan |
| sriov | sriov | Direct hardware | device_id, vlan |

2.5 IPAM Types

| Type | Purpose |
| --- | --- |
| dhcp | External DHCP server |
| host-local | Node-local allocation |
| static | Manual IP assignment |
| whereabouts | Cluster-wide unique IPs |

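For illustration, the CNI ipam stanza a whereabouts-backed NAD renders to looks roughly like the following (a sketch; the range and exclusion values are hypothetical, and the platform's IPAMConfig model is assumed to emit an equivalent dict):

ipam_whereabouts = {
    "type": "whereabouts",
    "range": "192.168.100.0/24",       # cluster-wide unique allocation range
    "exclude": ["192.168.100.1/32"],   # hypothetical: keep the gateway out of the pool
}
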
2.6 VM Profiles

| Profile | OS | Type | Resources |
| --- | --- | --- | --- |
| minimal_vm_profile | Generic | Testing | 2c/4Gi/20Gi |
| high_performance_vm_profile | Generic | Production | CPU pinning, hugepages |
| debian_server_profile | Debian 13 | Server | 4c/8Gi/64Gi |
| debian_developer_profile | Debian 13 | Desktop | 8c/16Gi/128Gi, GNOME+XRDP |
| kali_server_profile | Kali | Headless | 4c/8Gi/64Gi |
| kali_developer_profile | Kali | Desktop | 8c/16Gi/256Gi, XFCE+XRDP |
| ubuntu_server_profile | Ubuntu | Server | Configurable |
| ubuntu_developer_profile | Ubuntu | Desktop | Configurable |

3. Core Architecture Foundation

3.1 Entry Point and Module System

File: src/__main__.py

The platform entry point implements a fully dynamic, configuration-driven architecture:

def main() -> None:
    """Main entry point - completely dynamic, configuration-driven."""
    # 1. Initialize metadata singleton with validated config
    metadata = load_and_validate_config()  # GlobalMetadata singleton

    # 2. Schema generation in developer mode
    if metadata.config.developer_mode:
        generate_stack_config_schema(project_root, metadata.discovered_modules)

    # 3. Initialize provider registry
    registry = ProviderRegistry()

    # 4. Load configuration (returns ONLY enabled modules as Pydantic models)
    config = load_configuration()
    enabled_module_configs = config.get("enabled_modules", {})

    # 5. Strict validation - must be dict of Pydantic models
    for module_name, module_config in enabled_module_configs.items():
        if not isinstance(module_config, BaseModel):
            raise RuntimeError(f"Module '{module_name}' config is not a Pydantic model")
        if not hasattr(module_config, "enabled"):
            raise RuntimeError(f"Module '{module_name}' config missing 'enabled' field")

    # 6. Deploy modules in execution order
    execution_order = config.get("module_execution_order", [])
    prioritized = set(execution_order)
    all_modules = [m for m in execution_order if m in enabled_module_configs]
    all_modules.extend([m for m in enabled_module_configs if m not in prioritized])

    deployed_modules = {}
    for module_name in all_modules:
        module = registry.load_and_validate_module(module_name, enabled_module_configs[module_name])
        if module:
            module.initialize()
            module.deploy()
            deployed_modules[module_name] = module

    # 7. Export stack outputs
    export_stack_outputs(deployed_modules)

3.2 BaseModule Contract

File: src/core/base_module.py

Every infrastructure module extends BaseModule:

class BaseModule(ABC):
    """Abstract base class that all provider modules must extend."""

    def __init__(
        self,
        config: BaseModel,           # Pre-validated Pydantic model
        registry: "ProviderRegistry",  # Cross-module provider sharing
        module_name: str,            # Module identifier
    ) -> None:
        self.config = config
        self.registry = registry
        self.module_name = module_name
        self.metadata = GlobalMetadata()  # Singleton access
        self.deployed = False

    @abstractmethod
    def initialize(self) -> None:
        """Initialize provider and register if needed."""
        ...

    @abstractmethod
    def deploy(self) -> None:
        """Deploy infrastructure for this provider."""
        ...

    def get_config(self, field_name: str, default: Any = None) -> Any:
        """Get configuration field with deterministic access (SSOT)."""
        try:
            return getattr(self.config, field_name)
        except AttributeError:
            return default

    def get_outputs(self) -> dict[str, object]:
        """Get standardized module outputs."""
        return {
            "module_name": self.module_name,
            "deployed": self.deployed,
            "exports": ModuleExports(),
        }
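
A minimal concrete module sketched against this contract (the DemoModule name, its provider id, and the replicas field are hypothetical):

class DemoModule(BaseModule):
    """Hypothetical module illustrating the BaseModule contract."""

    def initialize(self) -> None:
        # Share a provider with other modules via the registry.
        self.registry.register_provider("demo-provider", object(), platform="demo")

    def deploy(self) -> None:
        replicas = self.get_config("replicas", default=1)
        # ... create `replicas` worth of resources here (elided) ...
        self.deployed = True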

3.3 Provider Registry Pattern

File: src/core/registry.py

Thread-safe registry enabling cross-module provider sharing:

class ProviderRegistry:
    """Provider registry for cross-module provider sharing."""

    def __init__(self) -> None:
        self._providers: dict[str, ProviderInfo] = {}
        self._lock = Lock()  # Thread-safe operations

    def register_provider(
        self,
        provider_id: str,
        provider: object,
        platform: str,
        metadata: ProviderMetadata | None = None,
    ) -> None:
        """Register a provider for cross-module use."""
        with self._lock:
            self._providers[provider_id] = ProviderInfo(
                provider=provider, platform=platform, metadata=metadata
            )

    def load_and_validate_module(
        self,
        module_name: str,
        module_configuration: BaseModel,
    ) -> Optional["BaseModule"]:
        """Load module with pre-validated configuration."""
        # DETERMINISTIC: Module class name MUST be PascalCase(module_name) + "Module"
        class_name = f"{self._to_pascal_case(module_name)}Module"
        imported_module = importlib.import_module(module_name)
        module_class = getattr(imported_module, class_name, None)

        if module_class:
            return module_class(module_configuration, self, module_name)
        return None

    def _to_pascal_case(self, snake_str: str) -> str:
        """Convert snake_case to PascalCase with special handling."""
        if snake_str == "k8s":
            return "K8S"
        return "".join(x.capitalize() for x in snake_str.split("_"))

3.4 GlobalMetadata Singleton

File: src/core/metadata.py

Single source of truth for all metadata:

class GlobalMetadata:
    """Singleton source of truth for all metadata."""

    _instance: Optional["GlobalMetadata"] = None
    _config: InputSchema | None = None
    _deployment_id: str
    _deployed_at: datetime
    _git_metadata: GitMetadata
    _discovered_modules: list[str]
    _project_root: Path

    def __new__(cls, config_data: StackConfig | None = None,
                discovered_modules: list[str] | None = None):
        """Ensure only one instance exists."""
        if cls._instance is None:
            if config_data is None:
                raise RuntimeError("GlobalMetadata not initialized.")
            if discovered_modules is None:
                raise RuntimeError("GlobalMetadata requires discovered_modules.")

            instance = super().__new__(cls)

            # Validate and freeze configuration
            instance._config = InputSchema.model_validate(config_data)
            instance._discovered_modules = discovered_modules
            instance._git_metadata = GitMetadata.from_repo()
            instance._deployment_id = ...  # deterministic hash (computation elided)
            instance._deployed_at = datetime.now()

            cls._instance = instance

        return cls._instance

    @property
    def discovered_modules(self) -> list[str]:
        """Immutable list of discovered modules."""
        return self._discovered_modules.copy()  # Defensive copy
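
Initialization happens exactly once; every later call returns the same frozen instance (a sketch, assuming stack_config is a valid StackConfig):

meta = GlobalMetadata(config_data=stack_config, discovered_modules=["aws", "k8s"])
assert GlobalMetadata() is meta                # subsequent calls reuse the singleton
mods = meta.discovered_modules
mods.append("bogus")                           # mutating the defensive copy...
assert "bogus" not in meta.discovered_modules  # ...never leaks back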

3.5 Pydantic Configuration Schema

File: src/core/metadata.py

Strict configuration validation:

class DynamicModuleConfig(BaseModel):
    """Base configuration for any dynamically discovered module."""
    enabled: bool = Field(description="Enable this module")

    model_config = ConfigDict(
        extra="forbid",          # STRICT: No undefined fields allowed
        populate_by_name=True,
        frozen=True,             # Immutable after creation
        validate_assignment=True,
    )

class InputSchema(BaseModel):
    """Root input schema - THE source of truth."""
    environment: Environment
    debug: bool = Field(default=False)
    developer_mode: bool = Field(default=False)
    compliance: ComplianceConfig
    modules: ModulesConfig

    model_config = ConfigDict(
        extra="forbid",
        alias_generator=to_camel,  # snake_case ↔ camelCase
        populate_by_name=True,
        validate_assignment=True,
        frozen=True,
    )
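
Both settings fail fast; a short sketch of the behavior they guarantee:

from pydantic import ValidationError

try:
    DynamicModuleConfig(enabled=True, unknown_field=1)
except ValidationError:
    pass  # extra="forbid" rejects undefined fields before any resource is created

cfg = DynamicModuleConfig(enabled=True)
try:
    cfg.enabled = False
except ValidationError:
    pass  # frozen=True makes validated config immutable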

3.6 Module Discovery

File: src/core/discovery.py

Centralized module discovery with caching:

_discovered_modules_cache: list[str] | None = None

def discover_modules(force_refresh: bool = False) -> list[str]:
    """Discover all valid modules from filesystem (SSOT)."""
    global _discovered_modules_cache

    if _discovered_modules_cache is not None and not force_refresh:
        return _discovered_modules_cache.copy()

    discovered: list[str] = []
    for path in src_dir.iterdir():  # src_dir resolved elsewhere in the module
        if not path.is_dir() or path.name == "core":
            continue

        init_file = path / "__init__.py"
        if not init_file.exists():
            continue

        # NO SILENT FAILURES
        try:
            importlib.import_module(path.name)
            discovered.append(path.name)
        except ImportError as e:
            raise RuntimeError(
                f"CRITICAL: Module '{path.name}' cannot be imported. "
                f"This is a BROKEN module that violates IaC requirements."
            ) from e

    _discovered_modules_cache = discovered
    return discovered.copy()
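
Because both the cache hit and the fresh scan return copies, callers can never corrupt the shared cache:

mods = discover_modules()
mods.append("not-a-module")                      # mutate the returned copy
assert "not-a-module" not in discover_modules()  # cache is unaffected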

4. Kubernetes Resource Abstraction Layer

4.1 Resource Base Classes

File: src/k8s/resources/base.py

Mixin-based architecture eliminating ~82% of boilerplate:

class BaseK8SResource(StandardMetadataMixin, DNS1123ValidationMixin, ABC):
    """Abstract base for all K8S resources."""

    config_class: type[BaseModel]  # Subclasses declare config type

    def __init__(
        self,
        name: str,
        config: BaseModel,
        provider: k8s.Provider,
        k8s_metadata: K8SMetadata,
    ):
        # Validates config is instance of declared config_class
        if not isinstance(config, self.config_class):
            raise TypeError(f"Config must be {self.config_class.__name__}")

        self.name = name
        self.config = config
        self.provider = provider  # CRITICAL: Explicit provider
        self.k8s_metadata = k8s_metadata
        self.logger = get_logger(f"k8s.resources.{self.__class__.__name__}")

    @abstractmethod
    def validate(self) -> bool:
        """Resource-specific validation."""
        ...

    @abstractmethod
    def create(self, opts: ResourceOptions | None = None) -> pulumi.Resource:
        """Create the Pulumi resource."""
        ...

    def _merge_resource_options(self, opts: ResourceOptions | None, **kwargs) -> ResourceOptions:
        """Merge user options with base options, ensuring provider is set."""
        base_opts = ResourceOptions(
            provider=self.provider,
            protect=kwargs.get("protect", False),
            delete_before_replace=kwargs.get("delete_before_replace"),
            ignore_changes=kwargs.get("ignore_changes"),
            custom_timeouts=kwargs.get("custom_timeouts"),
        )

        if opts:
            return ResourceOptions.merge(base_opts, opts)
        return base_opts
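
ResourceOptions.merge gives the caller's options precedence, so subclasses get the explicit provider for free while callers can still layer dependencies on top (a sketch; some_dependency is hypothetical):

opts = resource._merge_resource_options(
    ResourceOptions(depends_on=[some_dependency]),
    protect=True,
)
# opts now carries provider=resource.provider, protect=True,
# and the caller-supplied depends_on.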

4.2 Standard Mixins

File: src/k8s/resources/mixins.py

StandardMetadataMixin (lines 22-138)

class StandardMetadataMixin:
    """Automatic metadata injection."""

    def _merge_metadata(
        self,
        component: str,
        user_labels: dict[str, str],
        user_annotations: dict[str, str],
        version: str = "v1",
        description: str | None = None,
    ) -> tuple[dict[str, str], dict[str, str]]:
        """Single-line metadata retrieval."""
        base_labels = self._get_standard_labels(component, version)
        base_annotations = self._get_standard_annotations(component, description)

        # User overrides take precedence
        labels = {**base_labels, **user_labels}
        annotations = {**base_annotations, **user_annotations}

        return labels, annotations

DNS1123ValidationMixin (lines 141-275)

class DNS1123ValidationMixin:
    """Kubernetes naming compliance validation."""

    _DNS1123_PATTERN = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

    def _validate_dns1123(self, value: str, max_length: int = 63) -> bool:
        """Validate DNS-1123 compliance."""
        if not value:
            return False
        if len(value) > max_length:
            return False
        return bool(self._DNS1123_PATTERN.match(value))

    def _validate_name(self, max_length: int = 63) -> bool:
        """Validate self.name with actionable error messages."""
        if not self._validate_dns1123(self.name, max_length):
            self.logger.error(
                f"Name '{self.name}' is not DNS-1123 compliant. "
                f"Must be lowercase alphanumeric + hyphens, max {max_length} chars."
            )
            return False
        return True
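
A few concrete cases the pattern accepts and rejects (mixin stands for any instance carrying the mixin):

assert mixin._validate_dns1123("cert-manager") is True
assert mixin._validate_dns1123("Cert_Manager") is False   # uppercase and underscore
assert mixin._validate_dns1123("-leading-dash") is False  # must start alphanumeric
assert mixin._validate_dns1123("a" * 64) is False         # exceeds 63 characters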

4.3 Helm Chart Resource

File: src/k8s/resources/helm_chart/__init__.py

Feature-rich Helm chart handling:

class HelmChartResource(BaseK8SResource):
    """Helm chart deployment with caching and version resolution."""

    config_class = HelmChartConfig

    def _resolve_chart_cache(self, config: HelmChartConfig) -> HelmChartConfig:
        """Resolve chart from local cache if available."""
        if not config.use_cache:
            return config
        if config.chart.startswith(("oci://", "file://", "./", "/")):
            return config

        cache = HelmChartCache()
        cached_path = cache.get_cached_chart(
            chart=config.chart,
            version=config.version,
            repository=config.repository,
        )

        if cached_path:
            self.logger.info(f"Using cached chart: {cached_path}")
            return config.model_copy(update={
                "chart": cached_path,
                "repository": None,
            })

        if config.cache_chart:
            try:
                cached_path = cache.cache_chart(...)
                return config.model_copy(update={...})
            except RuntimeError as e:
                self.logger.warning(f"Chart caching failed: {e}")

        return config

4.4 ConfigFile Resource

File: src/k8s/resources/config_file/__init__.py

Remote YAML with caching and transformation:

class ConfigFileResource(BaseK8SResource):
    """Deploy YAML manifests from URLs with caching."""

    def create(self, opts: ResourceOptions | None = None) -> pulumi.Resource:
        config: ConfigFileConfig = self.config

        # 1. Fetch/load YAML content (with caching)
        yaml_content = self._get_yaml_content(config)

        # 2. Transform resources
        transformed_docs = self._transform_yaml(yaml_content, config)

        # 3. Write to temp file
        temp_path = self._write_temp_yaml(transformed_docs)

        # 4. Deploy via ConfigFile
        config_file = k8s.yaml.ConfigFile(
            self.name,
            file=temp_path,
            skip_await=config.skip_await,
            opts=ResourceOptions(provider=self.provider),
        )

        # 5. Schedule cleanup
        pulumi.Output.all().apply(lambda _: self._cleanup_temp_files())

        return config_file

    def _transform_yaml(self, yaml_content: str, config: ConfigFileConfig) -> list[dict]:
        """Transform YAML documents, returning the filtered, rewritten docs."""
        documents = list(yaml.safe_load_all(yaml_content))
        transformed: list[dict] = []

        for doc in documents:
            # Skip namespace resources if requested
            if config.skip_namespace_resource and doc.get("kind") == "Namespace":
                continue

            # Inject namespace (not for cluster-scoped)
            if config.namespace and not self._is_cluster_scoped(doc):
                doc["metadata"]["namespace"] = config.namespace

            # Apply custom transformations
            if config.transformations:
                for transform_fn in config.transformations:
                    doc = transform_fn(doc)

            transformed.append(doc)

        return transformed

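A transformation is just a dict → dict callable; for example, a hypothetical one pinning imagePullPolicy on every Deployment:

def pin_pull_policy(doc: dict) -> dict:
    """Hypothetical transformation: force IfNotPresent on all Deployment containers."""
    if doc.get("kind") == "Deployment":
        for container in doc["spec"]["template"]["spec"].get("containers", []):
            container["imagePullPolicy"] = "IfNotPresent"
    return doc

# Passed via ConfigFileConfig(transformations=[pin_pull_policy], ...)
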
4.5 Cache Directory Structure

src/cache/
├── helm_chart/
│   ├── charts.jetstack.io/
│   │   └── cert-manager-v1.19.2.tgz
│   ├── helm.cilium.io/
│   │   └── cilium-1.18.0.tgz
│   └── charts.rook.io/
│       └── rook-ceph-cluster-v1.17.0.tgz
└── config_file/
    ├── github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/
    │   └── experimental-install.yaml
    ├── raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/v1.24.1/
    │   └── postgresql.cnpg.io_clusters.yaml
    └── github.com/kubevirt/cluster-network-addons-operator/releases/download/v0.99.3/
        └── operator.yaml

5. Component Architecture

5.1 Fat Models, Thin Controllers Pattern

File: src/k8s/components/base.py

class BaseK8SComponent(ABC, Generic[TConfig]):
    """Generic base for K8S components."""

    def __init__(
        self,
        name: str,
        config: TConfig | dict,
        provider: k8s.Provider,
        k8s_metadata: K8SMetadata,
        depends_on: list[pulumi.Resource] | None = None,
        deployed_resources: dict[str, dict[str, pulumi.Resource]] | None = None,
    ):
        # Accept either Pydantic model or dict
        if isinstance(config, dict):
            self.config = self.config_class(**config)
        else:
            self.config = config

        self.name = name
        self.provider = provider
        self.k8s_metadata = k8s_metadata
        self.depends_on = depends_on or []
        self.deployed_resources = deployed_resources or {}

    @abstractmethod
    def deploy(self) -> dict[str, pulumi.Resource]:
        """Deploy component resources. Returns resource dict."""
        ...

    def get_outputs(self) -> dict[str, Any]:
        """Default outputs with component metadata."""
        return {
            "component": self.name,
            "namespace": getattr(self.config, "namespace", None),
        }

5.2 Component Discovery

File: src/k8s/discovery.py

def discover_k8s_components(force_refresh: bool = False) -> dict[str, type[Any]]:
    """Discover all K8S components from filesystem."""
    discovered: dict[str, type[Any]] = {}

    for path in components_dir.iterdir():  # components_dir resolved elsewhere
        if not path.is_dir() or path.name.startswith("_"):
            continue

        # STRICT CONVENTION: cert_manager/ → CertManagerComponent
        component_name = path.name
        class_name = f"{_snake_to_pascal_case(component_name)}Component"

        try:
            module = importlib.import_module(f"k8s.components.{component_name}")

            if not hasattr(module, class_name):
                raise RuntimeError(
                    f"CRITICAL: Component '{component_name}' missing class '{class_name}'"
                )

            discovered[component_name] = getattr(module, class_name)
        except ImportError as e:
            raise RuntimeError(f"CRITICAL: Component import failed: {e}") from e

    return discovered.copy()

def discover_k8s_resources(force_refresh: bool = False) -> dict[str, type[Any]]:
    """Discover all K8S resource types."""
    # Convention: limit_range/ → LimitRangeResource
    class_name = _snake_to_pascal_case(resource_type) + "Resource"

5.3 Parent Chain Hierarchy

Every component defines explicit parent chains:

# Cilium Component
k8s-provider → gateway-api-crds → helm-chart
      → [l2-policy, gateway, httproute]

# KubeVirt Component
provider → namespace (with kubevirt.io label)
    → operator (ConfigFile)
      → kubevirt-cr
        → [selinux-workaround, export-ingress]

# Rook Ceph Cluster Component
provider → helm-chart (45m timeout)
    → [rbd-storageclass, cephfs-storageclass, dashboard-httproute]

# Backstage Component
[zot_registry-ready]
  → namespace → serviceaccount → clusterrole → clusterrolebinding → postgres → secret → configmap
                → [docker_image (optional)]
                  → deployment → pdb + service
                      → [httproute]

5.4 Component-Ready Markers

# Priority cascade for downstream dependencies
if "hubble-ui-ingress" in resources:
    resources["component-ready"] = resources["hubble-ui-ingress"]
elif "hubble-ui-httproute" in resources:
    resources["component-ready"] = resources["hubble-ui-httproute"]
elif "shared-gateway" in resources:
    resources["component-ready"] = resources["shared-gateway"]
else:
    resources["component-ready"] = resources["helm-chart"]

5.5 Cross-Component Dependencies

# Backstage depends on Zot Registry
def deploy(self) -> dict[str, pulumi.Resource]:
    # Extract zot_registry component-ready marker
    zot_ready = self.deployed_resources.get("zot_registry", {}).get("component-ready")

    if self.config.build_from_source and zot_ready:
        # Docker image build depends on registry being ready
        image_opts = ResourceOptions(depends_on=[zot_ready])

6. Networking Architecture

6.1 Gateway API Implementation

Files: src/k8s/resources/cilium_httproute/, src/k8s/components/cilium/models/gateway.py

HTTPRoute Spec

class CiliumHTTPRouteSpec(BaseModel):
    """Fat model for HTTPRoute with builder methods."""

    # Hostnames
    hostname: str | None = None
    hostnames: list[str] | None = None

    # Backend
    service_name: str
    service_port: int
    service_namespace: str | None = None  # Cross-namespace with ReferenceGrant

    # Gateway attachment
    gateway_name: str = "cilium-gateway"
    gateway_namespace: str = "kube-system"
    gateway_listener_name: str = "https"

    # Path matching
    path: str = "/"
    path_type: Literal["PathPrefix", "Exact"] = "PathPrefix"

    def to_config(self) -> CiliumHTTPRouteConfig:
        """Convert to resource config."""
        ...

Gateway API Spec

class GatewayAPISpec(BaseModel):
    """Gateway API config for Cilium."""

    # Network modes
    host_network_mode: bool  # True=0.0.0.0:80/443, False=LoadBalancer
    envoy_keep_cap_net_bind_service: bool = True

    # Protocol configuration
    enable_proxy_protocol: bool = False
    enable_app_protocol: bool = False
    enable_alpn: bool = False

    # TLS configuration
    certificate_ip_addresses: list[str] = ["127.0.0.1"]

    # Custom listeners
    listeners: list[GatewayListenerSpec] = []

    def to_helm_values(self) -> dict[str, Any]:
        """Transform to Cilium Helm values."""
        return {
            "gatewayAPI": {
                "enabled": True,
                "enableProxyProtocol": self.enable_proxy_protocol,
                "hostNetwork": {"enabled": self.host_network_mode},
            }
        }

6.2 L2 Announcements (Bare-Metal LoadBalancer)

File: src/k8s/components/cilium/models/l2.py

class L2AnnouncementsSpec(BaseModel):
    """L2 announcements for bare-metal LoadBalancer support."""

    enabled: bool = True
    interface: str = "br0"
    ip_pool_cidr: str  # e.g., "192.168.1.192/28"

    # Service selection (OR logic via multiple policies)
    service_selectors: list[L2ServiceSelector] | None = None

    def to_cilium_resources(self) -> dict[str, dict[str, Any]]:
        """Generate CiliumLoadBalancerIPPool + CiliumL2AnnouncementPolicy."""
        resources = {
            "ippool": {
                "api_version": "cilium.io/v2",
                "kind": "CiliumLoadBalancerIPPool",
                "spec": {"blocks": [{"cidr": self.ip_pool_cidr}]},
            },
        }

        # One policy per selector (enables OR logic)
        for selector in self.service_selectors or []:
            resources[f"policy-{selector.name}"] = {
                "api_version": "cilium.io/v2alpha1",
                "kind": "CiliumL2AnnouncementPolicy",
                "spec": {
                    "loadBalancerIPs": True,
                    "interfaces": [self.interface],
                    "serviceSelector": {"matchLabels": selector.match_labels},
                },
            }

        return resources

6.3 Network Attachment Definitions

File: src/k8s/resources/network_attachment_definition/types/

Bridge NAD

class BridgeNADSpec(BaseModel):
    """Linux bridge CNI for L2 networking."""

    name: str
    namespace: str = "default"
    bridge: str  # Host bridge interface
    vlan: int | None = None
    mtu: int = 1500
    ipam: IPAMConfig | None = None

    def to_cni_config(self) -> dict[str, Any]:
        return {
            "cniVersion": "0.3.1",
            "type": "bridge",
            "bridge": self.bridge,
            "mtu": self.mtu,
            "vlan": self.vlan,
            "ipam": self.ipam.to_cni_config() if self.ipam else {},
        }
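
Instantiating the model and rendering it shows the round trip (values hypothetical):

spec = BridgeNADSpec(name="vm-lan", bridge="br0", vlan=100)
spec.to_cni_config()
# {'cniVersion': '0.3.1', 'type': 'bridge', 'bridge': 'br0',
#  'mtu': 1500, 'vlan': 100, 'ipam': {}}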

Macvtap NAD

class MacvtapNADSpec(BaseModel):
    """Macvtap CNI for near-native NIC performance."""

    name: str
    namespace: str = "default"
    master: str  # Physical interface (enp3s0)
    mode: Literal["bridge", "vepa", "private", "passthru"] = "bridge"
    use_device_plugin: bool = True  # CNAO device plugin mode

    def to_nad_config(self) -> NetworkAttachmentDefinitionConfig:
        annotations = {}
        if self.use_device_plugin:
            annotations["k8s.v1.cni.cncf.io/resourceName"] = (
                f"macvtap.network.kubevirt.io/{self.master}"
            )
        return NetworkAttachmentDefinitionConfig(
            name=self.name,
            namespace=self.namespace,
            config=self.to_cni_config(),
            annotations=annotations,
        )

SR-IOV NAD

class SriovNADSpec(BaseModel):
    """SR-IOV CNI for direct hardware access."""

    name: str
    namespace: str = "default"
    vlan: int | None = None
    device_id: str | None = None  # PCI format: 0000:03:02.0

    @field_validator("device_id")
    @classmethod
    def validate_device_id(cls, v: str | None) -> str | None:
        if v and not re.match(r"^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-9a-fA-F]$", v):
            raise ValueError("Device ID must be PCI format")
        return v
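
The validator accepts full PCI addresses and rejects anything shorter (values hypothetical):

SriovNADSpec(name="sriov-net", device_id="0000:03:02.0")  # valid PCI format
SriovNADSpec(name="sriov-bad", device_id="03:02.0")       # raises ValidationError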

7. Virtual Machine Orchestration

7.1 VM Specification Models

Directory: src/k8s/components/virtual_machine/models/

CPU Configuration

class CPUSpec(BaseModel):
    """CPU topology and performance settings."""

    cores: int = 2
    threads: int = 1
    sockets: int = 1
    model: str = "host-model"  # or "host-passthrough"

    # Performance tuning
    dedicated_cpu_placement: bool = False  # Requires CPU manager
    isolate_emulator_thread: bool = False  # Separate QEMU housekeeping

    # NUMA
    numa: NUMASpec | None = None

Memory Configuration

class ResourcesSpec(BaseModel):
    """Memory allocation."""

    memory: str = "4Gi"
    hugepages: HugepagesSpec | None = None
    overcommit_guest_overhead: bool = False

class HugepagesSpec(BaseModel):
    """Hugepages configuration."""

    page_size: Literal["2Mi", "1Gi"] = "2Mi"

Disk Configuration

class DiskSpec(BaseModel):
    """Virtual disk with performance tuning."""

    name: str
    size: str | None = None

    # Source types
    source_type: Literal["dataVolume", "persistentVolumeClaim",
                         "containerDisk", "snapshot_clone"]
    data_volume_name: str | None = None
    pvc_name: str | None = None
    container_disk_image: str | None = None

    # Performance tuning
    cache: Literal["none", "writethrough", "writeback"] = "none"
    io: Literal["native", "threads", "default"] = "native"
    dedicated_io_thread: bool = False

    # Boot order
    boot_order: int | None = None

Network Interface Configuration

class NetworkInterfaceSpec(BaseModel):
    """Network interface with advanced features."""

    name: str
    network_name: str | None = None  # Multus NAD reference

    # Interface types (mutually exclusive with binding_plugin)
    bridge: bool = False
    masquerade: bool = False
    sriov: bool = False

    # KubeVirt v1.3+ binding plugins
    binding_plugin: Literal["macvtap", "passt", "slirp"] | None = None

    # Performance
    queues: int | None = None  # Multi-queue (set to vCPU count)
    model: str = "virtio"

Cloud-Init Configuration

class CloudInitSpec(BaseModel):
    """Cloud-init configuration."""

    use_secret: bool = True  # Store in K8s secret
    network_data: CloudInitNetworkDataSpec | str | None = None
    user_data: CloudInitUserDataSpec | str | None = None

class CloudInitUserDataSpec(BaseModel):
    """Cloud-config user data."""

    username: str = "user"
    password_hash: str | None = None
    ssh_authorized_keys: list[str] = []
    packages: list[str] = []
    package_upgrade: bool = False
    write_files: list[dict] = []
    runcmd: list[str | list[str]] = []

7.2 Three Deployment Modes

Mode 1: Traditional (Inline DataVolumes)

VirtualMachineSpec(
    data_volumes=[DataVolumeSpec(
        name="root",
        source_type="registry",
        url="docker://containercraft/debian:13",
        storage_class="ceph-nvme-vm",
        size="128Gi",
    )],
    disks=[DiskSpec(
        name="root",
        source_type="dataVolume",
        data_volume_name="root",
    )],
)

Mode 2: Same-Namespace Golden Image

VirtualMachineComponentConfig(
    golden_image=GoldenImageConfig(
        source=GoldenImageSourceSpec(
            source_type="registry",
            url="docker://containercraft/kali:latest",
        ),
        snapshot=GoldenImageSnapshotSpec(
            name="kali-base-snapshot",
            volume_snapshot_class="ceph-rbd-snapshot",
        ),
    ),
)
# Creates: DataVolume → VolumeSnapshot → Clone PVC → VM

Mode 3: Cross-Namespace Snapshot (Production)

VirtualMachineComponentConfig(
    snapshot_source=SnapshotSourceConfig(
        snapshot_name="kali-base-snapshot",
        snapshot_namespace="kubevirt",  # Central image store
        storage_class="ceph-nvme-vm",
        size="256Gi",
    ),
)
# Creates: DataVolume (with spec.source.snapshot) → VM

7.3 Profile System

Directory: src/k8s/components/virtual_machine/profiles/

Debian Developer Profile

def debian_developer_profile(
    name: str,
    namespace: str,
    memory: str = "16Gi",
    cores: int = 8,
    disk_size: str = "128Gi",
    storage_class: str = "ceph-nvme-vm",
    ssh_keys: list[str] | None = None,
    password_hash: str | None = None,
) -> VirtualMachineSpec:
    return VirtualMachineSpec(
        name=name,
        namespace=namespace,
        cpu=CPUSpec(cores=cores),
        resources=ResourcesSpec(memory=memory),
        firmware=FirmwareSpec(type="uefi"),
        disks=[DiskSpec(
            name="root",
            source_type="dataVolume",
            data_volume_name=f"{name}-root",
            boot_order=1,
            cache="none",
            io="native",
        )],
        data_volumes=[DataVolumeSpec(
            name=f"{name}-root",
            source_type="registry",
            url="docker://docker.io/containercraft/debian:13",
            storage_class=storage_class,
            size=disk_size,
        )],
        interfaces=[NetworkInterfaceSpec(
            name="default",
            masquerade=True,
            ports=[PortSpec(name="ssh", port=22)],
        )],
        cloud_init=CloudInitSpec(
            user_data=CloudInitUserDataSpec(
                username="debian",
                password_hash=password_hash,
                ssh_authorized_keys=ssh_keys or [],
                packages=[
                    # Desktop
                    "gnome", "gdm3", "xorg",
                    # XRDP
                    "xrdp", "xorgxrdp",
                    # Development
                    "build-essential", "python3", "python3-pip",
                    # Containers
                    "docker.io", "docker-compose",
                    # 400+ more packages...
                ],
                write_files=[
                    # GDM3 config: Force X11
                    {"path": "/etc/gdm3/custom.conf", "content": "WaylandEnable=false"},
                    # XRDP config: TLS 1.3
                    {"path": "/etc/xrdp/xrdp.ini", "content": "..."},
                    # Docker daemon.json
                    {"path": "/etc/docker/daemon.json", "content": "..."},
                ],
                runcmd=[
                    # 25+ commands for setup
                    "systemctl enable xrdp",
                    "usermod -aG docker debian",
                    # ...
                ],
            ),
        ),
    )

7.4 Performance Optimization

CPU Performance

CPUSpec(
    cores=8,
    dedicated_cpu_placement=True,  # Pin to physical CPUs
    isolate_emulator_thread=True,  # Separate QEMU housekeeping
    model="host-passthrough",      # Full host CPU
    numa=NUMASpec(guest_mapping_passthrough=True),
)

Memory Performance

ResourcesSpec(
    memory="32Gi",
    hugepages=HugepagesSpec(page_size="2Mi"),  # Reduce TLB misses
)

I/O Performance (Ceph-optimized)

DiskSpec(
    cache="none",               # O_DIRECT bypass page cache
    io="native",                # Linux AIO
    dedicated_io_thread=True,   # Prevent contention
)

VirtualMachineSpec(
    io_threads_policy="auto",   # 1 IOThread per disk
    block_multi_queue=True,     # Parallel I/O submission
)

8. Distributed Storage

8.1 Rook Ceph Architecture

Files: src/k8s/components/rook_ceph_operator/, src/k8s/components/rook_ceph_cluster/

Two-Stack Deployment

Rook Operator Stack
├── Operator Helm chart
├── CSI drivers (RBD + CephFS)
└── Device discovery

Rook Cluster Stack
├── CephCluster CR via Helm
├── MON, MGR, OSD daemons
├── CephBlockPool (RBD)
├── CephFilesystem (CephFS)
├── StorageClasses
└── VolumeSnapshotClasses

8.2 Storage Configuration

File: src/k8s/components/rook_ceph_cluster/models/storage.py

class StorageSpec(BaseModel):
    """OSD device configuration."""

    use_all_nodes: bool = False
    use_all_devices: bool = False
    device_filter: str | None = None
    device_path_filter: str | None = None

    nodes: list[NodeStorageConfig] | None = None

class NodeStorageConfig(BaseModel):
    """Per-node storage configuration."""

    name: str  # Node hostname
    devices: list[StorageDeviceConfig] | None = None
    device_filter: str | None = None

class StorageDeviceConfig(BaseModel):
    """Individual device configuration."""

    name: str  # nvme0n1 or /dev/disk/by-id/...

8.3 Storage Classes

File: src/k8s/components/rook_ceph_cluster/models/pools.py

RBD Block Storage

class RBDStorageClassSpec(BaseModel):
    """RBD block storage configuration."""

    name: str = "rook-ceph-block"
    pool_name: str = "replicapool"
    is_default: bool = False

    # Replication
    replicated_size: int = 3
    requires_safe_replica_size: int = 2
    pg_num: int = 32

    # Device class isolation
    device_class: Literal["nvme", "ssd", "hdd"] | None = None

    # Volume settings
    volume_binding_mode: str = "WaitForFirstConsumer"
    allow_volume_expansion: bool = True
    reclaim_policy: Literal["Delete", "Retain"] = "Delete"

    # RBD image configuration
    image_format: str = "2"
    image_features: str = "layering,exclusive-lock,object-map"
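
The NVMe-tier VM disk class from section 8.5 could then be expressed as (a sketch; the pool name is hypothetical):

vm_block = RBDStorageClassSpec(
    name="ceph-nvme-vm-block",
    pool_name="nvme-vm-pool",   # hypothetical pool
    device_class="nvme",        # isolate OSDs by device class
    replicated_size=3,
)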

CephFS Shared Filesystem

class CephFSStorageClassSpec(BaseModel):
    """CephFS shared filesystem configuration."""

    name: str = "rook-cephfs"
    fs_name: str = "cephfs"
    pool_name: str = "cephfs-data0"

    # Pool configuration
    replicated_size: int = 3
    metadata_replicated_size: int = 3

    # MDS configuration
    mds_active_count: int = 1
    mds_standby_count: int = 0

    # Device class isolation
    device_class: Literal["nvme", "ssd", "hdd"] | None = None

8.4 Volume Snapshots

File: src/k8s/components/external_snapshotter/

class ExternalSnapshotterSpec(BaseModel):
    """External snapshotter configuration."""

    version: str | None = None  # Auto-detects latest
    namespace: str = "kube-system"
    controller_replicas: int = 2  # HA with leader election

    # Features
    enable_volume_group_snapshots: bool = False  # Alpha
    leader_election: bool = True
    worker_threads: int = 10

    protected: bool = True

8.5 Multi-Tier Storage Classes

| Storage Class | Type | Device Class | Use Case |
| --- | --- | --- | --- |
| ceph-nvme-vm-block | RBD | NVMe | VM root disks |
| ceph-nvme-vm-fs | RBD | NVMe | VM scratch space |
| ceph-nvme-db-block | RBD | NVMe | Database storage |
| cephfs-nvme-vm | CephFS | NVMe | Shared filesystems |
| ceph-ssd-archive | RBD | SSD | Archived data |

9. Stack Configuration Patterns

9.1 Stack Taxonomy

| Stack Type | Purpose | Components |
| --- | --- | --- |
| Bootstrap | Foundation services | Cilium, cert-manager, Envoy Gateway, Authelia, Dex |
| Storage | Distributed storage | Rook Operator, Rook Cluster, External Snapshotter |
| Data Services | Database operators | CloudNativePG, Redis Operator |
| Virtualization | VM platform | KubeVirt, CDI |
| Application | Workload services | Forgejo, Backstage |

9.2 Deployment Order

# Bootstrap Stack (Dependency Chain)
components_deployment_order:
  - cilium                        # CNI first (installs Gateway API CRDs)
  - cert_manager                  # TLS certificates
  - external_dns                  # DNS automation
  - envoy_gateway                 # Edge gateway
  - multus                        # Secondary networks
  - cluster_network_addons_operator  # L2 plugins
  - authelia                      # Authentication
  - dex                          # Identity broker

9.3 Environment Configuration

# Development
debug: true
developer_mode: true
protected: false
compliance:
  fisma:
    compliance_level: low
    enforcement_mode: warn

# Production
debug: false
developer_mode: false
protected: true
compliance:
  fisma:
    compliance_level: high
    enforcement_mode: enforce

9.4 Real Stack Examples

Bootstrap Stack

config:
  environment: dev
  k8s:
    enabled: true
    components:
      cilium:
        enabled: true
        spec:
          deployment_mode: bare-metal
          gateway_api:
            enabled: true
            crd_version: v1.4.0
          l2_announcements:
            enabled: true
            interface: enp3s0
            ip_pool_cidr: "192.168.1.192/28"
      cert_manager:
        enabled: true
        spec:
          enable_self_signed_ca: true
          acme_issuers:
            - name: letsencrypt-cloudflare
              email: [email protected]
      envoy_gateway:
        enabled: true
        spec:
          gateway:
            listeners:
              - name: https
                port: 443
                protocol: HTTPS

KubeVirt Stack

config:
  k8s:
    enabled: true
    components:
      kubevirt:
        enabled: true
        spec:
          cpu:
            default_cpu_model: Skylake-Client-IBRS
          memory:
            default_hugepages: 2Mi
          live_migration:
            bandwidth_per_migration: 5Gi
          permitted_host_devices:
            pci_host_devices:
              - pci_vendor_selector: "8086:1912"
                resource_name: devices.kubevirt.io/intel-hd-530

10. Metadata and Transformations

10.1 Three-Tier Metadata System

  • Tier 1 (Global): commonLabels for ALL resources
  • Tier 2 (Pod-level): podAnnotations, podLabels
  • Tier 3 (Component-specific): Hubble, operator-specific

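The tiers merge with ordinary dict-spread precedence; a toy sketch (the tier variable names are illustrative):

labels = {**tier1_global, **tier2_pod, **tier3_component}  # rightmost tier wins
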
10.2 Stack Transformations

File: src/k8s/core/transformations.py

PROTECTED_RESOURCE_TYPES = {
    "kubernetes:ceph.rook.io/v1:CephCluster",
    "kubernetes:storage.k8s.io/v1:StorageClass",
    "kubernetes:core/v1:PersistentVolumeClaim",
    "kubernetes:core/v1:Secret",
}

VOLATILE_METADATA_IGNORE_PATTERNS = [
    'metadata.annotations["deployed.at"]',
    'metadata.annotations["deployment.id"]',
    'metadata.annotations["provenance.attestation"]',
    'metadata.labels["app.kubernetes.io/instance"]',
]

def create_metadata_transformation(k8s_metadata: K8SMetadata):
    """Create closure for global resource transformation."""
    base_labels = k8s_metadata.get_base_labels()
    base_annotations = k8s_metadata.get_base_annotations()

    def transform_kubernetes_resource(args: ResourceTransformArgs):
        if not args.type_.startswith("kubernetes:"):
            return None

        props = args.props

        # Initialize null metadata
        if props.get("metadata") is None:
            props["metadata"] = {}

        # Merge labels/annotations (user-set values take precedence)
        existing_labels = props["metadata"].get("labels") or {}
        existing_annotations = props["metadata"].get("annotations") or {}
        props["metadata"]["labels"] = {**base_labels, **existing_labels}
        props["metadata"]["annotations"] = {**base_annotations, **existing_annotations}

        # Protected resources: ignore volatile metadata to prevent churn
        result_opts = args.opts
        if args.type_ in PROTECTED_RESOURCE_TYPES:
            result_opts = ResourceOptions.merge(
                args.opts,
                ResourceOptions(ignore_changes=VOLATILE_METADATA_IGNORE_PATTERNS),
            )

        return ResourceTransformResult(props=props, opts=result_opts)

    return transform_kubernetes_resource

10.3 GitHub Release Version Resolution

File: src/k8s/core/query_github_release_version.py

class GitHubReleaseVersionQuery:
    """Generic interface for GitHub release version queries."""

    GITHUB_API_BASE = "https://api.github.com"

    def __init__(self, owner: str, repo: str):
        self.owner = owner
        self.repo = repo

    def resolve_version(self, version: str | None, is_production: bool) -> str:
        if version and version.lower() not in {"latest", ""}:
            return version.lstrip("v")  # No API call

        if is_production:
            raise ValueError("Cannot use 'latest' in production")

        return self.get_latest_stable()  # GET /releases/latest

    def get_latest_stable(self) -> str | None:
        url = f"{self.GITHUB_API_BASE}/repos/{self.owner}/{self.repo}/releases/latest"
        release = self._request(url)
        return self._normalize_version(release.get("tag_name", ""))
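
Typical behavior, grounded in the code above (repository names illustrative):

query = GitHubReleaseVersionQuery("cert-manager", "cert-manager")
query.resolve_version("v1.19.2", is_production=True)  # -> "1.19.2", no API call
query.resolve_version("latest", is_production=True)   # raises ValueError
query.resolve_version(None, is_production=False)      # queries /releases/latest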

11. Developer Use Scenarios

11.1 Adding a New Component

# 1. Create component directory
mkdir -p src/k8s/components/my_component/

# 2. Create models.py
cat > src/k8s/components/my_component/models.py << 'EOF'
from pydantic import BaseModel, ConfigDict, Field
from k8s.components.base import DynamicComponentConfig

class MyComponentSpec(BaseModel):
    namespace: str = "default"
    replicas: int = Field(default=1, ge=1, le=10)

    model_config = ConfigDict(extra="forbid", frozen=True)

class MyComponentComponentConfig(DynamicComponentConfig):
    spec: MyComponentSpec
EOF

# 3. Create __init__.py
cat > src/k8s/components/my_component/__init__.py << 'EOF'
import pulumi

from k8s.components.base import BaseK8SComponent
from .models import MyComponentComponentConfig, MyComponentSpec

class MyComponentComponent(BaseK8SComponent[MyComponentComponentConfig]):
    config_class = MyComponentComponentConfig

    def deploy(self) -> dict[str, pulumi.Resource]:
        spec: MyComponentSpec = self.config.spec
        resources = {}

        # Create resources here

        resources["component-ready"] = resources["deployment"]
        return resources
EOF

# Component is auto-discovered!

11.2 Creating VM Profiles

# profiles/fedora/developer.py
from k8s.components.virtual_machine.models import *

def fedora_developer_profile(
    name: str,
    namespace: str,
    memory: str = "16Gi",
    cores: int = 8,
    ssh_keys: list[str] | None = None,
) -> VirtualMachineSpec:
    return VirtualMachineSpec(
        name=name,
        namespace=namespace,
        cpu=CPUSpec(cores=cores),
        resources=ResourcesSpec(memory=memory),
        firmware=FirmwareSpec(type="uefi"),
        disks=[...],
        interfaces=[...],
        cloud_init=CloudInitSpec(
            user_data=CloudInitUserDataSpec(
                username="fedora",
                ssh_authorized_keys=ssh_keys or [],
                packages=["gnome-desktop", "xrdp", ...],
            ),
        ),
    )

11.3 Extending Networking

# types/vxlan.py
class VxlanNADSpec(BaseModel):
    """VXLAN overlay network."""

    name: str
    namespace: str = "default"
    vni: int  # VXLAN Network Identifier
    remote_ip: str
    mtu: int = 1450

    def to_cni_config(self) -> dict:
        return {
            "cniVersion": "0.3.1",
            "type": "vxlan",
            "vni": self.vni,
            "remote_ip": self.remote_ip,
            "mtu": self.mtu,
        }

11.4 Stack Composition

# Pulumi.my-custom-stack.yaml
config:
  environment: dev
  k8s:
    enabled: true
    components:
      my_component:
        enabled: true
        spec:
          namespace: my-app
          replicas: 3
    components_deployment_order:
      - cilium
      - cert_manager
      - my_component

12. Project Outcomes and Implications

12.1 Infrastructure Capabilities

  • Multi-Cluster Support: Context-aware provider management
  • Bare-Metal Ready: L2 announcements, SR-IOV, macvtap networking
  • Cloud Portable: AWS, Azure, GCP provider integration (scaffolded)
  • Offline Capable: Helm chart and manifest caching
  • Production Grade: Compliance tracking, immutable configurations

12.2 Virtualization Platform

  • Golden Image Pattern: Import once, clone many (Ceph snapshots)
  • Cross-Namespace Cloning: Centralized image management
  • Desktop VMs: GNOME, KDE, XFCE with XRDP remote access
  • Performance Tuning: CPU pinning, hugepages, I/O threading, NUMA

12.3 Storage Capabilities

  • Multi-Tier Storage: Device class isolation (NVMe/SSD/HDD)
  • Dual Protocol: RBD block + CephFS shared filesystem
  • Volume Snapshots: RBD and CephFS snapshot classes
  • CSI Integration: Full Kubernetes native storage

12.4 Networking Capabilities

  • Gateway API: Modern HTTP/HTTPS routing (v1.4.0+)
  • Multi-Protocol Edge: HTTP, HTTPS, TCP, UDP via Envoy Gateway
  • VM Networking: 6 NAD types for diverse workloads
  • L2 LoadBalancer: ARP/NDP without BGP

12.5 For Platform Teams

  • Module Isolation: Components developed independently
  • Schema Validation: IDE support via generated JSON schemas
  • Fail-Fast: All validation before resource creation
  • Audit Trail: Git metadata in all resources

12.6 For Application Developers

  • Self-Service VMs: Profile-based VM provisioning
  • Database-as-a-Service: CloudNativePG integration
  • Registry Access: Zot OCI registry with Gateway API routing
  • Developer Portals: Backstage with Kubernetes integration

12.7 For Operations

  • Observability: Prometheus integration across components
  • Security: RBAC, TLS, OIDC authentication (Authelia/Dex)
  • Disaster Recovery: Volume snapshots for data protection
  • Maintenance: Rolling updates via Helm upgrade patterns

13. Complete File Inventory

13.1 Core Module Files

src/core/
├── __init__.py           # Module exports
├── base_module.py        # BaseModule abstract class
├── config.py             # Configuration loading
├── discovery.py          # Module discovery with caching
├── exceptions.py         # Exception hierarchy
├── git.py                # Git metadata collection
├── logging.py            # Structured logging
├── metadata.py           # GlobalMetadata singleton, Pydantic schemas
├── outputs.py            # Stack output management
└── registry.py           # ProviderRegistry

13.2 AWS Module Files

src/aws/
├── __init__.py           # AwsModule class (stub)
└── core/
    ├── __init__.py
    └── models.py         # AWSModuleConfig schema

13.3 Kubernetes Module Files

src/k8s/
├── __init__.py           # K8SModule class
├── discovery.py          # Component/resource discovery
├── README.md
├── core/
│   ├── __init__.py
│   ├── annotations.py    # K8SAnnotations
│   ├── exceptions.py     # K8S exceptions
│   ├── labels.py         # K8SLabels
│   ├── metadata.py       # K8SMetadata
│   ├── models.py         # K8SModuleConfig
│   ├── provider.py       # K8SProviderManager
│   ├── query_github_release_version.py
│   └── transformations.py # Stack transformations
├── components/
│   ├── base.py           # BaseK8SComponent
│   ├── authelia/
│   ├── backstage/
│   ├── cert_manager/
│   ├── cilium/
│   │   ├── __init__.py
│   │   ├── builders/
│   │   ├── gateway_api_crds.py
│   │   ├── httproute.py
│   │   ├── ingress.py
│   │   ├── models/
│   │   ├── profiles.py
│   │   ├── routes.py
│   │   └── values.yaml
│   ├── cloudflare_tunnels/
│   ├── cloudnative_pg/
│   ├── cluster_network_addons_operator/
│   ├── containerized_data_importer/
│   ├── dex/
│   ├── envoy_gateway/
│   ├── external_dns/
│   ├── external_snapshotter/
│   ├── forgejo_runner/
│   ├── forgejo_server/
│   ├── hostpath_provisioner_operator/
│   ├── kubevirt/
│   ├── kubevirt_manager/
│   ├── multus/
│   ├── namespace/
│   ├── postgres_database/
│   ├── prometheus/
│   ├── redis_operator/
│   ├── rook_ceph_cluster/
│   │   ├── __init__.py
│   │   ├── builders/
│   │   ├── httproute.py
│   │   ├── ingress.py
│   │   ├── models/
│   │   └── routes.py
│   ├── rook_ceph_operator/
│   ├── virtual_machine/
│   │   ├── __init__.py
│   │   ├── helpers.py
│   │   ├── models/
│   │   │   ├── cloud_init.py
│   │   │   ├── component_config.py
│   │   │   ├── cpu.py
│   │   │   ├── desktop.py
│   │   │   ├── disk.py
│   │   │   ├── firmware.py
│   │   │   ├── host_device.py
│   │   │   ├── memory.py
│   │   │   ├── network.py
│   │   │   ├── snapshot_source.py
│   │   │   ├── ssh.py
│   │   │   └── vm_spec.py
│   │   └── profiles/
│   │       ├── common.py
│   │       ├── debian/
│   │       ├── kali/
│   │       └── ubuntu/
│   └── zot_registry/
└── resources/
    ├── base.py           # BaseK8SResource
    ├── mixins.py         # StandardMetadataMixin, DNS1123ValidationMixin
    ├── cilium_httproute/
    ├── cilium_ingress/
    ├── cluster_role/
    ├── cluster_role_binding/
    ├── config_file/
    ├── config_map/
    ├── custom_resource/
    ├── daemon_set/
    ├── deployment/
    ├── envoy_httproute/
    ├── envoy_reference_grant/
    ├── envoy_tcproute/
    ├── golden_image/
    ├── helm_chart/
    │   ├── __init__.py
    │   ├── cache.py
    │   ├── models.py
    │   └── version.py
    ├── kubevirt_image/
    ├── limit_range/
    ├── mutating_webhook_configuration/
    ├── namespace/
    ├── network_attachment_definition/
    │   ├── __init__.py
    │   ├── ipam/
    │   ├── models.py
    │   └── types/
    │       ├── bridge.py
    │       ├── ipvlan.py
    │       ├── linux_bridge.py
    │       ├── macvtap.py
    │       ├── ovs.py
    │       └── sriov.py
    ├── network_policy/
    ├── persistent_volume_claim/
    ├── postgresdb/
    ├── redis_cluster/
    ├── redis_sentinel/
    ├── resource_quota/
    ├── role/
    ├── role_binding/
    ├── secret/
    ├── service/
    ├── service_account/
    ├── storage_class/
    ├── validating_webhook_configuration/
    └── volume_snapshot_class/

13.4 Stack Configuration Files

stacks/
├── pulumi-stack-config.schema.json    # Generated JSON schema (619KB)
├── Pulumi.dev.yaml
├── Pulumi.dev-backstage.yaml
├── Pulumi.dev-bootstrap.yaml
├── Pulumi.dev-cloudnativepg.yaml
├── Pulumi.dev-kubevirt.yaml
├── Pulumi.optiplex-bootstrap.yaml
├── Pulumi.optiplex-cloudnativepg.yaml
├── Pulumi.optiplex-forgejo.yaml
├── Pulumi.optiplex-kubevirt.yaml
├── Pulumi.optiplex-rook-ceph.yaml
├── Pulumi.ucs-backstage.yaml
├── Pulumi.ucs-bootstrap.yaml
├── Pulumi.ucs-kubevirt.yaml
├── Pulumi.ucs-rook-ceph.yaml
└── ... (25+ stack files)

13.5 Cache Directory Structure

src/cache/
├── helm_chart/
│   ├── charts.jetstack.io/
│   ├── helm.cilium.io/
│   ├── charts.rook.io/
│   └── ... (cached .tgz files)
└── config_file/
    ├── github.com/kubernetes-sigs/gateway-api/
    ├── raw.githubusercontent.com/cloudnative-pg/
    └── ... (cached YAML files)

Key Design Decisions Summary

  1. Fat Models, Thin Controllers: All business logic in Pydantic models
  2. Convention over Configuration: Filesystem-based discovery
  3. Explicit Provider: Always set provider (no ambient credentials)
  4. Immutable Configuration: frozen=True on all Pydantic models
  5. Production Safety: Forbid "latest" versions, enforce explicit configs
  6. Caching Strategy: Offline-capable with local chart/manifest caches
  7. Parent Chain Hierarchy: Explicit dependency ordering
  8. Protected Resources: Prevent accidental deletion of critical resources
  9. Volatile Metadata Patterns: Prevent recreation from timestamp changes
  10. Type-Safe Configuration: Strict Pydantic validation with extra="forbid"

This document was generated from exhaustive code analysis of 150+ Python files across the infrastructure platform.

KONDUCTOR: Exhaustive Technical Omnibus

Executive Summary

Konductor is a Nix-based, polyglot, AI-first developer workstation distribution that produces:

  • OCI container images for ephemeral development
  • QCOW2 VM images for persistent KubeVirt/libvirt deployments
  • Nix devshells for native machine development
  • NixOS/Home Manager/nix-darwin modules for system-level integration

The project represents a complete, hermetically-sealed development environment with 100+ curated packages, 13 LSP servers, 3 AI coding assistants (Claude Code, OpenCode, Copilot), and a unified Catppuccin Frappe theme across all tools.


Core Architecture

Flake Infrastructure (flake.nix)

Inputs (7 core dependencies):

| Input | Purpose | Version/Branch |
| --- | --- | --- |
| nixpkgs | Primary package repository | nixos-25.11 |
| nixpkgs-unstable | Bleeding-edge packages | nixos-unstable |
| flake-utils | Multi-system output generation | latest |
| rust-overlay | Pinned Rust toolchain | stable 1.92.0 |
| nixvim | Declarative Neovim configuration | nixos-25.11 |
| nix2container | Pure-Nix OCI image builder | latest |
| nixos-generators | QCOW2/ISO image generation | latest |

Outputs per system (x86_64-linux, aarch64-linux, aarch64-darwin, x86_64-darwin):

  • 9 devshells: default, python, go, node, rust, dev, full, konductor, ci
  • 2 packages: oci, qcow2 (Linux only)
  • 1 overlay: version pinning + unstable packages + vim-plugin fixes
  • 3 modules: nixosModules.konductor, homeManagerModules.konductor, darwinModules.konductor

Single Source of Truth Pattern

src/lib/versions.nix    → Language versions, NixOS channel, image metadata
src/lib/env.nix         → Environment variables (EDITOR, PAGER, LANG, etc.)
src/lib/aliases.nix     → Shell aliases (ll→eza, cat→bat, grep→rg, etc.)
src/lib/users.nix       → System users (kc2, kc2admin, runner, forgejo)
src/lib/shell-content.nix → Bashrc, bash_profile, inputrc, gitconfig templates

All components read from these centralized files, ensuring consistency across devshells, containers, and VMs.
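
For orientation, a minimal sketch of how a consumer might read these files; the attribute path versions.languages.python is inferred from the module descriptions later in this document, not verified against the repo:

let
  versions = import ./src/lib/versions.nix;
  env      = import ./src/lib/env.nix;
in {
  # "313" → pkgs.python313, per the mapping documented in versions.nix
  pythonAttr = "python${versions.languages.python.version}";
  editor     = env.EDITOR; # "nvim"
}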


Devshell Composition System

Layered Architecture

baseShell (foundation)
├── default (core + cli + linters + formatters + ai)
├── python  (base + pythonPackages)
├── go      (base + goPackages)
├── node    (base + nodejsPackages)
├── rust    (base + rustPackages)
├── dev     (base + IDE: neovim + tmux + forgejo-cli)
├── full    (base + all languages + IDE)
├── konductor (full + docker/qemu/libvirt + forgejo-runner)
└── ci      (base + all languages + docker/qemu + forgejo-runner)

Package Categories

| Category | Package Count | Contents |
| --- | --- | --- |
| Core | 14 | coreutils, bash, findutils, grep, sed, awk, tar, gzip, xz, less, ncurses, file, procps |
| Network | 4 | curl, wget, gnupg, cacert |
| System | 6 | iana-etc, getent, rsync, gosu, su, linux-pam |
| CLI | 20 | git, ssh, starship, jq, yq, gh, ripgrep, fd, fzf, mise, direnv, kubectl, k9s, pulumi, etc. |
| Linters | 12 | shellcheck, ruff, yamllint, hadolint, eslint, golangci-lint, mypy, bandit, markdownlint, etc. |
| Formatters | 9 | shfmt, prettier, taplo, biome, gofumpt, nixpkgs-fmt, stylua, black, isort |
| AI | 3 | claude-code, opencode, github-copilot-cli |
| IDE | 15 | lazygit, htop, btop, bat, eza, dust, tree-sitter, imagemagick, mermaid-cli, etc. |
| Python | 9 | python3.13, poetry, uv, pipx, ruff, mypy, bandit, black, isort |
| Go | 6 | go1.24, gopls, delve, golangci-lint, gofumpt, gotools |
| Node | 6 | nodejs22, pnpm, yarn, typescript, ts-language-server, prettier |
| Rust | 7 | rust1.92.0 (with rust-analyzer, clippy, rustfmt), cargo-watch, cargo-edit |
| Konductor | 13 | docker, docker-compose, buildkit, skopeo, crane, qemu, libvirt, OVMF, cachix |

Total: 100+ unique packages across the complete ecosystem.


Neovim Configuration

Plugin Architecture (52+ plugins)

Snacks.nvim Framework (replaces 3-4 plugins):

  • Dashboard with custom 62-char formatters
  • Picker with frecency matching (replaces Telescope)
  • Explorer with git/diagnostics (replaces Neo-tree)
  • Terminal with smart naming (replaces Toggleterm)
  • lazygit, zen mode, notifications, words highlighting

LSP Servers (13):

| Server | Language | Special Config |
| --- | --- | --- |
| nil_ls | Nix | nixpkgs-fmt |
| lua_ls | Lua | vim/Snacks globals |
| pyright | Python | |
| gopls | Go | gofumpt, staticcheck |
| rust_analyzer | Rust | cargo/rustc not managed |
| ts_ls | TypeScript | |
| bashls | Bash | |
| yamlls | YAML | |
| jsonls | JSON | |
| dockerls | Docker | |
| taplo | TOML | |
| marksman | Markdown | |

AI Integration Stack:

  • Claude Code: Right-side vertical split (40%), auto-refresh, terminal interrupt
  • OpenCode: HTTP to port 3232, SSE events, operator motions, custom prompts
  • Copilot: Via copilot-cmp (unified completion, no panel)

Keybinding Scheme (365+ mappings):

<leader>v*  → Vibe/AI (Claude, Copilot, OpenCode quick)
<leader>o*  → OpenCode (prompts, session, agent cycle)
<leader>l*  → LSP (code action, rename, format, diagnostics)
<leader>f*  → Find (files, git, recent, buffers, config)
<leader>s*  → Search (grep, symbols, help, keymaps)
<leader>b*  → Buffer (switch, delete, pin, navigate)
<leader>g*  → Git (lazygit, blame, hunks, diff)
<leader>t*  → Terminal (toggle, float, REPLs)
<leader>w*  → Window (focus, split, resize)
<leader>x*  → Diagnostics (trouble integration)
<leader>m*  → Markdown (render, preview)
<leader>r*  → REST (http client)
<leader>q*  → Session/Quit
<leader>u*  → UI toggles

Tmux Configuration

Theme: Catppuccin Frappe with mauve (purple) accents, slanted separators

Key Features:

  • Prefix: Ctrl+a (ergonomic)
  • Escape time: 0ms (neovim-optimized)
  • vim-tmux-navigator: Seamless Ctrl+hjkl across splits
  • 50,000 line history
  • Nested tmux support with F12 toggle

Plugins (7): sensible, catppuccin, vim-tmux-navigator, vim-tmux-focus-events, yank, extrakto (fzf extraction), tmux-fzf


OCI Container Images

Build Pipeline

nix2container base → Dockerfile refinement → Multi-arch manifest

nix2container Stage:

  • Proper setuid binaries (sudo: 4755)
  • PAM configuration for passwordless sudo
  • 32 nixbld users for sandboxed builds
  • Application users: kc2 (1001), kc2admin (1002), runner (1003)
  • Nix flake registry pre-configured

Dockerfile Stage:

  • Writable /tmp
  • Home directory ownership fixes
  • Tool validation tests (30+ checks)
  • Default user: kc2 (unprivileged)

Image Variants:

| Tag | Contents | Use Case |
| --- | --- | --- |
| latest | Base + all CLI tools | Production |
| dev | latest + IDE (neovim, tmux) | Development |
| nix2container | Pure Nix base | Foundation |
| amd64, arm64 | Platform-specific | CI/Registry |

Multi-Platform:

  • x86_64-linux (AMD64)
  • aarch64-linux (ARM64)
  • Docker buildx bake with 3-tier caching (GHA + registry + local)

QCOW2 VM Images

NixOS Configuration

System Specification:

  • Format: qcow-efi (EFI boot, 4KB partition alignment)
  • Disk: 20GB (lean)
  • State version: 25.11
  • Kernel: linuxPackages_latest with Ceph RBD optimizations

Pre-installed Software:

  • All 4 language stacks (Python 3.13, Go 1.24, Node 22, Rust 1.92.0)
  • Full IDE (Neovim, Tmux)
  • Container/VM tools (Docker, libvirt, QEMU)
  • Forgejo runner + CLI
  • Cloud-init support

Systemd Services:

| Service | Purpose |
| --- | --- |
| workspace-mount | 9p virtfs /workspace mount |
| konductor-proxy-setup | Proxy configuration injection |
| konductor-ca-setup | Custom CA certificate trust |
| forgejo-runner | CI/CD runner daemon |

Boot Optimizations:

  • Kernel params: elevator=none, scsi_mod.use_blk_mq=1
  • Sysctl: vm.dirty_ratio=40, vm.swappiness=10
  • Filesystem: noatime, nodiratime, discard, commit=60
  • Virtio drivers: net, pci, mmio, blk, scsi, balloon, console, 9p

KubeVirt Deployments

Base Configuration

VirtualMachine:
  runStrategy: Always
  cpu: host-passthrough, 2 cores
  memory: 4Gi request
  network: masquerade (pod networking)
  storage: containerDisk (ephemeral)
  ssh: QEMU Guest Agent key injection

Overlays

| Overlay | Network | Storage | Use Case |
| --- | --- | --- | --- |
| base | Pod NAT | containerDisk 5Gi | Quick testing |
| advanced | macvtap + OVS | PVC 64Gi Ceph | Persistent dev |
| forgejo-runners | OVS + pod | PVC 64Gi Ceph | CI/CD |
| proxy | Pod NAT | containerDisk | Corporate firewall |

Forgejo Runner Automation (deploy.sh)

Capabilities:

  1. Namespace creation with PSA privileged labels
  2. Shared secret generation/storage (idempotent)
  3. Runner registration on Forgejo server
  4. Cluster CA extraction from cert-manager
  5. Forgejo URL auto-discovery (HAProxy LB → HTTPRoute → Ingress → Service)
  6. SSH key generation and management
  7. Cloud-init userdata secret creation

Commands:

./deploy.sh              # Full deployment
./deploy.sh --teardown   # Clean removal
./deploy.sh --get-ssh-key # Extract SSH key

Configuration System

Hermetic Tool Wrapping

All linters and formatters are wrapped with forced configuration:

# Example: prettier wrapper
pkgs.writeShellApplication {
  name = "prettier";
  runtimeInputs = [ pkgs.nodePackages.prettier ];
  text = ''
    exec prettier --config ${configFile} "$@"
  '';
};

Pattern: Native config file (YAML/TOML/JSON) + Nix wrapper → No escape hatches

Linter Configurations

| Linter | Language | Config Format | Key Settings |
| --- | --- | --- | --- |
| shellcheck | Shell | .shellcheckrc | enable=all, shell=bash |
| ruff | Python | ruff.toml | line-length=100, isort+black compat |
| yamllint | YAML | .yamllint.yaml | max=120, truthy values |
| hadolint | Docker | .hadolint.yaml | DL3008/DL3013 ignored |
| eslint | JS/TS | eslint.config.js | Flat config, quality-only |
| golangci-lint | Go | .golangci.yml | gofmt, staticcheck, errcheck |
| mypy | Python | mypy.ini | Strict mode, 3.13 target |
| bandit | Python | .bandit | B101/B601/B602 skipped |
| markdownlint | Markdown | .markdownlint-cli2.yaml | Prettier-compatible |

Formatter Configurations

| Formatter | Languages | Config Format | Key Settings |
| --- | --- | --- | --- |
| prettier | JS/TS/MD/YAML/JSON | .prettierrc.yaml | printWidth=100, singleQuote |
| shfmt | Shell | CLI flags | -i 4 -ci -sr -kp -bn |
| taplo | TOML | taplo.toml | array_trailing_comma |
| biome | JS/TS/JSON | biome.json | lineWidth=120, doubleQuote |

Git Workflow Integration

Lefthook Pre-commit Hooks (20+ checks)

Parallel Execution:

  • Python: ruff, mypy, bandit
  • JavaScript: eslint, biome
  • Shell: shellcheck
  • Nix: nixpkgs-fmt, statix, deadnix
  • Config: yamllint, taplo
  • Docs: markdownlint, cspell, lychee
  • CI: actionlint, hadolint
  • Go: golangci-lint
  • Web: htmlhint, stylelint
  • Security: detect-secrets (always runs)

Commit Message:

  • commitlint with conventional commits
  • Semantic-release compatible types: feat, fix, perf, revert, docs, style, refactor, test, build, ci, chore

Module System

Platform Support

# NixOS configuration.nix
konductor = {
  enable = true;
  enablePython = true;
  enableGo = true;
  enableNode = true;
  enableRust = true;
  enableDevOps = true;
  enableAI = true;
};

# Home Manager home.nix
konductor = {
  enable = true;
  # ... same options
};

# nix-darwin darwin-configuration.nix
konductor = {
  enable = true;
  # ... same options
};

Common Module Pattern:

  • mkOptions: Option definitions with dynamic version descriptions
  • mkPackages: Conditional package composition based on enabled features
  • mkEnv: Centralized environment variables
  • mkAliases: Shell aliases (Home Manager only)

Developer Use Scenarios

1. Quick Polyglot Development

nix develop github:braincraftio/konductor#full
# → All 4 languages + IDE + AI tools ready in seconds

2. Language-Specific Work

nix develop github:braincraftio/konductor#python
# → Python 3.13 + poetry + uv + ruff + mypy + black

3. CI/CD Runner in Kubernetes

cd deploy/kubevirt/overlays/forgejo-runners
./deploy.sh
# → Self-registering runner VM in cluster

4. Self-Hosting/Meta-Development

nix develop github:braincraftio/konductor#konductor
# → Full stack + Docker + QEMU + libvirt for building images

5. Container-Based Development

docker run -it ghcr.io/braincraftio/konductor:dev
# → Full environment in ephemeral container

6. VM-Based Persistent Workstation

nix build github:braincraftio/konductor#qcow2
# → Import to libvirt/KubeVirt for persistent VM

Project Integration Scenarios

Team Standardization

# In project flake.nix
{
  inputs.konductor.url = "github:braincraftio/konductor";

  outputs = { self, konductor, ... }:
    let system = "x86_64-linux"; # or generate per-system outputs with flake-utils
    in {
      devShells.${system}.default = konductor.devShells.${system}.full;
    };
}

Outcome: Every team member gets identical tooling, versions, configurations.

CI/CD Pipeline

# Forgejo Actions / GitHub Actions
container:
  image: ghcr.io/braincraftio/konductor:latest
steps:
  - run: ruff check .
  - run: mypy src/
  - run: pytest

Outcome: CI environment matches local development exactly.

Multi-Language Monorepo

project/
├── services/
│   ├── api (Go 1.24)
│   ├── worker (Python 3.13)
│   └── frontend (Node 22)
├── tools/ (Rust 1.92.0)
└── .envrc (use flake github:braincraftio/konductor#full)

Outcome: Single devshell supports entire monorepo with all toolchains.

Air-Gapped Environments

# Pre-build and export
nix build github:braincraftio/konductor#qcow2
skopeo copy docker://ghcr.io/braincraftio/konductor:latest oci-archive:konductor.tar

# Import in air-gapped cluster
skopeo copy oci-archive:konductor.tar docker://internal-registry/konductor:latest

Outcome: Complete development environment deployable without internet.


Key Technical Decisions

| Decision | Rationale |
| --- | --- |
| Nix as foundation | Reproducibility, hermetic builds, multi-platform |
| Snacks.nvim consolidation | Single plugin for picker+explorer+terminal reduces complexity |
| Catppuccin Frappe everywhere | Visual consistency across neovim/tmux/opencode |
| Hermetic config wrappers | No configuration drift, enforced standards |
| QCOW2 lean base | Small image, tools pulled from cache on-demand |
| 3-tier Docker caching | GHA + registry + local for fast CI builds |
| Cloud-init integration | Dynamic user/proxy/CA configuration in VMs |
| 9p virtfs mounting | Seamless host-guest file sharing |
| Version pinning via overlay | Consistent language versions across all shells |
| Module system abstraction | Same config works on NixOS, HM, nix-darwin |

Artifacts and Outcomes

Primary Deliverables

  1. OCI Images (ghcr.io/braincraftio/konductor)

    • Multi-arch (AMD64/ARM64)
    • Variants: latest, dev, nix2container
    • ~2GB compressed, ~5GB uncompressed
  2. QCOW2 Images (nix build .#qcow2)

    • EFI boot, 20GB thin-provisioned
    • NixOS 25.11 with all tools pre-installed
    • Cloud-init ready
  3. Nix Devshells (9 variants)

    • Language-specific or polyglot
    • ~30 second cold start, instant warm
  4. Nix Modules (3 platform targets)

    • Drop-in integration for existing NixOS/HM/darwin configs

Secondary Outcomes

  • Standardized toolchain for Python, Go, Node, Rust development
  • AI-first editing with Claude Code, OpenCode, Copilot integration
  • Consistent code quality via lefthook + hermetic linters
  • Self-hosting capability for Forgejo runners
  • Kubernetes-native deployments via KubeVirt

Version Matrix

| Component | Version | Location |
| --- | --- | --- |
| NixOS | 25.11 | src/lib/versions.nix |
| Python | 3.13 | src/lib/versions.nix |
| Go | 1.24 | src/lib/versions.nix |
| Node.js | 22 | src/lib/versions.nix |
| Rust | 1.92.0 | src/lib/versions.nix |
| Nix | 2.24.10 | src/lib/versions.nix |
| Neovim | 0.10+ | nixvim |
| Tmux | 3.4+ | nixpkgs |
| Docker | 27+ | nixpkgs |

File Structure Summary

konductor/
├── flake.nix                    # Entry point, inputs, outputs
├── src/
│   ├── lib/                     # SSOT: versions, env, aliases, users
│   ├── config/                  # Shell, formatters, linters, opencode
│   ├── packages/                # Package composition by category
│   ├── programs/                # Neovim, Tmux, Forgejo, Shell
│   ├── devshells/               # Shell composition hierarchy
│   ├── modules/                 # NixOS, HM, darwin modules
│   ├── overlays/                # Version pinning, plugin fixes
│   ├── oci/                     # OCI container definition
│   └── qcow2/                   # QCOW2 VM definition
├── deploy/kubevirt/             # Kubernetes deployment manifests
├── Dockerfile*                  # Docker build refinements
├── bake.hcl                     # Docker buildx bake configuration
├── lefthook.yml                 # Git hooks configuration
└── runme.yaml                   # Task runner configuration

Detailed Component Analysis

The following sections contain the complete technical deep-dives from each exploratory agent.


PART 2: FLAKE.NIX CORE INFRASTRUCTURE

Flake Inputs and Purposes

Core Nixpkgs Infrastructure:

  • nixpkgs (github:NixOS/nixpkgs/nixos-25.11): Primary package repository, pinned to NixOS 25.11 release channel (synchronized with src/lib/versions.nix)
  • nixpkgs-unstable (github:NixOS/nixpkgs/nixos-unstable): Unstable packages for fast-moving tools (mise, opencode, claude-code)
  • flake-utils (github:numtide/flake-utils): Utility functions for per-system output generation via eachDefaultSystem

Language and Development Tooling:

  • rust-overlay (github:oxalica/rust-overlay): Provides pinned Rust toolchain (1.92.0 stable) via rust-bin.stable attribute
  • nixvim (github:nix-community/nixvim/nixos-25.11): NixOS Neovim configuration framework using declarative plugin/option definitions
  • nix2container (git+https://github.com/nlewo/nix2container): Converts Nix derivations to OCI container images

Infrastructure Provisioning:

  • nixos-generators (github:nix-community/nixos-generators): Generates multiple image formats (QCOW2, ISO, etc.) from NixOS configurations

Dependency Pinning Strategy: All inputs follow inputs.nixpkgs.follows = "nixpkgs" pattern where applicable to prevent version mismatches. Key constraint: nixvim branch must match nixpkgs branch (both 25.11).
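
A minimal sketch of that follows wiring, using input URLs from the list above:

{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
    nixvim = {
      url = "github:nix-community/nixvim/nixos-25.11";
      # Keep nixvim evaluating against the same nixpkgs revision
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };
}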

Nix Caching Configuration: Configured with nix-community.cachix.org substituter to cache compiled packages, reducing build times.
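
In flake terms this is typically declared via nixConfig; a sketch, assuming the publicly published nix-community cache key (verify before trusting):

{
  nixConfig = {
    extra-substituters = [ "https://nix-community.cachix.org" ];
    extra-trusted-public-keys = [
      # Key as published by the cache operator; an assumption, not taken from the repo
      "nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs="
    ];
  };
}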

Flake Outputs

Per-System Outputs (via eachDefaultSystem)

Generated for each system (x86_64-linux, aarch64-darwin, etc.):

Development Shells: Located in src/devshells/, aggregated in flake.nix (lines 119-120):

  • default: Unopinionated foundation (no languages, no IDE)
  • python: Python 3.13 + related tools
  • go: Go 1.24 + tooling
  • node: Node.js 22 + npm/pnpm
  • rust: Rust 1.92.0 + cargo tools
  • dev: IDE-focused (neovim + tmux + forgejo-cli for human workflows)
  • full: Complete polyglot (all languages + IDE)
  • konductor: Self-hosting meta-shell (full + docker/qemu/libvirt)
  • ci: CI/CD runner environment (all languages + forgejo-runner + build tools)

Packages (Linux-only):

  • oci: OCI container image (requires nix2container)
  • qcow2: QCOW2 VM image (requires nixos-generators)

Cross-System Outputs

Overlays:

  • overlays.default: Composed extension of nixpkgs containing:
    • Version-pinned packages (konductor.python, konductor.go, konductor.nodejs, konductor.rustc)
    • Vim plugin fixes (lualine-nvim test disabling)
    • Unstable packages overlay

NixOS Module:

  • nixosModules.konductor / .default: Imports src/modules/nixos.nix
  • Provides config.konductor.* options for declarative system configuration

Home Manager Module:

  • homeManagerModules.konductor / .default: Imports src/modules/home-manager.nix
  • Provides config.konductor.* options for user-level configuration
  • Usage: imports = [ inputs.konductor.homeManagerModules.default ]

nix-darwin Module:

  • darwinModules.konductor / .default: Imports src/modules/darwin.nix
  • macOS-specific system configuration via nix-darwin framework
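
Consumer-side, wiring any of these modules is an import plus options; a Home Manager sketch (the flake input name inputs.konductor and the extraSpecialArgs plumbing are assumptions):

{ inputs, ... }: {
  # Assumes `inputs` is made available to modules, e.g. via extraSpecialArgs
  imports = [ inputs.konductor.homeManagerModules.default ];
  konductor = {
    enable = true;
    enablePython = true; # the other enable* toggles work the same way
  };
}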

Module System Architecture

Module Organization:

src/modules/
├── default.nix          # Aggregator (not used by flake)
├── common.nix           # Shared definitions (CRITICAL)
├── nixos.nix            # NixOS platform
├── home-manager.nix     # Home Manager platform
└── darwin.nix           # macOS/nix-darwin platform

Common Module Pattern (src/modules/common.nix):

Exports three key components:

  1. mkOptions: Option definitions using lib.mkEnableOption and lib.mkOption

    • enable (flag to activate konductor)
    • enablePython, enableGo, enableNode, enableRust (language toggles)
    • enableDevOps, enableAI (feature toggles)
    • All descriptions dynamically reference versions.languages.*
  2. mkPackages: Dynamic package builder accepting { cfg, pkgs, lib, versions }

    • Imports config wrappers from src/config/
    • Imports packages from src/packages/
    • Conditionally includes language/feature packages based on enabled options
    • Enforces wrapped tools (linters/formatters MUST use config wrappers)
  3. mkEnv: Re-exports centralized env vars from src/lib/env.nix

  4. mkAliases: Re-exports shell aliases from src/lib/aliases.nix

Platform-Specific Implementation:

All three platform modules (nixos.nix, home-manager.nix, darwin.nix) follow identical patterns:

NixOS (src/modules/nixos.nix):

options.konductor = common.mkOptions;
config = lib.mkIf cfg.enable {
  environment.systemPackages = common.mkPackages { ... };
  environment.variables = common.mkEnv;
};

Home Manager (src/modules/home-manager.nix):

options.konductor = common.mkOptions;
config = lib.mkIf cfg.enable {
  home.packages = common.mkPackages { ... };
  home.sessionVariables = common.mkEnv;
  home.shellAliases = common.mkAliases;  # Only HM supports aliases
};

nix-darwin (src/modules/darwin.nix):

options.konductor = common.mkOptions;
config = lib.mkIf cfg.enable {
  environment.systemPackages = common.mkPackages { ... };
  environment.variables = common.mkEnv;
};

Critical Design Pattern: Common module accepts config parameter but nixos/darwin cannot pass it due to NixOS module system limitations. Only Home Manager integration fully supports config wrappers for hermetic tools.

Lib Functions Architecture

src/lib/default.nix (Aggregator):

Single entry point aggregating all SSOT modules:

{
  inherit versions users env aliases shellContent meta utils;
}

src/lib/versions.nix (CRITICAL SSOT):

Pure data file (NO pkgs dependency) containing single source of truth:

NixOS Channel:

  • nixos.channel = "25.11" (must sync with flake.nix inputs)
  • nixos.stateVersion = "25.11" (for VMs in src/qcow2/default.nix)

Language Runtimes (maps to package names):

python:  version="313"  display="3.13"  → pkgs.python313
go:      version="1_24" display="1.24"  → pkgs.go_1_24
node:    version="22"   display="22"    → pkgs.nodejs_22
rust:    version="1.92.0" display="1.92.0" → rust-bin.stable."1.92.0"

Container Metadata:

  • OCI image name: ghcr.io/braincraftio/konductor
  • Created timestamp and epoch for reproducibility

Nix Version Requirements:

  • Minimum: 2.24.0
  • Recommended: 2.24.10

src/lib/env.nix (Environment SSOT):

Centralized env vars used by:

  • src/lib/shell-content.nix (generates exports)
  • src/modules/common.nix (via mkEnv)
  • src/devshells/base.nix (via mkShell env)

Key Variables:

  • Editor: EDITOR=nvim, VISUAL=nvim, PAGER=bat
  • Locale: LANG=C.UTF-8, LC_ALL=C.UTF-8
  • Marker: KONDUCTOR=true (identifies Konductor environments)
  • History: HISTSIZE=10000, HISTFILESIZE=20000, HISTCONTROL=ignoreboth:erasedups

src/lib/aliases.nix (Shell Aliases SSOT):

Centralized aliases replacing Unix utilities with modern alternatives:

ll → eza -la --git
cat → bat --paging=never
grep → rg
find → fd
top → btm
du → dust
tree → eza --tree
vi/vim → nvim
gs/gd/gl → git status/diff/log
lg → lazygit
mr → mise run
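
As a pure data file, this is presumably an attribute set of name → command strings; an illustrative sketch (exact contents may differ from the repo):

{
  ll   = "eza -la --git";
  cat  = "bat --paging=never";
  grep = "rg";
  find = "fd";
  tree = "eza --tree";
  vim  = "nvim";
  lg   = "lazygit";
}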

src/lib/users.nix (User/Group SSOT):

Defines 4 system users for QCOW2 VM environments:

  1. kc2 (uid 1001): Unprivileged user
  2. kc2admin (uid 1002): Admin user with wheel group
  3. runner (uid 1003): CI/CD runner (docker, libvirtd, kvm groups)
  4. forgejo (uid 1004): Forgejo server process (docker group)

UID 1000 reserved for dynamic host user (cloud-init).
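
A sketch of the shape this implies; field names are illustrative, and the real file likely also defines home directories, shells, and NixOS user options:

{
  kc2      = { uid = 1001; groups = [ ]; };         # unprivileged
  kc2admin = { uid = 1002; groups = [ "wheel" ]; }; # admin
  runner   = { uid = 1003; groups = [ "docker" "libvirtd" "kvm" ]; };
  forgejo  = { uid = 1004; groups = [ "docker" ]; };
}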

src/lib/utils.nix (Helper Functions):

Pure utility functions for package composition:

  • versionToPackage: "3.13" → "313" (Python naming)
  • versionToGo: "1.24" → "1_24" (Go naming)
  • joinPaths: Safe path concatenation
  • mergePackages: Flatten lists with deduplication
  • mkSectionSeparator: Box-drawing ASCII separators for comments
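
Plausible one-line implementations of the two version helpers (sketches; the actual bodies in utils.nix may differ):

{
  versionToPackage = v: builtins.replaceStrings [ "." ] [ "" ] v;  # "3.13" → "313"
  versionToGo      = v: builtins.replaceStrings [ "." ] [ "_" ] v; # "1.24" → "1_24"
}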

src/lib/shell-content.nix (Shell Configuration Strings):

Generates shell initialization code:

bashrcContentDevshell: Used in mkShell devshells

  • Aliases mapping
  • Shell options (histappend, checkwinsize, globstar)
  • Starship prompt initialization
  • Direnv hook

bashrcContentStandalone: Used in OCI/QCOW2 images

  • Includes envExports + common setup

bashProfileContent: ~/.bash_profile sourcing ~/.bashrc

inputrcContent: Readline configuration (arrow keys, completion)

gitconfigContent: Git configuration with safe directories

welcomeMessageContent: ASCII banner with language versions

src/lib/meta.nix (Metadata Aggregator):

Provides metadata for different targets:

  • OCI labels (title, description, source, licenses, vendor)
  • VM metadata (name, defaultUser, defaultPassword)
  • Project metadata (homepage)

Key Architectural Patterns

Pattern 1: Single Source of Truth (SSOT)

Multiple canonical locations ensure consistency:

  • Versions: src/lib/versions.nix (pure data)

    • Consumed by: overlays, modules, devshells, packages
    • Duplication: flake.nix inputs (cannot import nix files)
  • Environment Variables: src/lib/env.nix

    • Consumed by: shell-content.nix, modules, devshells
    • Ensures same vars everywhere
  • Shell Aliases: src/lib/aliases.nix

    • Single mapping, formatted into shell commands
    • Used in all shells consistently
  • User Definitions: src/lib/users.nix

    • Used in QCOW2 VM configuration

Pattern 2: Layered Package Composition

Explicit hierarchy from base → specialized:

packages.default = core + network + system + cli + linters + formatters + ai
├── base.nix shell uses packages.default
├── language shells add pythonPackages/goPackages/nodejsPackages/rustPackages
├── dev shell adds idePackages
├── full shell adds all languages + IDE
├── konductor shell adds konductor packages (docker/qemu/libvirt)
└── ci shell adds konductor packages + forgejo runner

Each package category (core, cli, linters, etc.) is isolated in separate files with consistent interface: { packages, shellHook, env }.

Pattern 3: Config Wrapper Enforcement

Linters and formatters REQUIRE config wrappers:

# src/packages/linters.nix
packages = if hasConfig then [
  config.linters.shellcheck.package
  config.linters.ruff.package
  ...
] else throw "linters.nix requires config parameter"

Wrappers located in src/config/linters/ and src/config/formatters/

Each wrapper:

  • Writes hermetic config to nix store
  • Creates shell script that exports config path
  • Forces tool to use that config via env var or flag

Pattern 4: Overlay-Based Version Pinning

src/overlays/versions.nix creates pkgs.konductor.* namespace:

pkgs.konductor = {
  python = pkgs."python313".override { ... };
  go = pkgs."go_1_24";
  nodejs = pkgs."nodejs_22";
  rustc = rust-bin.stable."1.92.0".default.override { ... };
};

All language tools reference these pinned packages ensuring consistency across shells.

Pattern 5: Program Modularization

src/programs/ contains large, complex tools structured as submodules:

Neovim (src/programs/neovim/):

  • Flat architecture: options.nix, plugins.nix, keymaps.nix, autocmds.nix, extraConfig.nix
  • Uses nixvim to build nvim derivation
  • Plugin: snacks.nvim for picker, explorer, terminal, dashboard
  • Theme: Catppuccin Frappe with custom highlights

Tmux (src/programs/tmux/):

  • Writes config file to nix store
  • Wraps tmux binary to force config with -f flag
  • Theme: Catppuccin Frappe slanted style
  • Plugins: vim-navigator, yank, extrakto, tmux-fzf
  • Exports KONDUCTOR_TMUX_CONF in shellHook

Forgejo (src/programs/forgejo/):

  • Provides server, runner, and CLI packages separately
  • Exports environment variables (FORGEJO_RUNNER_CONFIG)
  • Used in CI and konductor shells

Shell/Bash (src/programs/shell/):

  • Neovim terminal bash with libvterm support
  • Readline inputrc configuration
  • Wrapper to use with terminal emulator

Pattern 6: Module Inheritance via overrideAttrs

All non-base devshells extend baseShell:

# dev.nix
baseShell.overrideAttrs (old: {
  name = "dev";
  buildInputs = old.buildInputs ++ packages.idePackages ++ programs.neovim.packages;
  shellHook = old.shellHook + ''
    export KONDUCTOR_SHELL="dev"
    ...
  '';
  env = old.env // config.shell.ssh.env;
})

Benefits:

  • Inherits base packages, shellHook structure, env vars
  • Adds incrementally without duplication
  • Consistent shell initialization order

Pattern 7: Hermetic Configuration Strategy

All configuration is either:

  1. Embedded in Nix store (immutable, hashable)
  2. Generated at shell entry time (via shellHook)
  3. Forced via environment variables (can't be overridden)

Examples:

  • Starship config: STARSHIP_CONFIG env var
  • SSH config: KONDUCTOR_SSH_CONFIG env var (generated at shell entry)
  • Tmux config: -f flag passed to tmux binary
  • Git config: Wrapped git binary with forced gitconfig
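
Taking the starship case, the env-var variant of the wrapper pattern might look like this (a sketch; the wrapper shape and config path are illustrative):

pkgs.writeShellApplication {
  name = "starship";
  runtimeInputs = [ pkgs.starship ];
  text = ''
    # Point starship at the immutable config baked into the Nix store
    export STARSHIP_CONFIG=${./starship.toml}
    exec starship "$@"
  '';
}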

Pattern 8: Language Isolation

Each language shell is independent composition:

# src/devshells/python.nix
baseShell.overrideAttrs (old: {
  buildInputs = old.buildInputs ++ packages.pythonPackages;
  # Language-specific env vars
  shellHook = old.shellHook + ''
    export UV_SYSTEM_PYTHON="1"
    export PYTHONDONTWRITEBYTECODE="1"
    if [ -d .venv ]; then
      source .venv/bin/activate 2>/dev/null || true
    fi
  '';
})

Languages are NOT interdependent; full shell explicitly adds all.

Pattern 9: Cross-Platform Conditionals

System-specific packages via lib.optionals:

In src/packages/system.nix:

packages = with pkgs; [
  iana-etc # All platforms
  getent
] ++ lib.optionals pkgs.stdenv.isLinux [
  gosu      # Linux only
  linux-pam # Linux only
];

Pattern 10: Explicit vs Implicit Dependencies

Package categories declare what they consume:

# src/packages/cli.nix explicitly depends on config
{ pkgs, config ? null }:

If config is needed but optional, provides fallback:

shellTools = if hasConfig then [
  config.shell.git.package
  config.shell.ssh.package
  ...
] else [
  pkgs.git
  pkgs.openssh
  ...
];

PART 3: DEVSHELLS ARCHITECTURE

Architecture Overview

The Konductor devshells system is a sophisticated modular, layered composition pattern built on pkgs.mkShell from Nixpkgs. It follows a clear architectural principle:

Single Source of Truth: All package composition is centralized in src/packages/ directory, while src/devshells/ only orchestrates how shells inherit and compose these packages.

Shell Hierarchy and Composition Pattern

The system uses a base inheritance model where all shells inherit from baseShell and override its attributes:

baseShell (src/devshells/base.nix)
    ↓
    ├─→ python.nix (baseShell + pythonPackages)
    ├─→ go.nix (baseShell + goPackages)
    ├─→ node.nix (baseShell + nodejsPackages)
    ├─→ rust.nix (baseShell + rustPackages)
    ├─→ dev.nix (baseShell + IDE tools: neovim, tmux, forgejo-cli)
    ├─→ full.nix (baseShell + ALL languages + ALL IDE tools)
    ├─→ konductor.nix (full + container/VM build tools + Forgejo runner)
    └─→ ci.nix (full + container/VM build tools + Forgejo runner/CLI)

Entry Point: src/devshells/default.nix

This file aggregates all shells and imports them with necessary context:

  • Imports config from ../config/ (wrapped linters/formatters with hermetic config)
  • Imports packages from ../packages/ (single source of truth for all packages)
  • Imports baseShell from ./base.nix (shared foundation)
  • Exports all 9 named shells: default, python, go, node, rust, dev, full, konductor, ci
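
A sketch of that aggregation; the argument plumbing is inferred from the description above, and the names are illustrative rather than verbatim:

{ pkgs, ... }:
let
  config    = import ../config { inherit pkgs; };
  packages  = import ../packages { inherit pkgs config; };
  baseShell = import ./base.nix { inherit pkgs packages config; };
in {
  default = baseShell;
  python  = import ./python.nix { inherit baseShell packages; };
  # go, node, rust, dev, full, konductor, ci follow the same shape
}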

Detailed Shell Specifications

1. BASE SHELL (base.nix)

Purpose: The unopinionated foundation - used by all other shells.

Packages Included:

  • packages.default (from packages/default.nix)
    • Core Unix utilities
    • Network tools
    • System integration tools
    • Modern CLI tools
    • Universal linters (language-agnostic)
    • Universal formatters (language-agnostic)
    • AI tools

Environment Variables:

  • Uses centralized env from src/lib/env.nix:
    • EDITOR = "nvim" / VISUAL = "nvim"
    • PAGER = "bat"
    • LANG = "C.UTF-8" / LC_ALL = "C.UTF-8"
    • TERM = "xterm-256color"
    • SSL_CERT_FILE = "/etc/ssl/certs/ca-bundle.crt"
    • KONDUCTOR = "true" (marker variable)
    • History settings: HISTSIZE=10000, HISTFILESIZE=20000, HISTCONTROL=ignoreboth:erasedups

Shell Hooks:

  • Sources hermetic .bashrc from ../config/shell/.bashrc (aliases, shell options, prompt)
  • Displays welcome banner showing available shells and version information
  • Sets KONDUCTOR_SHELL="default" and name="default"
  • Banner can be skipped by derived shells using KONDUCTOR_SKIP_BANNER environment variable
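
Condensing the above, base.nix plausibly reduces to an mkShell skeleton like this (a sketch under the stated description, not the verbatim file):

{ pkgs, packages, env, ... }:
pkgs.mkShell {
  name = "default";
  buildInputs = packages.default;
  inherit env; # EDITOR, PAGER, LANG, KONDUCTOR=true, history settings
  shellHook = ''
    source ${../config/shell/.bashrc}
    export KONDUCTOR_SHELL="default"
    if [ -z "''${KONDUCTOR_SKIP_BANNER:-}" ]; then
      echo "konductor shells: default python go node rust dev full konductor ci"
    fi
  '';
}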

2. LANGUAGE-SPECIFIC SHELLS

All follow identical pattern: baseShell.overrideAttrs (old: { ... })

Python (python.nix):

  • Packages: baseShell + packages.pythonPackages
  • Environment: UV_SYSTEM_PYTHON = "1", PYTHONDONTWRITEBYTECODE = "1"
  • Shell Hooks: Auto-activates .venv if present

Go (go.nix):

  • Packages: baseShell + packages.goPackages
  • Environment: GO111MODULE = "on", CGO_ENABLED = "1"
  • Shell Hooks: Creates GOPATH/GOBIN directories, adds to PATH

Node.js (node.nix):

  • Packages: baseShell + packages.nodejsPackages
  • Environment: NODE_ENV = "development"
  • Shell Hooks: Configures PNPM_HOME, adds to PATH

Rust (rust.nix):

  • Packages: baseShell + packages.rustPackages
  • Environment: RUST_BACKTRACE = "1"
  • Shell Hooks: Configures CARGO_HOME, adds bin to PATH

3. DEV SHELL (IDE-focused)

Purpose: Human workflow with interactive IDE tools (NO languages).

Packages:

  • baseShell packages
  • programs.neovim.packages (Neovim editor)
  • programs.tmux.packages (Terminal multiplexer)
  • programs.forgejo.cliPackages (Forgejo CLI for Git operations)
  • packages.idePackages (Other IDE tools)

Shell Hooks:

  • Generates SSH config via ${config.shell.ssh.shellHook}
  • Applies OpenCode Catppuccin Frappe theme
  • Initializes Neovim and Tmux

4. FULL SHELL (Polyglot + IDE)

Purpose: Complete development environment - all 4 languages + all IDE tools.

Packages:

  • baseShell packages
  • IDE: programs.neovim.packages, programs.tmux.packages, packages.idePackages
  • Languages: packages.pythonPackages, packages.goPackages, packages.nodejsPackages, packages.rustPackages

Environment Variables:

  • LD_LIBRARY_PATH="${pkgs.stdenv.cc.cc.lib}/lib:$LD_LIBRARY_PATH" (native library support for pip packages)
  • All language-specific environment variables
  • SSH and OpenCode config env

5. KONDUCTOR SHELL (Self-hosting)

Purpose: "Meta" shell for developing Konductor itself - full polyglot + container/VM build tools + Forgejo runner.

Packages:

  • All from full shell PLUS:
  • programs.forgejo.runnerPackages (Forgejo Actions runner)
  • konductor.packages (from packages/konductor.nix):
    • Docker, docker-compose, buildkit, skopeo, crane
    • QEMU, libvirt, virt-manager, virt-sparsify, OVMF
    • Container tooling

Environment Variables:

  • All language environment variables
  • Docker: DOCKER_HOST="unix:///var/run/docker.sock", DOCKER_BUILDKIT=1
  • Plus all env from konductor.env pkgs, SSH config, OpenCode

6. CI SHELL (Forgejo Actions)

Purpose: Optimized for Forgejo Actions runners - all languages + container/VM build tools + Forgejo tooling.

Packages:

  • baseShell packages
  • All languages: pythonPackages, goPackages, nodejsPackages, rustPackages
  • Forgejo: programs.forgejo.runnerPackages, programs.forgejo.cliPackages
  • Container/VM: konductor.packages

Environment Variables:

  • All language environment variables
  • Docker configuration
  • CI marker: CI="true"

Note: konductor and ci differ: the konductor shell includes IDE tools; ci does not.

Composition Patterns and Inheritance

Pattern 1: baseShell.overrideAttrs()

All derived shells use this pattern:

baseShell.overrideAttrs (old: {
  name = "shell_name";
  buildInputs = old.buildInputs ++ newPackages;
  shellHook = old.shellHook + ''
    # Additional shell configuration
  '';
  env = old.env // { additionalVars };
})

This ensures:

  • Base packages are always present
  • Shell hooks are concatenated (old hooks run first)
  • Environment variables are merged (new ones override old)

Pattern 2: Package Composition Layers

Shells compose packages in layers:

base = core + network + system + cli + linters + formatters + ai

language shells:
  python = base + pythonPackages
  go = base + goPackages
  node = base + nodejsPackages
  rust = base + rustPackages

dev = base + idePackages

full = base + idePackages + pythonPackages + goPackages + nodejsPackages + rustPackages

konductor = full + idePackages + forgejoRunnerPackages + forgejoCliPackages + konductorPackages

ci = base + pythonPackages + goPackages + nodejsPackages + rustPackages
     + forgejoRunnerPackages + forgejoCliPackages + konductorPackages

PART 4: COMPLETE PACKAGE INVENTORY

Core Packages (14)

coreutils, bashInteractive, findutils, gnugrep, gnused, gawk, gnutar, gzip, xz, which, less, ncurses, file, procps

Network Packages (4)

curl, wget, gnupg, cacert

System Packages (6, Linux-specific)

iana-etc, getent, rsync, gosu (Linux), su (Linux), linux-pam (Linux)

CLI Packages (20)

Shell tools (with config wrappers): config.shell.git.package, config.shell.ssh.package, config.shell.starship.package

General CLI: jq, yq-go, sqlite, gh, ripgrep, fd, fzf, unstable.mise, direnv, unstable.runme

Kubernetes: kubectl, kubelogin-oidc, k9s, kubevirt

Infrastructure as Code: pulumi, pulumictl, pulumiPackages.pulumi-python

Linters Packages (12)

Wrapped (via config): config.linters.shellcheck.package, config.linters.ruff.package, config.linters.yamllint.package, config.linters.hadolint.package, config.linters.eslint.package, config.linters.golangci-lint.package, config.linters.mypy.package, config.linters.bandit.package, config.linters.markdownlint.package

Standalone: actionlint, statix, deadnix

Formatters Packages (9)

Wrapped (via config): config.formatters.shfmt.package, config.formatters.prettier.package, config.formatters.taplo.package, config.formatters.biome.package

Standalone: gofumpt, nixpkgs-fmt, stylua, black, isort

AI Packages (3)

unstable.claude-code, codex, github-copilot-cli

IDE Packages (15)

lazygit, htop, btop, bottom, bat, eza, dust, tree, unstable.opencode, tree-sitter, lua5_1.withPackages (luarocks, lua-curl, mimetypes, xml2lua), imagemagick, ghostscript, tectonic, mermaid-cli, python312Packages.pylatexenc

Python Packages (9)

python313.withPackages([pip, ipython, pytest]), poetry, uv, pipx, ruff, mypy, bandit, black, isort

Go Packages (6)

go_1_24, gopls, delve, golangci-lint, gofumpt, gotools

Node.js Packages (6)

nodejs_22, nodePackages.pnpm, nodePackages.yarn, nodePackages.typescript, nodePackages.typescript-language-server, nodePackages.prettier, biome

Rust Packages (7)

rust-bin.stable."1.92.0" (with rust-src, rust-analyzer, clippy, rustfmt), cargo-watch, cargo-edit

Konductor Self-Hosting Packages (13)

docker, docker-compose, docker-buildx, buildkit, skopeo, crane, qemu_kvm, qemu-utils, libvirt, virt-manager, guestfs-tools, OVMF, cdrkit, gnumake, cachix


PART 5: NEOVIM COMPLETE CONFIGURATION

Plugin Stack (52+)

Snacks.nvim Framework

  • bigfile (1.5MB threshold)
  • quickfile
  • notifier (compact, 3s timeout)
  • input
  • indent (animated)
  • scroll (animated, 250ms)
  • statuscolumn
  • words
  • dashboard (62-char width, custom formatters)
  • picker (frecency + cwd bonus)
  • explorer (git/diagnostics)
  • terminal (bottom, 0.3 height)
  • lazygit
  • zen, dim, bufdelete, rename, scratch, toggle, debug, profiler

UI Layer

  • bufferline (slant separators, offsets for explorer/opencode)
  • lualine (catppuccin theme, global status, dynamic lualine_c)
  • which-key (modern preset, 300ms delay, 13 primary groups)
  • mini.icons (replaces devicons)

Editor Layer

  • Treesitter (27 language grammars)
  • Treesitter-textobjects (function/class/parameter selections)
  • Persistence (session management)

Coding Layer

  • 13 LSP servers (nil_ls, lua_ls, pyright, gopls, rust_analyzer, ts_ls, bashls, yamlls, jsonls, dockerls, taplo, marksman)
  • nvim-cmp (Copilot→LSP→luasnip→buffer→path priority)
  • luasnip + friendly-snippets
  • conform-nvim (auto-format on save)
  • autopairs, comment, todo-comments, trouble

Git Layer

  • gitsigns (▎ signs, blame)
  • diffview

AI Layer

  • Claude Code (40% right split, auto-refresh)
  • OpenCode (HTTP port 3232, SSE events, operator motions)
  • Copilot-lua + copilot-cmp

Tools Layer

  • render-markdown.nvim (obsidian preset)
  • markdown-preview (browser)
  • rest.nvim (HTTP client)

Keybinding Scheme (365+ mappings)

Essentials (No prefix)

  • Esc: Clear search
  • jk: Exit insert mode
  • Ctrl+hjkl: Window navigation
  • Ctrl+arrows: Window resize
  • Alt+jk: Move lines
  • Tab: Last window

Quick Actions (<leader> single key)

  • <space>: Find files
  • /: Grep
  • e/E: Explorer
  • .: Scratch
  • ,: Switch buffer
  • :: Command history
  • q: Quit

Workflow Groups

  • <leader>v*: Vibe/AI (Claude, Copilot, OpenCode quick)
  • <leader>o*: OpenCode (prompts, session, agent)
  • <leader>l*: LSP
  • <leader>f*: Find
  • <leader>s*: Search
  • <leader>b*: Buffer
  • <leader>g*: Git
  • <leader>t*: Terminal
  • <leader>w*: Window
  • <leader>x*: Diagnostics
  • <leader>m*: Markdown
  • <leader>r*: REST
  • <leader>q*: Session/Quit
  • <leader>u*: UI toggles

Autocommands (31 groups)

  • Visual feedback (yank highlight)
  • Cursor position restore
  • Window auto-resize
  • Parent directory creation
  • Special buffer close with q
  • Filetype-specific settings (Go tabs, Markdown wrap/spell, Python 4-space)
  • Terminal settings (no numbers, fixed width)
  • OpenCode integration (buffer reload on file.edited)
  • LSP reference highlighting
  • Large file handling (>1MB)
  • Dashboard/Explorer auto-open

Options

  • Leader: Space, Localleader: Backslash
  • Line numbers: absolute + relative
  • Indent: 2 spaces, smartindent
  • Search: ignorecase + smartcase
  • termguicolors, cursorline, scrolloff 8
  • splitbelow, splitright
  • updatetime 200ms, timeoutlen 300ms
  • No swap/backup, undofile enabled (10000 levels)
  • Clipboard: unnamedplus
  • Folding: treesitter expr

PART 6: OCI AND QCOW2 IMAGE BUILDING

OCI Container (nix2container)

System Files & Users

  • 32 nixbld users (uid 30001-30032) for sandboxed builds
  • kc2 (uid 1001): Unprivileged default user
  • kc2admin (uid 1002): Admin with wheel group
  • runner (uid 1003): CI/CD runner

PAM Configuration

  • Passwordless sudo for wheel group
  • pam_rootok.so + pam_permit.so

Root Filesystem Assembly

  • packages.default + programs.neovim + programs.tmux
  • System files (passwd, group, shadow)
  • PAM + Shell configuration
  • Home directories with proper ownership
  • CA certificates + Nix + cachix
  • Flake registry pre-configured

OCI Image Config

  • Default user: 1001:1001 (kc2)
  • WorkingDir: /workspace
  • Entrypoint: /bin/bash -l
  • maxLayers: 256

Dockerfile Refinement

Runtime Fixes

  • Writable /tmp
  • Home directory ownership
  • Tool validation (30+ checks)

Image Variants

| Tag | Contents |
| --- | --- |
| latest | Base + CLI tools |
| dev | latest + IDE |
| nix2container | Pure Nix base |
| amd64/arm64 | Platform-specific |

QCOW2 VM (nixos-generators)

System Configuration

  • Format: qcow-efi
  • Disk: 20GB
  • State version: 25.11
  • Kernel: linuxPackages_latest

Users

  • kc2 (1001), kc2admin (1002), runner (1003)
  • Passwordless sudo for wheel
  • Runner-specific sudo for docker/nix commands

Pre-installed Packages

  • All 4 language stacks
  • Full IDE (Neovim, Tmux)
  • Container/VM tools
  • Forgejo runner + CLI

Systemd Services

  • workspace-mount (9p virtfs)
  • konductor-proxy-setup
  • konductor-ca-setup
  • forgejo-runner

Boot Optimizations

  • Kernel params: elevator=none, scsi_mod.use_blk_mq=1
  • Sysctl: vm.dirty_ratio=40, vm.swappiness=10
  • Filesystem: noatime, nodiratime, discard, commit=60
  • Virtio drivers for all I/O

PART 7: KUBEVIRT DEPLOYMENTS

Base Configuration

VirtualMachine:
  runStrategy: Always
  cpu: host-passthrough, 2 cores
  memory: 4Gi
  network: masquerade (pod)
  storage: containerDisk
  ssh: QEMU Guest Agent injection

Overlays

| Overlay | Network | Storage | Use Case |
| --- | --- | --- | --- |
| base | Pod NAT | containerDisk 5Gi | Quick testing |
| advanced | macvtap + OVS | PVC 64Gi Ceph | Persistent dev |
| forgejo-runners | OVS + pod | PVC 64Gi Ceph | CI/CD |
| proxy | Pod NAT | containerDisk | Corporate firewall |

Network Attachment Definitions

macvtap NAD (L2 Direct)

  • CNI 0.3.1, MTU 1500
  • Resource: macvtap.network.kubevirt.io/enp3s0
  • Use: DHCP from external network

OVS Bridge NAD

  • CNI 0.4.0, MTU 1500
  • Bridge: br0
  • Use: Nested VMs, internal switching

Forgejo Runner Automation (deploy.sh)

Capabilities

  1. Namespace creation (PSA privileged)
  2. Shared secret generation/storage
  3. Runner registration on Forgejo
  4. Cluster CA extraction
  5. Forgejo URL auto-discovery
  6. SSH key management
  7. Cloud-init userdata creation

Commands

./deploy.sh              # Full deployment
./deploy.sh --teardown   # Clean removal
./deploy.sh --get-ssh-key # Extract SSH key

PART 8: CONFIGURATION SYSTEM

Shell Configuration

Bash (.bashrc)

  • Modern CLI aliases (eza, bat, rg, fd, btm, dust)
  • Starship prompt initialization
  • Direnv hook
  • User override sourcing (~/.bashrc)

Git

  • GIT_CONFIG_SYSTEM forcing
  • defaultBranch: main
  • pull.rebase: true
  • safe.directory: *

SSH

  • Dynamic generation at shell entry
  • Generated to /tmp/konductor-ssh/config
  • Includes user's ~/.ssh/config
  • Ephemeral key generation if none exists

Starship (Catppuccin Frappe)

  • Polyglot format with kubernetes, git, language detection
  • 1s scan/command timeout
  • Custom color palette

Formatters

| Formatter | Config | Key Settings |
| --- | --- | --- |
| prettier | .prettierrc.yaml | printWidth=100, singleQuote |
| shfmt | CLI flags | -i 4 -ci -sr -kp -bn |
| taplo | taplo.toml | array_trailing_comma |
| biome | biome.json | lineWidth=120, doubleQuote |

Linters

| Linter | Config | Key Settings |
| --- | --- | --- |
| shellcheck | .shellcheckrc | enable=all, shell=bash |
| ruff | ruff.toml | line-length=100 |
| yamllint | .yamllint.yaml | max=120 |
| hadolint | .hadolint.yaml | DL3008/DL3013 ignored |
| eslint | eslint.config.js | Flat config, quality-only |
| golangci-lint | .golangci.yml | gofmt, staticcheck |
| mypy | mypy.ini | Strict mode, 3.13 |
| bandit | .bandit | B101/B601/B602 skipped |
| markdownlint | .markdownlint-cli2.yaml | Prettier-compatible |

Hermetic Wrapping Pattern

All tools wrapped with forced configuration:

pkgs.writeShellApplication {
  name = "tool";
  runtimeInputs = [ pkgs.tool ]; # puts the real binary on PATH
  text = ''exec tool --config ${configFile} "$@"'';
};

PART 9: GIT WORKFLOW

Lefthook Pre-commit (20+ checks)

Parallel Execution:

  • Python: ruff, mypy, bandit
  • JavaScript: eslint, biome
  • Shell: shellcheck
  • Nix: nixpkgs-fmt, statix, deadnix
  • Config: yamllint, taplo
  • Docs: markdownlint, cspell, lychee
  • CI: actionlint, hadolint
  • Go: golangci-lint
  • Web: htmlhint, stylelint
  • Security: detect-secrets

Commit Message:

  • commitlint (conventional commits)
  • Semantic-release compatible

PART 10: PROGRAMS INTEGRATION

Tmux

  • Theme: Catppuccin Frappe, mauve accents, slanted separators
  • Prefix: Ctrl+a
  • Escape time: 0ms
  • History: 50,000 lines
  • Plugins: sensible, catppuccin, vim-tmux-navigator, vim-tmux-focus-events, yank, extrakto, tmux-fzf
  • Nested tmux: F12 toggle

Forgejo

  • Server: forgejo (v13.x)
  • Runner: forgejo-runner (v11.x, act-based)
  • CLI: forgejo-cli (v0.3.x)
  • Config path: /home/runner/.config/forgejo-runner/config.yaml

Overlays

Version Pinning (versions.nix)

  • konductor.python: python313
  • konductor.go: go_1_24
  • konductor.nodejs: nodejs_22
  • konductor.rustc: rust-bin.stable.1.92.0

Vim Plugin Fixes (vim-plugins.nix)

  • lualine-nvim: dontCheck (tests fail in sandbox)

Unstable Channel

  • Dynamic unstable package access
  • Allows unfree (claude-code)