🚀 TelemetryX Go Services Style Guide

⏱️ Read time: ~25 minutes | Grab a cup of tea or coffee and settle in for some coding wisdom!

This guide defines the essential coding standards, architectural patterns, and engineering practices for all Go services at TelemetryX. By following these standards, you'll build services that are maintainable, scalable, and consistent with our ecosystem.

Why this matters: Consistent code is easier to understand, review, test, and maintain. These standards have evolved from our team's experience and represent our collective best practices. Whether you're building a new service or enhancing an existing one, this guide will help you create high-quality code that aligns with our engineering culture.

📚 Table of Contents

  1. 📂 Project Structure
  2. 🏗️ Core Architecture
  3. 🧩 Code Organization
  4. 📦 Import Ordering
  5. 🐞 Error Handling
  6. 📝 Logging
  7. ⚙️ Configuration Management
  8. 🛣️ HTTP Routing and Middleware
  9. 🔄 Service-to-Service Communication
  10. 📋 Models and Formatters
  11. 📖 Documentation
  12. 🧪 Testing
  13. 🏷️ Naming Conventions
  14. 🔌 Dependencies
  15. 🧰 Go-Shared Library
  16. ✨ Summary of Key Principles

📂 Project Structure

All TelemetryX services follow a consistent project layout that promotes separation of concerns and logical organization. This structure makes navigation intuitive for developers and ensures that components are organized in a predictable way:

service-name/
├── cmd/                    # Command-line applications
│   ├── api/                # API service entry point
│   │   └── main.go
│   ├── scheduler/          # Scheduler service entry point (separate process)
│   │   └── main.go
│   └── other-commands/     # Other executables if needed
│       └── main.go
├── core/                   # Core business logic
│   ├── core.go             # Core service struct for startup/shutdown
│   ├── database/           # Database connection and operations
│   │   └── database.go
│   ├── logger/             # Logger setup
│   │   └── logger.go
│   ├── config/             # Configuration management
│   │   └── config.go
│   └── service/            # Service implementation
│       └── service.go
├── models/                 # Data models/structures
│   └── entity.go
├── formatters/             # Response formatters
│   └── entity.go
├── routes/                 # HTTP route handlers
│   ├── entity-a/
│   │   ├── entity-a.go     # Route group setup
│   │   ├── create.go       # Handler for specific route
│   │   └── get-by-id.go
│   └── meta/               # Health checks, service info
│       └── meta.go
├── scheduler/              # Scheduler-specific logic (if applicable)
│   ├── scheduler.go        # Scheduler implementation
│   └── tasks/              # Scheduled tasks
│       └── task.go
├── go.mod
├── go.sum
└── README.md

Configuration Approach: External config files (like service-name.config.toml) are optional. Every service must provide sensible defaults that enable local development without manual configuration. This "zero-config" approach streamlines onboarding and reduces setup friction.

πŸ—οΈ Core Architecture

The core package is the heart of every service, containing essential business logic and infrastructure components. By centralizing these elements, we achieve:

  1. Reusability across different service entry points (API, scheduler, etc.)
  2. Separation of infrastructure concerns from business logic
  3. Testability through clear component boundaries and dependencies

Design your core package with these principles in mind to create maintainable, modular services.

Core Structure

The core structure contains the main service logic and manages the lifecycle of the service:

// core/core.go
package core

import (
    "context"
    
    "github.com/RobertWHurst/blackbox"
    
    "github.com/telemetrytv/service-name/core/config"
    "github.com/telemetrytv/service-name/core/database"
    "github.com/telemetrytv/service-name/core/service"
)

// Core represents the instance of this service. Creating one and starting
// it will create and start a new instance of the service using the given
// configuration and logger.
type Core struct {
    Logger   *blackbox.Logger
    Config   *config.Config
    Database *database.Database
    Service  *service.Service
    // Add other domain-specific components as needed
}

// New creates a new instance of this service using the given configuration
// and logger.
func New(cfg *config.Config, lgr *blackbox.Logger) *Core {
    lgr = lgr.WithCtx(blackbox.Ctx{
        "context": "core",
    })

    db := database.New(cfg, lgr)
    srv := service.New(cfg, lgr, db)
    
    return &Core{
        Config:   cfg,
        Logger:   lgr,
        Database: db,
        Service:  srv,
    }
}

// Start initializes and starts all components
func (c *Core) Start(ctx context.Context) error {
    c.Logger.Info("Starting core")
    
    if err := c.Database.Connect(ctx); err != nil {
        return err
    }
    
    if err := c.Service.Listen(ctx); err != nil {
        return err
    }
    
    c.Logger.Verbose("Core started")
    return nil
}

// Stop gracefully shuts down all components
func (c *Core) Stop(ctx context.Context) error {
    c.Logger.Info("Gracefully stopping core")
    
    if err := c.Service.Close(ctx); err != nil {
        return err
    }
    
    if err := c.Database.Disconnect(ctx); err != nil {
        return err
    }
    
    c.Logger.Verbose("Core stopped")
    return nil
}

Database Management

The database package handles database connections and indexing:

// core/database/database.go
package database

import (
    "context"
    
    "github.com/RobertWHurst/blackbox"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    
    "github.com/telemetrytv/service-name/core/config"
    "github.com/telemetrytv/service-name/models"
)

const MongoConnApplicationName = "service-name"

// Database contains database related logic for connecting and
// interacting with the database.
type Database struct {
    Config      *config.Config
    Logger      *blackbox.Logger
    MongoClient *mongo.Client
}

// New creates a new instance of the database using the given configuration
// and logger.
func New(cfg *config.Config, lgr *blackbox.Logger) *Database {
    lgr = lgr.WithCtx(blackbox.Ctx{
        "context": "database",
    })
    lgr.Debug("Creating database")

    return &Database{
        Config:      cfg,
        Logger:      lgr,
        MongoClient: nil,
    }
}

// Connect connects to the database.
func (d *Database) Connect(ctx context.Context) error {
    d.Logger.Info("Connecting to database")

    clientOpts := options.Client().
        SetAppName(MongoConnApplicationName).
        ApplyURI(d.Config.Database.MongoURI)
    client, err := mongo.Connect(ctx, clientOpts)
    if err != nil {
        return err
    }
    d.MongoClient = client
    d.ensureIndexes(ctx)

    d.Logger.Verbose("Connected to database")

    return nil
}

// Disconnect disconnects from the database.
func (d *Database) Disconnect(ctx context.Context) error {
    d.Logger.Info("Disconnecting from database")

    if err := d.MongoClient.Disconnect(ctx); err != nil {
        return err
    }

    d.Logger.Verbose("Disconnected from database")

    return nil
}

// Collection returns a collection by name
func (d *Database) Collection(name string) *mongo.Collection {
    return d.MongoClient.Database(d.Config.Database.Name).Collection(name)
}

func (d *Database) ensureIndexes(ctx context.Context) {
    d.Logger.Debug("Ensuring indexes")
    if err := d.ensureColIndexes(ctx, models.ApplicationsCollection, models.ApplicationIndexes); err != nil {
        d.Logger.Errorf("Failed to ensure indexes for applications: %v", err)
    }
    if err := d.ensureColIndexes(ctx, models.ArchivesCollection, models.ArchiveIndexes); err != nil {
        d.Logger.Errorf("Failed to ensure indexes for archives: %v", err)
    }
    // Add other collection indexes as needed
    d.Logger.Verbose("Indexes ensured")
}

func (d *Database) ensureColIndexes(ctx context.Context, collectionName string, indexes []mongo.IndexModel) error {
    if len(indexes) == 0 {
        return nil
    }
    _, err := d.Collection(collectionName).Indexes().CreateMany(ctx, indexes)
    return err
}

Indexes should be defined alongside models, as shown in this example:

// models/application.go
package models

import (
    "time"
    
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

const ApplicationsCollection = "applications"

// Application represents an application in the system
type Application struct {
    ID          primitive.ObjectID `bson:"_id"`
    Name        string             `bson:"name"`
    Description string             `bson:"description"`
    CreatedAt   time.Time          `bson:"createdAt"`
    UpdatedAt   time.Time          `bson:"updatedAt,omitempty"`
}

// ApplicationIndexes defines database indexes for the applications collection
var ApplicationIndexes = []mongo.IndexModel{
    {
        Keys: bson.D{
            {Key: "name", Value: 1},
        },
        Options: options.Index().SetUnique(true),
    },
    {
        Keys: bson.D{
            {Key: "createdAt", Value: -1},
        },
    },
}

Service Implementation

The service package implements both HTTP debugging endpoints and Zephyr service functionality:

// core/service/service.go
package service

import (
    "context"
    "fmt"
    "net"
    "net/http"
    
    "github.com/RobertWHurst/blackbox"
    "github.com/RobertWHurst/navaros"
    "github.com/RobertWHurst/navaros/middleware/json"
    "github.com/RobertWHurst/navaros/middleware/set"
    "github.com/RobertWHurst/navaros/middleware/set-value"
    "github.com/nats-io/nats.go"
    "github.com/telemetrytv/Go-Shared/middleware/authentication"
    "github.com/telemetrytv/Go-Shared/middleware/exception"
    "github.com/telemetrytv/Go-Shared/middleware/logger"
    "github.com/telemetrytv/Go-Shared/middleware/pagination"
    "github.com/telemetrytv/Go-Shared/middleware/request-id"
    "github.com/telemetrytv/zephyr"
    natstransport "github.com/telemetrytv/zephyr/nats-transport"
    
    "github.com/telemetrytv/service-name/core/config"
    "github.com/telemetrytv/service-name/core/database"
    "github.com/telemetrytv/service-name/routes/entities"
    "github.com/telemetrytv/service-name/routes/meta"
)

const serviceName = "service-name"

// Service implements both HTTP and Zephyr service capabilities
type Service struct {
    Config          *config.Config
    Logger          *blackbox.Logger
    Router          *navaros.Router
    ZephyrService   *zephyr.Service
    HttpServer      *http.Server
    Database        *database.Database
    ZephyrClient    *zephyr.Client
}

// New creates a new Service instance
func New(config *config.Config, logger *blackbox.Logger, database *database.Database) *Service {
    logger = logger.WithCtx(blackbox.Ctx{
        "context": "services",
    })
    logger.Debug("Creating services")

    s := &Service{
        Config:   config,
        Logger:   logger,
        Database: database,
        Router:   navaros.NewRouter(),
    }

    // Configure debug HTTP server if port is specified
    if config.Service.DebugServerPort != 0 {
        s.HttpServer = &http.Server{
            Addr:    fmt.Sprintf(":%d", config.Service.DebugServerPort),
            Handler: s.Router,
        }
    }

    // Set up middleware and routes
    s.bindMiddleware()
    s.bindRouters()

    return s
}

// Listen initializes connections and starts listening for requests
func (s *Service) Listen(ctx context.Context) error {
    s.Logger.Info("Binding service")

    // Start HTTP debug server if configured
    if s.HttpServer != nil {
        listeningChan := make(chan struct{})

        go func() {
            listener, err := net.Listen("tcp", s.HttpServer.Addr)
            if err != nil {
                s.Logger.Errorf("Failed to start debug server: %w", err)
                return
            }
            s.Logger.Warnf("Debug server listening on %s", listener.Addr())
            close(listeningChan)
            if err := s.HttpServer.Serve(listener); err != nil {
                s.Logger.Errorf("Failed to start debug server: %w", err)
            }
        }()

        <-listeningChan
        s.Logger.Debug("Debug server started")
    }

    // Set up NATS and Zephyr service if NATS URI is configured
    if s.Config.Service.NatsURI != "" {
        s.Logger.Debug("Creating NATS connection")

        natsConn, err := nats.Connect(s.Config.Service.NatsURI)
        if err != nil {
            return fmt.Errorf("failed to connect to NATS: %w", err)
        }

        s.Logger.Debug("Creating Zephyr transport")
        zephyrTrans := natstransport.New(natsConn)
        
        s.Logger.Debug("Creating Zephyr client")
        s.ZephyrClient = zephyr.NewClient(zephyrTrans)

        s.Logger.Debug("Creating and starting Zephyr service")
        s.ZephyrService = zephyr.NewService(serviceName, zephyrTrans, s.Router)
        if err = s.ZephyrService.Start(); err != nil {
            return fmt.Errorf("failed to start Zephyr service: %w", err)
        }
        s.Logger.Debug("Zephyr service started")
    }

    s.Logger.Verbose("service bound")
    return nil
}

// Close gracefully shuts down all service components
func (s *Service) Close(ctx context.Context) error {
    s.Logger.Info("Unbinding service")

    // Stop Zephyr service if it was started
    if s.ZephyrService != nil {
        s.Logger.Debug("Stopping Zephyr service")
        s.ZephyrService.Stop()
        s.Logger.Debug("Zephyr service stopped")
    }

    // Stop HTTP server if it was started
    if s.HttpServer != nil {
        s.Logger.Debug("Stopping debug server")
        if err := s.HttpServer.Shutdown(ctx); err != nil {
            return fmt.Errorf("failed to stop debug server: %w", err)
        }
        s.Logger.Debug("Debug server stopped")
    }

    s.Logger.Verbose("Service unbound")
    return nil
}

// bindMiddleware adds middleware to the router
func (s *Service) bindMiddleware() {
    // Add middleware to router
    s.Router.Use(exception.Middleware(exception.MiddlewareOpts{
        PrintErrors: true,
    }))
    
    s.Router.Use(set.Middleware("database", s.Database))
    s.Router.Use(set.Middleware("config", s.Config))
    s.Router.Use(setvalue.Middleware("serviceClient", &s.ZephyrServiceClient))
    
    s.Router.Use(requestid.Middleware())
    s.Router.Use(logger.Middleware(s.Logger))
    s.Router.Use(authentication.Middleware())
    s.Router.Use(pagination.Middleware())
    s.Router.Use(json.Middleware(nil))
}

// bindRouters attaches all route handlers to the main router
func (s *Service) bindRouters() {
    // Mount all routers
    s.Router.Use(meta.Router)
    s.Router.Use(entities.Router)
    // Add other route packages here
}

Logger Setup

The logger package initializes and configures the blackbox logger:

// core/logger/logger.go
package logger

import (
    "os"

    "github.com/RobertWHurst/blackbox"
    
    "github.com/telemetrytv/service-name/core/config"
)

// New creates a new blackbox logger
func New(cfg *config.Config) *blackbox.Logger {
    lgr := blackbox.NewWithCtx(blackbox.Ctx{"service": "Service Name"})

    if cfg.Logger.UsePrettyFormat {
        lgr.AddTarget(blackbox.NewPrettyTarget(os.Stdout, os.Stderr).UseColor(cfg.Logger.EnableColor))
    } else {
        lgr.AddTarget(blackbox.NewJSONTarget(os.Stdout, os.Stderr))
    }

    lgr.SetLevel(blackbox.LevelFromString(cfg.Logger.Level))

    return lgr
}

Configuration Management

The config package handles loading and parsing configuration files:

// core/config/config.go
package config

import (
    "github.com/RobertWHurst/orale"
)

type Config struct {
    Logger   LoggerConfig   `toml:"logger"`
    Database DatabaseConfig `toml:"database"`
    Service  ServiceConfig  `toml:"service"`
}

type LoggerConfig struct {
    UsePrettyFormat bool   `toml:"use_pretty_format" env:"LOG_PRETTY" flag:"log-pretty" desc:"Use the human-readable log format"`
    EnableColor     bool   `toml:"enable_color" env:"LOG_COLOR" flag:"log-color" desc:"Enable colored log output"`
    Level           string `toml:"level" env:"LOG_LEVEL" flag:"log-level" desc:"Minimum log level"`
}

type DatabaseConfig struct {
    URL      string `toml:"url" env:"DB_URL" flag:"db-url" desc:"Database connection URL"`
    MaxConns int    `toml:"max_conns" env:"DB_MAX_CONNS" flag:"db-max-conns" desc:"Maximum database connections"`
}

type ServiceConfig struct {
    Port int    `toml:"port" env:"SERVICE_PORT" flag:"port" desc:"Service port"`
    Host string `toml:"host" env:"SERVICE_HOST" flag:"host" desc:"Service host"`
}

// Load loads the service configuration
func Load(serviceName string) (*Config, error) {
    // Load configuration using the application name
    loader, err := orale.Load(serviceName)
    if err != nil {
        return nil, err
    }
    
    // Initialize config with defaults
    config := &Config{
        Logger: LoggerConfig{
            UsePrettyFormat: true,
            Level:           "info",
        },
        Database: DatabaseConfig{
            MaxConns: 10,
        },
        Service: ServiceConfig{
            Port: 8080,
            Host: "0.0.0.0",
        },
    }
    
    // Get all configuration values into the config struct
    if err := loader.GetAll(config); err != nil {
        return nil, err
    }
    
    return config, nil
}

🧩 Code Organization

Well-organized code forms the foundation of maintainable, understandable systems. Our organization principles help developers navigate the codebase efficiently:

  • Group related functionality into coherent packages
  • Implement clean interfaces between components
  • Avoid circular dependencies between packages
  • Keep main.go files small β€” they should only start and stop the core
  • Use kebab-case for file names (e.g., git-to-tar-stream.go)
  • Place test files with _test.go suffix next to the file they test
  • Implement schedulers as separate processes with dedicated entry points

📦 Import Ordering

Clean import organization enhances code readability and reveals dependencies at a glance. Our standard is to group imports in this logical order, with a blank line separating each group:

  1. Standard library imports
  2. Third-party library imports
  3. Local application imports

This organization makes it immediately clear which imports are from the standard library, which are external dependencies, and which are internal to the application.

import (
    "context"
    "fmt"
    "net/http"
    
    "github.com/RobertWHurst/blackbox"
    "github.com/RobertWHurst/navaros"
    "github.com/RobertWHurst/orale"
    
    "github.com/telemetrytv/service-name/core"
    "github.com/telemetrytv/service-name/models"
)

🐞 Error Handling

Proper error handling is foundational to building robust, maintainable services. At TelemetryX, how we handle errors directly impacts debugging efficiency and service reliability.

Error Handling Principles

Our services follow these key principles for consistent and effective error handling:

  • Provide context with errors using fmt.Errorf("operation: %w", err) for clearer debugging
  • Define sentinel errors for expected conditions that callers might check for
  • Check errors immediately after function calls to prevent issue propagation
  • Log errors with relevant context at the handling point for complete diagnostic information
  • Avoid panic except in truly unrecoverable situations that require termination
  • Preserve error chains by wrapping errors when propagating up the call stack

This approach maintains clarity about what went wrong and why, making troubleshooting more efficient.

// ProcessEntity retrieves and processes an entity by ID.
func ProcessEntity(id string) (*Entity, error) {
    entity, err := repository.FindByID(id)
    if err != nil {
        return nil, fmt.Errorf("finding entity %s: %w", id, err)
    }
    
    err = entity.Process()
    if err != nil {
        return nil, fmt.Errorf("processing entity %s: %w", id, err)
    }
    
    return entity, nil
}
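
The sentinel-error principle mentioned above can look like the sketch below. The names here (ErrEntityNotFound, the store parameter, and the handleLookup caller) are illustrative, and the snippet assumes the errors and fmt imports plus the Entity type from the example above; the point is that wrapping with %w keeps the sentinel matchable with errors.Is:

// ErrEntityNotFound marks the expected "no such entity" condition.
var ErrEntityNotFound = errors.New("entity not found")

// findEntity wraps the sentinel so callers can still match it after wrapping.
func findEntity(store map[string]*Entity, id string) (*Entity, error) {
    entity, ok := store[id]
    if !ok {
        return nil, fmt.Errorf("finding entity %s: %w", id, ErrEntityNotFound)
    }
    return entity, nil
}

// handleLookup shows the caller side: errors.Is sees through the wrapping.
func handleLookup(store map[string]*Entity, id string) {
    if _, err := findEntity(store, id); errors.Is(err, ErrEntityNotFound) {
        // Handle the expected not-found case without treating it as a failure.
    }
}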

Consistently applying these patterns creates services that communicate problems clearly and remain maintainable over time.

πŸ“ Logging

TelemetryX services leverage the blackbox library for structured logging. Effective logging is vital for operational monitoring, troubleshooting, and understanding runtime behavior in production environments.

Log Levels

The right log level ensures information is available when needed without creating unnecessary noise. Our services use these logging levels consistently:

  • Info: High-level operational information that provides a clear picture of major activities and state changes. These logs help operators monitor service health and track important workflow progression.

  • Verbose: Supplementary details that complement Info logs, including task completion notifications and contextual information. While not critical, these logs provide valuable operational insights.

  • Debug: Detailed execution flow information that helps developers understand service logic. Debug logs provide more detail than Info logs but remain more focused than Trace logs.

  • Trace: Granular diagnostic information used for pinpointing specific issues during development or complex troubleshooting scenarios.

  • Warn: Indicators of potentially harmful situations that don't represent failures but might require attention to prevent future errors.

  • Error: Operation failures that allow the application to continue running. Always include the error object and relevant context to facilitate troubleshooting.

  • Fatal: Critical error events that prevent normal service operation and require immediate termination.

Logging Patterns

Effective logging creates a consistent narrative throughout the service lifecycle. Each log level should be used appropriately:

Info Level

  • Log service startup and shutdown for operational visibility
  • Log the beginning of important operations such as request handling and batch processing
  • Capture major state changes such as database connections and server initialization
  • Keep messages concise and operationally relevant

Verbose Level

  • Log successful completion of operations started at Info level
  • Provide additional context about operations without cluttering Info logs
  • Indicate successful responses or outcomes of requests
  • Use for confirmation of completed tasks or stages

Debug Level

  • Log parameter values and decision points in the code
  • Show the flow of execution through different components
  • Include details that help understand how the service is working
  • Record intermediate state and progress through complex operations

Error Level

  • Always include the error object with your error message
  • Log at the point where the error is handled, not just where it occurs
  • Provide enough context to understand what operation failed and why
  • Include relevant identifiers to trace the error source (request ID, user ID)

Logging Best Practices

Effective logging practices enhance observability and simplify troubleshooting. Our best practices include:

  • Use structured logging to make logs searchable and filterable
  • Add context to your loggers using WithCtx() to provide relevant metadata
  • Be consistent with log levels across the entire service
  • Include identifiers in log contexts to connect related logs (request ID, entity ID)
  • Log correlation IDs to track requests as they flow through multiple services
  • Never log sensitive data like passwords, tokens, or personal information
  • Make log messages actionable so operators know what to do when issues occur
  • Create contextual loggers for specific components rather than using a global logger
// Creating a contextual logger
logger = baseLogger.WithCtx(blackbox.Ctx{
    "component": "database",
    "operation": "connect",
})

// Adding request context to logs
requestLogger := logger.WithCtx(blackbox.Ctx{
    "request_id": requestID,
    "user_id": userID,
})

The handler below shows these levels working together across a single request: Info when handling begins, Debug for flow and validation details, Verbose for expected outcomes such as "not found", and Errorf for genuine failures.

package applications

import (
    "net/http"
    
    "github.com/RobertWHurst/navaros"
    "github.com/telemetrytv/Go-Shared/middleware/authentication"
    "github.com/telemetrytv/Go-Shared/middleware/logger"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
    
    "github.com/telemetrytv/service-name/core/config"
    "github.com/telemetrytv/service-name/core/database"
    "github.com/telemetrytv/service-name/formatters"
    "github.com/telemetrytv/service-name/models"
)

// GetApplicationByID handles GET /application/:id requests
func GetApplicationByID(ctx *navaros.Context) {
    // Check authentication first
    if !authentication.IsAuthenticated(ctx) {
        ctx.Status = http.StatusUnauthorized
        return
    }

    // Get dependencies from context
    cfg := ctx.Get("config").(*config.Config)
    lgr := logger.Get(ctx)  // Get logger from middleware
    db := ctx.Get("database").(*database.Database)

    // Log the start of handler execution at Info level
    lgr.Info("Getting application by ID")

    // Parse and validate path parameters
    id, err := primitive.ObjectIDFromHex(ctx.Params().Get("id"))
    if err != nil {
        lgr.Debug("Invalid ID format in request")
        ctx.Status = http.StatusBadRequest
        return
    }

    // Fetch the application from database
    application := &models.Application{}
    err = db.Collection(models.ApplicationsCollection).FindOne(ctx, bson.M{
        "_id":       id,
        "accountId": authentication.GetAccountID(ctx),
    }).Decode(application)

    // Handle not found case with appropriate logging
    if err == mongo.ErrNoDocuments {
        lgr.Verbose("Application not found")
        ctx.Status = http.StatusNotFound
        return
    }

    // Handle other errors with detailed error logging
    if err != nil {
        lgr.Errorf("Failed to get application: %w", err)
        ctx.Status = http.StatusInternalServerError
        return
    }

    // Fetch additional related data if needed
    lgr.Debug("Getting latest successful build for application")
    build := &models.Build{}
    err = db.Collection(models.BuildsCollection).FindOne(ctx, bson.M{
        "applicationId": application.ID,
        "state":         models.SuccessBuildState,
    }).Decode(build)
    
    if err == mongo.ErrNoDocuments {
        lgr.Verbose("No successful build found")
        build = nil
    } else if err != nil {
        lgr.Errorf("Failed to get build: %w", err)
        ctx.Status = http.StatusInternalServerError
        return
    }

    // Log successful response and use formatter for response
    lgr.Verbose("Application found, returning to client")
    ctx.Body = formatters.FormatApplication(cfg.Storage.ApplicationContentBaseURL, application, build)
}

Include contextual information with logs using blackbox.Ctx:

// Add context to a logger instance
logger = logger.WithCtx(blackbox.Ctx{
    "context": "core",
})

// Log with context
logger.Info("Starting core")
logger.Errorf("Failed to connect to database: %w", err)

// Log verbosity levels
logger.Debug("Detailed information for debugging")
logger.Verbose("Core started") // Less critical than debug

βš™οΈ Configuration Management

TelemetryX services utilize the orale library to provide flexible, comprehensive configuration management. This approach ensures services can be easily configured in different environments without code changes.

Configuration Sources

Our configuration system loads values from multiple sources following a clear precedence order, where later sources override earlier ones:

  1. Default values defined in code
  2. Configuration files (TOML format)
  3. Environment variables
  4. Command-line flags

This layered approach offers convenience during development while providing the flexibility needed for production deployments across different environments.

Configuration Best Practices

Effective configuration design improves both development experience and operational reliability. Our configuration best practices include:

  • Define clear structures with descriptive field names and proper documentation
  • Provide sensible defaults that enable zero-configuration local development
  • Support environment variables for all important configuration options
  • Validate configuration at startup to fail fast if required values are missing
  • Group related options into logical sections for better organization
  • Document all options in code comments and README files
  • Use strongly-typed configuration rather than string maps or generic structures

Local Development vs Deployment

Our configuration approach emphasizes developer productivity while maintaining deployment flexibility:

  • Zero-configuration local development - Services should run locally without manual setup
  • Smart defaults - Default values should connect to standard localhost services
  • Deployment options - Operators can customize using:
    • Environment-specific configuration files
    • Environment variables
    • Command-line flags

Environment-Specific Configuration Files

Services often need different configuration settings across development, staging, and production environments. Orale simplifies this with built-in support for environment-specific configuration files:

  • Use the command-line flag --config-environment=<env> to load environment-specific config files
  • This loads app-name.<env>.config.toml instead of the default app-name.config.toml
  • Useful when multiple configurations are needed on a single machine
  • Common use cases include:
    • Testing with different configurations
    • Supporting local development against different environments
    • Running services with environment-specific settings

Here is a complete example that starts from in-code defaults and lets orale layer config files, environment variables, and flags on top:

package config

import (
    "github.com/RobertWHurst/orale"
)

type Config struct {
    Logger struct {
        UsePrettyFormat bool   `config:"usePrettyFormat"`
        EnableColor     bool   `config:"disableColor"`
        Level           string `config:"level"`
        EnableSplash    bool   `config:"enableSplash"`
    } `config:"logger"`
    
    Database struct {
        MongoURI string `config:"mongoUri"`
        Name     string `config:"databaseName"`
    } `config:"database"`
    
    Service struct {
        NatsURI         string `config:"natsUri"`
        DebugServerPort int    `config:"debugServerPort"`
    } `config:"service"`
}

// Default configuration values that allow the service to run locally
var defaultConfig = Config{
    Logger: struct {
        UsePrettyFormat bool   `config:"usePrettyFormat"`
        EnableColor     bool   `config:"disableColor"`
        Level           string `config:"level"`
        EnableSplash    bool   `config:"enableSplash"`
    }{
        UsePrettyFormat: true,
        EnableColor:     true,
        Level:           "info",
        EnableSplash:    true,
    },
    Database: struct {
        MongoURI string `config:"mongoUri"`
        Name     string `config:"databaseName"`
    }{
        MongoURI: "mongodb://localhost:27017",
        Name:     "service-name",
    },
    Service: struct {
        NatsURI         string `config:"natsUri"`
        DebugServerPort int    `config:"debugServerPort"`
    }{
        NatsURI:         "nats://localhost:4222",
        DebugServerPort: 8080,
    },
}

func Load() (*Config, error) {
    // Load configuration using the application name
    // This will handle:
    // 1. Default values
    // 2. Optional config files
    // 3. Environment variables
    // 4. Command-line flags
    loader, err := orale.Load("service-name")
    if err != nil {
        return nil, err
    }
    
    // Start with default configuration
    config := defaultConfig
    
    // Override with values from files, env vars, and flags
    if err := loader.GetAll(&config); err != nil {
        return nil, err
    }
    
    return &config, nil
}

πŸ›£οΈ HTTP Routing and Middleware

TelemetryX services use a combination of navaros and zephyr for handling requests. The navaros library provides HTTP routing and middleware capabilities, while zephyr extends these capabilities across our microservices architecture.

In our architecture, requests typically follow this path:

  1. REST Gateway receives the initial HTTP request
  2. Zephyr Gateway within the REST Gateway routes the request
  3. The appropriate service receives the request via its Zephyr Service component
  4. Navaros within the service handles routing to the specific handler

This approach allows our services to seamlessly participate in the distributed request handling ecosystem while maintaining the benefits of a clean, organized routing structure.

Route Organization

Well-organized routes improve code maintainability and create a clear API structure:

  • Group related routes into dedicated packages by resource/domain
  • Use descriptive handler names that indicate the action being performed
  • Organize routes hierarchically by resource relationships
  • Define package-level routers that can be composed into the main service
// routes/applications/applications.go
package applications

import (
    "github.com/RobertWHurst/navaros"
)

// Package-level router that can be imported by other packages
var Router = navaros.NewRouter()

// Initialize routes in init function
func init() {
    Router.PublicPost("/application", CreateApplication)
    Router.PublicGet("/application", QueryApplications)
    Router.PublicGet("/application/:id([0-9a-f]{24})", GetApplicationByID)
    Router.PublicGet("/application/:name", GetApplicationByName)
    Router.PublicPatch("/application/:id", UpdateApplicationByID)
    Router.PublicDelete("/application/:id", DeleteApplicationByID)
}

Handler Implementation

Route handlers should be focused, readable, and follow consistent patterns:

  • Implement as standalone functions that take a navaros.Context parameter
  • Focus on a single responsibility per handler
  • Use consistent patterns for parameter validation, error handling, and responses
  • Extract common functionality into middleware when appropriate
  • Follow a clear naming convention like VerbNoun (e.g., GetApplicationByID)
// routes/applications/get-by-id.go
package applications

import (
    "net/http"
    
    "github.com/RobertWHurst/navaros"
    "github.com/telemetrytv/Go-Shared/middleware/authentication"
    "github.com/telemetrytv/Go-Shared/middleware/logger"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    
    "github.com/telemetrytv/service-name/core/config"
    "github.com/telemetrytv/service-name/core/database"
    "github.com/telemetrytv/service-name/formatters"
    "github.com/telemetrytv/service-name/models"
)

// GetApplicationByID handles GET /application/:id requests
func GetApplicationByID(ctx *navaros.Context) {
    // Check authentication first
    if !authentication.IsAuthenticated(ctx) {
        ctx.Status = http.StatusUnauthorized
        return
    }

    // Get dependencies from context
    cfg := ctx.Get("config").(*config.Config)
    lgr := logger.Get(ctx)
    db := ctx.Get("database").(*database.Database)

    // Log the start of handler execution
    lgr.Info("Getting application by ID")

    // Parse and validate path parameters
    id, err := primitive.ObjectIDFromHex(ctx.Params().Get("id"))
    if err != nil {
        lgr.Debug("Invalid ID format")
        ctx.Status = http.StatusBadRequest
        return
    }

    // Fetch the application from database
    application := &models.Application{}
    err = db.Collection(models.ApplicationsCollection).FindOne(ctx, bson.M{
        "_id":       id,
        "accountId": authentication.GetAccountID(ctx),
    }).Decode(application)

    // Handle not found case with appropriate logging
    if err == mongo.ErrNoDocuments {
        lgr.Verbose("Application not found")
        ctx.Status = http.StatusNotFound
        return
    }

    // Handle other errors with detailed logging
    if err != nil {
        lgr.Errorf("Failed to get application: %w", err)
        ctx.Status = http.StatusInternalServerError
        return
    }

    // Fetch additional related data if needed
    build := &models.Build{}
    err = db.Collection(models.BuildsCollection).FindOne(ctx, bson.M{
        "applicationId": application.ID,
        "state":         models.SuccessBuildState,
    }, options.FindOne().SetSort(bson.M{"finishedAt": -1})).Decode(build)
    
    if err == mongo.ErrNoDocuments {
        lgr.Verbose("No successful build found for application")
        build = nil
    } else if err != nil {
        lgr.Errorf("Failed to get build: %w", err)
        ctx.Status = http.StatusInternalServerError
        return
    }

    // Log successful response and use formatter for response
    lgr.Verbose("Application found, returning to client")
    ctx.Body = formatters.FormatApplication(cfg.Storage.ApplicationContentBaseURL, application, build)
}

Middleware Best Practices

  • Apply middleware at the appropriate level (global, group, or route)
  • Order middleware from most general to most specific
  • Implement middleware that performs a single function
  • Use middleware for cross-cutting concerns (logging, tracing, etc.)
  • Ensure middleware calls ctx.Next() to continue the handler chain
// middleware/logging/middleware.go
package logging

import (
    "time"
    
    "github.com/RobertWHurst/blackbox"
    "github.com/RobertWHurst/navaros"
)

// Middleware creates a logging middleware that logs requests
func Middleware(logger *blackbox.Logger) func(*navaros.Context) {
    return func(ctx *navaros.Context) {
        start := time.Now()
        requestID := ctx.Get("request_id").(string)
        
        reqLogger := logger.WithCtx(blackbox.Ctx{
            "request_id": requestID,
            "method": ctx.Method,
            "path": ctx.Path,
        })
        
        reqLogger.Info("Request started")
        
        // Store logger in context for handlers
        ctx.Set("logger", reqLogger)
        
        // Call the next middleware/handler
        ctx.Next()
        
        // Log completion
        duration := time.Since(start)
        reqLogger.WithCtx(blackbox.Ctx{
            "status": ctx.Status,
            "duration_ms": duration.Milliseconds(),
        }).Info("Request completed")
    }
}

Context Usage

  • Use the context to pass request-scoped data between handlers
  • Access request parameters, headers, and body consistently
  • Set appropriate HTTP status codes and response bodies
  • Handle errors at the appropriate level
  • Use the context's marshalling/unmarshalling capabilities
  • Remember that navaros.Context also implements the Go context.Context interface
  • Use the context for cancellation in long-running handlers when the request is interrupted
// Example context usage
func CreateHandler(ctx *navaros.Context) {
    // Parse request body
    var input CreateInput
    err := ctx.UnmarshalRequestBody(&input)
    if err != nil {
        ctx.Status = http.StatusBadRequest
        ctx.Body = map[string]string{"error": "Invalid request body"}
        return
    }
    
    // Access authenticated user from context
    user := ctx.Get("user").(*models.User)
    
    // Get database from context
    db := ctx.Get("database").(*database.Database)
    logger := logger.Get(ctx)
    
    // Use the context in database operations to support cancellation
    entity := &models.Entity{
        Name: input.Name,
        UserID: user.ID,
        CreatedAt: time.Now(),
    }
    
    // The MongoDB driver accepts the context and will cancel operations
    // if the client disconnects or the request times out
    _, err = db.Collection(models.EntitiesCollection).InsertOne(ctx, entity)
    if err != nil {
        logger.Errorf("Failed to create entity: %w", err)
        ctx.Status = http.StatusInternalServerError
        ctx.Body = map[string]string{"error": "Failed to create entity"}
        return
    }
    
    // Set response
    ctx.Status = http.StatusCreated
    ctx.Body = formatters.FormatEntity(entity)
}

📋 Models and Formatters

A clean separation between data models and their presentation is essential for maintaining flexibility and clarity in our services.

Models Package

Models define the core data structures that represent our domain objects. These structures form the backbone of the application and drive database operations, business logic, and API interactions.

Each model should be placed in its own file within the models package, and database collection names and indexes should be defined alongside the model for clear association.

// models/application.go
package models

import (
    "time"
    
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
)

// Collection name constant
const ApplicationsCollection = "applications"

// Type definitions for enum-like behaviors
type ApplicationSourceKind string

const (
    GitHubApplicationSourceKind   ApplicationSourceKind = "github"
    GitApplicationSourceKind      ApplicationSourceKind = "git"
    UploadedApplicationSourceKind ApplicationSourceKind = "uploaded"
)

// Nested structure definition
type ApplicationSource struct {
    Kind             ApplicationSourceKind `bson:"kind"`
    GitURL           string                `bson:"gitUrl,omitempty"`
    GitRef           string                `bson:"gitRef,omitempty"`
    GitHubAccount    string                `bson:"githubAccount,omitempty"`
    GitHubRepository string                `bson:"githubRepository,omitempty"`
    GitHubRef        string                `bson:"githubRef,omitempty"`
    BaseImage        string                `bson:"baseImage"`
    BuildWorkingPath string                `bson:"buildWorkingPath,omitempty"`
    BuildScript      string                `bson:"buildScript,omitempty"`
    BuildOutputPath  string                `bson:"buildOutputPath,omitempty"`
}

// Main model struct with appropriate tags for serialization
type Application struct {
    ID          primitive.ObjectID `bson:"_id"`
    Title       string             `bson:"title"`
    Description string             `bson:"description"`
    AccountID   primitive.ObjectID `bson:"accountId"`
    Source      ApplicationSource  `bson:"source"`
    IsEnabled   bool               `bson:"isEnabled"`
    CreatedAt   time.Time          `bson:"createdAt"`
    UpdatedAt   time.Time          `bson:"updatedAt,omitempty"`
}

// Database indexes for this model
var ApplicationIndexes = []mongo.IndexModel{
    // Add your indexes here
}

Formatters Package

Formatters create a clean separation between internal data structures and external API representations. This separation allows our models to evolve independently from our API contracts, improving maintainability.

Our formatters handle several key responsibilities:

  • Data sanitization (removing sensitive fields)
  • Type conversions (ObjectIDs to strings, timestamps to formatted strings)
  • Computed fields that don't belong in the base model
  • Representation of relationships between models

Each model should have a corresponding formatter to handle its transformation for API responses:

// formatters/application.go
package formatters

import (
    "time"
    
    "go.mongodb.org/mongo-driver/bson/primitive"
    
    "github.com/telemetrytv/service-name/models"
)

// Response structure with appropriate JSON tags
type FormattedApplication struct {
    ID                  string                       `json:"id"`
    Title               string                       `json:"title"`
    Description         string                       `json:"description,omitempty"`
    Kind                models.ApplicationSourceKind `json:"kind"`
    GitURL              string                       `json:"gitUrl,omitempty"`
    GitRef              string                       `json:"gitRef,omitempty"`
    GitHubAccount       string                       `json:"githubAccount,omitempty"`
    GitHubRepository    string                       `json:"githubRepository,omitempty"`
    GitHubRef           string                       `json:"githubRef,omitempty"`
    BaseImage           string                       `json:"baseImage"`
    BuildWorkingPath    string                       `json:"buildWorkingPath,omitempty"`
    BuildScript         string                       `json:"buildScript,omitempty"`
    BuildOutputPath     string                       `json:"buildOutputPath,omitempty"`
    
    // Computed fields or fields from related models
    LastBuildID string    `json:"lastBuildId,omitempty"`
    CreatedAt   time.Time `json:"createdAt"`
    UpdatedAt   time.Time `json:"updatedAt,omitempty"`
}

// FormatApplication transforms an Application model into a response format
// Additional parameters may be needed for computing fields from related models
func FormatApplication(baseUrl string, application *models.Application, lastBuild *models.Build) *FormattedApplication {
    if application == nil {
        return nil
    }
    
    formatted := &FormattedApplication{
        ID:               application.ID.Hex(),
        Title:            application.Title,
        Description:      application.Description,
        Kind:             application.Source.Kind,
        GitURL:           application.Source.GitURL,
        GitRef:           application.Source.GitRef,
        GitHubAccount:    application.Source.GitHubAccount,
        GitHubRepository: application.Source.GitHubRepository,
        GitHubRef:        application.Source.GitHubRef,
        BaseImage:        application.Source.BaseImage,
        BuildWorkingPath: application.Source.BuildWorkingPath,
        BuildScript:      application.Source.BuildScript,
        BuildOutputPath:  application.Source.BuildOutputPath,
        CreatedAt:        application.CreatedAt,
        UpdatedAt:        application.UpdatedAt,
    }
    
    // Add information from related models if available
    if lastBuild != nil {
        formatted.LastBuildID = lastBuild.ID.Hex()
        // Add more fields from the build if needed
    }
    
    return formatted
}

// FormatApplications transforms a slice of Application models
func FormatApplications(baseUrl string, applications []*models.Application, builds map[primitive.ObjectID]*models.Build) []*FormattedApplication {
    if applications == nil {
        return nil
    }
    
    formatted := make([]*FormattedApplication, len(applications))
    for i, app := range applications {
        var build *models.Build
        if builds != nil {
            build = builds[app.ID]
        }
        formatted[i] = FormatApplication(baseUrl, app, build)
    }
    
    return formatted
}

🔄 Service-to-Service Communication

TelemetryX implements a distributed microservice architecture with several key components working together:

  • Zephyr serves as our primary service communication framework
  • Velaros handles WebSocket communication for real-time features
  • Eurus functions as our service gateway, routing external requests to appropriate services

This architecture enables our services to communicate efficiently while maintaining clear boundaries and responsibilities.

Zephyr Service Framework

At the core of our microservice communication is the zephyr framework, which extends HTTP semantics over transport networks. TelemetryX uses NATS as the transport in all deployed environments, with local transport occasionally used for testing purposes. The framework allows our services to communicate while maintaining familiar request/response patterns:

  • zephyr.NewService creates a service instance that registers with the service discovery system
  • zephyr.NewGateway establishes an entry point that routes external requests to appropriate services
  • zephyr.NewClient provides a client for making requests from one service to another
// Creating a service
import (
    "github.com/RobertWHurst/navaros"
    "github.com/telemetrytv/zephyr"
    "github.com/telemetrytv/zephyr/nats-transport"
)

func main() {
    // Create a router
    router := navaros.NewRouter()
    
    // Register routes
    router.PublicGet("/resource", GetResource)
    router.PublicPost("/resource", CreateResource)
    
    // Create the service
    service := zephyr.NewService(
        "resource-service",
        natstransport.New(natsConn),
        router,
    )
    
    // Start the service
    service.Run()
}

// Making service-to-service requests
import (
    "github.com/telemetrytv/zephyr"
    "github.com/telemetrytv/zephyr/nats-transport"
)

func CallAnotherService() (*Resource, error) {
    // Create a client
    client := zephyr.NewClient(natstransport.New(natsConn))
    
    // Make a request to another service
    resp, err := client.Service("resource-service").Get("/resource/123")
    if err != nil {
        return nil, err
    }
    
    // Handle the response
    resource := &Resource{}
    if err := resp.DecodeJSON(resource); err != nil {
        return nil, err
    }
    
    return resource, nil
}

Scheduler Architecture

For services that require background or periodic tasks, a separate scheduler process must be implemented:

  • All background processing logic must be placed in a scheduler, not in the main service process
  • Schedulers run as a separate executable from the API service
  • They share common functionality with the main service but implement scheduler-specific logic in the scheduler package
  • Scheduler processes should handle graceful shutdown properly
  • Tasks typically follow a theme of related work

Common scheduler use cases include:

  • Building applications or processing data on a schedule
  • Handling background tasks that don't need immediate execution
  • Periodic integrations with external systems
  • Scheduled report generation

Service Communication with Zephyr

TelemetryX uses Zephyr as a microservice framework to handle service-to-service and gateway-to-service communication:

  • Zephyr integrates with Navaros to provide route management, defining which routes are public (accessible via gateway) and which are private (only available to other services)
  • The Zephyr gateway acts as an entry point for external requests, routing them to the appropriate services
  • Services register their routes with the gateway during initialization
  • Services communicate with each other using the Zephyr client, which can access both public and private routes
  • All components share a common transport network (typically NATS)

📖 Documentation

Well-documented code is easier to maintain, understand, and extend. Good documentation serves as both a learning resource for new team members and a reference for experienced developers.

Code Documentation Guidelines

  • Document all exported elements β€” Add comprehensive comments to exported functions, types, and packages
  • Explain complex logic β€” Document non-obvious behavior and edge cases
  • Maintain documentation β€” Keep comments up-to-date when code changes
  • Write clearly β€” Use complete sentences with proper punctuation
  • Be concise β€” Explain purpose and behavior without unnecessary verbosity
  • Focus on why, not just what β€” Explain reasoning behind complex implementations
// Package core provides the central business logic and infrastructure
// components for the TelemetryX Applications Service. It is responsible for
// managing application lifecycles and builds.
package core

// Builder is an interface that defines methods for building applications from
// different source types (Git repositories, GitHub, or uploaded archives).
// Implementations should handle the full build process including source code
// retrieval, container image building, and output generation.
type Builder interface {
    // Build initiates the build process for an application.
    // It returns a build result containing all relevant metadata
    // or an error if the build process fails.
    //
    // The context can be used to cancel the build process.
    Build(ctx context.Context, build *models.Build) (*models.BuildResult, error)
    
    // GetBuildOutput retrieves the build output (logs, artifacts) for a
    // specific build ID. This can be used to monitor build progress or
    // diagnose build failures.
    GetBuildOutput(buildID string) (io.Reader, error)
}

README Structure

Every service repository should include a comprehensive README.md that clearly explains the service purpose and usage:

Section            | Purpose
-------------------|------------------------------------------------------------
Service Overview   | Clear description of what the service does and its role
Architecture       | High-level design and key components
Dependencies       | Required external systems and libraries
Development Setup  | Steps to set up a local development environment
Configuration      | Available options with defaults and environment variables
API Documentation  | Endpoints with example requests and responses
Common Tasks       | Reference for frequent development operations
Testing            | How to run and write tests
Deployment         | Environment information and deployment procedures

Here's how you might document an API endpoint in your README:

# API Endpoints

## Applications

### GET /application/:id

Retrieves an application by its ID.

**Request:**

GET /application/507f1f77bcf86cd799439011 HTTP/1.1
Authorization: Bearer {token}

**Success Response (200 OK):**

{
  "id": "507f1f77bcf86cd799439011",
  "title": "Example Application",
  "description": "An example application",
  "kind": "github",
  "githubAccount": "telemetrytv",
  "githubRepository": "example-app",
  "githubRef": "main",
  "baseImage": "node:16-alpine",
  "createdAt": "2023-01-15T12:00:00Z"
}

**Error Response (404 Not Found):**

{
  "error": "Application not found"
}

🧪 Testing

Well-tested code forms the foundation of our reliable services. At TelemetryX, we use the testify/assert package to create structured, readable test assertions that provide consistent patterns and clear failure messages.

Test File Organization

We organize our test files to keep related code together, making it easier to navigate and maintain:

  • Place test files alongside the code they test with a _test.go suffix
  • Keep mock implementations in the same directory with a _mock.go suffix
  • Follow a consistent naming pattern: database.go, database_test.go, and database_mock.go

This approach ensures that when you're working on a component, all its related files are immediately accessible, reinforcing that tests are first-class citizens in our codebase.

Coverage Goals and Quality

We aim for a minimum of 80% code coverage across our services, but we recognize that coverage alone doesn't guarantee quality. A well-tested codebase includes:

  • Tests that verify behavior, not just execution paths
  • Coverage of both happy paths and error conditions
  • Thorough testing of edge cases and boundary conditions

Remember that a test with high coverage but weak assertions provides false confidence. Focus on writing meaningful tests that would catch real issues rather than just increasing coverage numbers.

Testing Patterns and Best Practices

When writing tests for TelemetryX services, consider these approaches:

  • Write focused unit tests for core components before integration
  • Use table-driven tests to efficiently test multiple scenarios
  • Mock external dependencies to isolate the code under test
  • Test error conditions explicitly to ensure proper error handling
  • Use descriptive test names that explain what's being tested
  • Structure your tests with clear setup, execution, and assertion phases
package service_test

import (
    "errors"
    "testing"
    
    "github.com/stretchr/testify/assert"
    
    "github.com/telemetrytv/service-name/core/service"
    "github.com/telemetrytv/service-name/mocks"
)

func TestGetByID(t *testing.T) {
    tests := []struct {
        name          string
        id            string
        mockSetup     func(*mocks.Repository)
        expectedError bool
        expectedApp   *Application
    }{
        {
            name: "should return application when it exists",
            id:   "app-123",
            mockSetup: func(repo *mocks.Repository) {
                app := &Application{ID: "app-123", Name: "Test App"}
                repo.On("FindByID", "app-123").Return(app, nil)
            },
            expectedError: false,
            expectedApp:   &models.Application{ID: "app-123", Name: "Test App"},
        },
        {
            name: "should return nil when application doesn't exist",
            id:   "app-456",
            mockSetup: func(repo *mocks.Repository) {
                repo.On("FindByID", "app-456").Return(nil, nil)
            },
            expectedError: false,
            expectedApp:   nil,
        },
        {
            name: "should return error when database fails",
            id:   "app-789",
            mockSetup: func(repo *mocks.Repository) {
                repo.On("FindByID", "app-789").Return(nil, errors.New("database connection failed"))
            },
            expectedError: true,
            expectedApp:   nil,
        },
    }
    
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            // Setup
            mockRepo := new(mocks.Repository)
            tt.mockSetup(mockRepo)
            
            svc := service.New(mockRepo)
            
            // Execute
            app, err := svc.GetByID(tt.id)
            
            // Assert
            if tt.expectedError {
                assert.Error(t, err)
            } else {
                assert.NoError(t, err)
            }
            
            assert.Equal(t, tt.expectedApp, app)
            mockRepo.AssertExpectations(t)
        })
    }
}

Notice how this example test:

  • Uses clear, descriptive names for test cases
  • Tests multiple scenarios including error conditions
  • Separates setup, execution, and assertion phases
  • Verifies both the return value and error handling
  • Confirms that mock expectations were met

🏷️ Naming Conventions

Clear, consistent naming greatly improves code readability and maintainability. Our naming conventions follow Go standards while adding consistency across our services:

  • Follow Go's conventions (CamelCase for exported, camelCase for unexported)
  • Use descriptive, unabbreviated names for better understanding
  • Name interfaces based on behavior (e.g., Reader, Writer)
  • Name implementation types concretely (e.g., FileReader, HTTPWriter)
  • Use consistent naming across similar concepts

Common Naming Patterns

  • Handlers: VerbNoun (e.g., GetApplication, CreateUser)
  • Services: NounService (e.g., ApplicationService)
  • Repositories: NounRepository (e.g., ApplicationRepository)
  • Interfaces: Behavior-based (e.g., Publisher, Builder)
  • Implementations: ConcreteInterface (e.g., S3Publisher, DockerBuilder)

Specific Naming Rules

  • Use PascalCase for exported types like ApplicationSource and BuildConfig
  • Use camelCase for variables to distinguish from exported types (sourceReader, buildOptions)
  • Use PascalCase for exported constants to match Go conventions (GitHubApplicationSourceKind)
  • Name interfaces based on behavior rather than implementation (Archiver, Publisher)
  • Limit abbreviations to widely understood terms (db for database, lgr for logger)

πŸ”Œ Dependencies

Proper dependency management is crucial for creating maintainable, testable services. Our approach emphasizes clear ownership and explicit dependencies:

  • Initialize dependencies at startup when possible for better control and visibility
  • Pass dependencies explicitly to objects that need them rather than creating them internally
  • Favor composition over inheritance to keep component relationships clear
  • Use interfaces to define dependencies for better testability and flexibility

This pattern creates clear boundaries between components and makes testing much simpler:

// Service constructor with explicit dependencies
func New(cfg *config.Config, lgr *blackbox.Logger, db *database.Database) *Service {
    return &Service{
        Config:   cfg,
        Logger:   lgr,
        Database: db,
    }
}
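
When a dependency is likely to be replaced in tests, the same constructor can accept an interface instead of a concrete type. This is a minimal sketch, reusing the config and blackbox packages from the example above and assuming a hypothetical models.Application type:

// ApplicationRepository declares only the behavior the service needs,
// so tests can swap in a mock without a real database connection.
type ApplicationRepository interface {
    FindByID(id string) (*models.Application, error)
}

type Service struct {
    Config       *config.Config
    Logger       *blackbox.Logger
    Applications ApplicationRepository
}

// New accepts the interface, keeping the service decoupled from any
// particular storage implementation.
func New(cfg *config.Config, lgr *blackbox.Logger, apps ApplicationRepository) *Service {
    return &Service{
        Config:       cfg,
        Logger:       lgr,
        Applications: apps,
    }
}

A testify-style mock like those used in the Testing section can then stand in for ApplicationRepository, keeping unit tests free of real infrastructure.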

🧰 Go-Shared Library

TelemetryX maintains a private repository github.com/telemetrytv/Go-Shared containing common utilities and patterns used across multiple services. This library helps ensure consistency and reduces duplication in our ecosystem.

Key Go-Shared Components

  • Middleware: Authentication handling, structured logging, request ID generation, and pagination support
  • Utilities: Process lifecycle management (ExitCoordinator), synchronization tools, and error handling utilities
  • Common Patterns: Reusable implementations for MongoDB operations, service configuration, and HTTP client functionality

ExitCoordinator

The ExitCoordinator utility from Go-Shared manages service lifecycle with graceful shutdown capabilities:

// In cmd/api/main.go
import (
    "context"
    "fmt"
    "os"
    
    "github.com/telemetrytv/Go-Shared/sync"
    "github.com/telemetrytv/service-name/core"
)

func main() {
    // Create an exit coordinator that implements context.Context
    exCrd := sync.NewExitCoordinator()
    
    // Create and configure services (loading cfg and constructing lgr are elided here)...
    cr := core.New(cfg, lgr)
    
    // Use the exit coordinator's context for startup
    ctx := exCrd.Context(context.Background())
    if err := cr.Start(ctx); err != nil {
        lgr.Errorf("Failed to start core: %w", err)
        os.Exit(1)
    }
    
    // Wait for termination signal
    exCrd.UntilExit()
    
    // The exit coordinator can also provide the stop context
    stopCtx := exCrd.Context(context.Background())
    if err := cr.Stop(stopCtx); err != nil {
        lgr.Errorf("Failed to stop core: %w", err)
        os.Exit(1)
    }
    
    exCrd.ReadyToExit()
}

Using the exit coordinator provides:

  • Signal handling (SIGINT, SIGTERM)
  • Context cancellation when signals are received
  • Timeout management for graceful shutdowns
  • Consistent service lifecycle management across all services

✨ Summary of Key Principles

When creating or maintaining a TelemetryX Go service, follow these key principles:

  1. Consistent Structure

    • Follow the established project structure
    • Maintain clear separation of concerns
    • Use standard package organization
  2. Core Architecture

    • Build around a central core package for service lifecycle management
    • Use subpackages for database, config, logger, and service components
    • Design for reusability across different service entry points
    • Implement all background processing in separate scheduler processes
  3. Code Quality

    • Write clear, idiomatic Go code
    • Use consistent formatting and naming conventions
    • Document public APIs thoroughly
    • Group related functionality in coherent packages
  4. Production Readiness

    • Include comprehensive error handling
    • Implement proper logging with appropriate levels
    • Use structured configuration management
    • Write thorough tests with proper mocking
    • Implement graceful shutdown handling
  5. Service Communication

    • Use zephyr for handling both public and private routes
    • Implement navaros routers at the package level
    • Define route handlers in proper packages by resource type
  6. Data Management

    • Define models in the models package with appropriate schema tags
    • Keep collection names and indexes alongside models
    • Use formatters to transform internal models to API responses
    • Use strongly typed MongoDB queries and filters
  7. Configuration

    • Use orale for configuration from multiple sources
    • Provide sensible defaults for zero-configuration development
    • Support environment variables and command-line flags
    • Enable environment-specific configuration files

These practices create maintainable, consistent services that align with TelemetryX architectural patterns.


πŸŽ‰ Congratulations on making it through the style guide! Now go forth and write beautiful Go code!
