Tech Books Summary
- good architecture = allow changes w/ flexibility + delay decision
- Entities: pure; small sets of critical business rules. Should be most independent & reusable. Can be an object with methods or data structures & functions
- Use case: not as pure; is an object. contains data elements & operations.
- Entities are lower level than (don’t depend on) use cases since use cases are closer to inputs/outputs.
- Entities & use cases are 2 layers
- Returned data structure should not reference entities since they might change for diff. reasons
- System should describe what it does not what framework it uses
- Use case layer contains application-layer biz rules. Changes at this layer shouldn’t affect entity layer
- Changes to workflow requires updating use case layer
- Adapter layer: convert data from/to use cases & entities to/from external (web/DB). Example: Views and controllers in MVC; other data from/to internal/external form
- Dependency rule: Dependency should point inward (lower level layers)
- If need to call from inner to outer layer, inner should call an interface implemented by outer layer. Both inner & outer will depend on (point to) interface
- Data crossing boundaries should be simple or just function args (no entities)
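The dependency rule above can be sketched as follows. This is a minimal illustration, not from the book: the use case (inner layer) owns an output-port interface, the presenter (outer layer) implements it, and only a simple data structure crosses the boundary. All names (`OutputPort`, `CheckoutUseCase`, `ConsolePresenter`) are made up.

```typescript
// The inner layer defines the interface; the outer layer implements it,
// so both sides depend on (point to) the interface.
interface OutputPort {
  present(data: { total: number }): void; // simple data crosses the boundary
}

class CheckoutUseCase {
  // Inner layer: knows only the interface, never the concrete presenter.
  constructor(private output: OutputPort) {}
  execute(prices: number[]): void {
    const total = prices.reduce((sum, p) => sum + p, 0);
    this.output.present({ total }); // call "outward" via the interface
  }
}

class ConsolePresenter implements OutputPort {
  // Outer layer: dependency points inward toward the interface.
  lastViewModel = "";
  present(data: { total: number }): void {
    this.lastViewModel = `Total: $${data.total.toFixed(2)}`;
  }
}

const presenter = new ConsolePresenter();
new CheckoutUseCase(presenter).execute([9.99, 20.01]);
```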
- Interface adapters: convert data from the format of the inner layer to the format of the external layer
- Presenter accepts data from application and applies formatting. Presenters are testable
- View (humble object) receive view model from presenters w/o processing. Views display the data
- Humble object patterns separates testable (non-humble) and non-testable (humble) parts
- Objects are not data structures. An object is a set of operations (public methods); a DS has no behaviors
- Humble object pattern is likely found in every architectural boundary
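A sketch of the humble-object split described above, with invented names: all formatting logic lives in the testable presenter, while the view only copies the view model without processing.

```typescript
interface ViewModel {
  dateLabel: string;
  amountLabel: string;
}

class OrderPresenter {
  // Non-humble part: all the formatting logic, easy to unit test.
  toViewModel(dateIso: string, cents: number): ViewModel {
    return {
      dateLabel: new Date(dateIso).toISOString().slice(0, 10),
      amountLabel: `$${(cents / 100).toFixed(2)}`,
    };
  }
}

class OrderView {
  // Humble part: no processing at all, just displays what it is given.
  rendered = "";
  show(vm: ViewModel): void {
    this.rendered = `${vm.dateLabel} ${vm.amountLabel}`;
  }
}

const vm = new OrderPresenter().toViewModel("2020-05-01T12:00:00Z", 1999);
const view = new OrderView();
view.show(vm);
```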
...
- MS is not the goal; the goal might be e.g. faster delivery, team autonomy, breaking off from the monolith release cycle. Need ROI calculation, need to align with what the business tries to achieve
- Consider microservice alternatives, or try things in parallel. Alternative ideas:
- Modular monolith
- Vertical scaling (more powerful machine)
- Horizontal scaling (more machines)
- Scale up development team
- New technology
- Use sliders to analyze competing priorities
- Faster delivery: should run path-to-production analysis to find the biggest blocker
- MS: Benefits
- More robust architecture (ability to react to expected variations) because of decomposed functionality
- Great options for scale-up after initial success
- MS: When's MS a bad idea
- Unclear domain/decompose prematurely
- Startup/greenfield project
- Customer-installed software
- Lack of good reason
- Incremental migration:
- Start somewhat small
- The impact of decomposition will be reflected in the production env, not during development
- Easier places to experiment - start with whiteboarding
- Where to start migration/decomposition
- Develop domain model with just enough info (e.g. use event-storming)
- identify bounded context
- BC is good starting points for defining MS boundaries
- map out BC dependencies to determine which is easier to extract (Fig 2.6)
- caveat: domain model represents logical view, not how code is organized
- Use trade-off diagram (benefit of decomposition vs ease of decomposition) to prioritize decomposition (Fig 2.8)
- Reorganizing teams
- DevOps (independent, autonomous teams)
- Changing/improving developer skills
- How to know if transition is working
- Define measures to track
- Regular checkpoints (review quantitative + qualitative measures)
- Quantitative measures e.g. number of deployments, failure rates, cycle time
- Qualitative measures e.g. team's feelings
- Avoid sunk cost fallacy
- Key: take small steps, be open to new approaches
- Misc.
- "Reuse is not a direct outcome people want. Reuse is something people hope will lead to other benefits"
- Irreversible vs reversible decisions
- Do migration over small steps, allow going back if needed. For each step, copy code instead of changing functionality
- First thing to consider is whether monolith will be modified (more flexibility).
- Biggest barrier is code that isn't organized around business domains. Can use a seam: a seam is defined around the code to change; work on the new implementation, then swap it in after the change has been made.
- Rewrite - try salvage existing codebase first. If not, rewrite small pieces of functionality at a time and release regularly.
- Migration patterns: Strangler fig application
- New & old systems coexist, allowing new code to grow incrementally and eventually replace the old system. Easy rollback if required.
- Useful when: no need to touch existing system/existing system is a black box; works well when functionality to move isn't deep inside the system. Existing code can still be worked on by others.
- Steps:
- 1: Identify asset to move
- 2: Start implementing new functionality in a microservice. Deploy but don't release to the public. Parallel run with old code
- 3: Redirect calls to microservice. Prereq: need to have clear inbound calls. If not, consider Branch by Abstraction pattern
- Variation: "Shallow" extraction - existing monolith functionality will be exposed to the MS
- Use HTTP proxy to redirect calls
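The routing decision behind the HTTP-proxy redirect can be sketched as a pure function (paths and hosts below are invented). Extracted functionality goes to the new microservice; everything else passes through to the monolith, so rollback is just a routing change. In practice this decision would live in a real proxy (e.g. nginx) rather than application code.

```typescript
// Paths already migrated to the new service (strangler fig, step 3).
const MIGRATED_PREFIXES = ["/payroll"];

// Decide which upstream handles a given request path.
function upstreamFor(path: string): string {
  const migrated = MIGRATED_PREFIXES.some((p) => path.startsWith(p));
  return migrated ? "http://payroll-service" : "http://monolith";
}
```

Rolling back the extraction is just removing the prefix from `MIGRATED_PREFIXES`; neither codebase changes.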
- Migration patterns: UI composition
- Example: Modules are migrated to MFEs one at a time; For mobile app: configs and layout of UI components is defined in declarative fashion on server side, so UI can be changed without new release
- Migration patterns: Branch by Abstraction
- Useful when: functionality to extract is deep inside the existing system i.e. strangler fig is not suitable, or when changes will take a long time. Using long-lived branches where new functionality gets developed is an option but not optimal; Branch by Abstraction is a better option.
- Try to use strangler fig pattern before considering this one
- Steps:
- 1: Create an abstraction for the functionality to be replaced
- 2: Change existing clients to use the abstraction
- 3: Create new implementation using the abstraction. New impl. can be a microservice etc.
- 4: Switch abstraction to use new implementation e.g. via feature flags
- 5: Clean up old code and optionally the abstraction
- Fallback: if new implementation fails, switch back to old impl.
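The Branch by Abstraction steps can be sketched like this (all names are illustrative, not from the book): an interface abstracts the functionality, old and new implementations both satisfy it, a feature flag switches between them, and failures fall back to the old implementation.

```typescript
// Step 1: the abstraction for the functionality being replaced.
interface NotificationSender {
  send(to: string): string;
}

// Step 2 assumes clients already call the interface. Old in-monolith impl:
class LegacySender implements NotificationSender {
  send(to: string): string {
    return `legacy:${to}`;
  }
}

// Step 3: new implementation, e.g. backed by a microservice.
class MicroserviceSender implements NotificationSender {
  constructor(private healthy = true) {}
  send(to: string): string {
    if (!this.healthy) throw new Error("service down");
    return `microservice:${to}`;
  }
}

// Step 4: a feature flag picks the implementation; fallback on failure.
function makeSender(useNew: boolean, svc: MicroserviceSender): NotificationSender {
  if (!useNew) return new LegacySender();
  return {
    send(to: string): string {
      try {
        return svc.send(to);
      } catch {
        return new LegacySender().send(to); // fallback: switch back to old impl.
      }
    },
  };
}
```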
- Migration pattern: Parallel Run
- Both new + old impl. will be called, but only one (typically old impl.) will be considered source of truth.
- Useful for verifying functional and non-functional parameters of new impl. especially for high risk areas
- End goal isn't to replace the implementations, but to reduce bugs in one of the impls.
- Spy tools can be used to intercept/stub functionality e.g. verify email should be sent without actually sending
- Parallel run isn't canary release, which redirects some users to the new functionality.
- Parallel run is a way to implement dark launching (function released but invisible)
- Parallel run, canary releasing, dark launching all work to support progressive delivery
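A toy sketch of a parallel run (functions and rates invented): both implementations execute on every call, the old one stays the source of truth, and mismatches are only recorded, never surfaced to callers.

```typescript
const discrepancies: string[] = [];

function oldTax(amount: number): number {
  return Math.round(amount * 0.2); // trusted implementation
}

function newTax(amount: number): number {
  return Math.round(amount * 0.21); // deliberately divergent new impl.
}

function tax(amount: number): number {
  const truth = oldTax(amount);      // source of truth
  const candidate = newTax(amount);  // dark-launched: result never returned
  if (candidate !== truth) {
    discrepancies.push(`amount=${amount}: old=${truth} new=${candidate}`);
  }
  return truth; // callers only ever see the old implementation's answer
}

tax(100); // old=20, new=21 -> discrepancy recorded
tax(0);   // both 0 -> no discrepancy
```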
- Migration patterns: Decorating collaborator
- Allow attaching functionality without knowing the underlying logic.
- Use case: monolith returns result -> proxy will intercept, forward to new service -> new service provides additional functionality, optionally new service can make call to monolith to get more data -> new service returns decorated result
- Where to use: when required info can be extracted from inbound request/response. Not recommended when request & response to/from monolith don't contain info needed by new service.
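The decorating-collaborator flow above, as a minimal sketch (all functions invented): the monolith answers exactly as before, and the proxy attaches the new service's data to the response.

```typescript
// Unchanged monolith behavior.
function monolithPlaceOrder(customerId: string): { orderId: string } {
  return { orderId: `order-for-${customerId}` };
}

// New functionality living in a separate service.
function loyaltyService(customerId: string): { points: number } {
  return { points: customerId.length * 10 };
}

// The proxy: lets the monolith's call pass through, then decorates the result.
function placeOrder(customerId: string) {
  const response = monolithPlaceOrder(customerId);
  const extra = loyaltyService(customerId);
  return { ...response, ...extra };
}
```

Note this only works because the inbound request (`customerId`) contains everything the new service needs, matching the "where to use" caveat above.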
- Migration patterns: Change Data Capture
- Step: Detect any CUD operation to table, then make call to new service
- Ways to implement change data capture: DB trigger, transaction log poller, batch delta copier
- Use when: unable to intercept either with a decorator or strangler, and cannot change the underlying codebase
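A toy in-memory sketch of change data capture via a transaction-log poller (one of the implementation options listed above; all names invented): every CUD operation lands in a log, and the poller forwards each new entry to the new service.

```typescript
type Change = { op: "create" | "update" | "delete"; id: number };

const transactionLog: Change[] = []; // stands in for the DB transaction log
const notifiedService: Change[] = []; // stands in for calls to the new service
let offset = 0; // last log position already processed

// Every CUD operation against the table is recorded in the log.
function writeRow(op: Change["op"], id: number): void {
  transactionLog.push({ op, id });
}

// The poller runs periodically, forwarding unprocessed changes.
function pollOnce(): void {
  while (offset < transactionLog.length) {
    notifiedService.push(transactionLog[offset]); // "call" the new service
    offset++;
  }
}

writeRow("create", 1);
writeRow("update", 1);
pollOnce();
```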
- 7 Encapsulation
- 8 Moving Features
- Hide info from the rest of the system:
- Encapsulate record: "record" - POJO; "object" - class instance. An object is better for mutable data: it hides the internal representation and provides controlled access to the data
- Encapsulate collection: provide modifier methods, for getter return a copy of collection
- Replace primitive with objects: wrap primitive values in class so behaviors can be added
- To ensure data is calculated in right order: Replace temp by query
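A minimal sketch of "encapsulate collection" from the list above (class names invented): all modification goes through methods, and the getter hands out a copy so callers can't mutate internal state.

```typescript
class Course {
  constructor(readonly name: string) {}
}

class Person {
  private _courses: Course[] = [];

  // Getter returns a defensive copy, never the real array.
  get courses(): Course[] {
    return [...this._courses];
  }

  // Modifier methods are the only way to change the collection.
  addCourse(c: Course): void {
    this._courses.push(c);
  }
  removeCourse(c: Course): void {
    this._courses = this._courses.filter((x) => x !== c);
  }
}

const p = new Person();
p.addCourse(new Course("math"));
p.courses.push(new Course("smuggled")); // mutates only the copy
```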
- Hide connections between classes:
- Hide delegate:
aPerson.department.manager => aPerson.manager
- Motivation: there's no need to reveal to the client that department is responsible for tracking the manager, so it's better to hide department from the client.
- Remove middle man (opposite of above):
aPerson.manager => aPerson.department.manager
- Motivation: when too many hidden delegations have been added. Note: there's no absolute rule for choosing between remove middle man and hide delegation.
- Substitute algorithm: replace a complicated algorithm with something simpler or an existing tool. Should only be done if the method is simple enough.
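The `aPerson.department.manager => aPerson.manager` refactoring above, made concrete (field types invented): the person exposes a delegating getter, so clients never learn that Department tracks the manager.

```typescript
class Department {
  constructor(public manager: string) {}
}

class Person {
  constructor(private department: Department) {}

  // Hide delegate: clients call aPerson.manager, not aPerson.department.manager.
  get manager(): string {
    return this.department.manager;
  }
}

const aPerson = new Person(new Department("alice"));
```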
- Class reorg:
- Split class: split responsibilities into a child class, then apply "change reference to value" so child objects are always replaced like primitive values instead of getting references updated
- Inline class: usually happens after refactoring leaves a class with little responsibility, or as an intermediate step: collapse two classes into one before splitting along different lines.
- Substitute algorithm: Use existing algorithms/API
- Move functions: mainly when it ref elements in other contexts more than its current context. Often a new context/class will be needed if moving a group of functions
- Move fields
- Move (caller) statements into functions: when caller statement is duplicated before calling functions
- Move (function) statements into callers: when function needs to support varying behaviors based on callers, should move those behaviors to the callers
- Replace inline code (loose end/duplicate) with function calls (either extracted into functions or from libraries)
- Slide: i.e. code is easier to understand when things that are related to each other appear together
- Split loop: when code inside a loop does different things, split into multiple loops (refactor, then optimize)
- Replace loop with pipeline: use HOF to run different callbacks for diff. operations instead of having everything in one loop similar to above e.g.
data.filter(...).map(...).reduce(...)
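The two refactorings above can be sketched together (sample data invented): first the split-loop version, where each loop has a single purpose, then the same aggregation expressed as a higher-order-function pipeline like the `data.filter(...).map(...).reduce(...)` shape.

```typescript
const people = [
  { age: 25, salary: 100 },
  { age: 30, salary: 200 },
];

// Split loop: one loop doing two jobs becomes two single-purpose loops
// (refactor first, optimize later only if profiling says so).
let totalSalary = 0;
for (const p of people) totalSalary += p.salary;

let youngest = Infinity;
for (const p of people) youngest = Math.min(youngest, p.age);

// Replace loop with pipeline: each operation is its own HOF callback.
const totalSalaryPipeline = people
  .map((p) => p.salary)
  .reduce((acc, s) => acc + s, 0);
```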
- The design of complex system is a collection of many smaller design decisions, arranged in a decision tree. Each branch = Design option; Each level down = finer decision; Each leaf = valid solution
- "The Method" provides decision tree for system design and project design
- Prune the tree by constraints
- Enforce design through communications: reviews, inspections & mentoring
- Architecture: 1. breakdown of systems into components; 2. prescribes interaction at runtime
- Functionality decomposition: defines building blocks (services) based on functionality. Useful for discovering hidden/implied functions by drilling down into the requirements.
- Functionality decomposition problems: Couple functionality with requirements. Consequences:
- Changing requirement requires new decomposition: one service per variant of functionality, or one mega service
- Precluding reuse: system has A -> B -> C; B itself can't be reused without A, C
- Bloated client: Client will be required to combine functions from different services, instead of just invoking operations/presenting data. Each service also becomes an entry point, leading to multiple points of failure.
- Bloated service: Client has single entry point, but services are coupled to each other (A -> B -> C). e.g. A must be aware of B. e.g. A must handle error when B fails i.e. A becomes bloated service
- Domain decomposition: more complex than functionality decomposition, can cause avoidance of cross-domain connectivity, reducing communication to CRUD/state changes
- Hallmark of bad design: when any change to system affects the client; Ideally, the client and services should be able to evolve independently.
- Trading system decomposition example
- The method: Decompose based on volatility
- Volatility decomposition: identify areas of potential change, encapsulate those into building blocks, then implement the behavior as interactions between the encapsulated areas of volatility.
- The encapsulation can be functional by nature, but can have no meaning for the business.
- Benefits: smaller areas of impact for testing; better maintainability
- Drawbacks: Takes longer to decompose than functional decomposition. Requirement analysis is still required to identify areas of volatility.
- Identifying volatility
- Ask what could change along the axes of volatility: same user over time vs. same time but different users
- Observe solutions masquerading as requirements, generalize the examples/requirement into the underlying candidate for volatility
- Not everything that is variable is volatile. Variable: can be controlled using conditional code; Volatility: changes or risks that are open-ended/have ripple effects.
- Once candidates for volatility are identified, compile them into a list, then transition into an architectural diagram.
- Sometimes a component can encapsulate 1+ volatilities
- Sometimes volatilities can be operational concept (e.g. latency)
- Volatilities can also be encapsulated into 3rd party service
- Start with simple decisions, which will constrain the system
- Nature of business (changed rarely) should not be captured as volatility
- Avoid functional decomposition, or domain decomposition
- Avoid bloating clients/thin service
- Decompose based on volatility
- Each areas of change is encapsulated (in a vault) - each represents an area of volatility
- No block if no volatility
- Ask:
- are volatilities in software system?
- any interaction between areas of volatility?
- layers imply top-down. Each layer is also an encapsulation. Within a layer are services (entities)
- layers (top to bottom):
- client layer (user/another system/techs) - volatility in client technology. Try to equalize all clients -- single point of entry to system
- business layer - volatility in changes of system behavior. Good requirement is always behavioral (leave less room for interpretation) not functional i.e. use case/sequence of activities.
- Don't build system based on requirements, but based on use cases
- There should be only a few core use cases, other are variations created by different interactions
- Identify the smallest set of building blocks that satisfy the core use cases (composable design). Ideally there should be ~10 components
- With the above, design can change but core use cases won't
- Composable design
- Doesn't satisfy any use case in particular
- Features are aspects of integration, not implementation
- System must respond to changing requirements. The trick is not to resist, but to contain the change. Functional decomposition spreads the change