As data-driven and AI-first applications are on the rise, we extend our best practices for DevOps and agile development with new concepts and tools. The corresponding buzzwords are continuous intelligence and continuous delivery for machine learning (CD4ML).

For our current project, we researched, tried different approaches and built a proof of concept for a continuously improved machine learning model. That’s why I got interested in this topic and went to a meetup at Thoughtworks’ office. Christoph Windheuser (Global Head of Artificial Intelligence) shared their experience in this field and gave a lot of insights. The following post summarizes these thoughts [1] with some notes from our learning process.

CD4ML continuous intelligence cycle

The continuous intelligence cycle

1- Acquire data

Get your hands on data sets. There are multiple ways; most likely the data is bought, collected or generated.

2- Store, clean, curate, featurize information

Use statistical and exploratory data analysis. Clean and connect your data. In the end, it needs to be consumable information.

3- Explore models and gain insights

You are going to create mathematical models. Explore them, try to understand them and gain insights into your domain. These models will forecast events, predict values and discover patterns.

4- Productionize your decision-making

Bring your models and machine learning services into production. Apply your insights and test your hypotheses.

5- Derive real life actions and execute upon

Act on the knowledge you gained. Follow up with your business and create value. This generates new (feedback) data. With this data and knowledge, you start again at step one of the intelligence cycle.

Productionizing machine learning is hard

There are multiple experts collaborating in this process cycle: data hunters, data scientists, data engineers, software engineers, (Dev)Ops specialists, QA engineers, business domain experts, data analysts, software and enterprise architects… For software components, we mastered these challenges with CI/CD pipelines, iterative and incremental development approaches and tools like Git and Docker (orchestrators). However, in continuous delivery for machine learning we need to overcome additional issues:

  • When we talk about changing components in software development, we mean source code and configuration. In machine learning and AI products, we also have huge data sets and many types and permutations of parameters and hyperparameters. GitHub, for example, rejects git pushes with files bigger than 100 MB. Additionally, copying data sets around to build/training agents is far more time-consuming than copying some .json or .yml files.
  • A very long and distributed value chain may result in a "throw over the fence" attitude.
  • Depending on your current and past history, you might need to think more about parallelism in building, testing and deploying. You might need to train different models (e.g. a random forest and an ANN) in parallel, wait for both to finish, compare their test results and only select the better-performing one (a sketch of this follows the list).
  • Like software components, models must be monitored and improved.
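
A minimal TypeScript sketch of such a parallel train-and-select step (the training functions, model names and scores below are placeholders, not part of any specific framework):

interface TrainedModel {
  name: string;
  testScore: number; // e.g. accuracy on a hold-out test set
}

// Hypothetical stand-ins for real training code – they only simulate a run.
async function trainRandomForest(dataPath: string): Promise<TrainedModel> {
  return { name: "random forest", testScore: 0.87 };
}

async function trainNeuralNet(dataPath: string): Promise<TrainedModel> {
  return { name: "ANN", testScore: 0.91 };
}

// Train both candidates in parallel, wait for both to finish,
// compare their test results and keep only the better-performing one.
async function selectBestModel(dataPath: string): Promise<TrainedModel> {
  const [forest, ann] = await Promise.all([
    trainRandomForest(dataPath),
    trainNeuralNet(dataPath),
  ]);
  return forest.testScore >= ann.testScore ? forest : ann;
}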

The software engineer’s approach

In software development, the answer to this is pipelines with build steps, automated tests, deployments, continuous monitoring and feedback control. For CD4ML the cycle looks like this [1]:

CD4ML Pipelines

There is a rapidly growing demand on the market for tools to implement this process. While there are plenty of tools, here are examples of well-fitting tool chains.

stack | discoverable and accessible data | version control & artifact repositories | CD orchestration (to combine pipelines)
Microsoft Azure | Azure Blob Storage / Azure Data Lake Storage (ADLS) | Azure DevOps Repos & ADLS | Azure DevOps Pipelines
open source with Google Cloud Platform [1] | Google Cloud Storage | Git & DVC | GoCD

stack | infrastructure (for multiple environments and experiments) | model performance assessment | monitoring and observability
Microsoft Azure | Azure Kubernetes Service (AKS) | Azure Machine Learning services / MLflow | Azure Monitor / EPG *
open source with Google Cloud Platform [1] | GCP / Docker | MLflow | EFK *

* Aside from general infrastructure (cluster) and application monitoring, you want to (a minimal sketch of such an experiment record follows this list):

  • Keep track of experiments and hypotheses.
  • Remember which algorithm and code version were used.
  • Measure duration of experiments and learning speed of your models.
  • Store parameters and hyperparameters.
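
A simple, tool-agnostic way to capture these points is a small experiment record that is logged with every training run – a minimal TypeScript sketch with assumed field names:

// Hypothetical experiment record – the fields mirror the bullet points above.
interface ExperimentRecord {
  hypothesis: string;                      // what we are trying to show
  algorithm: string;                       // e.g. "random forest"
  codeVersion: string;                     // e.g. a git commit hash
  parameters: Record<string, unknown>;
  hyperparameters: Record<string, unknown>;
  startedAt: Date;
  durationSeconds: number;                 // duration of the experiment
}

// In a real setup this record would go to MLflow, a database or a log collector.
function logExperiment(record: ExperimentRecord): void {
  console.log(JSON.stringify(record));
}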

The solutions used for this are the same as for other systems:

stack | search engine | log collector | visual layer
EFK stack | Elasticsearch | Fluentd | Kibana
EPG stack | Elasticsearch | Prometheus | Grafana
ELK stack | Elasticsearch | Logstash | Kibana

[1]: C. Windheuser, ThoughtWorks, SlideShare: https://www.slideshare.net/ChristophWindheuser/cd4ml-thoughtworks-meetup-munich-christoph-windheuser-may-8th-2019

I went to a great session about CQRS, Event Sourcing and Domain-driven Design (DDD) at the Software Architecture Summit. The speaker Golo Roden (@goloroden) did a fantastic job selling these concepts to his audience with a great storytelling approach. He explained why CQRS, Event Sourcing and DDD fit together perfectly while replicating www.nevercompletedgame.com for his daughter. This is what he shared with us.

Domain-driven Design

The more enterprise-y your customer, the weirder the neologisms get.

We – as software engineers – struggle to understand business and domain experts. Once we understand something, we try to map it to technical concepts. Understood the word "ferret"? Guess we need a database table called "ferret" somehow. We then proceed to inform our business colleagues that "deploying a new schema is easy as we use Entity Framework or Hibernate as OR-mapper". They think we understood, we think they understood. Actually, nobody understood anything.
As software engineers we tend to fit every trivial and every complex problem into CRUD operations. Why? Because it’s "easy" and everyone does it. If it were that easy, software development would be effortless. Rather than trying to fit problems into a CRUD pattern, we should transform business stories into software.
That’s why we should use domain-driven design and ubiquitous language.
Golo Roden proceeds to create a view of the nevercompletedgame using ubiquitous language, so nobody has to ask "what does 'open a game' mean" and there is no mental mapping.
I won’t go into detail here, but two examples show why we need this.

  • Many words, one meaning: When developing software for a group of people, we sometimes call them users, sometimes end users, sometimes customers etc. If we use different words in the code or documentation and developers join the project later, they might think there is a difference between these entities.
  • One word, many meanings: Every insurance software has "policies" somewhere in its system. Sometimes it describes a template for a group of coverages, sometimes a contract underwritten by an insurer, sometimes a set of government rules. You don’t need to be an expert to guess that this can go horribly wrong.

CQRS

Asking a question should not change the answer

Golo Roden jokes, "CQRS is CQS on application level" – and it really is easy to understand this way once you have read a single article about CQS. Basically, CQS is a pattern where you separate commands (writes) from queries (reads).

  • Writes do not return any values and change the state of an object.
    stack.push(23); // pushes value 23 onto the stack; returns nothing
  • Reads return a value and don’t change the state.
    stack.isEmpty(); // does not change state; returns a boolean
  • But don’t be fooled! Stacks do not follow the CQS pattern:
    stack.pop(); // returns a value AND changes state

Separating them on application level means exposing different APIs for reading (return a value; do not change state) and writing (change state; do not return a value *). Or phrased differently: segregate responsibilities for commands and queries – CQRS.

* for HTTP: always return 200 before doing anything
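
As a rough TypeScript sketch of such a separation (the handler names and the in-memory store are only illustrative):

// Write side: commands change state and return nothing.
interface OpenGameCommand {
  gameId: string;
  playerName: string;
}

// Read side: queries return data and change nothing.
interface GameStatusQuery {
  gameId: string;
}

// Illustrative in-memory state – a real system would use separate write and read models.
const games = new Map<string, { playerName: string; level: number }>();

function handleOpenGame(command: OpenGameCommand): void {
  games.set(command.gameId, { playerName: command.playerName, level: 1 });
}

function handleGameStatus(query: GameStatusQuery): { level: number } | undefined {
  const game = games.get(query.gameId);
  return game ? { level: game.level } : undefined;
}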

Enforcing CQRS could have this effect on your application:

For synchronization patterns see the saga pattern or the two-phase commit. For further reference see: Starbucks Does Not Use Two-Phase Commit

Event Sourcing

When talking about databases (be it relational or NoSQL), we usually save the current state of some business item persistently. When we are ambitious, we save a history of these states. Event sourcing follows a different approach: there is only one initial state, change requests to this state (commands) and the resulting manipulating operations (events). When we want to change the state of an object, we issue a command. This triggers an event that is published to some kind of queue and most likely persisted in a database.

Bank account example: we start with 0 € and do not change this initial value when we add or withdraw money. Instead, we save the events, something like this:

Date | EventId | Amount | Message
2019-01-07 | e5f9e618-39ad-4979-99a7-342cb1962266 | 0 | account created
2019-01-11 | f2e98590-7795-4cf7-bdc2-1794ad39874d | 1000 | manual payment received
2019-01-29 | cbf44bfc-7a5e-4514-a906-a313a6e0fb5e | 2000 | salary received
2019-02-01 | 32bc638c-4783-45b8-8c1e-bebe2b4528a1 | -1500 | rent paid

When we want to see the current balance, we read all the events and replay what happened.

const accountEvents = [0, 1000, 2000, -1500];
const replayBalance = (total, val) => total + val;
// replay all events to compute the current balance
const accountBalance = accountEvents.reduce(replayBalance, 0);

Once every n (e.g. 100) events we save a snapshot so that we don’t have to replay too many events (a sketch of this follows the list below). Aside from the increased complexity, this has some side effects that should not be left unaddressed.

  • As we append more and more events, data usage grows endlessly. There are ways around this, such as removing "old" events and replacing them with snapshots, but this defeats the intention of the concept.
  • Additionally, as more events are stored, the system gets slower because it has to replay more events to get the current state of an object. However, snapshotting every n events yields a deterministic maximum execution time.
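
A minimal sketch of such snapshotting, continuing the account example (the interval and data structures are only illustrative):

interface Snapshot {
  balance: number;    // state aggregated so far
  eventCount: number; // how many events the snapshot already covers
}

const SNAPSHOT_INTERVAL = 100; // snapshot every n = 100 events

// Replay only the events that happened after the latest snapshot.
function currentBalance(snapshot: Snapshot, events: number[]): number {
  return events.slice(snapshot.eventCount).reduce((total, val) => total + val, snapshot.balance);
}

// Take a new snapshot once enough new events have accumulated.
function maybeSnapshot(snapshot: Snapshot, events: number[]): Snapshot {
  if (events.length - snapshot.eventCount < SNAPSHOT_INTERVAL) {
    return snapshot;
  }
  return { balance: currentBalance(snapshot, events), eventCount: events.length };
}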

While there are many arguments against it, there is one key benefit that makes it worthwhile: your application is future-proof, as you save "everything" for upcoming changes and new requirements. Think of the account example from the previous step. You can implement/analyze all of the following:

  • "How long does it take people to pay their rent once they got their salary"
  • "How many of our customers have two apartments? How much is the difference between both rents?"
  • "How many of our customers with two apartments with at least 50% in price difference need longer to pay off their car credit?"

To sum it up and come back to our initial challenge: with Domain-driven Design, CQRS and event sourcing, our simple CRUD application would have transformed into an architecture like this:

While this might solve some problems in application and system development, it is neither a cookie-cutter approach nor "the right way" to do things. Be aware of the rising complexity of your application, system and enterprise ecosystem, and of the risk of over-engineering!

Solid file management is the first step towards good data quality in a data warehouse. How can file management be implemented in a modern data warehouse? One approach is Azure Data Factory together with Azure Databricks. Azure Functions offer a good alternative here. Before I show you the solution, I want to narrow down the problem a bit.

File Management

Very different kinds of files often have to be imported into a data warehouse.
CSV files, compared to HTTP interfaces, are quick to export and quick to import for large data volumes.
XLSX files are common when an interface involves a manual process and a user edits data in Excel.
XML files allow complex data structures to be exchanged between systems.
A proven practice is to capture metadata for interface files, for example when a file was registered and loaded. At the same time, these files should be archived. Sometimes the files also need to be converted.

The Modern Data Warehouse

If you implement classic data warehouse architectures in the cloud, there is hardly any way around Infrastructure as a Service. However, the cloud offers the greatest potential with Platform as a Service, because fewer resources are consumed while idle and costs can therefore drop.
A data warehouse architecture that can be implemented on Platform as a Service is the Modern Data Warehouse proposed by Microsoft.

It looks like this:
Modern Data Warehouse diagram

Azure Functions

Azure Functions are stateless web services that can be registered for all kinds of events and then perform a specific task. Events can be schedules, HTTP requests or changes in an Azure blob store. Costs are incurred per function activation and thus scale well with the actual load.

Azure Functions
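
A blob-triggered function in TypeScript could look roughly like the following sketch (assuming the classic Node.js programming model with a function.json binding named inputBlob and a blob path containing {name}; the names are illustrative):

import { AzureFunction, Context } from "@azure/functions";

// Activated whenever a new file appears in the registered input blob container.
const blobTrigger: AzureFunction = async function (context: Context, inputBlob: Buffer): Promise<void> {
  const fileName = context.bindingData.name; // provided by the {name} token in the blob path
  context.log(`New file "${fileName}" received, size: ${inputBlob.length} bytes`);
  // ... archive, convert and move the file to the data lake here ...
};

export default blobTrigger;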

Problem statement and solution approach

The data store in the Modern Data Warehouse is not a relational database but a data lake. At its core, a data lake is a file system.
Data integration drops files into the lake and consumers read those files. It is essential that the data is stored in an orderly way. Where possible, the data must be partitioned so that not all files have to be scanned on access.

So which component takes care of file management? Actually, Azure Data Factory should cover this aspect, but it is still quite limited and better suited for simple data movement. Alternatively, file management can be implemented in Databricks.
Azure Functions, however, are cheaper here and no less capable.

If you expect a few hundred files to be processed per day, you will hardly exhaust the free tier of one million executions.
Another advantage over Azure Data Factory is that you can use full-fledged programming languages such as C# or Python. With concepts like inheritance, a large part of the code can be reused and only very little additional code has to be written per file type.

Design

The design is simple. There are three Azure storage accounts: one for the input, one for the archive and one for the data lake. In the input and in the archive there is one blob container per file type. In the lake storage there is only one shared container.
Now a file is dropped into the input. In my example this is a Microsoft Flow that stores an attachment from an e-mail with a specific subject, but it could just as well be an AzCopy call from a service provider.
An Azure Function is registered for new files in the input blob container and gets activated (point 1 in the following figure).
The function creates a directory in the archive container with the current timestamp and copies the file, unchanged, into that new directory. This way no files can be overwritten or lost (step 2).
In the lake container there is one directory per file type.
The function gives the file a new unique name, converts the character set if necessary and stores the file in the lake storage.
In addition, the metadata is stored as a file type of its own, so it can be joined during analyses (step 3). Finally, the file is deleted from the input.

File Management - Flow

The structure of the function is shown in the following figure. The entire logic lives in the File class and is therefore reusable. The specific class only defines which container holds the input and into which directory in the lake the data should be written. The metadata is implemented as its own uniform class and can therefore easily be serialized and stored in the lake.

File Management - Structure
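
Expressed as a rough TypeScript sketch (the post mentions C# or Python; the class names, properties and the "zep" file type below are only illustrative), the structure could look like this:

import { randomUUID } from "crypto";

// Uniform metadata class – easy to serialize and store in the lake as JSON.
class FileMetadata {
  constructor(
    public readonly id: string,
    public readonly originalName: string,
    public readonly registeredAt: string
  ) {}
}

// Base class: holds the entire reusable logic (archive, rename, store, delete).
abstract class ManagedFile {
  // The specific class only defines where the input lives
  // and into which lake directory the data should be written.
  abstract readonly inputContainer: string;
  abstract readonly lakeDirectory: string;

  async process(fileName: string, content: Buffer): Promise<void> {
    const metadata = new FileMetadata(randomUUID(), fileName, new Date().toISOString());
    await this.archive(fileName, content);      // step 2: timestamped copy in the archive
    await this.storeInLake(content, metadata);  // step 3: unique name plus metadata JSON
    await this.deleteFromInput(fileName);       // finally remove the input file
  }

  protected async archive(fileName: string, content: Buffer): Promise<void> {
    // copy the unchanged file into a directory named after the current timestamp
  }
  protected async storeInLake(content: Buffer, metadata: FileMetadata): Promise<void> {
    // store the content under metadata.id and the metadata itself as JSON
  }
  protected async deleteFromInput(fileName: string): Promise<void> {
    // delete the processed file from the input container
  }
}

// Per file type only very little additional code is needed.
class ZepFile extends ManagedFile {
  readonly inputContainer = "input-zep"; // illustrative container name
  readonly lakeDirectory = "zep";        // illustrative lake directory
}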

Demo

For the demo I created a DEV environment in Azure. In addition to the services explained above, the Azure Functions need another storage account and an App Service plan. Application Insights is also used to monitor the solution.

Resource group

Application Insights also shows that the solution is actually running.

Application Insights

In Storage Explorer you can then see how the files are stored in the lake. For each input file, the content was stored as CSV and the metadata as JSON, identifiable via a GUID.

Lake - files
Lake - zep

How does automated deployment of data warehouse systems work? In projects with a DevOps approach, continuous integration and continuous deployment play an important role for working efficiently. The first step towards continuous deployment is deployment automation. Data warehouse systems are no different in this respect. One prerequisite for this is version control of the sources. Each version can automatically be turned into a release, which includes creating an installation package from the sources. Deployment then means updating a system environment (production or test system) with it. This simplifies quality assurance in particular, which in turn improves the changeability of the system and ultimately increases its quality.

CI and CD

System definition and versioning

For SQL Server-based data warehouse systems, the system definition consists of Visual Studio projects. There are Visual Studio project types for SQL databases, Integration Services, Reporting Services and Analysis Services. However, this is not yet sufficient for the automatic deployment of complete solutions. Projects for SQL Server Agent jobs and linked servers are missing. Integration Services packages often use PowerShell scripts for file management and dummy files for validating the packages; there is no standard solution for these either. Reporting Services projects do not support report subscriptions with data-driven subscriptions, which are often used for automatic exports of XLSX or PDF files. A further problem with these project types is that there is no suitable way to define how the projects relate to each other, so that a complete solution can be deployed and the environment-specific reports can be linked to databases and ETL processes.
These shortcomings have to be compensated for with additional scripts and configuration files. A build script creates a release archive with all files required for the deployment. A deployment script takes the release archive and the name of the target environment as parameters. A deployment configuration contains the information on how the components are linked. An environment configuration contains the environment-specific parameters such as host names and user names.

DWH Release

Implementation

There are various ways to implement this approach. XML files have proven themselves for hierarchical configurations because of their good readability and their support for internal variables, which JSON, for example, does not offer. Tabular configurations, such as environment-specific parameters, can be stored well in CSV files, since these are easy to process and easy to maintain in Excel.
The build and deployment steps can be implemented efficiently in PowerShell. Graphical tools such as Microsoft Azure Pipelines or CA Application Release Automation are very cumbersome for fine-grained deployment steps and are at best suited for executing the deployment scripts.

Recently, I attended the JavaScript Days 2019, where I participated in two awesome workshops (TypeScript Deep-Dive and Advanced black magic in TypeScript) dealing with advanced TypeScript features. In this blog post I want to share some of the findings and aha moments I had during these sessions.
Note that these learnings are very personal, not necessarily interrelated and often quite opinionated. I hope that you can nevertheless profit from my experiences. If you have any questions or want to discuss some of the more controversial topics, please leave a comment down below.

Contents

  • Where to and where not to add types
  • How to think about function signatures
  • Enums vs. Const Enums vs. Union Types
  • Immutability in TypeScript

Where to and where not to add types

Let’s start off with a pretty fundamental question: at which places should you add types to your plain JavaScript? As we all know, TypeScript allows us to type variables and function signatures. Furthermore, there is the possibility to write "OOP-style" TypeScript using well-known constructs like classes and interfaces together with concepts like inheritance and polymorphism.
Let’s suppose we want to start using TypeScript in our existing JavaScript project. Where should we start using types?
First of all, I would highly recommend that you turn on the compiler option noImplicitAny in your tsconfig.json. According to the comment in the boilerplate tsconfig.json, this option

raise[s] [an] error on expressions and declarations with an implied ‘any’ type.

Of course, the thing we as TypeScript developers hate the most is the any type. Once we come across e.g. a local variable of type any, we will not get any further information about it and autocompletion or refactoring features don’t work – we are basically back to the stone age of plain JavaScript. With the noImplicitAny option turned on, the compiler so to speak warns us where type inference fails to do its job and points us directly to the places in our code where we need to add types.
When googling something like "TypeScript tutorial", nearly every introduction starts by explaining the syntax for typing a variable. You usually encounter something like this:

let message: string = "Hello World";

One of the very few exceptions to this rule is the official quickstart from the TypeScript team itself – it seems like they know what they are doing 😉
I would suggest: don’t annotate your variables with types. You don’t need to. Instead, use the powerful built-in type inference to your advantage. This is trivial for primitive types like in the example above.

let message = "Hello World";

This is semantically exactly the same as the previous line, since TypeScript infers that the variable message is of type string.
So don’t type variables explicitly, but rather type function parameters and return values, and use some of the OOP constructs if required. However, there is one exception to this rule: literal types.
It makes sense to type these explicitly if you don’t want type widening to happen.
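
A hypothetical example of the difference:

// Without an annotation the literal is widened to the general type string.
let widened = "asc";                   // inferred as string
// An explicit literal (union) type prevents widening and restricts the allowed values.
let direction: "asc" | "desc" = "asc"; // only "asc" or "desc" are assignable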

How to think about function signatures

Consider the following example:

interface Address {
  street: string;
  city: string;
}

interface Customer {
  name: string;
  dateOfBirth: Date;
  addresses: Address[];
}

function getAllAddressesInCity(city: string, customer: Customer) {
  return customer.addresses.filter(address => address.city === city);
}

Since we initially plan to call the function getAllAddressesInCity in the context of a customer, we require the function to always take a customer as input. However, the function doesn’t really operate on a customer, but rather on an Address array. Think about the plain JavaScript the TypeScript compiler emits. It looks like this:

function getAllAddressesInCity(city, customer) {
  return customer.addresses.filter(function(address) {
    return address.city === city;
  });
}

Since none of the TypeScript types exist at run time, the compiled function doesn’t care at all whether the object we pass it is really a customer or something completely different, as long as it has an array of objects each containing at least a city property. So why don’t we generalize our initial implementation of the getAllAddressesInCity function a bit?

function getAllAddressesInCity<T extends { addresses: Customer["addresses"] }>(
  city: string,
  input: T
) {
  return input.addresses.filter(address => address.city === city);
}

Not limiting the inputs artificially, but rather describing what the function could in principle do, results in two distinct advantages. Of course, this style of coding leads to more reusability throughout your code base. For example, we could easily use getAllAddressesInCity for vendors as well (assuming that they can indeed have multiple addresses).
The second benefit arises when it comes to testing our new function. Using our initial implementation we would have to mock a whole Customer object, even if the only thing we really need is an array of Addresses. With our more general implementation we can work with much smaller and therefore more manageable mocks. However, this second advantage becomes less important when using a mocking framework like typemoq.
So in a nutshell, describe functions in terms of what they CAN do, instead of what they SHOULD do. This makes functions more reusable and mocking much easier.

Enums vs. Const Enums vs. Union Types

Chances are you are using enums or const enums to organize a collection of related values you use throughout your codebase. If so, consider replacing your enums with union types. The following table compares enums, const enums and union types.

Type | Value type | Runtime artifact | Opaque
enum | number | Yes | No
const enum | number | No | No
string enum | string | Yes | Yes
const string enum | string | No | Yes
literal union type | arbitrary | No | No

As you can see, one big advantage of literal union types is that you can use them with arbitrary types (e.g. with object literals). Additionally, they don’t have a runtime artifact, which saves precious bundle size when developing frontends. The fact that they are not opaque means that you can, for example, directly assign the literal string "Country" to a variable of type GenreUnion – if you use a (const) string enum, you have to write GenreEnum.Country instead:

enum GenreEnum {
  Country = "Country",
  Western = "Western"
}
type GenreUnion = "Country" | "Western";

let westernFromEnum: GenreEnum = "Western"; //Type '"Western"' is not assignable to type 'GenreEnum'.
let westernFromUnion: GenreUnion = "Western";

Note that if using a modern code editor like VS Code, you don’t have to worry about fat-fingering the strings – you get autocompletion for them, and even if a typo does happen, you get an immediate compiler error.

Of course, there are some edge cases where you might need to use regular enums (e.g. you can’t iterate over union types), but most of the time, literal union types work just as well and provide the discussed advantages.
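
A hypothetical sketch of that edge case – the values of a string enum can be enumerated at runtime, while a union type is erased during compilation:

enum GenreEnum {
  Country = "Country",
  Western = "Western"
}
type GenreUnion = "Country" | "Western";

const allEnumGenres = Object.values(GenreEnum); // ["Country", "Western"]
// There is no runtime equivalent for GenreUnion – the values would have to be
// maintained in a separate array by hand.
const allUnionGenres: GenreUnion[] = ["Country", "Western"];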

Immutability in TypeScript

Let’s suppose we want an Address object to be immutable. The first thing which comes to mind could be to use the const keyword for local variables like this:

const address = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
};

address.city = "Chicago"; // Mutation of object properties is possible.

However, according to the specification,

[const] are like let declarations but, as their name implies, their value cannot be changed once they are bound. In other words, they have the same scoping rules as let, but you can’t re-assign to them.

So the const keyword does not help us with creating an immutable object; it merely prevents us from reassigning another object to the address variable.
A viable approach for making an object immutable would be to mark all its properties as readonly like this:

interface Address {
  readonly street: string;
  readonly city: string;
}

const address: Address = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
};

address.city = "Chicago"; // Cannot assign to 'city' because it is a read-only property.

If you don’t want your Addresses to always be immutable, a cleaner solution is to use the built-in utility type Readonly, so that you don’t have to create a whole new interface (or class or type):

interface Address {
  street: string;
  city: string;
}

const address: Readonly<Address> = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
};

address.city = "Chicago"; // Cannot assign to 'city' because it is a read-only property.

Starting with TypeScript 3.4 there is a third option for achieving immutability for objects: const assertions.

// Type '{ readonly street: "Elsenheimerstraße 53", readonly city: "Munich" }'
const address = {
  street: "Elsenheimerstraße 53",
  city: "Munich"
} as const;

address.city = "Chicago"; // Cannot assign to 'city' because it is a read-only property.

Note that, additionally, the type of address is now extremely specific, since no type widening (e.g. no widening from "Munich" to string) takes place.

Take-aways

  • Take advantage of the powerful type inference and only type variables explicitly if really necessary
  • Describe functions in terms of what they can do, instead of what they should do
  • Use literal union types instead of enums
  • const is not enough for creating an immutable object

Surely you’ve heard the fairy tales about microservices and monoliths. Or, on a similar note, the tales about distributed (big) balls of mud from people like Simon Brown.


Usually these posts point out what goes wrong and how inexperienced teams go for a "hype-driven/tunnel-vision architecture". But how do you actually cut microservices? How do you design interfaces? What are techniques to find weak points in your application or system architecture?
In this post I digest the intentions and views of a talk at the Software Architecture Summit in Munich.

Speakers

Herbert Dowalil @hdowalil on Twitter
Stefan Zörner @StefanZoerner on Twitter

Microservices vs. Monolith

The spectrum of architectures definitely isn’t as binary as most blog posts suggest. There are way more types; here is a short recap of some of the most famous ones.

Microservices

There are different definitions out there, but most of them share key points like a domain-driven module cut, outstanding flexibility and network distribution. One (generally accepted) definition in FAQ style comes from Jimmy Bogard.

Self-contained Systems (SCS)

Similar to microservices, but usually with bigger services and fewer of them in a system.

Deployment Monolith (aka Modulith)

You have a valuable product, but it does not suit a SaaS business model? Most of your customers won’t offer a Kubernetes cluster, or you have a hard time building a secure deployment pipeline? This does not stop good architecture! You can still cut your software into great modules and deploy it as one package.

Architecture Monolith

Such a monolith has no defined structure. It consists of modules, but they are referenced across the whole system. Looks like this:

SOA

A blueprint for application architecture that produces standard services representing actual business process steps. The main benefit should have been the orchestration of services into full processes and the composition of new processes.
While SOA might have failed for most, it is "survived by its offspring" (Anne Thomas Manes: SOA is Dead; Long Live Services). For a comparison between SOA and microservices see this O’Reilly report.

All these types focus on cutting a big block of software into modules. Microservices, for example, focus on the smallest autonomous boundaries, SOA on reusability and composition. Because of this, the following ideas on how to define modules can be used for microservices, but don’t stop there. You can use them to define Java packages, C++ namespaces or C# assemblies.

LET’S START

Often we start with questions like "How many parts do I need?" and "Where do I cut?"

Sometimes we don’t even know whether we should initially start with a monolith (Martin Fowler: MonolithFirst) or with microservices (Stefan Tilkov: Don’t start with a monolith). We can answer this particular question right away: it doesn’t really matter when designing modules and interfaces. When you need a push towards microservices, think about the upcoming complexity. The speakers phrased it nicely:

Modularization makes complexity manageable. Investment at the beginning makes complexity manageable at the end.

For the first questions, we follow an enhanced version of the SOLID criteria by Robert C. Martin. Herbert Dowalil calls them the "5C". Such principles sound nice, but what do strong cohesion or loose coupling actually mean?

"If you can’t measure it – you can’t manage it"

The key here is using "old school" metrics like cyclomatic complexity or average relative visibility. That’s it; here are the 5C:

Cut

Cut your modules so that each one has only a single concern/responsibility. Try to maximize cohesion inside a module.
Metrics: e.g. LCOM4, relational cohesion, cyclomatic complexity

Conceal

Hide the internals of a module. Nothing outside the module needs to know about the technologies used or any work (changes) done inside.
Metrics: e.g. low relative visibility, low average relative visibility and low global relative visibility
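
A small, hypothetical TypeScript illustration of concealment: a module exposes only its public contract through a barrel file, while its internals stay hidden from other modules:

// orders/internal/pricing.ts – internal to the module, never imported from outside.
export function applyDiscount(total: number, percent: number): number {
  return total * (1 - percent / 100);
}

// orders/index.ts – the only file other modules import from; it exposes a small,
// specific contract and conceals the pricing internals.
import { applyDiscount } from "./internal/pricing";

export interface Order {
  items: { price: number }[];
}

export function orderTotal(order: Order): number {
  const total = order.items.reduce((sum, item) => sum + item.price, 0);
  return applyDiscount(total, 5); // internal detail, invisible to callers
}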

Contract

Design small, specific and easy to understand interfaces between modules.
Metrics: e.g. depth of inheritance

Connect

Explicitly declare every connection between modules.
Metrics: e.g. RACD and NCCD from John Lakos, or stability from the software package metrics.

Construct

Build new modules by connecting existing modules. Move from a lower level of modules to a bigger view of the system. Higher-level modules follow the same patterns as lower-level ones.
Metrics: e.g. automated tools like ArchUnit, Sonargraph or Teamscale

For a deeper dive, see the "Architektur Spicker" (German only) and follow Herbert Dowalil (@hdowalil) on Twitter for updates on his new book.

Once you’ve cut your modules, you can decide whether you want to distribute them over the network. Pros can be:

  • Independent technological decisions:
    This keeps technology decisions out of your macro architecture and lets you use specialized tools and languages for each module (like .NET for a web API and Python for the underlying data science, or Spring Boot for your user handling and C++ for a calculation core).
  • Technology roll-over:
    You can upgrade your technology components step by step. An upgrade of the Java or Node.js runtime might improve performance and eliminate security threats, or you can move a module from an outdated framework to the newest one (AngularJS to Angular). This can be useful when a system runs longer than anticipated.
  • Developer autonomy:
    Every part of your software can be developed and deployed independently. This shortens feedback cycles and enables developers to evaluate ideas faster.

While this sounds awesome (and it is!), there are also downsides:

  • Troubleshooting and debugging:
    Following calls and processes through your distributed software is harder than debugging a single component. Tracing makes troubleshooting easier, but replicating an error state across your whole application can be hard.
  • Complicated Ops:
    Operating your software can be way harder. For example, there might be issues with creating secure pipelines and authorization concepts across company borders.
  • Consistency:
    In a distributed system you typically sacrifice strong consistency and build around eventual consistency instead. Examples would be the two-phase commit (but you shouldn’t) or the saga pattern. While these work and have proven successful, the initial setup is harder and they certainly don’t simplify troubleshooting.

As usual, this trade-off needs to be evaluated individually.

Take-aways

Modularization is hard. Whether you decide to go with microservices or another approach doesn’t really matter in this context. You can distribute your modules to enforce principles, but if you cut your modules wrong, things will only get harder.
Don’t be afraid of "old school" or "university-style" metrics. Identify metrics with your team. Collaboratively search for weak points in your software (architecture). Enforce the selected metrics, but never let a metric break the build and never dictate metrics top-down!

LITERATURE and FOLLOWUP READING

Das Microservice-Praxisbuch by Eberhard Wolff

Arc42

Software Systems Architecture by Nick Rozanski and Eoin Woods

Microservice Patterns by Chris Richardson

Modulith First – Der angemessene Weg zu Microservices by Herbert Dowalil

Architektur Spicker #8

Microservice FAQ by Jimmy Bogard

SOA is dead; long live services by Anne Thomas Manes

What distinguishes a junior developer from a senior, and the senior from a software architect? This is a commonly asked question and there are plenty of very good sources out there. One argument can be found in every blog post about this topic: a junior mostly makes small decisions and consumes knowledge. With increasing seniority, the level of decision making and knowledge sharing increases. This is why we often find people like "advocates", "fellows" or "heroes" at conferences and summits.
I went to one of these summits (the Munich Software Architecture Summit) and want to share my experiences about the sessions and talks here. Let’s start with the keynote of the first day. Read more

Analyzing the Windows event log for errors and patterns is only possible to a limited extent with the Event Viewer and is very tedious. A simple alternative is to export the events to CSV with PowerShell and to import and analyze them in Power BI. Read more

What is it about?

Conferences and e-books are full of it: containers. When containers are discussed in software development, it is usually in the same breath as the word Docker. This blog post deals with Docker and containers and is meant to give a short overview of the related terms. Read more

Depending on the use case, PowerShell scripts can become complex. Automated tests are therefore important for robustness and quality. The PowerShell module Pester has proven itself here and leaves little to be desired.

However, PowerShell is also frequently used to integrate different systems or services, for example for file exchange or deployment. Mocking functions quickly reaches its limits there. Tests against existing systems are not always possible or are hard to isolate.

One possible solution is to use Docker containers. In the test setup, the required services and servers can be instantiated as containers and simply deleted again during tear-down. To better integrate PowerShell and Docker, I developed PSDocker. Read more