Code Reuse – the Good, the Bad and the Ugly

Recently, I had a very intriguing conversation about reusability, team independence and self-contained services. Since I’ve been reading about these topics in the context of microservices and modern immutable infrastructure, I figured they would make an interesting subject for a post.

There are a number of practices and technologies that claim to solve the dependency problem by preferring self-contained services or a no-code-sharing policy. Although this is a useful concept, we (software engineers) sometimes underestimate the complexity of composite systems, regardless of whether they are monolithic or service-oriented. We should analyze the dependencies in our system carefully, track them and monitor them, instead of trying to ignore and hide their very existence. Service reuse is a powerful tool, but it transforms the problem of handling dependencies rather than solving it.

First of all, why do we have dependencies in our code? Following the logic of Conway’s law, the challenges of managing dependencies are closely related to the broader concept of reuse and are a reflection of the people developing the software. Unless you are a lone wolf working on some small (pet) project, you have to deal with dependencies, whether you are aware of them or not, and no fancy hype like containers or serverless can save you from that; you have to sit down and engineer your way around it.

Classic Dependency Management

The standard form of reusing code is through dependency management tooling. We describe the libraries and components we are using, with their versions, in some declarative form and use the corresponding tool (gem, npm, Maven, NuGet, CMake) to resolve them and bring the artifacts locally. After that we link them to our program either at compile time (statically) or at runtime (dynamically) and we are good to go.

As software becomes more complex and more libraries are added, we eventually face the dependency-hell problem: different components of our solution require different versions of the same library, while in a single runtime you can have only one. And there are even more challenges: we need to update those dependencies regularly and scan them for vulnerabilities, exploits and bugs.

Self-Contained Microservices

Microservices to the rescue! If we split our software into hundreds of small self-contained programs, each running with its own view of the shared libraries and filesystem, we no longer have the problem of dependencies, right?

Let’s take a look at a higher level. Those services communicate over some form of RPC, so how is that mechanism implemented? In every service? What if we need an internal trust mechanism? Then we can have some common code (note that we are not even tackling the challenges of a polyglot system) to handle this. But what happens if we update the mechanism, e.g. a key length, algorithm or protocol? Would two services be able to communicate if one of them has the new version of the trust library but the other does not? If we are not touching this part, we are OK, but if we change something, we have to roll out the entire system (or subsystem) with all its microservices. Note that this particular challenge can be handled differently, but that’s a topic for another post.

No code reuse! Only service reuse will be allowed in our awesome microservice architecture.

Let’s explore that for a minute; for simplicity, we will continue with the internal trust problem. Even if we are self-contained, we have dependencies on other services at the API and protocol level, so if we change any of those we are pretty much in the same situation. And while in a monolithic system we can easily detect dependency hell, at the API level in a distributed system it is much harder.


I am not saying that service reuse is not a valuable tool for tackling the problems of big composite systems, but we cannot solve one complexity with another. We need to think about dependencies both at the technical and at the organizational level (e.g. how do we request a feature/bugfix for another team’s service), establish our aggregates carefully, keep APIs backward compatible, use versioned APIs, etc. Microservices give us a lot of agility, but from time to time we still need to think holistically about our system.
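As a small illustration of what "backward compatible, versioned APIs" can mean in practice, here is a hypothetical sketch (names and shapes invented for the example): a v2 handler adds structure, while the v1 handler derives the old response shape from the new model instead of breaking existing clients.

```javascript
// Hypothetical sketch of serving two API versions from one model.
// v2 returns a structured amount; v1 clients still get the legacy flat string.
function getOrderV2(order) {
  return { id: order.id, total: order.total, currency: order.currency };
}

function getOrderV1(order) {
  // v1 clients expect a single "amount" string; derive it rather than break them
  const v2 = getOrderV2(order);
  return { id: v2.id, amount: v2.total + " " + v2.currency };
}

const order = { id: 42, total: 9.99, currency: "EUR" };
console.log(getOrderV1(order)); // { id: 42, amount: '9.99 EUR' }
console.log(getOrderV2(order)); // { id: 42, total: 9.99, currency: 'EUR' }
```

Keeping the old shape derivable from the new one lets consumers upgrade on their own schedule, which is the organizational half of the dependency problem.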

I will follow up with another post that drills into the service-reuse concept.


Automating at Scale

Writing code has never been more accessible. You can bootstrap a web application in 15 minutes with Rails and deploy it in 10 minutes with Heroku, or have a mobile game installed on your Android device in 30 minutes with Unity. However, getting beyond the hello-world barrier is hard, and moving to multi-person development is even harder. It requires a well-structured, engineered approach: tooling, process and people.

In this post I talk about infrastructure automation with VMware’s vRealize suite and how to scale its development efficiently.

vRealize Automation does an incredible job of managing the enterprise hybrid cloud and offering self-service to internal customers. You can use the marketplace to quickly bring popular infrastructure and service blueprints to your enterprise – from simple Linux/Windows instances to popular databases or even multi-tier applications. The web interface allows you to visually design your own blueprints, capturing your specific architecture needs, and deploy them on vSphere, AWS or OpenStack. You can define forms and publish your work to the self-service portal.

In a nutshell, the product covers a multitude of use cases for cloud management, and you can pretty much customize everything else. You have a full-blown scripting engine at your disposal – vRealize Orchestrator works natively with vRA’s Day-1 and Day-2 operations, extending existing services or defining entirely new ones. You can integrate with anything – backup, monitoring, logging, configuration management, storage, CMDB, billing, ITSM… well, anything.

So, you write a simple script for backup, then a second script for the monitoring, then a third one for billing with an email on error… but then you go back to the first two scripts to add the same error handling logic. Before you know it, you have a hundred scripts, calling one another in a mix that has “happened”.

What is the alternative? Actually, you probably need to go through that stage. For example, Twitter was a Rails app at its very beginning, before moving to a more powerful and inevitably more complex architecture. But if you are already planning big, or you have multiple people involved, you need to consider the alternative – software development.

Software Development for the vRealize Platform

We have developed a toolchain combining a set of open-source tools with VMware products to deliver the best development experience for vRO and vRA content, and we use it on an everyday basis to apply software development best practices to our customer solutions.

Most of the code lives in actions – we manage them as pure JavaScript functions with JSDoc annotations for types. Usually we prefer object-oriented JavaScript – function-classes – but sometimes we go for a more functional/procedural approach.

/**
 * Register a vRA machine as a node in Chef
 * @param {VCAC:VcacHost} host - IaaS host
 * @param {VCAC:VirtualMachine} machine - IaaS machine to register as node in Chef
 */
function registerMachineInChef(host, machine) {
    var Chef = System.getModule("com.vmware.pscoe.example.chef").Chef();
    var chef = new Chef();
    var client = chef.createClient(machine.virtualMachineName);
    var machineProperties = System.getModule("com.vmware.pscoe.library.vra.iaas").getVirtualMachineProperties(host, machine.getEntity());
    var environment = machineProperties.get("Chef.Environment");
    var role = machineProperties.get("Chef.Role");
    var node = chef.createNode(machine.virtualMachineName, environment, client.private_key);
}
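The Chef class used above could itself be written as a function-class. The following is a hypothetical sketch only – the real com.vmware.pscoe.example.chef module is not shown in the post, so the constructor and method bodies here are invented for illustration:

```javascript
// Hypothetical sketch of a vRO "function-class": a constructor function with
// prototype methods, typed via JSDoc. Names and behavior are illustrative only.

/**
 * @constructor
 * @param {string} [endpoint] - Base URL of the Chef server (assumed default below)
 */
function Chef(endpoint) {
  this.endpoint = endpoint || "https://chef.example.com";
}

/**
 * Registers a new API client on the Chef server.
 * @param {string} name - Client name, typically the VM name
 * @returns {{name: string, private_key: string}}
 */
Chef.prototype.createClient = function (name) {
  // A real implementation would call the Chef REST API; stubbed for illustration
  return { name: name, private_key: "-----BEGIN RSA PRIVATE KEY-----..." };
};

var chef = new Chef();
console.log(chef.createClient("vm-001").name); // "vm-001"
```

In vRO, the action typically returns the constructor, which is why callers write `System.getModule(...).Chef()` followed by `new Chef()`.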

We use Visual Studio Code, which is an awesome, open and lightweight IDE. Our extension turns it into a vRO-compatible code editor with all the autocomplete you need (core scripting API, plug-in objects and actions). You can also push content to your live (preferably dedicated) development environment for testing, or pull the things you develop in vRO (workflows, configuration elements etc.) and vRA (blueprints, forms) locally to integrate them into your project and, afterwards, into Git.

With the toolchain and some clever mocking, we have unit tests to verify our code before even touching the environment.

The best part is that those tests run headless (emulating the vRO scripting environment) and very quickly (100+ tests in several seconds), which allows us to run them on every commit. Nothing gets merged into master if tests are failing, giving us automated regression testing and ensuring proper backward compatibility and superior quality of essential components. As an advocate of TDD, I find this a major productivity booster.

Since most of the code is JavaScript, we can leverage the diff tools of popular source control systems and do effective code reviews – another personal favourite of mine – which nurture knowledge transfer and code ownership and reveal defects that are almost impossible to spot in the QA phase.

The other cool thing is dependency management. The reason it is so easy to bootstrap a fully functional web application with Rails is all the gems you can just grab and use (e.g. Facebook authentication). With Maven, you can have a similar experience for your automation – just add the libraries you need, and your new use case is almost done.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  ...
</project>

Finally, your code can be released as a single artifact with semantic versioning (e.g. 2.3.4) and all necessary dependencies. You can install this bundle to QA/Staging, both vRO and vRA, test it, and if you are satisfied with the results, install it on any number of production environments using the CLI provided in the zip.

kvuchkov-a02:bundle kvuchkov$ ls
bin	repo	vra	vro
kvuchkov-a02:bundle kvuchkov$ ./bin/installer ../
Started in non-interactive mode. Using Environment Profile ../
21:19:07.538 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - STRATEGY| INFO | Apply Configuration strategies for import
21:19:07.540 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - STRATEGY| SKIP | Source.Version <= Destination.Version
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.library.logging (2.0.1) <= (2.0.1)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.library.nsx (2.8.0) <= (2.8.0)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.fwaas (1.0.0-SNAPSHOT) <= (1.0.0-SNAPSHOT)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | (2.4.3) <= (2.4.3)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.library.util.wfs (1.0.1) <= (NaN)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.library.validator (2.0.1) <= (2.0.1)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.fwaas.actions (1.0.0-SNAPSHOT) <= (1.0.0-SNAPSHOT)
21:19:07.541 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.library.util (2.16.1) <= (2.16.1)
21:19:07.542 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.library.class (2.2.0) <= (2.2.0)
21:19:07.542 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - STRATEGY| PASS | Source.Version > Destination.Version
21:19:07.542 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | PASS | com.vmware.pscoe.library.util.wfs (1.0.1) > (NaN)
21:19:07.542 [main] INFO  com.vmware.pscoe.iac.artifact.VroPackageStore - Package | IMPORT | com.vmware.pscoe.library.util.wfs-1.0.1.package
21:19:07.749 [main] DEBUG - IMPORT  | com.vmware.pscoe.library.util.wfs-1.0.1.package
21:19:07.759 [main] INFO - vRA authentication token has expired. Acquiring a new one.
21:19:08.001 [main] DEBUG - Found com.vmware.pscoe.vra-1.0.0-SNAPSHOT.vra on server
21:19:08.001 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - STRATEGY| INFO | Apply Configuration strategies for import
21:19:08.001 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - STRATEGY| SKIP | Source.Version <= Destination.Version
21:19:08.002 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - PACKAGE | SKIP | com.vmware.pscoe.vra (1.0.0-SNAPSHOT) <= (1.0.0-SNAPSHOT)
21:19:08.002 [main] INFO  com.vmware.pscoe.iac.artifact.strategy.StrategySkipOldVersions - STRATEGY| PASS | Source.Version > Destination.Version
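The SKIP/PASS decisions in the log above boil down to a semantic-version comparison: a package is imported only if the source version is newer than what is already on the destination. A minimal hypothetical sketch of that strategy (not the actual installer code):

```javascript
// Hypothetical sketch of the "skip old versions" import strategy seen in the
// installer log. Qualifiers like -SNAPSHOT are ignored for simplicity here.
function compareSemver(a, b) {
  const pa = a.split("-")[0].split(".").map(Number);
  const pb = b.split("-")[0].split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] || 0) !== (pb[i] || 0)) return (pa[i] || 0) - (pb[i] || 0);
  }
  return 0;
}

function shouldImport(sourceVersion, destinationVersion) {
  if (destinationVersion === undefined) return true; // nothing installed yet
  return compareSemver(sourceVersion, destinationVersion) > 0;
}

console.log(shouldImport("2.0.1", "2.0.1"));               // false -> SKIP
console.log(shouldImport("1.0.0-SNAPSHOT", "1.0.0-SNAPSHOT")); // false -> SKIP
console.log(shouldImport("1.0.1", undefined));             // true  -> PASS (new package)
```

This is what makes the bundle safely re-runnable: installing it twice, or on an environment that is already up to date, changes nothing.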


You can always start small with any project, whether it is vRA or something else, and keep it simple. However, once you go beyond a certain threshold and start developing your own solution, using vRA more as a platform and less as a product, you have to consider software development. Otherwise you will repeat mistakes and face challenges that the industry solved decades ago.