Configuring a SonarQube analysis with VSTS / TFS2015 vNext build

I must admit setting up a SonarQube analysis on the new build platform of VSTS / TFS2015 is a breeze compared to the previous approach.

In the old days it could take me a week to set up a TFS build server capable of running a SonarQube analysis including unit testing and code coverage analysis. One had to deploy all kinds of OSS tools for unit testing and code coverage analysis, or customize the build process template to transform the Visual Studio test results into an XML file SonarQube would accept.

All that is completely gone. You don’t even necessarily have to add a sonar-project.properties file to your Visual Studio solution, although you could. However, there was one challenge which took me half a day to resolve, and for that reason I reckoned it would be a good idea to share the solution in this post.

Preparation

Obviously you need a SonarQube server up and running to publish the SonarQube analysis results to. The SonarQube server requires the C# plugin version 5.0, a plugin which is compatible with SonarQube 4.5.2+ (LTS). If you are on an older SonarQube version, you will be required to upgrade.

You also need your SonarQube Endpoint configured on TFS and a build agent available with the capabilities java, msbuild, visualstudio and vstest. If you can’t figure out how to do this, just drop me a comment and I’ll explain it in another blog post.

Basic approach

To get started just add the following build steps to your definition as shown on the picture below:

  1. SonarQube for MSBuild – Begin Analysis
  2. Visual Studio Build
  3. Visual Studio Test
  4. SonarQube for MSBuild – End Analysis
SonarQube build process overview

Configuring SonarQube MSBuild – Begin Analysis Step

One could say this build step is the new and improved replacement of the sonar-project.properties file, which was used to configure sonar-runner. With just 5 settings you are all set.

SonarQube Server
In the SonarQube Server section the SonarQube Endpoint has to be selected from the drop-down list. If you don’t know how to configure the endpoint, just drop me a comment.

SonarQube Project Settings
In the next section you configure your project key, project name and project version. The Project Key is the technical identification of your project in SonarQube; in my example I used com.capgemini.DevOps. The Project Name is the common name for your project, in my case DevOpsDemo. And the Project Version is, obviously, the current version of the solution you are working on, in my case 1.0.

Advanced
My biggest challenge with the SonarQube analysis was getting the Visual Studio Unit Test results published in SonarQube. To my surprise, although the unit test coverage results are uploaded with no additional settings, the unit test results themselves are not. I suspect this to be an undocumented minor bug in the SonarQube for MSBuild – End Analysis task, currently on version 1.0.36.

To work around this issue it is required to set the following Additional Setting:

/d:sonar.cs.vstest.reportsPaths="$(Common.TestResultsDirectory)\*.trx"

Common.TestResultsDirectory is one of the default build variables available to tweak your build process to the actual environment the build agent is running in. This variable will be set to the local path on the agent where the test results are created.
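
For reference, the build task essentially wraps the SonarQube Scanner for MSBuild. If you were to run the begin step by hand with the same settings, I believe the invocation would look roughly like the sketch below. Treat it as a sketch only: the exact executable name and the server URL depend on your scanner version and environment, and I spelled out the test results path because the $(Common.TestResultsDirectory) variable only exists inside a build.

# Rough manual equivalent of the "Begin Analysis" step with my example values.
# Assumes MSBuild.SonarQube.Runner.exe from the SonarQube Scanner for MSBuild is on the path;
# http://your-sonarqube-server:9000 is a placeholder for your own server URL.
.\MSBuild.SonarQube.Runner.exe begin `
    /k:"com.capgemini.DevOps" `
    /n:"DevOpsDemo" `
    /v:"1.0" `
    /d:sonar.host.url="http://your-sonarqube-server:9000" `
    /d:sonar.cs.vstest.reportsPaths="TestResults\*.trx"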

Configuring Visual Studio Build Step

Just configure this step as you see fit. For a SonarQube analysis no special configuration is required.

Configure Visual Studio Test Build Step

Again you can basically configure this step to meet your unit test requirements. However, for the SonarQube analysis you want to enable the option “Code Coverage Enabled”.

Configure SonarQube for MSBuild – End Analysis Step

Although this is the build step which actually executes the SonarQube analysis, the step itself requires no configuration, except that it has to be Enabled, which is the default setting.
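
For completeness: if you were running the scanner by hand as in the sketch above, the end step would be nothing more than the single command below (same assumption about the executable name).

# Finishes the analysis and uploads the results to the SonarQube server
.\MSBuild.SonarQube.Runner.exe end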

Configure General Settings

As already mentioned in the Preparation section above, a SonarQube analysis requires the capabilities java, msbuild, visualstudio and vstest from the build agent.

To ensure your build ends up on an agent with these capabilities, you need to add these four capabilities to the Demands section on the build definition’s General tab with the type “exists”.

Setting the required Agent capabilities.

Conclusion

With the settings described above you are basically all set to run your SonarQube analysis. Obviously a build definition needs some more configuration with regard to Options, Repository, Triggers, General settings and Retention, but I consider these outside the scope of this blog post. If you want to know more about these, just drop me a comment.

The Digital Transformation’s impact on a software development team

The other day my management asked me for my view on what a software development team should look like to face the challenges imposed by current trends such as digital transformation, the digital customer experience, SMAC, Big Data and cloud computing, which I like to refer to with the term “the third platform technologies”.

When thinking about the engineering skills and capabilities my team would require to successfully face these challenges, I could list all existing ones and a few new ones. With this approach I would assure myself that I have the complete software engineering body of knowledge readily available in my team, which would enable the team to successfully address any challenge during the delivery of a solution.

However, if I were to take this forward, I would eventually end up with a team of such considerable size that I highly doubt my business sponsor would provide the funds required to assemble it. The size of the team would also create considerable overhead for team coordination and alignment. This would greatly jeopardize the team’s capability to deliver value to the client fast and to respond swiftly to any changes in the client’s business priorities or markets. I would rather have my team be as compact as possible, thus driving down cost and providing an agile environment capable of handling change. This implies I have to make a few conscious choices regarding the team’s skills and capabilities, which made me decide to describe a brief profile for each member of the team.

The third platform technologies, which we are to leverage to deliver the solution, will for sure require new sets of skills for software engineers. The third platform era also requires a change of mindset and priorities for the team. A central theme in the digital transformation we are to support is our clients’ transition from product to service; from only selling products to an end-to-end client experience with added value through added value services. Service is prime, hence rather than talking about the back office for such a solution, I prefer the name service office for the same. This also distinguishes the backbone of our solution from the existing (legacy) back office of our clients, which is most likely product & ERP focused, supported by packages like Siebel or SAP. This also warrants a change of names for a few team member profiles and even the introduction of a few new profiles.

The service assurance engineer
This engineer assures the functional quality of the services provided by our solution to its users. In the second platform era this engineer was known as the test engineer. However, the end users’ perception of our solution and the services it provides will greatly impact the solution’s success in the market. Assuring its quality is among the highest priorities of the team, as any remaining functional issues in a release will negatively impact the service’s reputation and the client’s bottom line. Leveraging the traditional test engineer’s skill set and capabilities, the engineer’s focus shifts from validating detailed functional requirements to assessing whether the feature set of a solution release meets the end users’ service experience expectations.

Service analyst
The Service Analyst analyses the service organization and designs its processes and systems, creating new business models and their integration with technology. In the second platform era this engineer was known as the business analyst. Leveraging the traditional business analyst skill set and capabilities, the focus shifts from aligning the product-focused business and IT to creating new business models, leveraging all available assets of a company and venturing into new markets and services. Obviously this still requires a deep understanding of a business domain, but rather than thinking about how to support a given business domain with IT, there will be a shift towards thinking about extending this domain.

Data analytic engineer
The data analytic engineer is responsible for providing the means for the service delivery organization to govern and manage the data leveraged by the service. In the second platform era this engineer was known as the business intelligence engineer. Leveraging the traditional BI skill set and capabilities, the focus shifts from reporting business performance and supporting business decisions to providing data to support and enhance the service experience. This engineer will also require big data analysis skills, which will enable him to create big data solutions that provide the means for the service solution to align the service delivery with the profile, the expectations and the social context of each individual end user.

Gamification engineer
The gamification engineer is responsible for creating the part of the solution which provides the incentives that will seduce, persuade and engage the service end user to provide feedback regarding the service experience. In the second platform era this engineer was known as the computer game engineer. Leveraging the computer game engineer’s skills and capabilities, the focus shifts from creating stand-alone games to adding game concepts to the service solution which focus on collecting data regarding the service experience, data which, when leveraged by the service solution, will improve that experience. As an example, consider how Foursquare leveraged gamification to create a worldwide data set with detailed information about venues.

Infrastructure-as-code engineer
The infrastructure-as-code engineer is responsible for creating the part of the service solution which provides the means to automatically deploy and configure the Information System (IS) & Infrastructure Technology (IT) components which deliver the service from the cloud to the devices of our service’s end users. In the second platform era this engineer was known as the infrastructure engineer. Leveraging the infrastructure engineer’s skills and capabilities, the focus shifts from designing and, in most cases, manually deploying, installing and configuring the solution’s infrastructure and software components, to creating solutions which provide the same through automated processes captured in software code. Ideally these solutions will monitor the performance of these components in real time and automatically deploy or decommission components as required to provide an acceptable service experience, while at the same time minimizing infrastructure service costs. This will require the infrastructure engineer to learn about software engineering concepts & constructs, which will enable him to create maintainable and changeable code. The constructs leveraged should assure a pluggable architecture which provides the means to exchange infrastructure service components from one provider with the components of another provider, to overcome the issue of vendor and/or platform lock-in.

Service experience engineer
The service experience engineer is responsible for designing and specifying the service experience at a conceptual level. In the second platform era this engineer had multiple faces. First of all, the engineer would have been the User Experience Engineer, who leveraged RDV (Rapid Design & Visualization) to create prototypes for the screen-based user interfaces which dominated the era. Second of all, the engineer was known as the Requirements Engineer, responsible for creating detailed functional specifications and acceptance criteria. For our third platform solution, however, I would like to combine the two roles into one and add Industrial Design skills and capabilities as well, to enable the engineer to create the user interfaces of the third era, such as augmented reality, to name just one concept looming just beyond the horizon. This would create an engineer able to design and specify a service experience with a compelling user interaction surface, one which attracts users to the service merely by the ergonomic, visual and audible quality of the end-user surface, which can be so much more than a textual representation of data on a screen.

(Mobile) Front-end engineer
The (Mobile) Front-end engineer is responsible for actually constructing the user interaction surface, leveraging the design and specification co-created with the service experience engineer. In the second platform era this engineer was known as the front-end software programmer. Leveraging the traditional skills and capabilities of a programmer, the focus will shift from forms-based interfaces to interfaces which leverage all the available user interfacing, sensor and location capabilities of the third platform devices. Obviously this will require the engineer to learn the APIs available on these devices, to enable him to leverage the devices’ capabilities to their full extent.

Service Integration engineer
The service integration engineer is responsible for designing, specifying and constructing the integration between the service solution under construction and the APIs of the services provided by other service providers. (In a case study I was involved in, it was required to integrate with the APIs of transportation service providers, for example to book a ticket or to retrieve a bus schedule.) In the second platform era this engineer was known as the System Integration Engineer. Leveraging the traditional skills of the System Integration Engineer, the focus will shift from integrating the services provided by an organization’s internal back office information system components within one business organization, to integrating the services provided by external service providers. Considering a service solution should be able to exchange the services provided by one service provider with the services provided by another, in the third platform era the engineer needs to assure all external system integrations are pluggable.

Service office engineer
The service office engineer is responsible for designing and constructing the backbone of the service solution, the service office information system component defined above. This backbone aggregates the services provided by the business for which the solution is constructed with the services provided by other (external) service providers, and interacts with the service end users’ interaction surfaces. In the second platform era this engineer was known as the back office programmer. Leveraging the skills and capabilities of the back office programmer, the focus will shift from retrieving, transforming and persisting data from relational databases to retrieving, transforming and persisting data through the services made available by the efforts of the service integration engineer.

Back office engineer

The back office engineer is responsible for adapting the business’ (legacy) back office information system components, to enable these components to provide services to the service office. Even in the third platform era the existing back offices of our “old business” clients contain valuable assets which can be leveraged in new business models and services. However it may be required to adapt these components to enable them to expose these assets for a service. Hence in my dream team a software engineer who is able to adapt the legacy systems is a valuable asset; most likely he also knows a lot about the business domain for which the service is constructed.

Resource management engineer
The resource management engineer is responsible for managing the resources which enable the team to create the service solution. The most important resources for the team are people, time and budget. In the second platform era this engineer was known as the project manager, hence one can argue whether this engineer should be considered a software engineer at all. Leveraging the traditional skills and capabilities of the project manager, the focus will shift from a plan, command and control leader to a facilitating servant leader, capable of providing the resources which enable the team to incrementally develop the service leveraging agile practices. Obviously this requires this engineer to be the scrum master of the team. But the resource management engineer must also be capable of creating a business case for the next release of the service solution and securing the budget required for its realization; hence the old school tricks and trades of a project manager are a valuable asset to acquire these funds from a business which may not completely grasp the potential of the digital transformation enabled by the service solution.

Support & communication engineer
The support & communication engineer is responsible for handling all communication channels with the service’s end users with regard to the front-end applications deployed on their service interaction surfaces (as opposed to the service desk members, who handle support and communication with regard to the service provided). In the second platform era this engineer was known as the help desk engineer. Leveraging the traditional skills and capabilities of the help desk engineer, the focus will shift from providing in-depth guidance to users who try to use the solution without ever reading the manual, to acting as a community manager capable of positively contributing to the perception and reputation of the service solution and the front-end applications leveraged to deliver it to the service’s end users. The engineer must be able to mitigate any technical issue, true or false, which may surface on any social network or medium. He guards the solution’s reputation on the internet and, as the first line of support, identifies and analyses any urgent issues which require immediate action. This enables the other members of the team to focus on the new service features for the next release and the high priority issues reported by the service’s community members.

To wrap up this blog post: digital transformation not only requires a transformation of the business. A transformation of the software engineering community, its members, their skills and capabilities is also required. This will enable software engineers to take on the challenge to venture off into new technology landscapes and business models with the business, and to focus on the objectives required to successfully complete the journey; nevertheless leveraging the existing pragmatic software engineering skills and capabilities to guarantee the maintainability and changeability of the service solutions they construct.

Software Assembly Line Reference Model – First draft complete

Today, with the pages about the Platform, Infrastructure and Support layers of the Software Assembly Line Reference Model published, the first draft of the model is complete.

This draft primarily focuses on the capability assessment of a software assembly line. Your feedback on this draft would be highly appreciated and you are invited to submit it through the comments section on each of the pages.

Going forward, in the next edition of the model, methods, techniques, tools and rules will be added to each of the aspect areas of the model.

Platform

Infrastructure

Support

 

Straight from the trenches: Engineering experiences with Microservices Architecture

Andrew Harmel-Law interviewed by Maurice Driessen

Earlier this month, while working on a proposal which turned out successful, I got the opportunity to have a discussion with Andrew Harmel-Law about his personal experience with constructing a system based on a Microservices Architecture, leveraging an evolutionary approach. It made sense to us to share this experience with all of you software engineers. The focal point of the discussion was how the non-functional characteristics of a Microservices Architecture contribute to the business continuity and agility of his client and help software engineers to sustain a high quality, complex solution while improving the collaboration with the business.

Could you tell me something about your client and how your collaboration with the client evolved?
My client is a large mail & parcel fulfillment provider in the UK. We build and run an array of their services for them. We initially started with eBusiness, the public facing websites and micro sites. We used to interface into legacy backend systems, provided by others, not the client and not us. We recently won contracts to take over a good proportion of the legacy backend systems as well. They have an integration gateway sitting in front of lots of backend systems, ranging from old mainframes to .NET based smaller solutions, the typical standard estate of large and small business critical systems one can expect in any large organization grown over time, with all kinds of integration patterns, from point to point to ESB.

Within the program, individual projects driven by the client are run on individual business cases. Because there has never been an overall integration program, and because of how these projects are funded and scoped individually, we find ourselves again and again needing an integration between micro site A and backend systems B & C. We are currently leveraging a microservices based architecture to address this challenge. As we went forward project by project, which is typical for the agile approach used to execute the projects, we found the microservices being reused between the front office and the back ends. The evolution of the microservices is the current vision of how to address the integration challenge, as opposed to almost a decade ago, where we would do a large scale SOA analysis in which we would try to identify services and agree how we would produce & consume them. With the current microservices approach we see the benefit of each microservice evolving and the benefit of an individual microservice being added to the mix later on. Applying a microservices architecture enabled us to develop a very large and complex system in a more modular and flexible way.

Although the client is driven by projects, which are not very agile, even within the scope of a fixed price project we run with the client we are required to accommodate last minute changes driven by the changing business needs of running their business. The cost of those changes, whether they come early or late in the project, is constant. In a traditionally architected project, where you will incur technical debt, the cost of these changes tends to increase towards the end of, or after, the project.

From a business perspective the evolution of the microservices has created a transparency which helped us make the business understand the complexity of any request for change to the evolving microservices ecosystem. This is because the business could grasp how the microservices map to their business (cf. Conway’s Law), and as a result they could also understand how a change to the business would translate into a more or less complex change to the microservices. This is the complete opposite of the old days with the traditional monolith approach, where we as software engineers could only tell them about this “blob” of integration code and the challenge of changing it, which we could not make them understand, so the business was required to trust us. As a result the microservices architecture contributed to improving the business agility of our client, improving the collaboration with the business to accommodate change and involving the business in the evolution of their system architecture.

What can you share with regard to the maintainability of your microservices solution?
The microservices architecture created a very maintainable code base: for each microservice a separate, relatively small code base is created, this code base is put in a separate repository, the name of the microservice makes sense to developers and the business, and the code is deployed and maintained as a physically separate executable component. The typical kind of technical debt we see in a microservice is a piece of business logic which somehow got into the service but which, given the responsibilities of each service, should be in another one. Because of the small code base, these kinds of issues stand out and are identified easily.

In the old monolithic approach we would obviously also create the same modular code, but we would wrap it in a single executable with a relatively simple architecture and, in the end, a very complex codebase due to the size of the monolith. If someone, by honest mistake, created a class in the wrong location within the architecture, this went unnoticed and others built upon it, the quality of your code would just drop and eventually become unmaintainable.

A microservices architecture also tends to have an evolutionary lifecycle. Because each microservice makes sense to the product owner and the software engineers, they can have decent and meaningful conversations around them. If they agree a microservice needs to be split up, the development team just go ahead and do it and don’t over-analyze the decision. After all, the product owner and software engineers are ultimately responsible for them. This is a result of the fact that the microservices architecture is very bare, open and explicit: a microservice is an individual component which exposes a real and explicit interface, which is documented and which may be consumed by a component developed by the guy sitting next to you. All of this however requires software engineering discipline and craftsmanship, because if you don’t do it, it won’t work. The design and engineering have to happen, because they can’t happen by accident.

Suppose we have created two versions of a microservice; we will have done this explicitly and for good reasons. If we then decide to retire the old version because it is deprecated, we can do so explicitly and get rid of the old version of the code as a whole. Compare this with removing code from a monolithic application: removal from a monolith comes at a cost and tends to be neglected. As a result the code base just keeps growing, because nobody removes unused code, and over time the solution’s maintainability decreases.

Because in a microservices architecture your deployment units are a lot smaller, you are likely to fix things faster when there is, for example, a security issue with a library used by the microservices in the solution. You can pull down all these microservices, make the changes required to fix the issue, and do a build and a test of all of them. If, for example, 7 out of 8 microservices pass the tests, you can deploy the 7 fixed and patched microservices back into production. If the remaining issue with the 8th microservice requires more time to fix, then that is regrettable, but the key message here is that 7/8th of your system is already up and running. Compared to a traditional monolith that is a great improvement, because with the monolith you would be required to fix and test everything before you could make your solution available again.

In a microservices-based solution, removing an old version of a microservice from the solution environment requires explicit conversations and decisions. Migrating from an old version of a microservice to a new version will impact dependent service consumers, and this needs to be planned and organized explicitly. But you are removing a complete moving part (the old version of the microservice) from your solution environment as a whole and replacing it with another moving part, the new version. You don’t need to remove any unused pieces of code from a monolith; you just stop deploying the old version and it is gone.

How does this contribute to the business continuity of your client?
Well, we engineered our microservices to be stateless and our persistence tier (where our cross-request data resides) is a combination of MongoDB, MySQL and Redis. This enables us to scale up or down elastically, deploy new microservices and retire old microservices without outage. We do have service outages, because our client has gating processes and its own way of releasing software into production, but we have built our solution in the same way Netflix and other companies do. We could deploy new versions of microservices and then use load balancers to redirect a fraction of the network traffic to the new ones and see if it works. If we are happy, we could direct some more traffic; if it is bad and it blows up, we could redirect traffic back to the old versions. We have used this technique in our test environments, giving zero-downtime upgrades.

How do you actually deploy?
We are looking at becoming more mature. Currently we package the microservices as executable jar files. They contain Netty, which is our HTTP listener, and Camel to do the pipeline processing. An HTTP client and Hystrix are used to call downstream services, MongoDB or Redis. These jar files are promoted through our testing environments, leveraging Puppet and Capistrano to automate our deploys and Jenkins and JMeter to run our smoke tests.

We are considering moving to Docker, because Docker is good in development as well as in production. Docker provides process isolation – “bulkheading”. Docker would also give us the opportunity to deploy on a cloud based or PaaS platform. This can be done without code changes and with only limited changes to our provisioning scripts, and would enable us to move to any private, hybrid or public cloud environment. This capability is inherent in the architecture.

What is your experience with regard to performance?
We do need to spec everything for the Christmas season, because around Christmas people tend to send lots of things and that is what our client takes care of. Obviously around that time our systems get the highest load. Take for example the service which provides stamps. A user can send in a request for up to 200 stamps, and that request turns into 10 calls to a set of sub-microservices per stamp, which amounts to 2,000 transactions. For stamp requests we can handle around 10 to 15 a second, resulting in 20 to 30 thousand transactions a second overall within this solution. This is not massively high scale, but what we have proven is that we can scale horizontally almost linearly, because we do not require any coordination between the microservice instances. Our scaling bottleneck is actually our 5-node MongoDB cluster. Because providing stamps is basically like printing money, we obviously must keep track of what we sell to the client’s customers. The synchronized write across the 5-node MongoDB cluster, guaranteed to have been written to disk on at least 3 nodes, is the limiting step with regard to our transactional capacity.

Everything we have constructed in our microservices landscape is very thoroughly non-functionally tested, specifically for throughput and scalability. We spend a lot of time tuning timeouts, thread pool sizes and connection pool sizes. We spend considerable effort putting logging and monitoring in place, so we can see what is actually going on in the system. Sometimes you want things to fail and fail fast. As a result the configuration is set based on how we perceive the demand for our microservices, making sure we fail at the right point of actual demand, in our case just because some of the backend systems our solution depends on are really slow and have limited capacity. So in case of unforeseen issues we do need to set timeouts lower and make sure there are no knock-on effects. Again, the good engineering practices that Capgemini is good at are brought right to the front. You can get very high throughputs with a microservices architecture, but you need to check; you won’t get it for free. You need to make sure all your configuration settings are set up correctly. As long as you stay in control of your configuration you can basically scale linearly, but knowing the bottlenecks in your system environment is key.

In lots of ways, with a microservices architecture we are doing SOA, an architecture Capgemini talked a lot about in the early 2000s, but which was at that time SOAP based. Nowadays, with a microservices architecture, we are evolving a REST based SOA at a very high level.

What about the choreography of all these individual microservices? What does it take to make these individual microservices act as a fully fledged enterprise-level computing system?
In our solution we ended up with choreography services. Take for example the case of making a stamp. To make a stamp you need a tracking number, which are pre-allocated. You need to reserve a tracking number, get the tracking number and use that tracking number to make the stamp. When you have created the stamp with the tracking number, you need to mark the tracking number as used. Then finally you need to make a barcode image of the tracking number and turn that into the label. So that is our business process, which business people do understand. Our solution has services which sit at that level and marshal the top-level requests; we shield the choreography services from these requests with what we call adapters, which are basically microservice session facades.

We do expose our microservices to various consumers, which get distinct functional and non-functional flavors of our services. Consumers like, for example, Amazon, eBay and the client’s own website. These consumers do their own decoration and adaptation of the request, but in the end, when they need stamps, they call our choreography class with the actual request for stamps. The choreography class then calls down to the various resource classes: the resource microservices which make a tracking number, give a barcode, etc. These are all responsible for timing out effectively, tidying things up when things go wrong and making sure no mess is left behind.

Have you seen any patterns evolve in your solution?
In our solution we see the following recurring pattern. We have a single microservice sitting in front of each data store, which for example passes out tracking numbers. We also typically have a management microservice for a data store, to set up items in the data store, for example new tracking numbers. This one is deployed separately, because there is no need to massively scale out this kind of service; a few instances will do just fine. Typically there is also a reporting microservice, if there is a resource which needs reporting on. Again we have that as a separate service. So most logical resources in a system, like for example the tracking numbers resource, have these 3 types of microservices allocated to them. But you only need to scale the microservices which handle the publicly available requests, in our example the “pass out a tracking number” microservice. The other two types will not be hit with a heavy load; they just need to be available when there is demand for their service. This approach enables us to scale our system at a more granular level, at the level of microservices, compared to traditional monolithic systems, which in turn enables us to more effectively and efficiently leverage the computing resources available and elastically scale the services for a given load profile.

Any last advice for our software engineering community members?
The biggest thing to remember about the microservices architecture is that people have to remember they are software engineers and that they don’t get a set of ready-made practices handed to them on a plate. You do need to be aware of good engineering practices when constructing a microservices based solution, which makes a good case for Capgemini, because Capgemini is known in the market for the expertise of its engineers. But the core of microservices is fun; our engineers like creating these kinds of architectures.

_________________________________

If you want more information on our experiences with microservices, check out the engineering blog at http://capgemini.github.io/categories/index.html#architecture. A reading list Andrew suggested on this topic is shared on http://bit.ly/MicroservicesArchitecture.

A reading list on Microservices Architecture

When I wanted to dive into and understand Microservices Architecture the following reading list was suggested to me:

Life beyond Distributed Transactions: an Apostate’s Opinion
by Pat Helland
This paper explores and names some of the practical approaches used in the implementations of large-scale mission-critical applications in a world which rejects distributed transactions. It discusses the management of fine-grained pieces of application data which may be repartitioned over time as the application grows. It also discusses the design patterns used in sending messages between these repartitionable pieces of data.

Migrating to Microservices
by Adrian Cockcroft
In this presentation Adrian Cockcroft discusses strategies, patterns and pathways to perform a gradual migration from monolithic applications towards cloud-based REST microservices.

Idempotence Is Not a Medical Condition
by Pat Helland
An essential property for reliable systems.

Distributed systems theory for the distributed systems engineer
A blog post on The Paper Trail with many links relevant for distributed systems engineering. This will keep you busy for some time.

Getting Started with Build vNext on TFS2015RC

Because the deployment and configuration of a Build vNext agent alongside my on-premise TFS2015RC test configuration was everything but an easy walk in the park, I decided to share a little write-up about what I have discovered so far. It might be of value to anybody who wants to get started with the new build architecture of Team Foundation Server 2015.

In this blog post I describe how to deploy and configure the build agent. In a second part I intend to describe the creation and configuration of a build definition. I already have a build up and running; I just don’t currently have the bandwidth available for the write-up.

This blog post refers to the experience provided by TFS2015 RC. Be advised that my experience with the new TFS build architecture is still limited. The guidance provided is just to get you started on this subject and is not intended for use in software development production environments.

Objective

The objective is to deploy a Build vNext agent on a dedicated build server and have the agent get the latest version of the source code, retrieve the required NuGet packages, build the application and drop the build result in TFS.

Context

My TFS2015RC environment has the following configuration:

Server – Role
tfs14dc01 – Domain controller for the domain TFS14
tfs14at01 – TFS2015RC application tier
tfs14dt01 – TFS data tier leveraging SQL Server 2014
tfs14rm01 – TFS2015RC build server
tfs14ws01 – Visual Studio Enterprise 2015 RC workstation

In TFS2015 I have created a Team Collection and Team Project, both with the name TFS2015, and submitted an out of the box C# Single Page Application web solution with the name Mvc to source control. Just to be clear on this point, the NuGet packages the solution depends on are not submitted to source control and need to be retrieved from NuGet.org during the build.

In source control the base folder for the solution is $/TFS2015/Mvc

Figure 1. Source Control

1. Create Build service account

To be able to run the agent as a service I created a dedicated build service account on the domain controller tfs14dc01. In this example this account is TFS14\svcTfsBuild.

This build service account requires authorization to access the solution in TFS source control. To accomplish this I browsed to the Members page of the Project Collection Build Service Accounts group on the Security panel of the team collection’s control panel and added the build service account as a member.

Figure 2. Build Service Accounts

2. Build pool configuration

Because my solution lives in the Team Project TFS2015, I decided to create a dedicated build pool for this team project. To accomplish this I browsed to the Agent Pools tab on the root of the control panel and created a new Agent Pool with the name TFS2015.

Figure 3. Agent Pool Creation

The next step is to expand the new Agent Pool in the tree explorer, select Agent Pool Service Accounts in the tree explorer and double-click the Agent Pool Service Accounts group in the main window. A window pops up in which I added the build service account to the group.

Figure 4. Agent Pool Service Account

You need to be a bit cautious here. With the above approach my build service account has been added to a group whose members are authorized to act as agent pool service accounts for all build pools. This fits my personal requirements, as I am planning to reuse the same service account across all my build pools. However, if you just want to authorize the build service account for this specific build pool, you should add your build service account directly to the pool-level security list by clicking the Add button in the main window.

3. Avoid the Build vNext Agent provided with TFS2015RC

With the previous 2 steps completed we are all set to install and configure the Build vNext Agent. This step actually required some time and effort to figure out and complete successfully.

Figure 5. Agent Pool Configuration Screen

4. Get yourself a Visual Studio Online account

If you follow the current guidance from Microsoft, you would browse on your build server to the Agent Pool configuration screen of your TFS server and hit the Get Latest Agent button above the Build Pool explorer. This provides you with the version of the Build vNext agent installation package that shipped with TFS2015RC. I ventured down this path only to discover the package still has tons of issues, which is to be expected for a preview. After trying to resolve all the issues, with not much success I must say, I was about to quit my investigation of the new build architecture. At the same time Microsoft announced the new build architecture had been released for public preview on Visual Studio Online. Taking an educated guess that the build agent provided on VSO would most likely be an improved version, I took a gamble and decided to continue my effort with the build agent available on VSO. I don’t know for sure whether I inadvertently opened up a can of worms with this approach, but at least, as you will see, to date it has turned out to be the road to success.

To be able to complete this step you do require access to Visual Studio Online. Considering I expect most of my readers to have an MSDN subscription, with a Visual Studio Online account being one of the benefits provided, I trust you will be able to figure out yourself how to set up a VSO account.

Figure 6. MSDN Subscription Landing Page

5. Prepare build server

To run the build agent it is currently required to install Visual Studio 2013 or 2015 on the build server. You will also need to have PowerShell 3 or newer available.
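
A quick way to check the installed PowerShell version on the build server is to inspect the built-in $PSVersionTable variable; the Major property should report 3 or higher:

# Check the installed PowerShell version; Major should be 3 or higher
$PSVersionTable.PSVersion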

6. Authorization for build service account on build server

Because I ran into all kinds of authorization issues with the build service account when trying to get the build to run, at some point I decided to provide the build service account with administrative privileges on the build server. To accomplish this I added the build service account to the local Administrators group of the build server with Computer Management.
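
If you prefer a command line over the Computer Management console, something along the lines of the snippet below, run from an elevated prompt on the build server, should accomplish the same; the account name is my example account.

# Add the build service account to the local Administrators group
# (command-line alternative to the Computer Management steps above)
net localgroup Administrators TFS14\svcTfsBuild /add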

Figure 7. Computer Management Console

Obviously, considering it is possible to trigger a PowerShell script during the build, this authorization puts your build server at risk from malicious scripts. I definitely need to revisit this configuration at a later date and investigate how the build service account can be locked down by providing just enough authorizations.

7. Download Build vNext Agent from VSO

It is about time to head down to Visual Studio Online to get a copy of the build agent. On the build server, browse to the Agent Pools tab on the Control Panel of your VSO subscription, available at the URL https://{youraccount}.visualstudio.com/DefaultCollection/_admin, and click Download agent to download the agent.zip file containing the agent. Save the zip to disk.

Figure 8. Agent Pool tab on Control Panel on VSO

I decided to unzip the downloaded file to the directory C:\Agent, which provides the directory structure shown in the picture below. I reckon this structure will be subject to change as the agent evolves in future releases.
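
If you prefer to script the unzip as well, a sketch using the .NET ZipFile class (which works on PowerShell 3 and up) could look like this; the download location C:\Downloads\agent.zip is just my assumption:

# Extract the downloaded agent package to C:\Agent
Add-Type -AssemblyName System.IO.Compression.FileSystem
[System.IO.Compression.ZipFile]::ExtractToDirectory("C:\Downloads\agent.zip", "C:\Agent")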

Figure 9. Directory Structure

 

8. Configure PowerShell environment

(I grabbed the guidance in this paragraph from https://msdn.microsoft.com/library/vs/alm/build/agents/windows, section “Download and configure the agent for Visual Studio Online”, which you might just want to check for an update.)

  1. Run PowerShell as Administrator.
  2. Enable PowerShell to run downloaded scripts signed by trusted publishers:
     Set-ExecutionPolicy RemoteSigned
  3. Disable strong name signing. Make sure to run both versions. (Applies only if you downloaded the agent software. We expect to eliminate this step when we ship the RTM version.)
     C:\Program Files (x86)\Microsoft SDKs\Windows\v8.2A\bin\NETFX 4.5.3 Tools\sn.exe -Vr *,*
     C:\Program Files (x86)\Microsoft SDKs\Windows\v8.2A\bin\NETFX 4.5.3 Tools\x64\sn.exe -Vr *,*
  4. Change to the directory where you unzipped the agent.
     cd C:\Agent
  5. Unblock the PowerShell scripts.
     Get-ChildItem -Recurse * | Unblock-File

You can skip this paragraph the next time you configure an agent.

9. Configure the build agent

While you are still in PowerShell as Administrator, in the directory C:\Agent, execute the following command:

.\ConfigureAgent.ps1

The build agent’s PowerShell configuration script will kick off and ask you to provide input for the following questions:

Enter the name for the agent?

Respond with: Agent1.
Enter the url for the Team Foundation Server?

In my case I responded with: https://tfs14at01:8080/tfs/
Configure this agent against which build pool?

In my case I responded with: TFS2015, referring to the build pool created in section 2.
Enter the path of work folder for this agent?

In my case I responded with c:\agent1.
This will create a subdirectory which will contain the file work space for the build agent.
Would you like to install the agent as a Windows Service? (Y/N)

I responded with Y.
Enter the name of the user account to use for the service? (default is NT AUTHORITY\NetworkService)

I responded with my build service account: TFS14\svcTfsBuild.

Next you will be prompted for the password of the build service account.
The next question will be whether you would like to unconfigure any existing agent (Y/N).

Just respond with N and the configuration script will start with the actual configuration of the agent.

You do want to read the message:
“Configure agent succeeded. Agent is running as a Windows Service.”
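
As a quick sanity check you can also verify on the build server itself that the agent’s Windows service exists and is running. The exact service name may differ per agent version, so the wildcard below is a guess on my part:

# Look for the agent's Windows service and show its status
Get-Service | Where-Object { $_.DisplayName -like "*Agent*" } | Format-Table Status, Name, DisplayName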

10. Check result

If you want to check whether the agent indeed registered correctly with TFS, just head down to the Agent Pools tab on the Control Panel of your TFS server. In the TFS2015 pool you will see Agent1 listed with a green bar at the left, indicating the agent is available and in good health.

Figure 10. Agent1 running in the TFS2015 agent pool

When did you have your last check up?

I was recently re-certified as Enterprise Certified Software Engineer by Capgemini’s Global Certification Board. In this article I elaborate on how I used the Software Engineering Certification Program for my personal career development.

Having a regular check-up on the things that surround you has become a fact of life in our society, which depends highly on technology. Sometimes these inspections are required by law, like the periodic motor vehicle test for my car. You have the condition of your central heating checked in the fall, to make sure it doesn’t break down on the first day of winter. Or you just want to take good care of yourself and have your teeth checked by your dentist every half year to spot tooth decay early. So basically you have regular checks to make sure everything functions as expected and is up to the mark.

As a certified and qualified professional software engineer I also need to make sure my skills are up to the mark. The professional standards which came with my engineering degree actually demand that I keep up with current technology. This should not come as much of a surprise in an industry which seems to reinvent itself every 5 to 10 years. Ever since I decided to pursue a career in the ICT industry I have been aware of this, and certification has always been my tool to retain my market value as a professional engineer.

When you start your career as an engineer you tend to focus on a publicly recognized certification program from one of our partners, for example Microsoft’s MCTS. This will identify you as an able professional in a particular technical field and add to your value on the job market as well as for Capgemini. But what you are actually doing with this kind of certification is making our partner’s solutions a valuable success for our clients. That is just fine and valuable for Capgemini. What it does not provide you, however, is explicit recognition of your personal added value to Capgemini, which is the value you personally provide on top of the external certification and which distinguishes you from other certified engineers in the job market.

And this is exactly the reason why I invest time and effort to pursue Capgemini’s own SE Certification Program. This has just now taken me to Level 3, the entry level for Capgemini’s SE Club d’Experts, the best skilled software engineers within Capgemini. It has also significantly contributed to my recent appointment as Managing Consultant and thus provided a boost in my career.

The SE Certification Program sets concrete targets to achieve, not only with regard to personal technical knowledge but also with regard to your personal contribution to sales, project delivery, knowledge sharing and, last but not least, the Capgemini way. I used these criteria as input for my personal development plan and to define SMART targets for my annual evaluation. This in turn provided me with assignments and training to set the right context for my personal development. I created a professional environment for myself to grow in and in the meantime made my personal contribution to Capgemini explicit.

As a result my recent SE Level 3 certification provides the proof that I am ready to contribute to Capgemini’s most complex engagements according to corporate guidelines, the “law”. The certificate also vouches for the condition and health of my personal and professional skills in a wider perspective than just technical excellence. So I now know for sure my skills are up to the mark and I am ready for my next professional challenge and the next step in my career.

So if you would like to be challenged and drive your professional future within Capgemini, my advice is to use SE certification to steer your personal professional development and boost your career. Contact your regional Software Engineering Board representative if you are ready for your professional check-up.

How do you define scope yet remain agile?

Here is a question I get regularly. In this case it came from Howard, a co-worker from the UK: “Probably a dumb question but… how do you define scope yet remain agile? Is it woolly wording, do you define it using some kind of abstraction or do you define it in terms of number of iterations? The key being that, in my experience, most people want some idea of what they’re getting up front, yet in remaining agile we want to defer that sort of decision until later.” This is not at all a dumb question, because it identifies the key difference between agile and traditional project delivery. Here is my response.

In my projects scope is a matter of defining it at the right abstraction level, or more precisely, defining it without much detail. The first thing you need is a concrete idea about the project’s scope boundaries. You can use the business case and vision for this purpose. They should provide the constraints with respect to, for example, the business processes to support or the available time and budget.

The next step is to make a first inventory of the functionality to be developed. You can use a business process model and/or use case model for this purpose. At this point in time you only define the outline for each process and/or use case, with just enough detail to get a general idea about its function in the process and a preliminary idea of the complexity, i.e. the expected cost of the realization. This provides both a general idea of what will be built and at what cost. This is the infamous backlog everybody is talking about.

And to make the project delivery agile, here is the key point: the project team has to be willing to change the content of the backlog as the project gets along, more detailed information about the problem at hand becomes available and the client’s needs change. And this change should actually be an exchange: if a new backlog item is identified, it should be exchanged for a low priority backlog item we thought we needed up front. You probably do want to keep the cost of the project constant, or otherwise you need to find a sponsor for the new backlog items.

This is also the reason backlog grooming is so important. The team should keep the backlog up to date, and the high priority backlog items should be more detailed than those with low priority. This constant re-evaluation of the backlog in collaboration with the client is your guarantee that you will actually deliver the solution the client needs, and not the solution he thought he needed at the start.

So bottom line: you do need to set a scope for a project, but only with just enough detail to get an initial cost estimate and budget. And if you want to deliver the project in an agile way, both you and the client must be willing to exchange low priority backlog items for newly discovered functionality which has a higher priority or value to the client, keeping the project cost constant.