Enabling team collaboration with ArchiMate 3.0

How to share your ArchiMate model with your fellow architects, leveraging open source tools.

Expressing the concerns that need to be addressed by the business and IT systems within the organization through architecture views in ArchiMate makes sense, at least to Enterprise Architects. However, collaborating on and sharing the ArchiMate model across a team of architects is a common challenge. This challenge can be addressed by leveraging a set of open source software tools:

  1. Archi 4, a free, open source, cross-platform ArchiMate 3.0 modelling tool for teams.
  2. GRAFICO, which stands for "Git Friendly Archi File Collection", an Archi plugin which persists an ArchiMate model as a collection of XML files (one file per ArchiMate element or view); an illustrative layout is shown after this list.
  3. GitExtensions, an open source graphical user interface for Git that allows you to control Git without using the command line.
  4. Git, the free and open source distributed version control system, designed for speed, data integrity and distributed, non-linear workflows, and commonly used for software development and other version control tasks.
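
To give an impression of what GRAFICO produces, the sketch below shows an illustrative layout of an exported model inside a local Git repository. The folder and file names here are assumptions for illustration only; the actual structure depends on your model's folder tree and the GRAFICO version:

  model/
    folder.xml                (metadata of the model root)
    business/
      folder.xml
      id-1a2b3c4d.xml         (one XML file per ArchiMate element)
    diagrams/
      folder.xml
      id-5e6f7a8b.xml         (one XML file per view)
  images/                     (images used in the model, if any)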

To set this all up on your workstation, just follow these steps:

  1. Download the latest Archi 4 release.
  2. Download the latest GRAFICO plugin.
  3. Install Archi 4, either by running the full installer or by unzipping the downloaded archive to a suitable location.
  4. Install the GRAFICO plugin:
    copy the downloaded .jar file into the Archi plugin folder (the archi/plugins directory)
  5. If you are on Windows, download GitExtensions-vX.Y-SetupComplete.msi from the GitExtensions site. This setup package provides all the required helper tools used by GitExtensions. Keep in mind, though, that any Git client will do to share your ArchiMate model through Git.
  6. Create a Git repository; there are plenty of options for hosting one, GitHub for example. Just ask your software developers for a recommendation. And if you just want to try this out, why not use the model in my public Archi repository on GitHub? This does, however, require a free GitHub account.
  7. Install GitExtensions; a guideline for this task is available in the Git Extensions documentation.
  8. Clone the Git repository to a local directory.
    The directory will be empty the first time if you created your own empty repository.
  9. Start Archi 4.
  10. If you have pulled an existing model from the Git repository, import it via GRAFICO:
    • “File->Import->Model as GRAFICO”
    • Choose the location from step 8 as the root of your local Git repository
  11. However, if you want to start working from scratch, create a new model:
    • “File->New->Empty Model”
  12. Create or update the architecture model.
  13. Export the architecture model via GRAFICO:
    • “File->Export->Model as GRAFICO”
    • Choose the location from step 8 as the root of your local Git repository
  14. In your Git tool, stage all local changes and push them to the remote repository; the command-line equivalent is sketched below. Feel free to leverage my Archi repository for this purpose, as that repository was specifically set up as a test environment.
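
For those new to Git, the command-line equivalent of steps 8 and 14 looks roughly like this; the repository URL is a placeholder for your own, and any Git client, GitExtensions included, performs the same operations through its user interface:

  # step 8: clone the shared repository to a local directory
  git clone https://github.com/<your-account>/<your-archi-repo>.git archi-model
  cd archi-model

  # steps 10-13 happen in Archi: import, edit and export the model via GRAFICO

  # step 14: stage all local changes, commit them and push them to the remote
  git add -A
  git commit -m "Updated the application landscape view"
  git push origin master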

With this setup complete, your fellow architects can follow the same steps to get hold of a copy of the model and start collaborating on it as a team. However, I also have to caution you. First of all, if you stick to the above process, the model you work on never gets persisted in a local .archimate file; if you forget to export with GRAFICO, your changes are lost. Secondly, Git is not a file system, it is a version control system. Your team and you must set up a version control workflow and governance to assure that changes to the model are persisted and promoted appropriately; one possible convention is sketched below. You should definitely study the best practices and concepts around Git before your team and you venture down this track. Git provides the tools to properly baseline your team's view on the architecture, but if the tool is not used as it should be, things may get messy pretty quickly. I would expect your software engineering team members, who probably have years of experience with Git, to be able to provide some coaching on this topic.
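
As a starting point for such a workflow, one possible convention, an assumption on my part rather than a prescription, is to make model changes on a topic branch and merge them into master only after a review:

  # work on a branch instead of committing straight to master
  git checkout -b topic/update-baseline
  git add -A
  git commit -m "Describe the model change"
  git push origin topic/update-baseline

  # after the change has been reviewed and merged (for example via a pull request):
  git checkout master
  git pull origin master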


Configuring a SonarQube analysis with VSTS / TFS2015 vNext build

I must admit that setting up a SonarQube analysis on the new build platform of VSTS / TFS2015 is a breeze compared to the previous approach.

In the old days it could take me a week to set up a TFS build server capable of running a SonarQube analysis, including unit testing and code coverage analysis. One had to deploy all kinds of OSS tools for unit testing and code coverage analysis, or customize the build process template to transform the Visual Studio test results into an XML file SonarQube would accept.

All that is completely gone. You don't even have to add a sonar-project.properties file to your Visual Studio solution, although you could; an illustrative example is shown below. However, there was one challenge which took me half a day to finally resolve, and for this reason I reckoned it would be a good idea to share the solution in this post.
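
For reference, a minimal sonar-project.properties file of the kind the old sonar-runner approach relied on might look like the snippet below; the key, name and version are the illustrative values used later in this post:

  sonar.projectKey=com.capgemini.DevOps
  sonar.projectName=DevOpsDemo
  sonar.projectVersion=1.0
  sonar.sources=.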

- Note: unfortunately, due to a server crash, the images of this post were lost. Restoring the images is still in progress. -

Preparation

Obviously you need a SonarQube server up and running to publish the SonarQube analysis results to. The SonarQube server requires the C# plugin version 5.0, a plugin which is compatible with SonarQube 4.5.2+ (LTS). If you are on an older SonarQube version, you will have to upgrade.

You also need your SonarQube endpoint configured on TFS, and a build agent available with the capabilities java, msbuild, visualstudio and vstest. If you can't find out how to do this, just drop me a comment and I'll explain it in another blog post.

Basic approach

To get started, just add the following build steps to your definition, as shown in the picture below:

  1. SonarQube for MSBuild – Begin Analysis
  2. Visual Studio Build
  3. Visual Studio Test
  4. SonarQube for MSBuild – End Analysis
(Image: SonarQube build process overview)

Configuring SonarQube for MSBuild – Begin Analysis Step

One could say this build step is the new and improved replacement of the sonar-project.properties file, which was used to configure sonar-runner. With just five settings you are all set.

SonarQube Server
In the SonarQube Server section, the SonarQube endpoint has to be selected from the drop-down list. If you don't know how to configure the endpoint, just drop me a comment.

SonarQube Project Settings
In the next section you configure your project key, project name and project version. The Project Key is the technical identification of your project in SonarQube; in my example I used com.capgemini.DevOps. The Project Name is the common name of your project, in my case DevOpsDemo. And the Project Version is, obviously, the current version of the solution you are working on, in my case 1.0.

Advanced
My biggest challenge with the SonarQube analysis was getting the Visual Studio unit test results published in SonarQube. To my surprise, although the unit test coverage results are uploaded without any additional settings, the unit test results themselves are not. I suspect this to be an undocumented minor bug in the SonarQube for MSBuild – End Analysis task, currently at version 1.0.36.

To work around this issue, it is required to set the following Additional Setting:

/d:sonar.cs.vstest.reportsPaths="$(Common.TestResultsDirectory)\*.trx"

Common.TestResultsDirectory is one of the default build variables available to tweak your build process to the actual environment the build agent is running in. This variable is set to the local path on the agent where the test results are created.
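
Putting the pieces together, the Begin Analysis step ends up invoking the scanner with arguments roughly equivalent to the command below. Treat this as a sketch: the TestResults path is illustrative only, as on a real agent it is resolved from $(Common.TestResultsDirectory).

MSBuild.SonarQube.Runner.exe begin /k:"com.capgemini.DevOps" /n:"DevOpsDemo" /v:"1.0" /d:sonar.cs.vstest.reportsPaths="C:\agent\_work\1\TestResults\*.trx"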

Configuring Visual Studio Build Step

Just configure this step as you see fit. For a SonarQube analysis no special configuration is required.

Configuring Visual Studio Test Build Step

Again, you can basically configure this step as your unit test requirements demand. However, for the SonarQube analysis you want to enable the option "Code Coverage Enabled".

Configuring SonarQube for MSBuild – End Analysis Step

Although this is the build step which actually executes the SonarQube analysis, the step itself requires no configuration, except that it has to be Enabled, which is the default setting.

Configuring General Settings

As already mentioned in the Preparation section above, a SonarQube analysis requires the capabilities java, msbuild, visualstudio and vstest from the build agent.

To assure your build ends up at an agent with these capabilities, you need to add all four to the Demands section on the build definition's General tab with the type "exists", as shown in the sketch below.

(Image: Setting the required Agent capabilities.)
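
With the demands in place, the grid on the General tab would read something like this (all four entries with the type "exists"):

  java           exists
  msbuild        exists
  visualstudio   exists
  vstest         exists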

Conclusion

With the settings described above you are basically all set to run your SonarQube analysis. Obviously a build definition needs some more configuration with regard to Options, Repository, Triggers, General settings and Retention, but I consider these outside the scope of this blog post. If you want to know more about these, just drop me a comment.


The Digital Transformation’s impact on a software development team

The other day my management asked me for my view on what a software development team should look like to face the challenges imposed by current trends such as digital transformation, the digital customer experience, SMAC, Big Data and cloud computing, which I like to refer to collectively as "the third platform technologies".

When thinking about the engineering skills and capabilities my team would require to successfully face these challenges, I could simply list all the existing ones and a few new ones. With this approach I would assure myself that the complete software engineering body of knowledge is readily available in my team, enabling the team to successfully address any challenge during the delivery of a solution.

However, if I took this approach forward, I would eventually end up with a team of such considerable size that I highly doubt my business sponsor would provide the funds required to assemble it. The size of the team would also create considerable overhead for team coordination and alignment, which would greatly jeopardize the team's capability to deliver value to the client fast and to respond swiftly to any change in the client's business priorities or markets. I'd rather keep my team as compact as possible, driving down cost and providing an agile environment capable of handling change. This implies I have to make a few conscious choices regarding the team's skills and capabilities, which made me decide to describe a brief profile for each member of the team.

The third platform technologies, which we are to leverage to deliver the solution, will certainly require new sets of skills from software engineers. The third platform era also requires a change of mindset and priorities for the team. A central theme in the digital transformation we are to support is our clients' transition from product to service: from only selling products to an end-to-end client experience with added value through services. Service is prime; hence, rather than talking about the back office of such a solution, I'd rather use the name service office. This also distinguishes the backbone of our solution from the existing (legacy) back office of our clients, which is most likely product and ERP focused, supported by packages such as Siebel or SAP. This also warrants a change of name for a few team member profiles and even the introduction of a few new profiles.

The service assurance engineer
This engineer assures the functional quality of the services provided by our solution to its users. In the second platform era this engineer was known as the test engineer. The end users' perception of our solution and the services it provides will greatly impact the solution's success in the market; assuring its quality is among the highest priorities of the team, as any remaining functional issue in a release will negatively impact the service's reputation and the client's bottom line. Leveraging the traditional test engineer's skill set and capabilities, the engineer's focus shifts from validating detailed functional requirements to assessing whether the feature set of a solution release meets the end users' expectations of the service experience.

Service analyst
The service analyst analyses the service organization and designs its processes and systems, creating new business models and their integration with technology. In the second platform era this engineer was known as the business analyst. Leveraging the traditional business analyst's skill set and capabilities, the focus shifts from aligning the product-focused business and IT to creating new business models, leveraging all available assets of a company and venturing into new markets and services. Obviously this still requires a deep understanding of a business domain, but rather than thinking about how to support a given business domain with IT, there will be a shift towards thinking about extending this domain.

Data analytic engineer
The data analytic engineer is responsible for providing the means for the service delivery organization to govern and manage the data leveraged by the service. In the second platform era this engineer was known as the business intelligence engineer. Leveraging the traditional BI skill set and capabilities, the focus shifts from reporting business performance and supporting business decisions to providing data that supports and enhances the service experience. This engineer will also require big data analysis skills, enabling him to create big data solutions that align the service delivery with the profile, expectations and social context of each individual end user.

Gamification engineer
The gamification engineer is responsible for creating the part of the solution that provides the incentives which will seduce, persuade and engage the service end user into providing feedback on the service experience. In the second platform era this engineer was known as the computer game engineer. Leveraging the computer game engineer's skills and capabilities, the focus shifts from creating stand-alone games to adding game concepts to the service solution that focus on collecting data about the service experience, data which, when leveraged by the service solution, will improve that experience. As an example, consider how Foursquare leveraged gamification to create a worldwide data set with detailed information about venues.

Infrastructure-as-code engineer
The infrastructure-as-code engineer is responsible for creating the part of the service solution that provides the means to automatically deploy and configure the Information System (IS) and Infrastructure Technology (IT) components which deliver the service from the cloud to the devices of our service's end users. In the second platform era this engineer was known as the infrastructure engineer. Leveraging the infrastructure engineer's skills and capabilities, the focus shifts from designing and, in most cases, manually deploying, installing and configuring the solution's infrastructure and software components, to creating solutions which provide the same through automated processes captured in software code. Ideally these solutions will monitor the performance of these components in real time and automatically deploy or decommission components as required to provide an acceptable service experience, while at the same time minimizing infrastructure service costs. This will require the infrastructure engineer to learn about software engineering concepts and constructs, enabling him to create maintainable and changeable code. The constructs leveraged should assure a pluggable architecture which provides the means to exchange the infrastructure service components of one provider for those of another, to overcome the issue of vendor and/or platform lock-in.

Service experience engineer
The service experience engineer is responsible for designing and specifying the service experience at a conceptual level. In the second platform era this engineer had multiple faces. First of all, the engineer would have been the user experience engineer, who leveraged RDV (Rapid Design & Visualization) to create prototypes for the screen-based user interfaces which dominated the era. Second of all, the engineer was known as the requirements engineer, responsible for creating detailed functional specifications and acceptance criteria. For our third platform solution, however, I would like to combine the two roles into one and add industrial design skills and capabilities as well, enabling the engineer to create the user interfaces of the third era, such as augmented reality, to name just one concept looming beyond the horizon. This creates an engineer able to design and specify a service experience with a compelling user interaction surface, one which attracts users to the service merely by the ergonomic, visual and audible quality of the end-user surface, which can be so much more than a textual representation of data on a screen.

(Mobile) Front-end engineer
The (mobile) front-end engineer is responsible for actually constructing the user interaction surface, leveraging the design and specification co-created with the service experience engineer. In the second platform era this engineer was known as the front-end software programmer. Leveraging the traditional skills and capabilities of a programmer, the focus will shift from forms-based interfaces to interfaces which leverage all the available user interfacing, sensor and location capabilities of the third platform devices. Obviously this will require the engineer to learn the APIs available on these devices, enabling him to leverage the devices' capabilities to their full extent.

Service Integration engineer
The service integration engineer is responsible for designing, specifying and constructing the integration between the service solution under construction and the APIs of the services provided by other service providers. (In a case study I was involved in, it was required to integrate with the APIs of transportation service providers, for example to book a ticket or to retrieve a bus schedule.) In the second platform era this engineer was known as the system integration engineer. Leveraging the traditional skills of the system integration engineer, the focus will shift from integrating the services provided by an organization's internal back office information system components to integrating the services provided by external service providers. Considering that a service solution should be able to exchange the services provided by one service provider for those provided by another, in the third platform era the engineer needs to assure that all external system integrations are pluggable.

Service office engineer
The service office engineer is responsible for designing and constructing the backbone of the service solution, the service office information system component defined above. This backbone aggregates the services provided by the business for which the solution is constructed with the services provided by other (external) service providers, and interacts with the service end users' interaction surfaces. In the second platform era this engineer was known as the back office programmer. Leveraging the skills and capabilities of the back office programmer, the focus will shift from retrieving, transforming and persisting data in relational databases to retrieving, transforming and persisting data through the services made available by the efforts of the service integration engineer.

Back office engineer
The back office engineer is responsible for adapting the business' (legacy) back office information system components to enable them to provide services to the service office. Even in the third platform era, the existing back offices of our "old business" clients contain valuable assets which can be leveraged in new business models and services. However, it may be required to adapt these components to enable them to expose these assets as a service. Hence, in my dream team, a software engineer who is able to adapt the legacy systems is a valuable asset; most likely he also knows a lot about the business domain for which the service is constructed.

Resource management engineer
The resource management engineer is responsible for managing the resources which enable the team to create the service solution; the most important resources for the team are people, time and budget. In the second platform era this engineer was known as the project manager, so one can argue whether this engineer should be considered a software engineer at all. Leveraging the traditional skills and capabilities of the project manager, the focus will shift from a plan, command and control leader to a facilitating servant leader capable of providing the resources which enable the team to incrementally develop the service using agile practices. Obviously this makes this engineer the natural scrum master of the team. But the resource management engineer must also be capable of creating a business case for the next release of the service solution and securing the budget required for its realization; hence the old-school tricks and trades of the project manager are a valuable asset when acquiring these funds from a business which may not completely grasp the potential of the digital transformation enabled by the service solution.

Support & communication engineer
The support & communication engineer is responsible for handling all communication channels with the service's end users with regard to the front-end applications deployed on their service interaction surfaces (as opposed to the service desk members, who handle support and communication with regard to the service provided). In the second platform era this engineer was known as the help desk engineer. Leveraging the traditional skills and capabilities of the help desk engineer, the focus will shift from providing in-depth guidance to users struggling to use the solution without ever reading the manual, to being a community manager capable of positively contributing to the perception and reputation of the service solution and the front-end applications leveraged to deliver it. The engineer must be able to mitigate any technical issue, true or false, which may surface on any social network or medium. He guards the solution's reputation on the internet and, as the first line of support, identifies and analyses any urgent issues which require immediate action. This enables the other members of the team to focus on the new service features for the next release and on the high priority issues reported by the service's community members.

To wrap up this blog post: digital transformation not only requires a transformation of the business. A transformation of the software engineering community, its members and their skills and capabilities is also required. This will enable software engineers to take on the challenge of venturing into new technology landscapes and business models together with the business, and to focus on the objectives required to successfully complete the journey, nevertheless leveraging the existing pragmatic software engineering skills and capabilities to guarantee the maintainability and changeability of the service solutions they construct.


Software Assembly Line Reference Model – First draft complete

Today, with the pages about the Platform, Infrastructure and Support layers of the Software Assembly Line Reference Model published, a first draft of the model is complete.

This draft primarily focuses on the capability assessment of a software assembly line. Your feedback on this draft would be highly appreciated, and you are invited to submit it through the comments section on each of the pages.

In the next edition of the model, methods, techniques, tools and rules will be added to each of its aspect areas.

Platform

Infrastructure

Support

