30 days of DevOps: DC/OS

Let’s start with a typical scenario… We have multiple teams requiring multiple services on different clouds. One team comes to you and asks about deploying a new Kafka REST Proxy server to support a microservices integration solution. Another team asks you to set up Jenkins and Nexus servers to build their Build & Release pipelines. And the developers want to analyse the application logs, so they request an ELK setup for monitoring transactions and creating alerts.

Does this ring a bell? It happens every day in most software development teams. We try to be “smart” and make short-term decisions about how and where to deploy these services. In some cases we just want to move away from the complexity of creating the service from scratch, so we jump from virtual machines to SaaS, for example moving to Elastic Cloud instead of deploying an ELK stack on AWS.

Without realising it, you’ve created a problem: managing, monitoring, optimising and maintaining all these systems separately becomes unmanageable.

Another discussion point is cost. How can we optimise the cost of all these resources, ensure that performance is adequate, scale up and down when needed, and keep track of our overall processing and data usage?

Most people try to optimise in the following ways:

  • Automatic Scaling
  • Moving from Physical Servers to VMs
  • Moving from VMs to Containers
  • Moving from VMs or Containers to Serverless architectures
  • Shutting down on idle times
  • Constant monitoring and tweaking the performance

At MagenTys we have been transforming companies, moving them from VMs to containers and serverless architectures, so why not go one step further?

All of these points led me to explore a different approach and to look into solutions such as DC/OS.

Let me briefly tell you about DC/OS first…

DC/OS (Data Center Operating System) is a cluster manager and a system for running data services. It allows customers to run services and containers in their own clouds or infrastructure. As a data centre operating system, DC/OS is itself a distributed system, a cluster manager, a container platform, and an operating system.

It is developed by a company called Mesosphere and is built on top of Apache Mesos.

To describe DC/OS in more detail, we could say it is a modern, data-driven application platform for containerised microservices, data services and AI/ML tools, running on top of hybrid clouds.

Just one slide courtesy of Mesosphere that explains this a bit better:

One of the great advantages of DC/OS is that, as a cluster manager, it runs on any infrastructure: physical servers, virtual servers, private clouds, or public cloud providers such as Google, AWS and Azure.

This means you can move DC/OS from one cloud to another, which is as easy as creating new nodes (VMs) in the other cloud and gradually moving across the machines that make up DC/OS. You’re no longer tied to a specific cloud provider.

DC/OS also includes a dedicated section where you can administer the security and compliance of your infrastructure and services, and where you can schedule when, how and for how long your applications are deployed.

As part of DC/OS we can find:

  • Application-Aware Automation
  • Intelligent Resourcing
  • Security
  • Service Catalog
  • Hybrid & Edge Cloud
  • Operations & Troubleshooting
  • Multitenancy
  • Networking & Storage

Don’t forget that DC/OS is built on top of Apache Mesos, a distributed systems kernel that provides applications (e.g. Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments.

If we compare it to the Linux world, DC/OS is to Apache Mesos what a Linux distribution is to the Linux kernel.

As a container platform, DC/OS includes two built-in task schedulers (Marathon and DC/OS Jobs (Metronome)) and two container runtimes (Docker and Mesos). All tasks on DC/OS are containerised, which lets the system distribute hardware resources more efficiently and adds resiliency to these containers.
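
To make this concrete, Marathon exposes a REST API for deploying long-running services. Here is a minimal sketch in Python, assuming a reachable Marathon endpoint; the URL, app id and container image are invented for the example, and in a real cluster you would go through the DC/OS admin router and its authentication:

    import requests

    # Hypothetical Marathon endpoint; replace with your cluster's URL.
    MARATHON_URL = "http://localhost:8080"

    # A minimal Marathon app definition: one Docker container with
    # pinned CPU/memory and two instances for resiliency.
    app = {
        "id": "/demo/nginx",
        "cpus": 0.5,
        "mem": 256,
        "instances": 2,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "nginx:latest", "network": "BRIDGE"},
        },
    }

    # POST the definition; Marathon schedules it across the cluster
    # and restarts instances if they (or their nodes) fail.
    response = requests.post(f"{MARATHON_URL}/v2/apps", json=app)
    response.raise_for_status()
    print("Deployment accepted:", response.json()["id"])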


On top of all these abstractions, there are 100+ frameworks (services) available in the catalogue, which can be deployed into DC/OS with a couple of clicks.


Why do industry leaders choose Mesosphere?

  • To cut build and deploy times from weeks to minutes.
  • To save 50% on AWS or Azure bills.
  • To avoid being locked into a single cloud provider.
  • To combine fast data with microservices: with Kubernetes on DC/OS, you can use the container orchestrator alongside big data services with a common set of security, maintenance and management tools.

One picture I really liked from my DC/OS workshop (see below) shows a typical technology backend found in most of our projects, with the difference that all of these services are hosted in DC/OS instead of across multiple clouds as PaaS or SaaS.

To summarise, this solution was well worth exploring. Instead of keeping services and applications such as Jenkins, Artifactory, Kafka, Elasticsearch, Kibana, Hadoop, Bitbucket and Kubernetes in different providers, clouds and offerings, it makes more sense to have them all together under the same umbrella. With DC/OS you provide the hardware capacity you want, based on the number of nodes (VMs) you deploy, and those resources form a single pool shared by all the services you run inside, making better use of your resources.

If you want to know more, just explore it here.

References:

http://mesos.apache.org/documentation/latest/architecture/

https://docs.mesosphere.com/1.11/overview/what-is-dcos/

https://dcos.io/

Posted in DevOps

What the duck is DevOps?

I wrote my first article about what DevOps is and how to start with it in October 2015. Back then there was not much information about DevOps, but Agile was picking up, and processes like Application Lifecycle Management and the SDLC were very well known. Likewise with “DevOps” tools: some existed, but not with the degree of sophistication we expect from today’s tools.

Then in January of this year (2018) I wrote another article, about a DevOps journey I took part in over Christmas, and I thought, “Great! I’ve just added my little grain of sand to the DevOps community”, which I hoped was already very well known. I assumed (my mistake) that by now almost everyone knew about DevOps and what it’s about.

But, I’m finding myself in many situations where people still have the wrong concept or idea about what DevOps is.

Here are a few examples:

  • “Yes, we have a new project that needs a database developer for 3 weeks to create some views on an Oracle database, let’s bring in a DevOps consultant for this work.”
  • “I need a web frontend developer for this mobile application, we should hire a DevOps engineer.”
  • “Yes, if it’s a DevOps person, they have to be an expert in Java, C#, JavaScript, Python and Perl, and be able to deploy them all into production.”

I could go on but you get my point…

I have a view of what DevOps means and what a DevOps “Consultant/Engineer” should be capable of. I’m not saying it’s the right view, but as always, we can open a constructive debate about it.

Let’s start with the official definition of DevOps by Mr. Wikipedia:

DevOps (a clipped compound of “development” and “operations“) is a software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops). The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management. DevOps aims at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives.

My favourite sentence is: “The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management”

So, if you ask me again what a good DevOps consultant should be aware of, I would suggest the following:

Code – Code development for automation, source code management, code reviews, branching strategy, code analysis tools

Build – Build Strategy, Continuous integration, build status

Test – Continuous testing tools, automation for functional and non-functional testing

Packaging – Artefact generation, quality labelling and package readiness

Release – Change management, release approvals, release automation, environments generation, automatic deployment

Configure – Infrastructure configuration and management, Infrastructure as Code tools

Monitor – Application performance monitoring, alerts, reports, end-user experience

On top of that you can layer the technologies that cut across these areas: Jenkins, Terraform, Ansible, Grafana, Chef, VSTS, Splunk, ELK, Python, NuGet, TeamCity, CloudWatch, Lambdas, OMS, etc. They are just tools for a purpose, meant to be used within the categories above.

There is a common misconception that a DevOps consultant needs to be a cloud expert. This is not true at all; cloud services are just another set of servers, services and tools used for the same purpose.

A good place to check which tools are trendy at the moment, and how they can be used in each of the above areas, is https://xebialabs.com/periodic-table-of-devops-tools/. This, for me, is so far one of the best representations of what an engineer specialised in DevOps should know.

When someone asks me what a DevOps consultant is, I answer that it’s a mixture of all these things, with strengths in one or two specific areas and a good overall knowledge of the others, able to work with any of them with ease. This doesn’t mean a DevOps person is exclusively a pure developer, a pure test automation dev, or a pure operations person. It’s the knowledge across all of these areas of DevOps that makes the difference.

That said, I would be interested to know your understanding of DevOps, and of what everybody now (maybe incorrectly) calls a DevOps engineer.

Posted in DevOps, Uncategorized

Bitbucket vs GitHub

In keeping with our journey around DevOps @MagenTys, today I wish to focus on DevOps tools.

It’s not the first time someone has come to me with the same problem: they want to move from a version control solution that was “imposed” on them some time ago to something more modern, broadly accepted, and easy to integrate with their CI/CD pipelines.

I have come across a variety of teams using a range of version control servers, and despite what many may think, not everything out there is just Git and GitHub. There were teams using TFVC moving to Git, others deep into Perforce moving to SVN, even some with Visual SourceSafe (still!) migrating to Assembla Git. Most common of all were teams moving from the above to Git, or from Git to GitHub or Bitbucket.

In this article I will compare two of the top-rated and best-known version control platforms on the market: Bitbucket and GitHub. I won’t just be looking at their on-premises versions, but at their cloud-hosted alternatives as well.

From a developer’s point of view, both tools allow you to carry out the common actions required by most dev teams nowadays. These include:

  • Pull request
  • Code review
  • Inline editing
  • Issue tracking
  • Markdown support
  • Two-factor authentication
  • Advanced permission management
  • Hosted static web pages
  • Feature-rich API
  • Fork / Clone Repositories
  • Snippets
  • 3rd party integrations

Bitbucket

From the Atlassian suite, Bitbucket is a distributed version control system based on Git, very well known for its full integration with other products in the Atlassian family such as Jira, Fisheye and Bamboo. The advantage Bitbucket holds over its competitors is that it offers an unlimited number of private repos.

To elaborate further on the features Bitbucket has to offer:

  • Supports Mercurial and Git
  • Has direct integration with Bamboo, Jira, Crucible and Jenkins
  • Supports external authentication with GitHub, Facebook, Twitter and Google
  • Can easily import repos from Git, CodePlex, Google Code, Hg, SourceForge and SVN
  • Has branch comparison and commit history
  • Has a Mac and Windows client called SourceTree and an Android app called BitBeaker
  • Releases with Jira Software and Bitbucket are seamlessly integrated from branch to deployment: create Bitbucket branches from within Jira Software, or transition issues without ever leaving Bitbucket
  • Pipelines are built into Bitbucket Cloud, giving you end-to-end visibility from coding to deployment

One downside, however, is that it does not support SVN, but this can easily be remedied by migrating the SVN repos to Git with tools such as SVN Mirror for Bitbucket.

There are currently two versions of Bitbucket to choose from:

Bitbucket Cloud (Hosted by Atlassian):

If you don’t want to host your own Bitbucket server, this is the best option for you. It also offers a flexible pay-as-you-go option to scale your teams up and down at a click. The plans offered are as follows:

Free Plan: $0 per month

Free for up to 5 users
Jira software integration
Unlimited private repos
Projects
Pipelines

*Includes 50 build minutes/month and 1 GB/month of LFS storage

Standard Plan: $2 per user/month

Starts at $10/month
Jira software integration
Unlimited private repos
Projects
Pipelines
Unlimited users

*Includes 500 build minutes/month and 5 GB/month of LFS storage

Premium Plan: $5 per user/month

Starts at $10/month
Jira software integration
Unlimited private repos
Projects
Pipelines
Unlimited users

*Includes 1,000 build minutes/month and 10 GB/month of LFS storage

If you want to scale up Bitbucket’s build and storage capabilities, 1,000 extra build minutes cost $10/month, and 100 GB of extra storage also costs $10/month.

Payments are made on a monthly subscription basis; currently there is no option for annual subscriptions on Bitbucket Cloud.

Bitbucket Server (Hosted in our private clouds or IT infrastructure)

If you decide to install your own Bitbucket server, you have a choice of two variants; Data Center lets you move from a single-server deployment to a highly available, active-active cluster:

Server

Git based code hosting and collaboration for teams
A single server deployment
Perpetual license + free year of maintenance

Datacenter

Git based code hosting and collaboration for teams
Active-active clustering for high availability
Smart mirroring for performance across geographies
Annual term license + maintenance

At MagenTys we have implemented both options many times, and I have to say it is a very good option if you know how to optimize your cloud resources properly. It’s not just about installing it on a VM with the right access and network properties: things like cloud storage accounts need to be monitored and optimized to save space and transactions, and similarly for VM usage, you should keep tweaking these machines until their resource usage is optimal for your team’s habits.

GitHub

Among the general features of GitHub, a few really stand out. These include:

  • GitHub Pages and GitHub Gists
  • Support for SVN clients
  • Support for Git
  • Direct integration with Zendesk, CloudBees, Travis, AWS, Codeclimate, Azure, Google Cloud and Heroku
  • Easy import of repos from Git, SVN, TFS and Hg

The disadvantage of using GitHub is that you have to pay to have private repositories, something that comes free with every Bitbucket plan.

GitHub.com (Hosted by GitHub)

GitHub is a very well-known development platform with more than a million teams in its community. From open source to business, you can host and review code, manage projects, and build software alongside millions of other developers.

As you probably know, it’s free. But although the free plan allows you to create an unlimited number of public repositories, it doesn’t allow private ones. For those you need to upgrade to one of the following subscription plans:

Free Plan: $0 per month

Free plan, unlimited public repositories

Developer Plan: $7 per month

Personal account

Unlimited public repositories

Unlimited private repositories

Unlimited collaborators

Team Plan: $9 per month

Organization account

Unlimited public repositories

Unlimited private repositories

Team and user permissions

Starting at $25/month, which includes your first 5 users.

$9 per user/month or $108 per user/year

Business Plan:

 Includes everything in the Team plan plus:

$21 per user/month, $250 per user/year, or $2,500 per 10 users/year (same as GitHub Enterprise)
Organization account
SAML single sign-on
Access provisioning
24/5 support with 8-hour response time
99.95% uptime SLA

For more information, see: https://github.com/pricing

GitHub Enterprise (Hosted by us on our private clouds)

GitHub Enterprise is the on-premises version of GitHub.com.

All repository data is stored on machines that you control, and access is integrated with your organization’s authentication system (LDAP, SAML, or CAS).

GitHub Enterprise is delivered as a virtual appliance, which includes all the software required to get up and running. The only additional software required is a compatible virtual machine environment.

The following minimal hardware requirements are suggested for production deployments:

Processor: Two 3.0 GHz CPU cores (or virtual equivalent)

Memory: 14-16 GB (minimum dependent on selected platform)

Disk: 80 GB VM root partition, plus a 100 GB data partition (recommended; actual size depends on your data)

Storage: High-performance SAN or locally attached storage

The following virtual environments are supported (platform – image format):

  • VMware – OVF
  • OpenStack KVM – QCOW2
  • XenServer – VHD
  • Hyper-V – VHD
  • Microsoft Azure – VHD
  • Amazon Web Services (AWS) – AMI
  • Google Cloud Platform (GCP) – Google Compute Engine (GCE) image

One of the advantages of using GitHub Enterprise is the integration with LDAP directories and IAM systems.

As GitHub Enterprise is a private instance designed for internal collaboration only, public projects are not allowed on it.

GitHub Enterprise does not follow the traditional pay-per-user-per-month model of GitHub.com. It is licensed under an annual subscription model where seats are purchased for a one-year period, including full support and access to all updates and upgrades.

Licenses must be purchased in packs of 10 seats, at $21 per user/month.

As mentioned before, this includes:

  • Multiple organizations
  • SAML, LDAP, and CAS
  • Access provisioning
  • 24/7 support for urgent issues
  • Advanced auditing
  • Host on your servers, AWS, Azure, or GCP

But again, seats are sold in packs of 10 users and billed annually.

More information here: https://enterprise.github.com/features#pricing

Integration with Jira

Despite Bitbucket being an Atlassian product, both tools have full integration with Jira. When developers create a branch, open a pull request, commit code, etc., everything gets reflected on your Jira tickets.

(Image: Jira integration view)

But that’s not all: using the DVCS connector in Jira, you can change properties of your tickets just by adding the right keywords to your commit messages (for example, a message like “PROJ-123 #time 2h #comment code reviewed #resolve” would log time, add a comment and transition the issue, assuming a project key of PROJ). Capabilities include:

  • commenting on issues
  • recording time-tracking information against issues
  • transitioning issues to any status (for example ‘Resolved’) defined in the Jira Software project workflow
  • adding labels


Integration with Jenkins

One thing to take very seriously is how your version control system integrates with the other tools that form part of your CI/CD pipeline. Among the most popular CI/CD automation servers we have Jenkins, which is used extensively by many dev teams to build, test and deploy solutions in an automated way.

People may favour Jenkins because it is an open-source tool with hundreds of plugins and customisations, and it’s very easy to integrate with any VCS. However, integrating it with GitHub or Bitbucket is not the same as integrating it with TFVC or Perforce: integration can be quite straightforward, or a real nightmare.

Both GitHub and Bitbucket have the classic integrations with Jenkins, such as triggering builds on commit or on pull request.

Jenkins, whose own source is hosted on GitHub, has a number of plugins for integrating with GitHub. The primary avenues for integrating your Jenkins instance with GitHub are:

  • “build integration” – using GitHub to trigger builds
  • “authentication integration” – using GitHub as the source of authentication information to secure a Jenkins instance.

More info here.

For Bitbucket it’s a bit more clunky, but it can also be done. There is a nice article by Tomas Bjerre about how to do it.

Atlassian also has a paid plugin for Jenkins here, which is not expensive and gets it done more efficiently.

Migrating an existing Git repo to GitHub or Bitbucket follows a similar, straightforward procedure on both platforms; no workarounds are needed.
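
As an illustration, a mirror push moves a repo with its full history to a new remote. Here is a minimal sketch wrapping the git CLI from Python; the local path, remote name and destination URL are invented for the example, and the destination repo must already exist (empty) on GitHub or Bitbucket:

    import subprocess

    # Hypothetical paths and destination; replace with your own.
    LOCAL_REPO = "/path/to/existing/repo"
    NEW_REMOTE = "https://bitbucket.org/myteam/myrepo.git"

    def git(*args):
        # Run a git command inside the local repo and fail loudly.
        subprocess.run(["git", "-C", LOCAL_REPO, *args], check=True)

    # Point a new remote at the destination, then mirror-push:
    # all branches, tags and refs go across, so no history is lost.
    git("remote", "add", "neworigin", NEW_REMOTE)
    git("push", "--mirror", "neworigin")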

Costs

Finally, I want to scope the cost for an average of 50 users (dev, test, ops, etc.) who want to work with private repositories over the course of a year:

GitHub.com: Team Plan: $5,160
The Team Plan is $9 per user/month on top of a base payment of $25/month, which covers the first 5 users at a cheaper rate.
Estimate for 50 users on the Team Plan per year: $25×12 (first 5 users) + $9×12×45 = $300 + $4,860 = $5,160
It can be paid monthly or per year
GitHub Enterprise: Business Plan: $12,600

$21 per user/month = $252 per user/year
Estimate for 50 users on the Enterprise Plan: $21×12×50 = $12,600
It has to be paid per year

Bitbucket Cloud: Standard Plan:  $1,200

Costs are $10/month for the first 5 users, plus $2/user/month for additional users

Given this, 50 users under a Standard license would be: $10 (first 5 users) + $2×45 users = $100 per month = $1,200 per year

Bitbucket Enterprise: $3,600
Perpetual license, priced by the number of users (upgradeable)

Total cost: 50 users for a year is $3,600
Prices are reduced when acquiring more users (for example, 100 users is $6,600)
(This estimate applies to the Server installation; Data Center is slightly cheaper. Bitbucket Server has a perpetual license, while Data Center has an annual term license that includes updates and support for as long as the term license is active. Data Center licenses expire and are not perpetual like the Server licenses.)
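
To sanity-check these estimates, here is a quick sketch of the same arithmetic in Python, using the prices quoted above (plan details change over time, so treat the numbers as illustrative):

    # Rough yearly cost comparison for 50 users on private repos.
    USERS = 50

    def github_team(users):
        # $25/month covers the first 5 users, $9/user/month after that.
        return 25 * 12 + 9 * 12 * max(0, users - 5)

    def github_enterprise(users):
        # $21/user/month, billed annually in packs of 10 seats.
        return 21 * 12 * users

    def bitbucket_standard(users):
        # $10/month covers the first 5 users, $2/user/month after that.
        return (10 + 2 * max(0, users - 5)) * 12

    for name, plan in [("GitHub.com Team", github_team),
                       ("GitHub Enterprise", github_enterprise),
                       ("Bitbucket Cloud Standard", bitbucket_standard)]:
        print(f"{name}: ${plan(USERS):,} per year")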

There is an awesome calculator for Bitbucket to calculate how much it will cost you for cloud and on-premises here: https://www.atlassian.com/purchase/product/bitbucket

Summary

The hosted options look very attractive for both GitHub and Bitbucket, integrating equally well with Jira and Jenkins. They are also much more affordable, considering that we don’t have to pay for licenses or for hosting the server on our own infrastructure or cloud. Thinking about the size of the VM needed to support a server installation, server and data redundancy, 24/7 support and scalability makes me seriously doubt whether self-hosting would be the best choice… unless you have constraints about where your IP needs to be hosted.

In terms of quantity of integrations, The Atlassian Marketplace blows GitHub out of the water. Unifying your extensions under Atlassian means easier implementation and more coherent workflows, so you won’t have to juggle between multiple app providers. The built-in compatibility will allow developers to leverage more tools more easily and code a better product.

In terms of cost, moving to Bitbucket would be the smart choice for most teams. Migrating Git repos to Bitbucket is also not a heavy task and can be achieved without losing any history.

But as usual, my main recommendation would be to try both and then decide, as in the end it’s better to see it with your own eyes:

Opening a free Bitbucket Cloud account: just log in with your Atlassian account here: https://bitbucket.org/

Opening a free GitHub.com account (once you have one, you can create an organisation inside it for free, with public repos): https://github.com/

I hope this information helps you decide which version control platform is more suitable for you. If you would like further advice, or are interested in moving your version control solution to Bitbucket or GitHub, speak to us today.

And remember: DevOps is not a tool, it’s not a guide, it’s not a methodology; it’s a journey.

Interesting Links

Jira-Bitbucket integration: https://www.atlassian.com/software/jira/bitbucket-integration

github vs bitbucket:
https://www.upguard.com/articles/github-vs-bitbucket
https://bitbucket.org/product/comparison/bitbucket-vs-github

GitHub.com (hosted by GitHub): https://github.com/pricing
GitHub Enterprise (hosted by us in our AWS or Azure VMs):  https://enterprise.github.com/features#pricing
Bitbucket Cloud (hosted by Atlassian): https://bitbucket.org/product/pricing?tab=host-in-the-cloud
Bitbucket Enterprise (hosted by us in our AWS or Azure VMs): https://bitbucket.org/product/pricing?tab=host-on-your-server

Bitbucket calculator: https://www.atlassian.com/purchase/product/bitbucket

Migrate from Git to Github: https://stackoverflow.com/questions/5181845/git-push-existing-repo-to-a-new-and-different-remote-repo-server/5181968#5181968

Migrating from SVN to Bitbucket: https://akrabat.com/migrating-to-bitbucket-from-subversion/


Posted in DevOps

A DevOps journey

We recently finished another DevOps engagement, this time in a Microsoft house, so we want to share with you the challenges we faced during our journey. As usual at MagenTys, the traditional DevOps engagement takes no longer than 6-8 weeks, depending on the complexity of the solutions and the DevOps practices to reinforce. Here the team’s DevOps maturity is critical to finishing on time, as DevOps doesn’t consist of just implementing a CI/CD pipeline with some tools and displaying some green builds on a screen; it’s also about the development team adopting certain patterns and practices.

As some of you know, Microsoft has long experience in Application Lifecycle Management and owns a set of very good tools covering its main areas:

  • Project/Portfolio Management
  • Source Code Management
  • Build Management
  • Release Management
  • Test Management
  • Package Management

So we did, proceeding as follows:

1) DevOps Healthcheck

There is no time to lose and we have to get ready for action, so before every DevOps engagement we send out a pre-engagement questionnaire in which the development team (sometimes the Dev Lead or Head of IT, other times the whole team sitting down for 20 minutes) answers questions about the patterns, practices and tools the team(s) use for Agile development and Agile delivery.

Here we target questions related to the principal areas of DevOps, covering most of the main practices, such as source code management, build management, CI/CD pipelines, quality gates, team collaboration, configuration and provisioning, cloud, and others.

With this information in hand, the second step is to define the level of maturity in each area of DevOps and use it to draft one or more plans to accelerate adoption.

2) Plans and Strategies definitions and reviews

DevOps is not just about bringing in the trendiest, shiniest tools that let the team orchestrate every piece of the software delivery process successfully, faster and fully automated. It’s also about enabling best practices in each area of the SDLC: reviewing the current processes, and collaborating on optimising and improving them.

Source Code Management: Is the team using the right source code repository given the nature of the project? Does the repo let the team operate according to their coding practices? Is the team practicing code reviews and pull requests? Does the team have a branching strategy that allows continuous delivery, is easy to automate and is easy to manage?

There are plenty of possibilities nowadays, we are finding teams using:

  • Git
  • GitHub
  • TFVC
  • Subversion
  • Perforce
  • Shared folders (!)

Some of them, like Git or GitHub, are very well known to most development teams, broadly adopted, and easy to integrate into CI pipelines and build systems. Others, like Perforce, are not as straightforward to work with, but have some advantages over Git, such as storing large binary files.

Another aspect of the SCM model is how to split (or not split) the different projects into one or multiple repos. We found that some teams prefer to have multiple repositories for the same product, one per project or service, so each can be easily shared with external teams, or simply has a different business model and its own lifecycle. The endless discussions about one repo vs multiple repos, or one repo with multiple git submodules, always need to be grounded in the business cases: it is not only up to the developers how the source code is organised; everyone in the team has a say in it.

One Repo vs Multiple Repos:

  • Developers, Software Development Engineers in Test and team members specialised in DevOps: they might prefer to have one repository only, which is easy to access, branch out from, generate build definitions for, release between branches, solve dependencies in, etc. But some disagree, not wanting too much noise in a single repo holding multiple projects, especially with several commits per day coming from different teams.
  • Business owners: they take a big part in this discussion, as they know how those projects are going to be released, why, where, for how long, and whether they will be released as SaaS, as PaaS, or just as a website. A product sold as a white label comes with a different business model, and sometimes they want to sell part of the IP but not all of it. The repo strategy will change according to these business cases.

Here are some interesting articles that can help you understand the differences between single and multiple repos:

https://www.benday.com/2016/11/04/one-tfs-build-multiple-git-repositories-with-submodules/

http://www.drmaciver.com/2016/10/why-you-should-use-a-single-repository-for-all-your-companys-projects/

http://blog.shippable.com/our-journey-to-microservices-and-a-mono-repository

https://cacm.acm.org/magazines/2016/7/204032-why-google-stores-billions-of-lines-of-code-in-a-single-repository/fulltext

Another critical part of the plan is the branching strategy: what the purpose of each branch is and, most importantly, how changes are merged or delivered between branches.

The strategy we chose for this particular project was quite simple (always try to keep it simple from the beginning; there is always time to add complexity):

(Image: the simple branching strategy we followed)

Some interesting links that can help you decide what branching strategy could be adequate for you:

https://guides.github.com/introduction/flow/
https://docs.microsoft.com/en-us/vsts/tfvc/branch-strategically
https://docs.microsoft.com/en-us/vsts/articles/effective-tfvc-branching-strategies-for-devops
https://docs.microsoft.com/en-us/vsts/git/concepts/git-branching-guidance
https://docs.microsoft.com/en-us/vsts/tfvc/branching-strategies-with-tfvc

Project/Portfolio Management

What project management tool is the team currently using? Does it have full end-to-end traceability, from the story’s conception, to the code written for it, to the deployment to production systems? Are any tests attached to these stories? Does the team have full visibility of the work done in each area through the tool? Can the chosen tool properly implement the process the team follows (Agile, Scrum, XP, Kanban, …)?

Well, most teams lack the ability to track the lifecycle of a story across the different development areas. We found most have a preference for Atlassian Jira, others for TFS/VSTS or Target Process. The thing is, what matters is not so much how these tools manage stories (more or less all of them let you do the very same things, just with a different look and feel) but how they track the work done to complete those stories.

For example, Atlassian Jira is capable of more than just creating issues on nice Kanban boards: it can integrate fully with GitHub so you can check the work done by the dev team, and it has plugins to bring your test cases in, or even link stories to the release pipelines of specific external tools.

In our case we used Visual Studio Team Services as the main tool for project management. We migrated all the epics, stories and tasks from their legacy system (Target Process) to VSTS.

(Images: project backlog and project dashboard, taken from a local demo, not the real environment)

Build Management

What do we have to build? How often? Can we automate it? What technologies? What’s the output of it?

We need to define a plan around how we are going to build our software (the input) and what the outcome of that will be. Factors to take into consideration:

  • Build definitions: we need to think about what we want to build, whether we want to run tests as part of the builds and which ones, whether static code analysis will be included, whether we are building artefacts to be deployed later as part of a CD pipeline, etc.
  • Project types to build: web, services, apps, databases, scripts, mobile, etc.
  • Frequency: how often do we have to run those builds? Is it on demand? Do we have nightly builds? Are we implementing continuous integration at every branch level?
  • Outcome: what do we want the build definition to produce? Are we creating packages, binaries or other kinds of artefacts? Do we want to run a set of tests and analyse the results? Are we storing these packages in an artefact library or a shared folder?
  • Artefacts: how are we versioning them? Are we creating packages? How do we establish the quality of these artefacts?
  • Build tool: Jenkins, Bamboo, TeamCity, TFS/VSTS, others?
  • Build/test/deploy agents: can we host these agents locally, or will they be deployed in the cloud? How many do we need? Do they need different capabilities?

In our case we defined automated builds running CI on every branch, filtered by project path, triggered by pull requests, plus nightly builds.

(Image: build and deploy pipeline)

The tool chosen for this was VSTS, as it provided all of the above plus integration with SonarQube, and was capable of building projects of different kinds and technologies.

(Images: a build definition with SonarQube analysis, and the resulting SonarQube report)

Release Management

Defining the release strategy can take some time, as it’s not only about creating a delivery pipeline. It’s more about what needs to be released and when, whether it will be released to the cloud or on-premises, whether the release is manual or automated (and how to automate it if it is manual), how we can make the process repeatable, and so on.

Factors we take into consideration when defining the release strategy:

  • What do we have to release? It could be services, web, desktop or mobile applications. It could be Infrastructure as Code, such as deploying virtual machines, Docker containers or load balancers; it could even be databases!
  • How are we going to release it? How is the release generated? Is the process automated? Does it require manual approval at any stage?
  • What steps need to be taken to release our artefacts? What quality metrics and quality gates are we adding to this process?
  • Do we have a rollback plan? Do we have a disaster recovery plan?
  • What environments am I going to need to release my product? Which teams will use them? Will they be static or dynamically generated?
  • Will it be deployed locally or in the cloud?

In our case we had to deploy all of it: web apps, services, databases, infrastructure, environments, and our target for all of those was Microsoft Azure. For this particular project it was mostly Azure PaaS, such as Azure App Services, Azure Elastic Pools and Azure API Management. There was also an IaaS component, covering virtual machines, network infrastructure, hybrid infrastructure, containers, etc., which was more focused on interoperability with legacy systems.

To control these releases and deployments we used VSTS Release Management, which also allowed us to enable continuous deployment and easily visualise which versions of our releases are deployed, and where.

(Image: the releases view in VSTS Release Management)

Test Management and Quality Gates

I’m not going to go deep into this, as it might be a good topic for another blog post, but I can give you some insight into the main quality gates we considered when reviewing the test management and test automation strategy followed by our client.

The absolute number-one priority in moving towards agile practices and DevOps implementations was to automate all the testing across the whole SDLC.

The main quality gates we proposed were:

  • At story conception level: an agreed definition of done and, for every story, well-defined acceptance criteria written in Gherkin syntax to help the dev and test engineers implement tests properly using BDD and TDD practices (see the sketch after this list).
  • Code reviews and pull requests on every merge operation.
  • Code coverage: 100% of the code needs to be covered by unit tests.
  • Test automation for UI, API, database, performance and smoke tests, among others, all integrated into the CI/CD pipelines at different stages, with results automatically collected and asserted as quality gates.
  • Regression testing happening on nightly builds.
  • Code analysis rules for coding practices at build time.
  • SecOps practices, integrating SAST and DAST tools as part of the CI/CD pipelines.
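
As an illustration of that first gate, here is a hedged sketch of what a Gherkin acceptance criterion and a matching test might look like. The scenario, the names and the discount rule are all invented for the example; it is not taken from the client’s backlog:

    """Acceptance criterion as it might appear on the story (Gherkin):

      Scenario: Loyal customer gets a discount
        Given a customer with 12 completed orders
        When they place a new order of 100.00
        Then the order total is 90.00
    """

    def order_total(amount: float, completed_orders: int) -> float:
        # Invented business rule: 10% off after 10 completed orders.
        return amount * 0.9 if completed_orders > 10 else amount

    def test_loyal_customer_gets_a_discount():
        # Given / When / Then mapped straight from the scenario above.
        assert order_total(100.00, completed_orders=12) == 90.00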

Manual testing was out of the discussion. But if you are not on a greenfield project and still have to deal with manual testing (we had some legacy projects where it was still a must), VSTS also provides a nicely done solution for test management, complemented by a Windows client (Microsoft Test Manager) and the Test & Feedback tool for browsers.

3) Implementation

After we agreed and signed off all the strategy plans for:

  • Branching strategy
  • Repository strategy
  • Build strategy
  • Release strategy
  • Environments architecture
  • Test Automation strategy

and after discussing other topics such as SecOps, packages, monitoring and alerts, the hotfix approach, cloud costs, etc., we started implementing in every area, beginning with the repo, branching and build strategies.

We have to say that, thanks to spending a good 60% of the time on planning and strategy discussions, the whole implementation took no more than 30% of the time.

4) Adoption

For teams that have never worked in a fully automated environment where lots of DevOps practices are applied, it can be difficult to absorb everything from one day to the next. We solved this problem by having different members of the team, specialised in different areas of expertise, shadow us during the implementation phase.

We also recorded a KT (knowledge transfer) session for every delivery plan and invited the whole team (and external teams too) to take part in those sessions.

And last but not least, we left behind training materials and guides to help them fully develop their capabilities around the new DevOps tools and processes.

Some teams prefer us to organise Agile workshops over three days, adding extra days for tooling and technical Q&A sessions.

5) Support

Finally, our modus operandi is to engage, prepare, plan, deploy, share, guide and, at the end, give support, staying with these teams for a few weeks across the different areas of work and ensuring they are self-sufficient to start working in an agile manner with the new processes and tools. We then contact them periodically to see how much they have matured in the areas measured during our pre-engagement.

To summarise, it has been a great experience to participate in such a project, defining the whole DevOps strategy from the very beginning and seeing it flourish over time.

We have made other journeys of a different nature, with different technologies, but with mostly the same approach and the same outcome.

And remember: DevOps is not a tool, it’s not a guide, it’s not a methodology; it’s a journey.

 

Posted in DevOps, Uncategorized

All you need to know to start with DevOps

The main purpose of this post is to share with you the main resources that will drive you through the journey of DevOps, but first, a quick intro to DevOps.

INTRO TO DEVOPS

What is DevOps?

DevOps (a clipped compound of “development” and “operations”) is a software development method that emphasizes the roles of both software developers and other information-technology (IT) professionals.

We can basically describe this with a diagram showing how QA, Developers and Operations are all part of DevOps.

DevOps brings you the following real benefits:

Deliver better quality software faster and with better compliance

Drive continuous improvement and adjustments earlier and more economically

Increase transparency and collaboration

Control costs and utilize provisioned resources more effectively while minimizing security risks

Plug and play well with many of your existing DevOps investments, including open source

So, DevOps is more than a technology or a tool set. It’s a mindset that requires cultural evolution. It is people, process and the right tools to make your application lifecycle faster and more predictable.

DevOps quickly evolved Application Lifecycle Management (ALM) and Agile methodologies to address the needs of the digital business including:

The critical importance of transparency, communication and collaboration between development and operations teams

The inclusion of the project’s business owner and other critical groups such as security (DevOpsSec), networking, compliance in the discussion

Practicing DevOps can help teams respond faster together to competitive pressures by replacing error prone manual processes with automation for improved traceability and repeatable workflows. Organisations can also manage environments more efficiently and enable cost savings with a combination of on-premises and cloud resources, as well as tightly integrated open source tooling.

Careful selection of the right toolset will minimise risk and facilitate much-needed self-service of resources while at the same time reducing security risks across hybrid environments. Improving quality practices helps to identify defects early in the development cycle, which reduces the cost of fixing them. Rich data obtained through effective instrumentation provides insight into performance issues and user behaviour to guide future priorities and investments. A wide set of tools and services from Microsoft and others enable these DevOps practices on-premises and in the cloud.

DevOps should be considered a journey, not a destination. It should be implemented incrementally through appropriately scoped projects, from which to demonstrate success, learn, and evolve.

DevOps is not just about the tools and the tech. It’s also about the importance of transparency, communication and collaboration between development and operations teams, it’s about the inclusion of the project’s business owner and other critical groups such as security, networking, compliance in the discussion.
To learn more about the whole process, visit Microsoft Virtual Academy’s DevOps section.

DevOps is a journey, not a destination. Get yourself ready, and be on your way to success.

RESOURCES

So now that you know more about DevOps, what does Microsoft have to offer you?

Training: a set of very good online courses provided by Microsoft Virtual Academy.

· Managing the Application Lifecycle with MSDN

· Dev/Test Scenarios in the DevOps World

· Application Lifecycle Management (ALM) for Startups

· Modern IT: DevOps to ITIL, Creating a Complete Lifecycle for Service Management

· Fundamentals of Application Lifecycle Management

· Application Performance Monitoring

· Assessing and Improving Your DevOps Capabilities

· Azure Resource Manager DevOps Jump Start

· DevOps – Visual Studio Release Management Jump Start

· Enabling DevOps Practices with Visual Studio Online Build

· Building Infrastructure in Azure using Azure Resource Manager

· Managing Your Systems on Microsoft Azure with Chef

· Open Source Database on Microsoft Azure

· Open Source for DevOps Practices

· Automating the Cloud with Azure Automation

· DevOps: An IT Pro Guide

Learn: Training resources to deliver continuous value faster

· DevOps as a Strategy for Business Agility (video)

· Open Source for DevOps Practices (video)

· Assessing and Improving Your DevOps Capabilities (video)

· Azure Resource Manager for DevOps

Try: Evals & Trials

· Sign-up for FREE Azure and Visual Studio trial

· System Center Evaluation

· Team Foundation Server 2012 and System Center 2012 Operations Manager Integration V-Lab

Books:

· Microsoft’s journey to Cloud Cadence (eBooK)

· Saugatuck: Why DevOps Matters (whitepaper)

· Forrester: Total Economic Impact of Microsoft Release Management report (whitepaper)

Videos:

· DevOps on Channel 9 (videos)

DevOps Assessment tool:

· http://devopsassessment.azurewebsites.net/

· DevOps Blog: http://blogs.technet.com/devops/

Virtual Machines:

· Visual Studio 2015 ALM Virtual Machine and HOLs

· Visual Studio 2013 Update 3 ALM Virtual Machine

· TFS2012 and System Center 2012 Operations Manager Integration Virtual Machine and Hands on Lab

· TechNet Virtual Lab

References:

Introduction to MS DevOps: http://www.microsoft.com/en-gb/server-cloud/solutions/development-operations.aspx
Microsoft Virtual Academy DevOps: http://www.microsoftvirtualacademy.com/training-topics/devops

I hope these resources are as useful for you as they were for me.

Cheers!

Eduardo Ortega

Posted in DevOps

[Event]Team Collaboration with Visual Studio Online

Today I’m running a new event in Central London and covering one of the topics of the year, Team Collaboration with Visual Studio Online.

For those who don’t know what Visual Studio Online is, let’s just say that it’s not an IDE; it’s everything else. Visual Studio Online provides a set of cloud-powered collaboration tools that work with your existing IDE or editor, so your team can work effectively on software projects of all shapes and sizes.

Yes, of course it gives you version control (Git or TFVC), but also tools for agile teams, such as Kanban boards, Scrum templates and dashboards. It provides powerful services like the new continuous integration system, which allows you to build, validate and deploy into a hybrid cloud and offers open, extensible integration with other platforms.

Do you want to know more?

Come to the event today here

 

Description:

In this session we will go through the latest Microsoft ALM collaboration tools, covering how to plan and track work in an agile environment. This will be shown using Scrum, one of the three templates Microsoft provides for project management, and we will see how to customise your Kanban boards and create dashboards and reports for anyone in the team (not only dev and test people). As Visual Studio Online is a Microsoft cloud service that evolves every month, we will also look at what’s new in VSO this year.

 

Who will this interest?

Anyone with a keen interest in Project Management, Work Item Tracking, Capacity Management or Agile Frameworks.

Also BAs, Devs, Testers and Ops who want to work in an Agile environment with a Continuous Integration and Continuous Delivery cadence.

We will cover the following topics:

- Project Management with Visual Studio Online
- Tracking work with custom Kanban boards
- Burndown, cumulative flow, capacity and other custom charts
- Collaboration between Management, Dev, Test and Ops: Azure Active Directory, Office 365, Power BI
- Integration with Excel, Visual Studio, MTM and Trello.

 

Come along to Moorgate on Thursday 24th September 2015, where Eduardo Ortega, former Microsoft Evangelist, MVP and MCSD ALM professional, will be discussing Visual Studio Online and Team Foundation Server as ALM tools.

http://www.meetup.com/London-Microsoft-DevOps/events/225390939/

Posted in Agile and Scrum, Events, VSO

When and why to run Web Performance and Load Tests

A colleague recently asked me when we should use load and performance tests. Well, my answer would be “all the time during product development”, but there are specific stages where this is critical.

Some agile teams have a verification week before moving to the next iteration. They spend the first 2-3 days of the week verifying the product in their test environments. If everything is okay, by Thursday they deploy to the pre-production environment. Once they have a green light in pre-production, they deploy to production, which is targeted for the weekend or Monday morning.

The tests they run during this verification week are the same ones they run continuously on their working branch.

Load testing is a critical part of our software development process. We find many serious issues in load testing, everything from performance regressions, deadlocks and other timing-related bugs, to memory leaks that can’t be effectively found with functional tests. Load testing is critical to being able to confidently deploy.

Load Test Script Architecture

TFS has web service endpoints, REST endpoints and web front ends.

In order to generate the massive loads we need, we use Web tests to drive load to our web site and web pages, and unit tests to drive load to our web services.

Here is a representation of the load test script architecture.

All test scripts use a common configuration to determine which TFS instance to target, and can be configured to target an on-premises server or Visual Studio Online.
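
To make this concrete, here is a minimal sketch of driving load at a web service endpoint and reporting two of the counters discussed below (RPS and average response time). This is not the Visual Studio tooling described above, just an illustration; the URL, worker count and duration are invented, and you should only ever point it at a test environment:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    # Hypothetical target and load shape; replace with your own.
    TARGET_URL = "http://localhost:8080/api/health"
    WORKERS = 10
    DURATION_SECONDS = 30

    def worker(deadline):
        # Hit the endpoint until the deadline, recording each latency.
        latencies = []
        while time.time() < deadline:
            start = time.time()
            requests.get(TARGET_URL, timeout=10)
            latencies.append(time.time() - start)
        return latencies

    deadline = time.time() + DURATION_SECONDS
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        futures = [pool.submit(worker, deadline) for _ in range(WORKERS)]
        latencies = [l for f in futures for l in f.result()]

    print(f"RPS: {len(latencies) / DURATION_SECONDS:.1f}")
    print(f"Average response time: {sum(latencies) / len(latencies):.3f}s")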

Types of Load Testing we do

Performance Regression Testing: executed every sprint to ensure that no performance or scale regressions are introduced.
Scale Testing:

Increasing the number of accounts and users to find new bottlenecks in scale. Some key counters to consider are:

  • RPS (requests per second)
  • % CPU AppTiers
  • Average Response Time
  • Current Server Requests
  • Active Team Project Collection Service Hosts
  • Private Bytes
Deployment Testing

We run load tests while the upgrade is happening so that, with activity constantly going against the service, we can detect outages at every stage.

  • Stage 1: Deploy new binaries into web front ends and job agents.
  • Stage 2: Run jobs to upgrade the databases.
  • Stage 3: Run jobs to do necessary modifications of the data stored in the databases.
Directed Load Testing

Isolate a particular component or service for performance, stress and scale testing.

Testing in Production

These tests are fundamentally focused on analysing the following data to look for regressions (a toy log-analysis sketch follows the list):

  • Activity log analysis.

    • Increases in failed command counts
    • Increases in response times, which indicates a perf regression
    • Increase in call counts, which indicates a client chattiness regression
  • CPU and memory usage.

    • Increase in CPU usage after deployment
    • Memory leaks
  • PerfView analysis: mainly driven by a tool called PerfView. Useful for finding memory and CPU regressions in production.
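
As an illustration of the first kind of analysis, here is a sketch that compares average response times per command between two activity-log samples and flags regressions. The CSV log format, file names and threshold are invented for the example; real activity logs will look different:

    import csv
    from collections import defaultdict

    def average_response_times(path):
        # Average response time per command from a CSV activity log.
        # Assumed (invented) format: one "command,response_ms" per row.
        totals, counts = defaultdict(float), defaultdict(int)
        with open(path, newline="") as f:
            for command, response_ms in csv.reader(f):
                totals[command] += float(response_ms)
                counts[command] += 1
        return {cmd: totals[cmd] / counts[cmd] for cmd in totals}

    def find_regressions(before_path, after_path, threshold=1.2):
        # Flag commands whose average response time grew by more than
        # the threshold factor between the two samples.
        before = average_response_times(before_path)
        after = average_response_times(after_path)
        return {cmd: (before[cmd], avg)
                for cmd, avg in after.items()
                if cmd in before and avg > before[cmd] * threshold}

    for cmd, (old, new) in find_regressions("before.csv", "after.csv").items():
        print(f"{cmd}: {old:.1f}ms -> {new:.1f}ms")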

As you can see, performance and load tests can be run at any time, but depending on when you run them, they will involve different techniques and different data to capture.

I encourage you to create your first web performance and load tests by following some of the links below, and find for yourself the answer to… when and why to run web performance and load tests.

Generate and run coded web performance test

Step by Steps tutorial- Run performance tests on your app

Distributing Load Test Runs Across Multiple Test Machines Using Test Controllers and Test Agents

Analysing Load Tests Results Using the Load Test Analyser

Load Tests in the Cloud

Happy testing!

Eduardo Ortega

Posted in P&L Tests, Testing, Visual Studio

Reporting on TFS

Today, during the Deep Dive into Visual Studio and MTM course, we covered the types of reports that can be produced in TFS. Whether for tracking your tasks or your test progress, these reports are produced from the same place and with the same mechanism. Let’s take a look at the next picture:

As we all know, the SQL Server database behind TFS is in charge of storing the data and producing the right KPIs for our reports. Most of the information we query is extracted directly from the database where our project is hosted (our project collection DB). But most of the reports are produced not from the project DB but from the OLAP data cube managed by SQL Server Analysis Services.

That said, we have two extra databases. One is the OLAP cube, the Tfs_Analysis database, which is mainly used to generate reports in Excel or to export reports to SharePoint wherever you have an integration with it.

The other database, Tfs_Warehouse, is used to generate reports through SQL Server Reporting Services, which also gives us the chance to build our own reports using the Report Designer tool. These reports are easily accessible through the Web Access of our team portal or from Team Explorer in Visual Studio.

Whether for the Tfs_Analysis or the Tfs_Warehouse database, you need permissions on both in order to create these reports.

To learn how to grant permissions to view or create reports in TFS, follow this link.

SQL Server Reporting Services

When you finally get access to SSRS, you will be able to access the folder represented above and, depending on the project template your team is using, you will see fewer or more reports. But remember, the Report Builder is there!

Some of the reports are the following:

  • Test and bug reports

    • Test Case Readiness
    • Test Plan Progress
    • Bug Status
    • Bug Trends
    • Reactivations
  • Project Management Reports:

    • Backlog Overview
    • Release Burndown
    • Sprint Burndown
    • Velocity

All these reports, and more, are explained here.

The Test Case Readiness report is probably one of the most detailed and spectacular, showing us the following information:


This is an extended version of the simple chart we usually get from Microsoft Test Manager (Plan tab, Results view), where we can analyse the Test Result Summary of our test executions by tester, by suite and by configuration, and answer questions such as:

How much testing has the team completed?

How many tests are left to be run?

How many tests are passing?

How many tests are failing?

How many tests are blocked?

Why are tests failing?

Are new issues going into production?

Is there any regression on the failing tests?

Last but not least, another way to create reports in TFS is using Microsoft Excel.

These awesome pivot tables can be generated from either the Team tab in Microsoft Excel or from Visual Studio (straight from a query).

If you are creating the report from Visual Studio Team Explorer, you basically have to find the query you want to use to generate the report, then right-click it and choose to create a report in Excel.

If you decide to create the report from the OLAP cube, in Microsoft Excel there is a tab called “Data”. There you will find an action called “From Other Sources”, where you can select “From Analysis Services”.

After you choose this option, a wizard will be presented to you:

  1. Connect to the database server
  2. Select the database and table (Tfs_Analysis will be ours)
  3. Give it a name and point it to our file
  4. The report will be generated

I hope this blog post has shed some light on which reports are available in TFS. And do not forget: to have these reports, you have to generate the data first! So make sure you create your work items (tasks, user stories, test cases and others) properly, or the reports will be useless. :)

Enjoy!

Eduardo Ortega

Posted in ALM, TFS

C# Programming resources

While running the Certified Scrum Developer training in London, we went on an amazing journey through TDD and BDD techniques using C# as the main programming language.

For some of us it was good practice that helped refresh our rusty programming skills; for others it was a way to move from a Tester role to a Software Development Engineer in Test role.

Let me share with you some useful resources about C# that will make your programming journey even more interesting.

C# Programming Guide: https://msdn.microsoft.com/en-us/library/67ef8sbd.aspx

[Video Training] C# Fundamentals for Absolute Beginners: http://www.microsoftvirtualacademy.com/training-courses/c-fundamentals-for-absolute-beginners

[Video Training] Programming in C# Jump Start: http://www.microsoftvirtualacademy.com/training-courses/developer-training-with-programming-in-c

[Video Training] Twenty C# Questions Answered: http://www.microsoftvirtualacademy.com/training-courses/twenty-c-questions-explained

[Video Training] What’s New in C# 6: http://www.microsoftvirtualacademy.com/training-courses/developer-productivity-what-s-new-in-c-6

[Videos] How Do I? Videos for Visual C#: https://msdn.microsoft.com/en-us/vstudio/bb798022

[Samples] Visual Studio Samples: https://code.msdn.microsoft.com/vstudio

[Free Book] C# Language Specification 5.0: https://www.microsoft.com/en-us/download/details.aspx?id=7029

[Books] Visual Studio Books: https://msdn.microsoft.com/en-us/vstudio/dd285474

More Visual C# Resources:
Get started with Visual C#
Asynchronous programming with Visual C#
Getting started with .NET and Visual Studio
Programming concepts

Last but not least, you can download the different Visual Studio versions from here: https://www.visualstudio.com/en-us/downloads
I recommend downloading the Community Edition, as it’s totally free and fulfils most of your coding needs.

I hope this info is useful and enjoy coding!

– May the code be with you –

Eduardo Ortega

Posted in c#, Visual Studio

Exporting Fiddler2 sessions to Visual Studio Web Tests

While finishing the revisited and new “Web Performance Testing with Microsoft Visual Studio 2013” training, I found a very interesting topic to cover, an idea from my colleague Carl Bricknell: migrating Fiddler2 sessions into Visual Studio to generate new web tests.

As you know, Visual Studio 2013 Ultimate and Visual Studio 2015 Enterprise include templates for load and performance tests.
It’s as easy as going into VS and creating a new “Web Performance and Load Test Project”.

Usually, once the project is done, we have to record our session with the Web Test Recorder as follows:

Right after our session, we can edit the recorded test steps in the WebTest view. This allows us to create the validation rules used to pass or fail the test; to create, modify and delete web requests, their headers and their parameters (dynamic and static); and, the fanciest feature, to add data parameters in order to replay the tests several times with different data, straight from an Excel sheet, CSV file or SQL database.

Many testers and developers use Fiddler2 on a daily basis, because it is one of the most powerful web traffic capture tools.

Fiddler2 includes the ability to capture web traffic (including AJAX requests) for later playback with the Visual Studio Web Performance Test feature. It can also be used to help debug your Web performance tests. Comparing your Fiddler2 recording and your Web performance test recording can help identify key missing headers that your Web performance test may not record, amongst other things.

We can summarise it as follows:

Fiddler is an HTTP debugging proxy server application that captures HTTP and HTTPS traffic and logs it for the user to review.

Fiddler can also be used to modify HTTP traffic for troubleshooting purposes.

In order to export Fiddler2 recorded sessions to a Visual Studio web test project, we need to follow these steps:

  1. Create a Visual Studio empty Web Test Project
  2. Record your session using Fiddler2
  3. Export the Fiddler 2 session to the Visual Studio Web Test Project

The export itself is very straightforward:

  1. Go to File → Export Sessions → All Sessions

  2. Select Visual Studio Web Test as the export format

  3. Select all the plugins installed by default

  4. Find the generated web test on your hard drive and add it to your Visual Studio web test project:

Done!

From here, everything that happens is the result of your creativity; it’s ready for testing!

– Happy testing! –

Eduardo Ortega Bermejo

Posted in P&L Tests, Testing, Visual Studio