30 Days of DevOps: Azure resources, environments and Terraform

Managing Azure Resources is a piece of cake, isn’t it? Just log into the Azure portal, select or create a resource group and begin administering your resources. Quite straightforward, right?

(Screenshot: the Azure portal)
But let's say we need 9 Windows Server 2016 machines for a new team of developers joining next week, plus 5 test agents hosted on different versions of Windows with multiple browsers installed, which developers and testers will use to run their UI tests remotely.

Our 1st challenge: how do we provision all this infrastructure and software at such short notice?

The 2nd challenge is about test environments. We are going to be running UI tests, which tend to leave residual test data, such as cached files, on the browsers and disks.

There is potentially a 3rd challenge if we have to provision the machines with PowerShell from an Apple computer.

Let’s tackle one issue at a time…

1st. Provisioning and managing resources

As I said previously, you can create and manage all these resources from the Azure portal, but this is a manual step which can take minutes, and once it is done the only way to keep track of what has been deployed is the Activity Log. The rest of the information has to be extracted from the service plan or SKU (if any) or from the properties of each resource.

(Screenshot: Azure portal resource view)

A more elegant and cost-efficient way to manage and deploy your resource catalogue is to use Terraform.

Terraform helps us create deployment plans and keep track of what has been deployed: we can deploy a given resource just by applying the plan, destroy it with a single command and redeploy it again with another.

For most of a resource's properties, we can go to our Terraform file, change the property we want and re-apply the changes. If the change requires a full rebuild of the resource, Terraform will tell us and will then do the rebuild for us.
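To give a rough idea of the day-to-day workflow, here is a minimal sketch; the variable file name is a placeholder:

# Initialise the working directory and download the Azure provider
terraform init

# Preview the changes; anything that needs a full rebuild is marked for replacement in the plan output
terraform plan -var-file=MachineX.tfvars

# Apply the change, or tear the resource down when it is no longer needed
terraform apply -var-file=MachineX.tfvars
terraform destroy -var-file=MachineX.tfvars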

In the case of our 9 Windows Server machines, we just need one Terraform template defining the machine spec, network, security, OS image and disk, and then one variable file per machine.

(Screenshot: Terraform template)

Same applies for the 5 test agents in our scenario.
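One way to keep the state of each machine separate while reusing the same template is Terraform workspaces; a minimal sketch with hypothetical machine names:

# One workspace and one variable file per machine, all sharing the same template
for machine in devserver-01 devserver-02 testagent-ie11; do
  terraform workspace new "$machine" 2>/dev/null || terraform workspace select "$machine"
  terraform apply -auto-approve -var-file="${machine}.tfvars"
done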

With Terraform we can manage not only VMs but Data Sources, App Services, Authorization, Azure AD, Application Insights, Containers, Databases, DNS resources, Load Balancers and much more!

Some of you may be thinking of ARM templates, but I personally find them full of clutter and too complex compared with the simplicity of Terraform.

(Example: a single machine exported to an ARM template is more than 800 lines of script!)

2nd. Test Environments

Test environments are expensive. Microsoft offers something called Azure DevTest Labs, where you can quickly deploy test environments and schedule them to start up and shut down according to your needs. But this is not enough: we want to create the test environment, deploy the test agent, provision it with the configuration we need, run the tests and then destroy the environment.

I'm not going to dive into configuration provisioning or test runs in the deployment pipeline, as I will leave that for a separate post, but it is worth mentioning the continuous deployment of environments.

If we already have a template that creates the Azure resources, we just need a way to trigger it at the push of a button. This can be done with a job hosted on Azure Pipelines, Jenkins, TeamCity, Octopus Deploy or a similar deployment pipeline orchestrator.

Such a deployment could be fully automated without requiring pre-approval, or we can use the deploy button on demand.

The main point of discussion is not the tool but the process, and we have already started that process by abstracting the infrastructure into Terraform. Now it's just about applying our plans on demand. It could be as simple as:

terraform apply -var-file=MachineX.tfvars
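Wrapped in a pipeline job, the same command can create the environment, run the tests and tear everything down again; a rough sketch, where the test runner script is a placeholder:

# Destroy the environment even if the tests fail, so no cached browser data survives to the next run
trap 'terraform destroy -auto-approve -var-file=MachineX.tfvars' EXIT

# Bring up the test agent defined in the template
terraform apply -auto-approve -var-file=MachineX.tfvars

# Run the UI tests against the freshly provisioned agent (placeholder command)
./run-ui-tests.sh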

For those who are more adventurous and don't want to use Terraform, ARM templates can be used in your CD pipelines, or, if you fancy it, you can even create your own Azure Function that runs a PowerShell script to deploy your infrastructure.

3rd. Azure from Mac

Most people associate Azure with Windows and AWS with Linux; well, that's a myth.
You can manage Azure from macOS, Windows, Linux and others. Here are some recommended tools for managing your Azure resources.

Option 1. Install and use Azure CLI.
The Azure CLI is Microsoft’s cross-platform command-line experience for managing Azure resources.
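For example, signing in and creating a resource group with a Windows Server 2016 VM looks roughly like this; names, location and credentials are placeholders:

az login

# Create a resource group and a Windows Server 2016 VM inside it
az group create --name rg-devteam --location westeurope
az vm create \
  --resource-group rg-devteam \
  --name devserver-01 \
  --image Win2016Datacenter \
  --admin-username azureuser \
  --admin-password 'ReplaceWithAStrongP@ssw0rd!'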

Option 2. Azure Portal: https://portal.azure.com.

The Azure portal not only offers you a nice GUI to deploy and maintain your resources, it also gives you a remote Bash or PowerShell session! Just open the portal and click the >_ icon in the top bar.

(Screenshot: Azure Cloud Shell)

Option 3. Powershell and Powershell core.

PowerShell Core is a version of PowerShell that runs on any platform and is built on .NET Core rather than the full .NET Framework used by its predecessor, Windows PowerShell. If you are on Windows 10 you get PowerShell 5.1 out of the box and can install PowerShell Core 6.0 alongside it. Otherwise, you can use PowerShell Core on your Mac.

Once you have PowerShell Core on your Mac or Linux distro, you can install the Azure modules.
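A minimal sketch of getting there on macOS, assuming Homebrew is installed and using the cross-platform Az module:

# Install PowerShell Core via Homebrew
brew install --cask powershell

# Install the Azure module and sign in
pwsh -Command "Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force"
pwsh -Command "Connect-AzAccount"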

Option 4. Terraform.
Avoid the portal, scripting with PowerShell or even managing Azure through its APIs. Just try out Terraform and manage your resources from one place.

I hope this gives you enough to start managing your Azure resources and to find the best way for you to deploy all those machines with minimal effort.

Enjoy it!


30 Days of DevOps: Elastic cloud vs AWS PaaS ELK

Some time ago I read an interesting article titled "Is It the Same as Amazon's Elasticsearch Service?".

It was quite a good article, to be honest: it compared two great Elasticsearch offerings, Elastic Cloud from Elastic.co and the Elasticsearch Service from AWS.
Nevertheless, I thought the article was not fully objective, as it mostly argued that the AWS implementation is a fork of mainstream Elasticsearch and lacks all the capabilities that Elastic Cloud now offers in the X-Pack package, which is amazing but costs a pretty penny!

In the end, both should be the same, right? After all, Elasticsearch is a search engine based on Lucene, developed in Java and released as open source under the terms of the Apache License.

So both are offering the same product with small differences: one offers a ton of plugins provided by X-Pack, the other relies on existing AWS services to match its rival.

At MagenTys I have worked with both, and also with the on-premises version of ELK, so I want to give you my opinion. Let's take a closer look at both and also analyse a vital aspect: the cost.

Elastic Cloud

Elastic Cloud is the hosted offering from Elastic, the company behind the Elastic Stack, which means Elasticsearch, Kibana, Beats and Logstash.

They officially support the Elasticsearch open source project and at the same time offer a nice layer of services on top of it, known as X-Pack.

X-Pack ranges from enterprise-grade security and developer-friendly APIs to machine learning and graph analytics. It includes security, alerting, reporting, graph, machine learning, Elasticsearch SQL and more.

It has a very nice cost calculator: https://cloud.elastic.co/pricing

We will use it in this article to compare against the AWS offering; for that purpose we will price the equivalent of a t2.medium AWS instance.

Elastic Cloud deployment (hosted on AWS)
Instance types: two instances – aws.data.highcpu.m5 and aws.kibana.r4
Instance count: 2
Dedicated master: No
Zone awareness: No
ES data memory: 4 GB
ES data storage: 120 GB
Kibana memory: 1 GB
Estimated price: $78.55
As we can see, Kibana and Elasticsearch are deployed on separate instances and the total storage is 120 GB, which is quite good compared with what comes by default with AWS (35 GB).
Thanks to X-Pack we will enjoy a few extra features in Kibana, Elasticsearch and Logstash. The main plugins are:
– Graph
– Machine Learning
– Monitoring
– Reporting
– Security
– Watcher
More information here
(Screenshot: Kibana with X-Pack features)

Elasticsearch service AWS

Another alternative is Amazon Elasticsearch Service, a fully managed service from AWS. This means Elasticsearch comes fully deployed, secured and ready to scale.

It also allows us to ingest, search, analyse and visualise data in real time. It offers Kibana access as well, and Logstash integration, but it lacks X-Pack, which means that some of the features we've just seen, such as user and group management and alerts, are missing. This can be tackled with a different approach: letting AWS manage access to Elasticsearch and Kibana through the domain's access policy, where we can whitelist IP addresses and apply access templates to IAM users. It also offers integration with Amazon Cognito for SSO and Amazon CloudWatch for monitoring and alerts.

Another advantage is that it can be integrated into your VPCs.
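To give an idea of what sits behind the pricing below, a domain like the one priced here could be created from the AWS CLI with something along these lines; the domain name and version are placeholders:

aws es create-elasticsearch-domain \
  --domain-name logs-demo \
  --elasticsearch-version 6.3 \
  --elasticsearch-cluster-config InstanceType=t2.medium.elasticsearch,InstanceCount=1 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=35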

Let's take a look at the pricing:

Amazon Elasticsearch Service (AWS)
Instance type: t2.medium.elasticsearch (2 vCPU, 4 GB)
Instance count: 1
Dedicated master: No
Zone awareness: No
Storage type: EBS
EBS volume type: General Purpose (SSD)
EBS volume size: 35 GB
Estimated price: $59.37

– $0.00 per GB-month of general purpose provisioned storage (EUW2, under the monthly free tier of 10 GB-month): $0.00
– $0.077 per t2.medium.elasticsearch instance hour (or partial hour) (EUW2, 720 hours): $55.44
– $0.157 per GB-month of general purpose provisioned storage (EUW2, 25 GB-month): $3.93

You need to pay standard AWS data transfer charges for the data transferred in and out of Amazon Elasticsearch Service. You will not be charged for the data transfer between nodes within your Amazon Elasticsearch Service domain.
Amazon Elasticsearch Service allows you to add data durability through automated and manual snapshots of your cluster. The service provides storage space for automated snapshots free of charge for each Amazon Elasticsearch domain and retains these snapshots for a period of 14 days. Manual snapshots are stored in Amazon S3 and incur standard Amazon S3 usage charges. Data transfer for using the snapshots is free of charge.
Data transfer costs in AWS are quite small, but we also have to take them into consideration.
Data Transfer OUT from Amazon EC2 to the Internet:
– Up to 1 GB / month: $0.00 per GB
– Next 9.999 TB / month: $0.09 per GB
– Next 40 TB / month: $0.085 per GB
– Next 100 TB / month: $0.07 per GB
– Greater than 150 TB / month: $0.05 per GB
And last but not least, as X-Pack is not available, the plugins we discussed before are not present.
(Screenshot: Kibana on Amazon Elasticsearch Service)

Summarising

If you compare the costs there is really not much difference between the two, but the extra work needed to set up the AWS implementation properly has to be taken into consideration. In Elastic Cloud some things come out of the box (even if some, such as alerting, require tricky configuration), whereas in AWS we have to build them from scratch using CloudWatch, events and alerts, so the money may end up being spent on a consultant who can take care of it.

Snapshots are another big point of discussion: in Elastic Cloud snapshots are taken every 30 minutes (48 per day) and stored for 48 hours, while in AWS snapshots are taken once a day and retained for 14 days, also at no cost.

I hope this article helps you decide which one is your best fit, and do not forget that you can also go down another path: creating your own ELK stack on premises or in your cloud, from scratch, deploying it straight onto your EC2 instances or container hosts and fully managing the infrastructure, services and applications yourself.
Happy searching!

30 Days of DevOps: Test Automation and Azure DevOps

Coding best practices are becoming the norm. More and more development teams are acquiring habits such as TDD and even BDD. Even though this means shifting testing completely to the left, it's something that some teams still take years to digest, so we have to go step by step, and the first step is to enable visibility.

Having TDD and BDD properly applied doesn't mean that we have enabled full test automation in our project. Automation is not just about having my code covered by tests and defining features and scenarios in Gherkin with frameworks such as Cucumber, SpecFlow or Cinnamon, triggered by some build jobs. It's also about automating the results, and enabling traceability and transparency when the release happens.
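Automating the results starts with making sure every run produces a report the platform can ingest; a minimal sketch, assuming a .NET test project and a pipeline step that publishes TRX files:

# Run the tests and emit a TRX results file that the pipeline can publish
dotnet test MyProject.Tests.csproj --logger "trx;LogFileName=results.trx"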

One thing that is not often used in Azure DevOps (formerly VSTS) is test automation traceability.

First, let’s talk about test results and where you can find them…

  1. Track the reports of my unit/component tests running in my build.
    That can be done from a few places: one is the build status report, the other is the Tests tab.
    (Screenshots: build status report and Tests tab)
  2. Use the new Analytics tab on the main Build page. For that we first have to install the Analytics extension, which is offered the first time we access analytics.
    (Screenshot: Analytics extension)
    Once installed, we get more granular detail of our test failures in the test analytics detail view, with the option to group by test files. More information here.
  3. Every test run, of any kind, is registered in the Test section of Azure DevOps. To access this section you need either a Visual Studio Enterprise subscription associated with your account or the Test Manager extension, which also offers much more than just test reports.
    (Screenshot: test runs)

I really recommend using Microsoft Test Manager / Test Extensions in conjunction with Azure DevOps to get the full potential of test reports and test automation.

Second, traceability: we are not just linking our test runs to the builds, we also want to go a step further and link our test cases to user stories. This part is easy, right?

We just have to create a test case and link it to our user story. This can be done manually from our workspace or it can be done from Microsoft Test Manager when we create test cases as part of Requirements.

In the end, this is how it looks inside the user story (displayed as Tested By):

(Screenshot: test cases linked to a user story)

You can find more information here about how to trace test requirements

Third. This leads me to the last part, which is test automation itself. When you open a test case it usually looks like this:

(Screenshot: a test case in Azure DevOps)

But this is a test case that can be created either as a manual test or automatically through an exploratory test session (one of the cool features of MTM). The good thing about them is that if they are properly recorded, you can replay them again and again automatically using a feature called fast forwarding.

What we need is to link our coded test automation (MSTest, NUnit, etc.) to these test cases, which is why Microsoft gave us that "Automation status" tab inside our test cases.
This tab just tells us whether or not an automated test is associated with the test case.
An easy and quick way to enable this is:

  1. Go to Visual Studio Test Explorer
  2. Right-click on your test
  3. Select "Associate to test case"
    (Screenshot: associating automation with a test case)

Sometimes we don't need to go through test cases; we just want to set up a test plan and run all the automation against it. With this, we don't have to create our test cases in Azure DevOps: we just create a test plan, configure its settings and modify our test tasks in the build/release pipelines to run against that plan.

There is a good article from Microsoft that explains the whole process here.

Last but not least, I want to write briefly about Microsoft Test Manager. This tool has been around since the early versions of Visual Studio, with the Test and Premium/Enterprise editions.

Initially it was meant to be a test management tool for manual and exploratory testing, but it has acquired more capabilities over the years, to the point that today it is mostly integrated into Azure DevOps inside the Test tab.

If you have the MTM client, you can connect to your Azure DevOps project and manage your test cases, test environments, manual tests and exploratory test sessions from there, and during exploratory sessions you can record not only the session but the test steps too. With this, you can run most of your manual tests automatically using fast forwarding, which replays all the actions the tester took.

It is REALLY good for managing test suites and test packs, it integrates with your builds and releases, and you can even tag the environments you are using and the software installed on them.

This adds to your Test Capabilities what you need in order to complete your plan.
As a last note, if you are using Coded UI as your main UI Test Automation Framework, it has direct integration with MTM too, so you can associate your Coded UI tests to your Test Cases.

There is also a forgotten feature called Test Impact Analysis, which integrates not only with your builds and releases but also with MTM, and allows you to re-run only the tests impacted by code changes since the last time code was pushed to the repository, saving testing time.

I hope this article shows you the capabilities of Azure DevOps in terms of Automation and Traceability.

 

References:

Associate automated test with test cases

Run automated tests from test plans

Workarounds: Associate test methods to test cases

Track test results

Analytics extension

Test manager

Test Impact Analysis in Visual Studio

Code coverage in Azure DevOps and Visual Studio

Track test status


30 days of DevOps: Application Logging

How do you analyse the behaviour of your application or services during development or when moving the code to production?

This is one of the most challenging things to control when we deploy software into an environment. Yes, the deployment is successful, but is the application really working as expected?

There are a number of ways to check if it is working as expected. One way is to analyse the behaviour of your application by extracting the component and transaction logs generated internally and analysing them through queries and dashboards; this should help us understand what's going on.

SPLUNK

I'm a big fan of Splunk: you just need to create your log files and send them to Splunk, and in a matter of minutes you can create shiny dashboards to query and monitor your log events.

(Screenshot: Splunk dashboard)
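Getting events in can be as simple as posting them to Splunk's HTTP Event Collector; a minimal sketch, where the host, token and event fields are placeholders:

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"sourcetype": "myapp:log", "event": {"code": "APP-1234", "severity": "ERROR", "message": "Payment failed"}}'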

My only issue with Splunk is the cost. Using it for a few solutions is okay, but it becomes very expensive when you have to process a large amount of data, as the pricing is based on the volume of data ingested per day. Even so, I can say it's extremely easy to parse your data, create data models, build panels and dashboards and set up alerts.

Some teams may rather opt for other (cheaper) solutions. Remember, open source doesn’t always mean free. The time your dev team is going to spend implementing the solution is not free!

ELK

A cheaper (sometimes) alternative is to use Elasticsearch and Kibana, for indexing the logs and analysing them respectively.

Kibana is an open source data visualization plugin for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster, which is also open source.

Both can be hosted on your own servers, deployed on AWS or Azure or another cloud provider and you even have the option to use a hosted Elastic Cloud if you don’t care too much about the infrastructure.

How does this work?

1st) Log your operations with a proper log management process (unique log code, log message, severity, etc.).

2nd) Ingest the log files into an Elasticsearch index and extract from your events the fields you want to use for your charts and searches.

3rd) Create searches and dashboards according to the needs of the team, e.g. all logs, error logs and transactions for Dev and Test; error logs per component and per system; number of HTTP requests and HTTP error codes for Business Analysis, Operations and Support; etc.

4th) Give the team the access and tools they really need. Yes, you can give the whole team access to Kibana and everybody's happy, but why not use the full potential of Elasticsearch? If I were doing the testing, I would use the Elasticsearch REST API to query the events logged by the application from my tests.
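For example, a test could check that no new errors were logged during the run with a query like this; the index and field names are assumptions about your log mapping:

curl -s "http://localhost:9200/app-logs-*/_search" \
  -H 'Content-Type: application/json' \
  -d '{
    "query": {
      "bool": {
        "must": [
          { "match": { "severity": "ERROR" } },
          { "range": { "@timestamp": { "gte": "now-15m" } } }
        ]
      }
    }
  }'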

At MagenTys we have done ELK implementations from zero to hero for a wide range of projects, and not only for software development; ELK can also be used to ingest and represent data from sources such as Jenkins, Jira, Confluence, SonarQube and more!

 

Don't like ELK? There are other options for application logging that can also be extended to your infrastructure, like Azure Monitor.

Azure Monitor

Microsoft has recently changed the names of some of their products and has also grouped them together. For example, Log Analytics and Application Insights have been consolidated into Azure Monitor to provide a single integrated experience.

Azure Monitor can collect data from a variety of sources. You can think of the monitoring data for your applications in tiers, ranging from your application, through the operating system and services it relies on, down to the platform itself.

OMS (Operations Management Suite) as such is being retired, with all its services moving into Azure Monitor. For those currently using it, you should know that by January 2019 the transition will be complete and you might have to move to Azure Monitor.

That said, the new Azure Monitor experience looks like this:

(Diagram: Azure Monitor overview)

Azure Monitor collects data from each of the following tiers:

  • Application monitoring data
  • Guest OS monitoring data
  • Azure resource monitoring data
  • Azure subscription monitoring data
  • Azure tenant monitoring data

To compare it with Splunk and ELK, we can leave the operations and resources monitoring aside for a moment and focus on Log Analytics and Application Insights.

Log data collected by Azure Monitor is stored in Log Analytics, which collects telemetry and other data from a variety of sources and provides a query language for advanced analytics.

Common sources of data are usually .NET and .NET Core applications, Node.js applications, Java applications and mobile apps, but we can import and analyse custom logs too.

There are different ways to use Log Analytics, but mostly it is done through log queries:

(Screenshot: log searches)
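The same queries can also be run outside the portal; a minimal sketch using the Azure CLI, assuming the log-analytics extension is installed and using a placeholder workspace ID and the standard Heartbeat table:

az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "Heartbeat | where TimeGenerated > ago(1h) | summarize count() by Computer"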

 

Remember that with Log Analytics and log queries we are extracting the events created in our log files, organising and parsing them, filtering them, and then creating our dashboards, reports and alerts from them, similar to the Splunk model, with the advantage that we can cross-reference these logs with the information extracted from Application Insights:

(Diagram: Log Analytics tables)

Application Insights (which used to be separate from OMS and Log Analytics) is best used for analysing the traffic and actions around your applications. For example, on a web page it's straightforward with Application Insights to see the number of web requests, the page views, the HTTP error codes, or even to analyse the stack traces of captured errors and link them to our source code.

On the visualisation side:

(Screenshot: Application Insights dashboard)

It still has some limitations in terms of customisation of visualisations, but it’s extensible as we can link it to wonderful tools such as PowerBi or Grafana.

Azure Monitor views allow you to create custom visualizations with log data stored in Log Analytics.

(Screenshot: Azure Monitor view)

Application Insights workbooks provide deep insights into your data, helping your development team focus on what matters most.

(Screenshot: Application Insights workbook)

Last but not least, you can use Log Analytics in conjunction with Power BI or Grafana, which are nice to have. The problem with Grafana is that you can monitor and build metric dashboards but not analyse logs:

(Screenshot: Grafana dashboard)

The bright side is that Grafana is open source, free and can be used with many, many data sources, Elasticsearch included.

The last thing to mention: Azure Monitor is not free, but it's quite affordable!

In Summary

We have briefly discussed Splunk, ELK and Azure Monitor: what type of data we can extract and analyse, the different visualisations, and the cost.

Most development teams use ELK because they are used to it or come from a Java background.

I'm seeing more and more teams using Splunk, which I really recommend, but it is still an expensive tool to have.

Azure Monitor has traditionally been used extensively in operations (a legacy of the System Center family, moved to the cloud and now integrated with other analytics tools) and performance testing. Now it brings together the other missing pieces, Log Analytics and Application Insights, for application log analysis, and offers a very good combination of metrics and log tools at a very good price.

I haven't gone into deep detail on any of these; I've just mentioned the most common scenarios I'm finding out there.

I hope this information is useful for you!

 

 

 


30 days of DevOps: Jenkins and Jira

Another DevOps day at MagenTys.

Part of DevOps is to increase transparency and improve the end-to-end traceability of our user stories, from conception to the release coming out of the pipeline.

There are different ways to bring that in. One typical case is how we can track development activities inside our Jira tickets.

Jira has integration with multiple development tools and external systems. For example, we can have our source code in Github or Bitbucket and track any source code changes and the pull requests and branches created in our repo, all of it from inside the Jira ticket.

(Screenshot: development panel in a Jira ticket)

This helps the team to know what changes were made in the code for the purpose of this story.

But, despite this being a cool must-have, we can go one step further and also manage our build pipelines from our beloved Atlassian tool. We can, for example, trigger a Jenkins job every time a pull request is completed.

(Screenshot: Bitbucket integration)

So the team can create the pull request from Jira, another team approves it and merges the code, and then a Jenkins hook captures the commit and triggers the job.

At this point you might think, "wow, that's awesome!". But hold your horses, it can be improved even further: what about updating the ticket status back in Jira whenever the build fails or succeeds? Now is where it gets fancy.

We can use Jenkins to call the Jira API and change the status of the ticket according to the result of the build job building the branch associated with it. So now we not only have automatic builds from Jira/Bitbucket, but we can also have Jenkins reporting back to Jira what's going on with the builds and which packages have been created in Nexus or Artifactory.

For that, we can either use one of the many plugins available for Jenkins or call the Jira REST API from a Jenkins job task.

  • Jira Plugin
  • Jira Issue Updater Plugin
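If you prefer the raw REST API over a plugin, a shell step in the Jenkins job can comment on the ticket and move it along; a minimal sketch, where the Jira URL, credentials, issue key and transition id are placeholders:

# Add a comment with the build result (BUILD_URL is provided by Jenkins)
curl -u "$JIRA_USER:$JIRA_TOKEN" -H "Content-Type: application/json" \
  -X POST "https://yourcompany.atlassian.net/rest/api/2/issue/PROJ-123/comment" \
  -d "{\"body\": \"Build finished: $BUILD_URL\"}"

# Transition the ticket to the next status (the transition id depends on your workflow)
curl -u "$JIRA_USER:$JIRA_TOKEN" -H "Content-Type: application/json" \
  -X POST "https://yourcompany.atlassian.net/rest/api/2/issue/PROJ-123/transitions" \
  -d '{"transition": {"id": "31"}}'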

You can find some plugins in the Atlassian Marketplace too, so you can do something fancy like the plugin below:


https://marketplace.atlassian.com/apps/1211376/jenkins-integration-for-jira?hosting=cloud&tab=overview

Thanks to this you can improve your code, build, test and release process, making life easier for every member of the development team, and at the same time help to generate the release page in Confluence with the right stories, artefacts and qualities signed off. But that's another DevOps story for another day.

 

References:

Bitbucket configuration: https://wiki.jenkins.io/plugins

Atlassian marketplace: https://marketplace.atlassian.com


30 days of DevOps: DC/OS

Let's start with a typical scenario… We have multiple teams requiring multiple services on different clouds. One team comes to you asking for a new Kafka REST Proxy server to support an integration microservices solution. Another team asks you to set up Jenkins and Nexus servers in order to build their build and release pipelines. And the developers want to analyse the application logs, so they request an ELK setup for monitoring transactions and creating alerts.

Does this ring a bell? It happens every day in most software development teams. We try to be "smart" and make short-term decisions about how and where to deploy these services. In some cases we just want to move away from the complexity of building the service from scratch, so we decide to go from virtual machines to SaaS, for example moving to Elastic Cloud instead of deploying ELK on AWS.

Without realising it, you'll have created a problem: managing, monitoring, optimising and maintaining all these systems separately becomes unmanageable.

Another discussion point is cost. How can we optimise the cost of all these resources, ensure that performance is adequate, scale up and down when needed, and keep an eye on our overall processing and data usage?

Most people try to optimise in the following ways:

  • Automatic Scaling
  • Moving from Physical Servers to VMs
  • Moving from VMs to Containers
  • Moving from VMs or Containers to Serverless architectures
  • Shutting down on idle times
  • Constant monitoring and tweaking the performance

At MagenTys we have been transforming companies by moving them from VMs to containers and serverless architectures, so why not go one step further?

All of these points led me to explore a different approach and to look into solutions such as DC/OS.

Let me briefly tell you about DC/OS first…

DC/OS (Data Center Operating System) is a cluster manager and a system to run data services. It allows customers to run services and containers on their own clouds or infrastructure. As a data centre operating system, DC/OS is itself a distributed system, a cluster manager, a container platform and an operating system.

It belongs to a company called Mesosphere and is built on top of Apache Mesos.

To add a bit more detail, we could say that DC/OS is a modern, data-driven application platform for containerised microservices, data services and AI/ML tools, running on top of hybrid clouds.

Just one slide courtesy of Mesosphere that explains this a bit better:

One of the great advantages of DC/OS is that, as a cluster manager, it runs on any infrastructure: physical servers, virtual servers, private clouds or public cloud providers such as Google, AWS and Azure.

This means you can move DC/OS from one cloud to another, which is as easy as creating new nodes (VMs) in the other cloud and gradually moving across the machines that make up the DC/OS cluster. You're no longer tied to a specific cloud provider.

DC/OS also includes a unique section where you can administer the security and compliance of your infrastructure and services, and where you can schedule when and how your applications are deployed, and for how long.

As part of DC/OS we can find:

  • Application-Aware Automation
  • Intelligent Resourcing
  • Security
  • Service Catalog
  • Hybrid & Edge Cloud
  • Operations & Troubleshooting
  • Multitenancy
  • Networking & Storage

Don't forget that DC/OS is built on top of Apache Mesos, a distributed systems kernel that provides applications (e.g. Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacentre and cloud environments.

If you want a Linux analogy, we could go as far as to say that DC/OS is to Apache Mesos what a distribution is to the kernel.

As a container platform, DC/OS includes two built-in task schedulers (Marathon and DC/OS Jobs (Metronome)) and two container runtimes (Docker and Mesos). All tasks on DC/OS are containerised, which allows the system to optimise and distribute hardware resources more efficiently, while also adding resiliency to these containers.


On top of this abstraction beast, there are 100+ frameworks (packages) available in the catalogue, which can be deployed into DC/OS with a couple of clicks.
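The same catalogue is also available from the DC/OS CLI; a minimal sketch, assuming the CLI is installed and attached to a cluster:

# Search the catalogue and install a couple of the frameworks mentioned above
dcos package search jenkins
dcos package install jenkins
dcos package install kafka

# Check what is running on the cluster
dcos marathon app list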


Why do industry leaders choose Mesosphere?

  • Build and deploy times go from weeks to minutes.
  • They save 50% on AWS or Azure bills.
  • They don't want to get locked into a cloud provider.
  • Fast data + microservices: with Kubernetes on DC/OS they can use the container orchestrator alongside big data services with a common set of security, maintenance and management tools.

One picture I saw during my DC/OS workshop, which I really liked, shows a typical technology backend found in most of our projects, with the difference that all of it is hosted on DC/OS instead of across multiple clouds as PaaS or SaaS.

To summarise, it was really worth exploring this solution. Instead of having all these services and applications, such as Jenkins, Artifactory, Kafka, Elasticsearch, Kibana, Hadoop, Bitbucket, Kubernetes, etc., spread across different providers, clouds and offerings, it makes more sense to have them all together under the same umbrella. With DC/OS you basically provide the hardware power you want, based on the number of nodes (VMs) you deploy, and use those resources as a pool for all the services you deploy inside, making better use of your resources.

If you want to know more, just explore it here.

References:

http://mesos.apache.org/documentation/latest/architecture/

https://docs.mesosphere.com/1.11/overview/what-is-dcos/

https://dcos.io/


What the duck is DevOps?

I wrote my first article about what DevOps is and how to get started with it in October 2015. Back then there was not much information about DevOps, but Agile was picking up, and processes like Application Lifecycle Management and the SDLC were very well known. Likewise with "DevOps" tools: there were some, but not with the degree of sophistication we can expect from the tools we have nowadays.

Then in January of this year (2018) I wrote another article about a DevOps journey I took part in over Christmas, and I thought, "Great! I've just added my little grain of sand to the DevOps community", which I hoped was already very well established, and I assumed (my bad) that by now almost everyone knew about DevOps and what it's about.

But, I’m finding myself in many situations where people still have the wrong concept or idea about what DevOps is.

Here are a few examples:

  • “Yes, we have a new project that needs a database developer for 3 weeks to create some views on an Oracle DataBase, let’s bring a DevOps consultant for this work.”
  • “I need a Web Frontend Developer for this Mobile Application, we should hire a DevOps engineer.”
  • "Yes, if it is a DevOps person, they have to be an expert in Java, C#, JavaScript, Python and Perl, and be able to deploy them all into production."

I could go on but you get my point…

I have a view of what DevOps means and what a DevOps “Consultant/Engineer” should be capable of. I’m not saying it’s the right view, but as always, we can open a constructive debate about it.

Let’s start with the official definition of DevOps by Mr. Wikipedia:

DevOps (a clipped compound of “development” and “operations“) is a software engineering culture and practice that aims at unifying software development (Dev) and software operation (Ops). The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management. DevOps aims at shorter development cycles, increased deployment frequency, and more dependable releases, in close alignment with business objectives

My favourite sentence is: “The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management”

So, if you ask me again what a good DevOps consultant should be aware of, I would suggest the following:

Code – Code development for Automation, Source Code Management, Code Reviews, Branching Strategy, Code Analysis tools

Build – Build Strategy, Continuous integration, build status

Test – Continuous testing tools, Automation for Functional and Non Functional testing

Packaging – Artefacts generation, quality labelling and packages readiness

Release – Change management, release approvals, release automation, environments generation, automatic deployment

Configure – Infrastructure configuration and management, Infrastructure as Code tools

Monitor – Applications performance monitoring, alerts, reports, end–user experience

On top of that you can include the technologies which cut across these areas: Jenkins, Terraform, Ansible, Grafana, Chef, VSTS, Splunk, ELK, Python, NuGet, TeamCity, CloudWatch, Lambdas, OMS, etc. They are just tools for a purpose, and they are meant to be used in the categories mentioned above.

There is a common misconception that a DevOps consultant needs to be a cloud expert. This is not true at all; cloud services are just another set of servers, services and tools used for the same purpose.

A good place to check out which tools are trendy at the moment and how they can be used in each of the above areas is https://xebialabs.com/periodic-table-of-devops-tools/. This, for me, is so far one of the best representations of what an engineer specialised in DevOps should know about.

When someone asks me what a DevOps consultant is, I answer that it's a mixture of all these things, with strengths in one or two specific areas, an overall average-to-good knowledge of the other areas, and the ability to work with any of them with ease. This doesn't mean a DevOps person is limited to being a pure developer, a pure test automation dev, or a pure operations person. It's the knowledge across these areas of DevOps that makes the difference.

That said, I would be interested to know what your understanding of DevOps is, and of what everybody now (maybe incorrectly) calls a DevOps engineer.
