
InfoQ’s 2019 Retrospective and Software Predictions for 2020

See full post at: https://www.infoq.com/articles/infoq-2019-retrospective/

Key Takeaways

  • Last month, Google claimed to have achieved quantum supremacy—the name given to the step of proving quantum computers can deliver something that a classical computer can’t. That claim is disputed, and it may yet turn out that we need a better demonstration, but it still feels like a significant milestone.
  • A surprise this year was the decline of interest in Virtual Reality, at least in the context of smartphone-based VR. Despite this, we still think that something in the AR/VR space, or some other form of alternative computer/human interaction, is likely to come on the market in the next few years and gain significant traction.
  • We expect to see the interest in WebAssembly continue and hope that the tooling for it will start to mature.
  • In our DevOps and Cloud trend report, we noted that Kubernetes has effectively cornered the market for container orchestration, and is arguably becoming the cloud-agnostic compute abstraction. The next “hot topics” in this space appear to be “service meshes” and developer experience/workflow tooling.
  • We’re looking forward to seeing what the open source community and vendors are working on in the understandability, observability, and debuggability space in the context of architectural patterns such as microservices and functions-as-a-service.

Development

Java

JavaScript, Java, and C# remain the most popular languages we cover, but we’re also seeing strong interest in Rust, Swift, and Go, and our podcast with Bryan Cantrill on “Rust and Why He Feels It’s The Biggest Change In Systems Development in His Career” is one of the top-performing podcasts we’ve published this year. We’ve also seen a growing interest in Python this year, probably fuelled by its popularity for machine learning tasks.

After a rather turbulent 2018, Java seems to be settling into its twice-yearly release cycle. According to our most-recent reader survey, Java is the most used language amongst InfoQ readers, and there continues to be a huge amount of interest in the newer language features and how the language is evolving. We also continue to see strong and growing interest in Kotlin.

It has been interesting to see Microsoft’s growing involvement in Java, joining the OpenJDK, acquiring jClarity, and hiring other well-known figures including Monica Beckwith.

In the Java programming language trends report, we noted increased adoption of non-HotSpot JVMs, and we believe OpenJ9 is now within the early-adopter stage. At the time, we noted that:

“We believe that the increasing adoption of cloud technologies within all types of organisation is driving the requirements for JREs that embrace associated “cloud-native” principles such as fast start-up times and a low memory footprint. Graal in itself may not be overly interesting, but the ability to compile Java application to native binaries, in combination with the support of polyglot languages, is ensuring that we keep a close watch on this project.”

.NET

The release of .NET Core 3 in September generated a huge buzz on InfoQ and produced some of our most-popular .NET content of the year. WebAssembly has been another area of intense interest, and we saw a corresponding surge in interest for Blazor, a new framework in ASP.NET Core that allows developers to create interactive web applications using C# and HTML. Blazor comes in multiple editions, including Blazor WebAssembly which allows single-page applications to run in the client’s web browser using a WebAssembly-based .NET runtime.

According to our most-recent reader survey, C# is the second-most widely used language among InfoQ readers after Java, and interest in C# 8 in particular was also strong.

Web Development

Unsurprisingly, the majority of InfoQ readers write at least some JavaScript – around 70% according to the most recent reader survey – making it the most widely used language among our readers. The dominant JavaScript frameworks for InfoQ readers currently seem to be Vue and React. We also saw interest in using JavaScript for machine learning via TensorFlow.js. Away from JavaScript, we saw strong interest in some of the transpiler options. In addition to Blazor, mentioned above, we saw strong interest in WebAssembly, TypeScript, Elm, and Svelte.

Architecture

It’s unsurprising that distributed computing, and in particular the microservices architecture style, remains a huge part of our news and feature content. We see strong interest in related topics, with our original Domain Driven Design Quickly book, and our more-recent eMag “Domain-Driven Design in Practice”, continuing to perform particularly well, and interest in topics like observability and distributed tracing. We also saw interest in methods of testing distributed systems, including a strong performance from our Chaos Engineering eMag, and a resurgence in reader interest in some of the core architectural topics such as API design, diagrams, patterns, and models.

AI, ML and Data Engineering

Our podcast with Grady Booch on today’s Artificial Intelligence reality and what it means for developers was one of our most popular podcasts of the year, and revealed strong interest in the topic from InfoQ readers.

Key AI stories in 2019 were MIT introducing Gen, a Julia-based language for artificial intelligence, Google’s ongoing work on ML Kit, and discussions around conversational interfaces, as well as more established topics such as streaming.

It’s slightly orthogonal to the rest of the pieces listed here, but we should also mention “Postgres Handles More Than You Think” by Jason Skowronski which performed amazingly well.

Culture and Methods

If there was an overarching theme to our culture and methods coverage this year, it might best be summed up as “agile done wrong”: many of our items focused on issues with agile, and/or on going back to the principles outlined in the Agile Manifesto.

We also saw continued interest in some of the big agile methodologies, notably Scrum, with both “Scrum and XP from the Trenches” and “Kanban and Scrum – Making the Most of Both” performing well in our books department.

We also saw strong reader interest in remote working with Judy Rees’ eMag on “Mastering Remote Meetings“, and her corresponding podcast, performing well, alongside my own talk on “Working Remotely and Managing Remote Teams” from Aginext this year.

DevOps and Cloud

In our DevOps and Cloud trends report, we noted that Kubernetes has effectively cornered the market for container orchestration, and is arguably becoming the cloud-agnostic compute abstraction. The next “hot topics” in this space appear to be “service meshes” and developer experience/workflow tooling. We continue to see strong interest in all of these among InfoQ’s readers.

A trend we’re also starting to note is the emergence of languages that are either infrastructure- or cloud-oriented. In our Programming Languages trends report, we noted increased interest and innovation related to infrastructure-aware or cloud-specific languages, DSLs, and SDKs like Ballerina and Pulumi. In this context we should also mention Dark, a new language currently still in private beta, but already attracting a lot of interest. Somewhat related, we should also mention the Ecstasy language, co-created by Tangosol founders Cameron Purdy and Gene Gleyzer. Chris Swan, CTO for Global Delivery at DXC Technology, spoke to Cameron Purdy about the language and the problems it’s designed to solve.

Software Predictions for 2020

Making predictions in software is notoriously hard to do, but we expect to see enterprise development teams consolidate their cloud-platform choices as Kubernetes adoption continues. Mostly this will be focussed on the “big five” cloud providers – Amazon, Google, IBM (plus Red Hat), Microsoft, and VMware (plus Pivotal). We think that, outside China, Alibaba will struggle to gain traction, as will Oracle, Salesforce, and SAP.

In the platform/operations space we’re expecting that service meshes will become more integrated with the underlying orchestration frameworks (e.g. Kubernetes). We’re also hopeful that the developer workflow for interacting with service meshes becomes more integrated with current workflows, technologies, and pipelines.

Ultimately, developers should be able to control deployment, release, and debugging via the same continuous/progressive delivery pipeline: for example, using a “GitOps”-style pipeline to deploy a service by configuring k8s YAML (or some higher-level abstraction), controlling the release of the new functionality using techniques like canarying or shadowing via the configuration of some traffic-management k8s custom resource definition (CRD) YAML, and enabling additional logging or debug tooling via some additional CRD config.
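
As a rough sketch of what that workflow could look like from the command line (the file names are hypothetical, and the CRDs stand in for whatever traffic-management and debug tooling a given mesh provides):

# Declarative route: commit the desired state; a GitOps agent reconciles it.
git commit -am "release v2 as a 10% canary"
git push
# What the agent effectively applies (file names are hypothetical examples):
kubectl apply -f traffic-split-canary.yaml   # traffic-management CRD: canary weights
kubectl apply -f debug-logging.yaml          # CRD config enabling extra logging/debug tooling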

In regards to architecture, next year will hopefully be the year of “managing complexity”. Architectural patterns such as microservices and functions-as-a-service have enabled developers to better separate concerns, implement variable rates of change via independent isolated deployments, and ultimately work more effectively at scale. However, our ability to comprehend the complex distributed systems we are now building — along with the availability of related tooling — has not kept pace with these developments. We’re looking forward to seeing what the open source community and vendors are working on in the understandability, observability, and debuggability space.

We expect to see more developers experimenting with “low code” platforms. This is partly fueled by a renewed push from Microsoft for its PowerApps, Flow, Power BI, and Power Platform products.

In the .NET ecosystem, we believe that Blazor will keep gaining momentum among web developers. .NET 5 should also bring significant changes to the ecosystem with the promised interoperability with Java, Objective-C, and Swift. Although it is too early to say, Microsoft’s recent efforts on IoT and AI (with ML.NET) should also help to raise interest in .NET development. Relatedly, we expect interest in WebAssembly to continue, and hope that the tooling here will start to mature.

Despite the negative news around VR this year, we still think that something in the AR/VR space, or some other form of alternative computer/human interaction, is likely to come on the market in the next few years and gain significant traction, though it does seem that the form factor for this hasn’t really arrived.


Think before using Configuration Management Tools for Infrastructure Provisioning

Source: http://www.neeleshgurjar.co.in/think-before-using-configuration-management-tools-for-infrastructure-provisioning/

These days, almost every software development organization is trying to implement DevOps in its software development lifecycle.

DevOps is being adopted worldwide for the speed and reliability it brings to software delivery.

Infrastructure provisioning (or orchestration) and configuration management are the heart and soul of the DevOps toolchain.

Tools like Terraform and CloudFormation are used for Infrastructure Provisioning (IP). At the same time, Configuration Management (CM) already has a long list of tools, such as Ansible, Salt, Puppet, etc.

Configuration management tools also have the ability to spin up infrastructure; however, there are some cons you will need to face.

In the real world, due to lack of awareness or time pressure, organizations use CM tools to launch infrastructure.

In this small note, I will try to explain why CM tools are not recommended for launching infrastructure in the cloud.

Advantages of using CM tools for Provisioning Infrastructure:

  • Lower learning curve: no need to learn a different technology and its syntax; you can write infrastructure code using the same CM syntax.
  • Fast integration with the CM tool: instances launched by the CM tool are added to its host database automatically, with no need for a third-party application or script to connect them. For example, if we spin up an instance with Salt Cloud, the target instance gets added to the Salt master automatically, as sketched below.
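
A minimal sketch of that Salt Cloud flow (the profile and VM names are hypothetical examples):

salt-cloud -p aws_ec2_small web-01   # launch an instance from a cloud profile
salt 'web-01' test.ping              # the new minion is already registered with the master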

Because of these advantages, organizations get attracted to configuration management tools for provisioning as well. However, alongside these advantages, one should think about the points below before using these config tools to provision infrastructure.

Maintaining State:

Whenever we launch or modify any infrastructure, it creates a state. For example, if we initially launch 1 VPC, 2 subnets, and 4 EC2 instances, that is the first state, a.k.a. the baseline, of your infrastructure. Any modification to any of these components generates a new state of the infrastructure, similar to versioning of source code.

  • Easy to roll back: as our previous states are maintained and versioned, we can easily roll back to a previous state.
  • Easy to track changes: we can diff two states at any time to see what changed between the old and new infrastructure.
  • Compliance requirements: versioning of infrastructure is required by compliance regimes like FedRAMP, and can be fulfilled by maintaining states in code.
  • Easy to apply changes to any specific component of the infrastructure.

In my experience, if infrastructure is provisioned using a configuration management tool like Ansible or Salt Cloud, managing the state of the infrastructure is difficult or impossible. One could reply that they don't currently need this, but are you sure you will never need it in the future? For example, you may plan for FedRAMP or another compliance regime that requires versioning of infrastructure.
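
To make "state" concrete, here is a minimal sketch of the workflow a provisioning tool such as Terraform gives you (the resource address is a hypothetical example):

terraform plan                          # diff the desired configuration against the recorded state
terraform apply                         # apply the changes and record a new state
terraform state list                    # list every resource tracked in the current state
terraform state show aws_instance.web   # inspect a single tracked resource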

Integration of Infrastructure provisioning and Configuration Management tools:

Sometimes people avoid using an infrastructure provisioning tool because they think that integrating the provisioned infrastructure with the configuration management tool will be difficult or troublesome. In reality, it is very easy to integrate the two. We can use features like “User Data” in the case of AWS to achieve this: write a few commands or a script in the bootstrapping step to install the CM client on the target machine and connect it to our CM master. It is just a matter of 10-15 lines of shell script.
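
For example, a minimal “User Data” sketch that connects a new EC2 instance to a Salt master might look like this (the master hostname is a hypothetical placeholder):

#!/bin/bash
# Install the Salt minion via the official bootstrap script and point it at our master.
curl -L https://bootstrap.saltstack.com -o bootstrap-salt.sh
sh bootstrap-salt.sh -A salt-master.example.com
# The instance then appears on the master; accept its key with: salt-key -a <minion-id>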

Infrastructure inventory management:

If infrastructure is provisioned by a tool like CloudFormation or Terraform, it is easy to keep track of the entire inventory.

We can easily generate a CSV report from Terraform state files as well. Since configuration management tools do not maintain the state of provisioned infrastructure, we cannot manage an infrastructure inventory with them.
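
As a rough sketch of one way to produce such a report (assuming Terraform 0.12+ and jq; the chosen columns are just an example):

terraform show -json terraform.tfstate \
  | jq -r '.values.root_module.resources[] | [.type, .name, .values.id] | @csv' \
  > inventory.csv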

Importing current infrastructure:

We can import currently running infrastructure into code using provisioning tools, but this is not possible with configuration management tools.
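
For instance, Terraform can adopt an already-running resource into its state with terraform import (the resource name and VPC ID below are hypothetical):

# First declare a matching stub in your configuration: resource "aws_vpc" "main" {}
terraform import aws_vpc.main vpc-0a1b2c3d4e5f67890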

Limitations on launching various components:

Terraform and CloudFormation can provision various AWS services, like S3 buckets, IAM roles, VPCs, security groups, etc., very easily. However, these components cannot be launched easily using configuration management tools. For example, Salt Cloud cannot provision a VPC, S3 bucket, or security groups out of the box; one needs to develop a module for this.

Conclusion:

Configuration management tools are best when it comes to applying desired configurations to a target machine or group of machines.

But when you think about provisioning, there are powerful, dedicated tools available on the market. Using them, you not only get granular control over your infrastructure, but you can also audit, track, and do much more with it.

There is also a basic current requirement for tracking changes in infrastructure, which can easily be met with provisioning tools such as Terraform and CloudFormation; CloudFormation is also coming up with Drift Detection.

Please do think about the compliance requirements that may arise in the future, and try to be ready for them from the beginning.

I hope this article will help you in deciding on a tool for provisioning infrastructure.
Please do let me know your suggestions on ngurjar[at]neeleshgurjar[dot]co[dot]in

#CloudFormation #Terraform #ConfigurationManagement_vs_InfrastructureProvisioning #Terraform_vs_Ansible


CI/CD: Continuous delivery pipeline with Docker Compose and Jenkins


Continuous Delivery and DevOps are well-known and widely spread practices nowadays. Kubernetes and Docker Swarm are two of the most powerful container orchestration platforms.

Kubernetes and Spinnaker create a robust continuous delivery flow that helps to ensure your software is validated and shipped quickly.

This tutorial shows you how to create a continuous delivery pipeline using Docker Compose and Jenkins Pipeline. Kubernetes is opinionated, hard to set up, and quite different from the Docker CLI. With Docker Compose you can run local development, deploy to dev/test, and with very little change (removing any volume bindings, binding to different ports, setting environment variables differently) you can run on a Docker Swarm cluster – so your local, dev, staging, and production environments are almost identical. This tutorial is based on my previous POC: Event driven microservices architecture using Spring Cloud
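
A minimal sketch of that “same file, different environments” idea (the stack name is a hypothetical example):

docker-compose -f docker-compose.yml up -d          # local development and dev/test
docker stack deploy -c docker-compose.yml platform  # the same file deployed to a Docker Swarm cluster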

So you are going to start 10 Spring Boot applications, 5 MongoDB instances, Jenkins, RabbitMQ, SonarQube, Zuul Gateway, Config Server, OAuth2 Server, Eureka Discovery, and Hystrix Circuit Breaker. And best of all, this complete continuous delivery pipeline (>20 containers) costs me only $100 per month – ask me how.

Setting up Jenkins – see Jenkins online

# Run Jenkins (Blue Ocean) with a persistent home volume and access to the host Docker daemon
sudo docker run -u root --rm -d \
  -p 49001:8080 -p 50000:50000 \
  -v jenkins-data:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkinsci/blueocean

Add Jenkins plugins: HTTP Request Plugin, JaCoCo plugin, Mask Passwords Plugin, Publish Over SSH, Sonar Quality Gates Plugin, SonarQube Scanner for Jenkins, SSH Agent Plugin, SSH plugin.

Setting up SonarQube – see SonarQube online

sudo docker run -d --name sonarqube -p 9091:9000 -p 9092:9092 sonarqube

Now, we are ready to build our CI/CD pipeline. You can manage the Jenkinsfile using git instead of using the Jenkins UI.

First, we need to set up credentials for GitHub, DockerHub, and SSH access to the server where you plan to run your Docker containers.

Writing Your Jenkins Job

We will create nine stages: Preparation, Build & Unit Test, JaCoCo, SonarQube Analysis, SonarQube Quality Gate, Publish to DockerHub, Deploy, HealthCheck, Integration Test.


The Preparation stage will clone your code, the Build stage will build it and run unit tests, the Publish stage will push the Docker image to your registry, and the Deploy stage will deploy your containers. That’s how it will look when you are done (see the pipeline in Jenkins).

Preparation Stage: Clone the project’s code.

stage('Get code from GitHub') { 
      // Get code from a GitHub repository
      git 'https://github.com/sbruksha/Microservices-platform.git'
   }

Build and Unit Test Stage

stage('Build and Unit Test') {
      // Run build and test
      sh '''cd $WORKSPACE/account-service && ./gradlew clean build jacocoTestReport
    cd $WORKSPACE/appointment-service && ./gradlew clean build jacocoTestReport
    cd $WORKSPACE/auth-server && ./gradlew clean build jacocoTestReport
    ........
    '''
   }

Run JaCoCo Stage

stage('JaCoCo Report') {
      jacoco exclusionPattern: '**/test/**,**/lib/*', inclusionPattern: '**/*.class,**/*.java'
   }

After that step executes, you can see the JaCoCo reports in Jenkins.

SonarQube Analysis Stage

stage('SonarQube analysis') { 
        withSonarQubeEnv('Sonar') { 
          sh '''cd $WORKSPACE/appointment-service && ./gradlew sonarqube \
 -Dsonar.host.url=http://dev.eodessa.com:9091 \
 -Dsonar.login=xxxxxxxxxxxxxxxxxxxxxxxxxxxx \
 -Dsonar.sources=src/main \
 -Dsonar.java.binaries=build/classes \
 -Dsonar.java.libraries=build/libs/*.jar
 ........'''
        }
   }

Quality cannot be injected after the fact; it must be part of the process from the very beginning. It is strongly recommended to inspect the code and make findings visible as soon as possible. As part of the pipeline, the code is inspected, and only if the code is fine and meets the quality gates are the built artifacts uploaded to the binary repository manager.

We can open the SonarQube web application and drill down to the findings.

As part of a Jenkins pipeline stage, SonarQube is configured to run and inspect the code. But this is just the first part, now we add the quality gate in order to break the build if code quality is not met.

SonarQube Quality Gate Stage

stage("SonarQube Quality Gate") { 
        timeout(time: 1, unit: 'HOURS') { 
           def qg = waitForQualityGate() 
           if (qg.status != 'OK') {
             error "Pipeline aborted due to quality gate failure: ${qg.status}"
           }
        }
   }

Specifically, the waitForQualityGate step will pause the pipeline until the SonarQube analysis is completed and returns the quality gate status.

Publish Images to DockerHub Stage

stage('Publish Images to Hub') {
   		withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: '5e1c35ab-1404-4165-b224-8894cc70', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS'],]) {
        sh '''docker login -u ${DOCKER_USER} -p ${DOCKER_PASS}
    cd $WORKSPACE/account-service && ./gradlew build publishImage
    cd $WORKSPACE/appointment-service && ./gradlew build publishImage
    cd $WORKSPACE/auth-server && ./gradlew build publishImage
    ....................'''
    	}
   }

Make sure that the DOCKER_USER and DOCKER_PASS variables were created in the withCredentials section.

Deploy Images with Docker Compose Stage

stage('Deploy Images with Docker-Compose') {
        build 'EventDrivenPlatform-Dev-Deploy'
   }

The Deploy stage will run the “EventDrivenPlatform-Dev-Deploy” job, which executes a shell script on the remote host over SSH: sudo docker-compose -f docker-compose.yml up -d
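
Expanded slightly, that remote script amounts to something like the following (the SSH user and path are hypothetical placeholders):

ssh deploy@dev.eodessa.com 'cd /opt/platform && \
  sudo docker-compose -f docker-compose.yml pull && \
  sudo docker-compose -f docker-compose.yml up -d'   # pull fresh images, then (re)create the containers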

With Docker Compose you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running. You can create different Jenkins jobs for different environments: dev, stage, prod. The main function of Docker Compose is the creation of the microservice architecture: the containers and the links between them. After this stage is completed you can see:

  • All your containers are running
  • Eureka discovery server http://dev.eodessa.com:8761/
  • Hystrix Dashboard, Zuul Gateway, RabbitMQ, ConfigServer, AuthServer, etc.

HealthCheck Stage

stage('HealthCheck') {
      httpRequest responseHandle: 'NONE', url: 'http://dev.eodessa.com/health'
   }

Integration Test Stage

stage('Integration Test') {
      sh '''cd $WORKSPACE/test-integration && ./gradlew clean test'''
   }

After creating the pipeline, save the script and hit Build Now on the project home of the Jenkins dashboard.

Jenkins then shows an overview of the builds in the stage view.

The output of each stage is accessible by hovering over a stage cell and clicking the Logs button to see the log messages for that step.

You can also find more details in the code analysis, the JaCoCo report, and SonarQube. When you have a successful build, you can see the project running online.

All source code for this post is located in the GitHub repository.

Source: https://www.linkedin.com/pulse/cicd-continuous-delivery-pipeline-docker-compose-jenkins-bruksha?trk=related_artice_CICD%2C%20Continuous%20delivery%20pipeline%20with%20Docker%20Compose%20and%20Jenkins_article-card_title