Architecture, Musings

System Design

Everything’s a Trade-Off.

A few rambling thoughts.
A developer's life begins with those first lines of code, then dabbling in unit tests, until fate leads you to clean code, code smells, and refactoring so you can write cleaner code. A little further along you take up design patterns, which help you write code that is easier to maintain (and looks more pro). After that, you feel confident taking on code reviews and joining the detailed design of a module or feature. This stage lasts quite a while, roughly the first three or four years.

After that comes the stage where you are assigned to design the architecture of an entire system. As usual, with a bit of luck on Google you will find plenty of material on Software Architecture, such as Clean Architecture, or my own series of Architecture posts from my early days of studying the topic. In short, software architecture is a set of high-level patterns for solving problems at a higher level than design patterns, which we immediately associate with coding. Concepts like Microservices, Domain-Driven Design, and Cloud Computing gradually appear, and High Availability, Scalability, Reliability, and Security become ever more important in the design of any product.

The times I was assigned to pre-sales work alongside the “experts”, sales folks as well as senior technical architects and solution architects, kept broadening my horizons.

Still, that never felt like enough. The “high level” diagrams we drew could not capture the full meaning of a system; a large gap remained between the designs and the realities of implementation, and many problems only surface once you dig into the details. One question kept nagging me: how do you design an entire system, not just the drawings, but in a way that also solves the problems the blueprints fail to describe? Am I overlooking factors when designing? And what playbook can I follow to reason about a whole system?

If you never ask questions, you will never have answers. Fortunately, I have the questions, and I am searching for their answers.

(End of musings)

System Design – or in Vietnamese, Thiết kế Hệ thống.

A keyword at once familiar and strange; it looks harmless, but it turns out to be unexpectedly powerful.
Familiar, because everyone studies some form of System Design at school, even if what it actually taught has long been forgotten. Strange, because in all my years of working I had never once searched for that exact phrase.

So what is System Design (in working life) actually about?

“System design is the phase that bridges the gap between problem domain and the existing system in a manageable way. This phase focuses on the solution domain, i.e. “how to implement?”

System design is the process of designing the elements of a system such as the architecture, modules and components, the different interfaces of those components and the data that goes through that system.

The purpose of the System Design process is to provide sufficient detailed data and information about the system and its system elements to enable the implementation consistent with architectural..”

Types of System Design:

  • Logical Design
  • Physical Design
  • Architectural Design
  • Detailed Design
  • Conceptual Data Modeling
  • Entity Relationship Model

Elements of a System

  • Architecture – The conceptual model that defines the structure, behavior, and other views of a system. We can use flowcharts to represent and illustrate the architecture.
  • Modules – Components that each handle one specific task in a system. A combination of modules makes up the system.
  • Components – These provide a particular function or group of related functions. They are made up of modules.
  • Interfaces – The shared boundary across which the components of the system exchange information and interact.
  • Data – The management of the information and the data flow.

And so on, and so forth..


Reading the definitions above, nothing looks especially impressive yet. But pay a little attention and you will notice that System Design covers every design artifact needed to build a complete system: architecture, modules, interfaces, data flow, and so on.

You could say System Design is the highest level of the design phase. After this stage we move on to the analysis phase with System Analysis. The road to becoming a Solution Architect (or a salesperson) lies right ahead of us. Haha..

And with some luck, we reach a whole new horizon with plenty of rich “resources” to immerse ourselves in: material that teaches us how to think about designing large-scale, real-world systems, rather than bare principles and patterns. (If principles and patterns are the soul of solving a system's problems, this is where we put flesh on the system's bones.)

A few suggestions for you to explore:

Those resources walk us through the designs of classic systems such as Facebook, Dropbox, Twitter, Instagram, and so on.

If you have read this far and that is what you are after, do not hesitate to leave this page and begin your own journey right away.

Enjoy :)

Architecture, Cloud

[Azure] Architecting Cloud Applications

Introduction

https://docs.microsoft.com/en-us/azure/architecture/guide/

Architecture styles

  • N-tier application
  • Microservices
  • Event-driven architecture
  • Web-queue-worker
  • Big compute
  • Big data

Application design principles

Follow these design principles to make your application more scalable, resilient, and manageable.

  • Design for self healing. In a distributed system, failures happen. Design your application to be self healing when failures occur.
  • Make all things redundant. Build redundancy into your application, to avoid having single points of failure.
  • Minimize coordination. Minimize coordination between application services to achieve scalability.
  • Design to scale out. Design your application so that it can scale horizontally, adding or removing new instances as demand requires.
  • Partition around limits. Use partitioning to work around database, network, and compute limits.
  • Design for operations. Design your application so that the operations team has the tools they need.
  • Use managed services. When possible, use platform as a service (PaaS) rather than infrastructure as a service (IaaS).
  • Use the best data store for the job. Pick the storage technology that is the best fit for your data and how it will be used.
  • Design for evolution. All successful applications change over time. An evolutionary design is key for continuous innovation.
  • Build for the needs of business. Every design decision must be justified by a business requirement.
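Several of these principles, "design for self-healing" in particular, show up in application code as a retry loop with exponential backoff for transient failures. A minimal Python sketch, with an illustrative `TransientError` standing in for a real timeout or throttling exception:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a transient failure, e.g. a timeout or throttling error."""

def retry(operation, max_attempts=4, base_delay=0.5):
    """Call `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # out of attempts: let the failure propagate
            # Exponential backoff with jitter, so that many failing instances
            # do not all retry at the same moment (thundering herd).
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

The jitter factor matters in a distributed system: without it, every instance that saw the same outage retries in lockstep and can knock the recovering service over again.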

Five pillars of software quality

  • Cost. Managing costs to maximize the value delivered.
  • DevOps. Operations processes that keep a system running in production.
  • Resiliency. The ability of a system to recover from failures and continue to function.
  • Scalability. The ability of a system to adapt to changes in load.
  • Security. Protecting applications and data from threats.

Cloud Design Patterns

Categories

  • Availability
  • Data management
  • Design and implementation
  • Management and monitoring
  • Messaging
  • Performance and scalability
  • Resiliency
  • Security

Patterns

  • Ambassador
  • Anti-corruption Layer
  • Asynchronous Request-Reply
  • Backends for Frontends
  • Bulkhead
  • Cache-Aside
  • Choreography
  • Circuit Breaker
  • Claim Check
  • Command and Query Responsibility Segregation (CQRS)
  • Compensating Transaction
  • Competing Consumers
  • Compute Resource Consolidation
  • Event Sourcing
  • External Configuration Store
  • Federated Identity
  • Gatekeeper
  • Gateway Aggregation
  • Gateway Offloading
  • Gateway Routing
  • Health Endpoint Monitoring
  • Index Table
  • Leader Election
  • Materialized View
  • Pipes and Filters
  • Priority Queue
  • Publisher/Subscriber
  • Queue-Based Load Leveling
  • Retry
  • Scheduler Agent Supervisor
  • Sequential Convoy
  • Sharding
  • Sidecar
  • Static Content Hosting
  • Strangler
  • Throttling
  • Valet Key
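Many of these patterns are small enough to sketch in a few lines. For example, here is an illustrative (not production-ready) Circuit Breaker in Python: after a threshold of consecutive failures it "opens" and fails fast instead of hammering a struggling dependency, then lets a trial call through after a cooldown:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `threshold` consecutive failures,
    fails fast while open, and lets a trial call through ("half-open")
    once `reset_timeout` seconds have passed."""

    def __init__(self, threshold=3, reset_timeout=30.0):
        self.threshold = threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, operation):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call

        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

A real implementation (or a library such as Polly in .NET) would also need thread safety and per-exception-type policies; the sketch only shows the state machine.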

Best practices for Cloud application

  • API design
  • API implementation
  • Autoscaling
  • Background jobs
  • Caching
  • Content Delivery Network
  • Data partitioning
  • Data partitioning strategies (by service)
  • Deployment stamp
  • Monitoring and diagnostics
  • Retry guidance for specific services
  • Transient fault handling
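As an illustration of how the caching guidance typically interacts with a data store, here is a minimal Cache-Aside sketch in Python (the pattern also appears in the list above); `load_from_store` is a hypothetical stand-in for a real database query:

```python
class CacheAside:
    """Cache-aside: check the cache first; on a miss, load from the backing
    store and populate the cache before returning the value."""

    def __init__(self, load_from_store):
        self.cache = {}
        self.load_from_store = load_from_store  # e.g. a database query

    def get(self, key):
        if key in self.cache:
            return self.cache[key]            # cache hit
        value = self.load_from_store(key)     # cache miss: go to the store
        self.cache[key] = value
        return value

    def invalidate(self, key):
        # On writes, update the store first, then evict the stale cached copy.
        self.cache.pop(key, None)
```

In a real deployment the dictionary would be a shared cache such as Redis, with an expiry on each entry so evicted or invalidated data does not linger.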
Agile, Architecture, Design Pattern, Devops, Methodology, Microservices, OO Design Principles, TDD

InfoQ’s 2019 Retrospective, and Software Predictions for 2020

See full post at: https://www.infoq.com/articles/infoq-2019-retrospective/

Key Takeaways

  • Last month, Google claimed to have achieved quantum supremacy—the name given to the step of proving quantum computers can deliver something that a classical computer can’t. That claim is disputed, and it may yet turn out that we need a better demonstration, but it still feels like a significant milestone.
  • A surprise this year was the decline of interest in Virtual Reality, at least in the context of Smart-phone-based VR. Despite this we still think that something in the AR/VR space, or some other form of alternative computer/human interaction, is likely to come on the market in the next few years and gain significant traction.
  • We expect to see the interest in Web Assembly continue and hope that the tooling for it will start to mature.
  • In our DevOps and Cloud trend report, we noted that Kubernetes has effectively cornered the market for container orchestration, and is arguably becoming the cloud-agnostic compute abstraction. The next “hot topics” in this space appear to be “service meshes” and developer experience/workflow tooling.
  • We’re looking forward to seeing what the open source community and vendors are working on in the understandability, observability, and debuggability space in the context of architectural patterns such as microservices and functions (as-a-service).

Development

Java

JavaScript, Java, and C# remain the most popular languages we cover, but we’re also seeing strong interest in Rust, Swift, and Go, and our podcast with Bryan Cantrill on “Rust and Why He Feels It’s The Biggest Change In Systems Development in His Career” is one of the top-performing podcasts we’ve published this year. We’ve also seen a growing interest in Python this year, probably fuelled by its popularity for machine learning tasks.

After a rather turbulent 2018, Java seems to be settling into its bi-annual release cycle. According to our most-recent reader survey Java is the most used language amongst InfoQ readers, and there continues to be a huge amount of interest in the newer language features and how the language is evolving. We also continue to see strong and growing interest in Kotlin.

It has been interesting to see Microsoft’s growing involvement in Java, joining the OpenJDK, acquiring JClarity, and hiring other well known figures including Monica Beckwith.

In the Java programming language trends report, we noted increased adoption of non-HotSpot JVMs, and we believe OpenJ9 is now within the early-adopter stage. At the time we noted that:

“We believe that the increasing adoption of cloud technologies within all types of organisation is driving the requirements for JREs that embrace associated “cloud-native” principles such as fast start-up times and a low memory footprint. Graal in itself may not be overly interesting, but the ability to compile Java application to native binaries, in combination with the support of polyglot languages, is ensuring that we keep a close watch on this project.”

.NET

The release of .NET Core 3 in September generated a huge buzz on InfoQ and produced some of our most-popular .NET content of the year. WebAssembly has been another area of intense interest, and we saw a corresponding surge in interest for Blazor, a new framework in ASP.NET Core that allows developers to create interactive web applications using C# and HTML. Blazor comes in multiple editions, including Blazor WebAssembly which allows single-page applications to run in the client’s web browser using a WebAssembly-based .NET runtime.

According to our most-recent reader survey, C# is the second-most widely used language among InfoQ readers after Java, and interest in C#8 in particular was also strong.

Web Development

Unsurprisingly, the majority of InfoQ readers write at least some JavaScript – around 70% according to the most recent reader survey – making it the most widely used language among our readers. The dominant JavaScript frameworks for InfoQ readers seem to currently be Vue and React. We also saw interest in using JavaScript for machine learning via TensorFlow.js. Away from JavaScript, we saw strong interest in some of the transpiler options. In addition to Blazor, mentioned above, we saw strong interest in Web Assembly, TypeScript, Elm and Svelte.

Architecture

It’s unsurprising that distributed computing, and in particular the microservices architecture style, remains a huge part of our news and feature content. We see strong interest in related topics, with our original Domain Driven Design Quickly book, and our more-recent eMag “Domain-Driven Design in Practice” continuing to perform particularly well, and interest in topics like observability and distributed tracing. We also saw interest in methods of testing distributed systems, including a strong performance from our Chaos Engineering eMag, and a resurgence in reader interest in some of the core architectural topics such as API design, diagrams, patterns, and models.

AI, ML and Data Engineering

Our podcast with Grady Booch on today’s Artificial Intelligence reality and what it means for developers was one of our most popular podcasts of the year, and revealed strong interest in the topic from InfoQ readers.

Key AI stories in 2019 were MIT introducing GEN, a Julia-based language for artificial intelligence, Google’s ongoing work on ML Kit, and discussions around conversational interfaces, as well as more established topics such as streaming.

It’s slightly orthogonal to the rest of the pieces listed here, but we should also mention “Postgres Handles More Than You Think” by Jason Skowronski which performed amazingly well.

Culture and Methods

If there was an overarching theme to our culture and methods coverage this year it might best be summed up as “agile done wrong” and many of our items focused on issues with agile, and/or going back to the principles outlined in the Agile Manifesto.

We also saw continued interest in some of the big agile methodologies, notably Scrum, with both “Scrum and XP from the Trenches“, and “Kanban and Scrum – Making the Most of Both” performing well in our books department.

We also saw strong reader interest in remote working with Judy Rees’ eMag on “Mastering Remote Meetings“, and her corresponding podcast, performing well, alongside my own talk on “Working Remotely and Managing Remote Teams” from Aginext this year.

DevOps and Cloud

In our DevOps and Cloud trends report, we noted that Kubernetes has effectively cornered the market for container orchestration, and is arguably becoming the cloud-agnostic compute abstraction. The next “hot topics” in this space appear to be “service meshes” and developer experience/workflow tooling. We continue to see strong interest in all of these among InfoQ’s readers.

A trend we’re also starting to note is a number of languages which are either infrastructure or cloud-orientated. In our Programming Languages trends report, we noted increased interest and innovation related to infrastructure-aware or cloud-specific languages, DSLs, and SDKs like Ballerina and Pulumi. In this context we should also mention Dark, a new language currently still in private beta, but already attracting a lot of interest. Somewhat related, we should also mention the Ecstasy language, co-created by Tangosol founders Cameron Purdy and Gene Gleyzer. Chris Swan, CTO for the Global Delivery at DXC Technology, spoke to Cameron Purdy about the language and the problems it’s designed to solve.

Software Predictions for 2020

Making predictions in software is notoriously hard to do, but we expect to see enterprise development teams consolidate their cloud-platform choices as Kubernetes adoption continues. Mostly this will be focussed on the “big five” cloud providers – Amazon, Google, IBM (plus Red Hat), Microsoft, and VMware (plus Pivotal). We think that, outside China, Alibaba will struggle to gain traction, as will Oracle, Salesforce, and SAP.

In the platform/operations space we’re expecting that service meshes will become more integrated with the underlying orchestration frameworks (e.g. Kubernetes). We’re also hopeful that the developer workflow for interacting with service meshes becomes more integrated with current workflows, technologies, and pipelines.

Ultimately developers should be able to control deploy, release, and debugging via the same continuous/progressive delivery pipeline. For example, using a “GitOps” style pipeline to deploy a service by configuring k8s YAML (or some higher-level abstraction), controlling the release of the new functionality using techniques like canarying or shadowing via the configuration of some traffic management k8s custom resource definition (CRD) YAML, and enabling additional logging or debug tooling via some additional CRD config.
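As a hypothetical illustration of the kind of traffic-management CRD configuration this describes, here is a canary split sketched with Istio's VirtualService resource (the service and subset names are made up; the subsets would be defined in a companion DestinationRule):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: orders            # hypothetical service name
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: stable
          weight: 90      # 90% of traffic stays on the current release
        - destination:
            host: orders
            subset: canary
          weight: 10      # 10% is canaried to the new release
```

In a GitOps pipeline, promoting the canary is simply a commit that shifts the weights, which the cluster then reconciles.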

In regards to architecture, next year will hopefully be the year of “managing complexity”. Architectural patterns such as microservices and functions (as-a-service) have enabled developers to better separate concerns, implement variable rates of change via independent isolated deployments, and ultimately work more effectively at scale. However, our ability to comprehend the complex distributed systems we are now building — along with the availability of related tooling — has not kept pace with these developments. We’re looking forward to seeing what the open source community and vendors are working on in the understandability, observability, and debuggability space.

We expect to see more developers experimenting with “low code” platforms. This is partly fueled by a renewed push from Microsoft for its PowerApps, Flow, Power BI, and Power Platform products.

In the .NET ecosystem, we believe that Blazor will keep gaining momentum among web developers. .NET 5 should also bring significant changes to the ecosystem with the promised interoperability with Java, Objective-C, and Swift. Although it is early to say, Microsoft’s recent efforts on IoT and AI (with ML.NET) should also help to raise the interest in .NET development. Relatedly, we expect interest in Web Assembly to continue and hope that the tooling here will start to mature.

Despite the negative news around VR this year, we still think that something in the AR/VR space, or some other form of alternative computer/human interaction, is likely to come on the market in the next few years and gain significant traction, though it does seem that the form factor for this hasn’t really arrived.

Architecture, Cloud, Methodology, Microservices

What Is Cloud Native, Anyway?

Sam Newman, author of Building Microservices, opened the day’s talks by tackling the question directly. He reviewed the different ways that people define cloud native and unpicked whether they were useful.

“It’s a buzz-worthy term that has no real, good, formal definition,” says Sam. “A lot of people conflate many things in the term cloud native. In my talk, I came back down to the sub-characteristics we commonly talk about for cloud-based applications: they’ve got to be built for scale, there must be some concepts of fault tolerance, and they need to have high degrees of automation”.

While there are attempts to build vendor-agnostic, cloud native application platforms, even they are tied to a PaaS or “underlying container orchestrator”. So to be cloud native means to build for the specifics of your chosen platform. Whether it’s AWS, GCE or a private Kubernetes cluster, each has its own peculiarities.

Cloud native comes down to three things:

  • Build microservices: before anything else, move to microservices, as they’ll have the biggest impact
  • Use existing services: don’t re-implement anything that is available as a cloud-based service
  • Automate everything: automated testing, continuous integration, continuous delivery

“I think it’s probably an overloaded term,” says Nicki Watt. “For me, cloud native is really about making sure that the way you approach building your applications makes first-class citizens out of being able to efficiently utilize your platform to scale, distribute and manage your systems in an automated way. At the moment, that’s predominantly cloud based.”

Source: https://dzone.com/articles/what-is-cloud-native-anyway

Uncategorized, Devops

Think before using Configuration Management Tools for Infrastructure Provisioning

Source: http://www.neeleshgurjar.co.in/think-before-using-configuration-management-tools-for-infrastructure-provisioning/

These days, almost every Software development organization is trying to implement DevOps in their Software Development Lifecycle.

DevOps is getting accepted worldwide for its Software Delivery speed and reliability.

Infrastructure provisioning (or orchestration) and Configuration Management are the heart and soul of the DevOps toolchain.

Tools like Terraform and CloudFormation are used for Infrastructure Provisioning (IP). At the same time, Configuration Management (CM) already has a long list of tools such as Ansible, Salt, Puppet, etc.

Configuration Management tools can also spin up infrastructure; however, there are some drawbacks one has to face.

In the real world, due to lack of awareness or time pressure, organizations use CM tools to launch infrastructure.

In this short note, I will try to explain why CM tools are not recommended for launching infrastructure in the cloud.

Advantages of using CM tools for Provisioning Infrastructure:

  • Smaller learning curve: no need to learn a different technology and its syntax; one can write infrastructure code using the same CM syntax.
  • Fast integration with the Configuration Management tool: instances launched by the CM tool are easily added to its host database, with no third-party application or script needed to connect them. For example, if we spin up an instance with Salt Cloud, the target instance is added to the Salt master automatically.

These advantages attract organizations to the Config Management tool for provisioning as well. Before doing so, however, one should think about the points below.

Maintaining State:

Whenever we launch or modify any infrastructure, it creates a state. For example, if we initially launch 1 VPC, 2 subnets, and 4 EC2 instances, that becomes the first state, aka the baseline of our infrastructure. Any modification to any of these components generates a new state, similar to versioning source code.

  • Easy to roll back: as our previous states are maintained and versioned, we can easily roll back to an earlier state.
  • Easy to track changes: we can diff any two states at any time to see what changed between the old and new infrastructure.
  • Compliance requirements: versioned infrastructure is required by compliance regimes like FedRAMP, which can be fulfilled by maintaining states in code.
  • Easy to apply changes to any specific component of the infrastructure.

In my experience, if infrastructure is provisioned using a config management tool like Ansible or Salt Cloud, it is difficult or impossible to manage the state of the infrastructure. One might reply that they do not need this right now, but are you sure you will never need it? For example, you might later pursue FedRAMP or another compliance regime that requires versioning of infrastructure.

Integration of Infrastructure provisioning and Configuration Management tools:

Sometimes people avoid using an infrastructure provisioning tool because they think integrating the provisioned infrastructure with the configuration management tool will be difficult or troublesome. In reality, it is very easy to integrate the two. We can use features like “User Data” on AWS to achieve this: write a few bootstrapping commands or a script that installs the CM client on the target machine and connects it to the CM master. It is just a matter of 10-15 lines of shell script.
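A sketch of what such a User Data bootstrap could look like, here enrolling a Salt minion via Salt's bootstrap script (the master hostname is a made-up example; adapt to your own CM tool):

```sh
#!/bin/sh
# Hypothetical EC2 "User Data" script: install a CM client (a Salt minion
# here) and point it at the CM master so the new instance enrolls itself.
set -e

CM_MASTER="salt-master.example.internal"   # assumption: your internal DNS name

# Install the minion using Salt's bootstrap script
curl -L https://bootstrap.saltproject.io -o /tmp/bootstrap-salt.sh
sh /tmp/bootstrap-salt.sh

# Point the minion at the master and identify this instance
echo "master: ${CM_MASTER}" > /etc/salt/minion.d/master.conf
echo "id: $(hostname -f)"   > /etc/salt/minion.d/id.conf

systemctl restart salt-minion
```

With Terraform, this script would simply be passed as the `user_data` attribute of the instance resource, so provisioning and CM enrollment happen in one apply.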

Infrastructure inventory management:

If infrastructure is spawned by Provisioning tool like CloudFormation or Terraform, then it is easy to keep track of all inventory.

We can easily generate a CSV report from Terraform state files as well. As Configuration Management tools do not maintain the state of provisioned infrastructure, we cannot manage an infrastructure inventory with them.

Importing current infrastructure:

We can import currently running infrastructure in the form of code using provisioning tools, but this is not possible with Configuration Management tools.

Limitation for launching various components:

Terraform or CloudFormation can provision various AWS services like S3 buckets, IAM roles, VPCs, Security Groups, etc. very easily. These components cannot be launched as easily with Configuration Management tools; for example, Salt Cloud cannot provision a VPC, S3 bucket, or Security Group out of the box. One would need to develop a module for this.

Conclusion:

Configuration Management tools are best when it comes to applying desired configurations to a target machine or group of machines.

But when you think about provisioning, there are powerful tools available on the market. Using them, you not only get granular control over your infrastructure but can also audit, track, and do much more with it.

There is also a basic requirement today for tracking changes in infrastructure, which can easily be achieved with provisioning tools such as Terraform and CloudFormation. CloudFormation is also coming up with Drift Detection.

Please think about compliance requirements that may arise in the future, and try to be ready for them from the beginning.

I hope this article helps you decide on a tool for provisioning infrastructure.
Please do let me know your suggestions on ngurjar[at]neeleshgurjar[dot]co[dot]in

#CloudFormation #Terraform #ConfigurationManagement_vs_InfrastructureProvisioning #Terraform_vs_Ansible

Agile

[Note] Agile Project Management – Part 0: Introduction

This is my note when reading Agile Project Management For Dummies, 2nd Edition. The content is mainly distilled and copied from original material.

First of all, let's take a quick look at the Agile Project Management For Dummies Cheat Sheet.

From Agile Project Management For Dummies, 2nd Edition

By Mark C. Layton

A Manifesto for Agile Software Developers

The Manifesto for Agile Software Development, commonly known as the Agile Manifesto, is an intentionally streamlined expression of the core values of agile project management. Use this manifesto as a guide to implement agile methodologies in your projects.

“We are uncovering better ways of developing software by doing it and helping others do it. Through this work, we have come to value:

  • Individuals and interactions over processes and tools

  • Working software over comprehensive documentation

  • Customer collaboration over contract negotiation

  • Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.”

©Agile Manifesto Copyright 2001: Kent Beck, Mike Beedle, Arie van Bennekum, Alistair Cockburn, Ward Cunningham, Martin Fowler, James Grenning, Jim Highsmith, Andrew Hunt, Ron Jeffries, Jon Kern, Brian Marick, Robert C. Martin, Steve Mellor, Ken Schwaber, Jeff Sutherland, Dave Thomas.

This declaration may be freely copied in any form, but only in its entirety through this notice.


The 12 Agile Principles

The Principles behind the Agile Manifesto, commonly referred to as the 12 Agile Principles, are a set of guiding concepts that support project teams in implementing agile projects. Use these principles as a litmus test to determine whether or not you’re being agile in your project work and thinking:

  1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

  4. Business people and developers must work together daily throughout the project.

  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

  6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

  7. Working software is the primary measure of progress.

  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

  9. Continuous attention to technical excellence and good design enhances agility.

  10. Simplicity — the art of maximizing the amount of work not done — is essential.

  11. The best architectures, requirements, and designs emerge from self-organizing teams.

  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.


The Agile Platinum Edge Roadmap to Value

The Roadmap to Value is a high-level view of an agile project. The stages of the Roadmap to Value are described in the following list:


  • In Stage 1, the product owner identifies the product vision. The product vision is a definition of what your product is, how it will support your company or organization’s strategy, and who will use the product. On longer projects, revisit the product vision at least once a year.

  • In Stage 2, the product owner creates a product roadmap. The product roadmap is a high-level view of the product requirements, with a loose time frame for when you will develop those requirements. Identifying product requirements and then prioritizing and roughly estimating the effort for those requirements are a large part of creating your product roadmap. On longer projects, revise the product roadmap at least twice a year.

  • In Stage 3, the product owner creates a release plan. The release plan identifies a high-level timetable for the release of working software. An agile project will have many releases, with the highest-priority features launching first. A typical release includes three-to-five sprints. Create a release plan at the beginning of each release.

  • In Stage 4, the product owner, the scrum master, and the development team plan sprints, also called iterations, and start creating the product within those sprints. Sprint planning sessions take place at the start of each sprint, when the scrum team determines what requirements will be in the upcoming iteration.

  • In Stage 5, during each sprint, the development team has daily meetings. In the daily meeting, you spend no more than 15 minutes and discuss what you completed yesterday, what you will work on today, and any roadblocks you have.

  • In Stage 6, the team holds a sprint review. In the sprint review, at the end of every sprint, you demonstrate the working product created during the sprint to the product stakeholders.

  • In Stage 7, the team holds a sprint retrospective. The sprint retrospective is a meeting where the team discusses how the sprint went and plans for improvements in the next sprint. Like the sprint review, you have a sprint retrospective at the end of every sprint.


Agile Project Management Roles

It takes a cooperative team of people to successfully complete a project. Agile project teams are made up of many people and include the following five roles:

  • Product owner: The person responsible for bridging the gap between the customer, business stakeholders, and the development team. The product owner is an expert on the product and the customer’s needs and priorities. The product owner works with the development team daily to help clarify requirements and shields them from organizational noise. The product owner is sometimes called a customer representative. The product owner, above all, should be empowered to be decisive, making tough business decisions every day.

  • Development team members: The people who create the product. In software development, programmers, testers, designers, writers, data engineers, and anyone else with a hands-on role in product development are development team members. With other types of product, the development team members may have different skills. Most importantly, development team members should be versatile, able to contribute in multiple ways to the project’s goals.

  • Scrum master: The person responsible for supporting the development team, clearing organizational roadblocks, and keeping the agile process consistent. A scrum master is sometimes called a project facilitator. Scrum masters are servant leaders, and are most effective when they have organizational clout, which is the ability to influence change in the organization without formal authority.

  • Stakeholders: Anyone with an interest in the project. Stakeholders are not ultimately responsible for the product, but they provide input and are affected by the project’s outcome. The group of stakeholders is diverse and can include people from different departments, or even different companies. For agile projects to succeed, stakeholders must be involved, providing regular feedback and support to the development team and product owner.

  • Agile mentor: Someone who has experience implementing agile projects and can share that experience with a project team. The agile mentor can provide valuable feedback and advice to new project teams and to project teams that want to perform at a higher level. Although agile mentors are not responsible for executing product development, they should be experienced in applying agile principles in reality and be knowledgeable about many agile approaches and techniques.


Agile Project Management Artifacts

Project progress needs to be transparent and measurable. Agile project teams often use six main artifacts, or deliverables, to develop products and track progress, as listed here:

  • Product vision statement: An elevator pitch, or a quick summary, to communicate how your product supports the company’s or organization’s strategies. The vision statement must articulate the goals for the product.

  • Product roadmap: The product roadmap is a high-level view of the product requirements needed to achieve the product vision. It also enables a project team to outline a general timeframe for when you will develop and release those requirements. The product roadmap is a first cut and high-level view of the product backlog.

  • Product backlog: The full list of what is in the scope for your project, ordered by priority. After you have your first requirement, you have a product backlog.

  • Release plan: A high-level timetable for the release of working software.

  • Sprint backlog: The goal, user stories, and tasks associated with the current sprint.

  • Increment: The working product functionality, demonstrated to stakeholders at the end of the sprint, which is potentially shippable to the customer.


Agile Project Management Events

Most projects have stages. Agile projects include seven recurring events for product development:

  • Project planning: The initial planning for your project. Project planning includes creating a product vision statement and a product roadmap, and can take place in as little time as one day.

  • Release planning: Planning the next set of product features to release and identifying an imminent product launch date around which the team can mobilize. On agile projects, you plan one release at a time.

  • Sprint: A short cycle of development, in which the team creates potentially shippable product functionality. Sprints, sometimes called iterations, typically last between one and four weeks. Sprints can last as little as one day, but should not be longer than four weeks. Sprints should remain the same length throughout the entire project, which enables teams to plan future work more accurately based on their past performance.

  • Sprint planning: A meeting at the beginning of each sprint where the scrum team commits to a sprint goal. They also identify the requirements that support this goal and will be part of the sprint, and the individual tasks it will take to complete each requirement.

  • Daily scrum: A 15-minute coordination and synchronization meeting held each day in a sprint, where development team members state what they completed the day before, what they will complete on the current day, and whether they have any roadblocks.

  • Sprint review: A meeting at the end of each sprint, introduced by the product owner, where the development team demonstrates the working product functionality it completed during the sprint to stakeholders, and the product owner collects feedback for updating the product backlog.

  • Sprint retrospective: A meeting at the end of each sprint where the scrum team inspects and adapts their processes, discussing what went well, what could change, and makes a plan for implementing changes in the next sprint.

Source: https://www.dummies.com/careers/project-management/agile-project-management-for-dummies-cheat-sheet/

DevOps

CICD, Continuous delivery pipeline with Docker Compose and Jenkins


Continuous Delivery and DevOps are well known and widely spread practices nowadays. Kubernetes and Docker Swarm are two of the most powerful container-orchestration platforms.

Kubernetes and Spinnaker create a robust continuous delivery flow that helps to ensure your software is validated and shipped quickly.

This tutorial shows you how to create a continuous delivery pipeline using Docker Compose and Jenkins Pipeline. Kubernetes is quite opinionated, hard to set up, and quite different from the Docker CLI. With Docker Compose you can run local development, deploy to dev/test, and with very little change (removing any volume bindings, binding to different ports, setting environment variables differently) you can run on a Docker Swarm cluster – so your local, dev, staging and production environments are almost identical. This tutorial is based on my previous POC: Event driven microservices architecture using Spring Cloud

So you are going to start 10 Spring Boot applications, 5 MongoDB instances, Jenkins, RabbitMQ, SonarQube, Zuul Gateway, Config Server, OAuth2 Server, Eureka Discovery and Hystrix Circuit Breaker. And best of all, this complete continuous delivery pipeline (>20 containers) costs me only $100 per month – ask me how.

Setting up Jenkins – see Jenkins online

sudo docker run -u root --rm -d -p 49001:8080 -p 50000:50000 -v jenkins-data:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean

Add Jenkins plugins: HTTP Request Plugin, JaCoCo plugin, Mask Passwords Plugin, Publish Over SSH, Sonar Quality Gates Plugin, SonarQube Scanner for Jenkins, SSH Agent Plugin, SSH plugin.

Setting up SonarQube – see SonarQube online

sudo docker run -d --name sonarqube -p 9091:9000 -p 9092:9092 sonarqube
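Since the rest of the tutorial leans on Docker Compose anyway, the two `docker run` commands above could equally be captured in one small compose file. A minimal sketch, assuming the same images, ports and volume names as the commands above (the service names are my own choices):

```yaml
# Hypothetical docker-compose.yml for the CI tooling itself;
# mirrors the two `docker run` commands above.
version: "3"
services:
  jenkins:
    image: jenkinsci/blueocean
    user: root
    ports:
      - "49001:8080"
      - "50000:50000"
    volumes:
      - jenkins-data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock   # lets Jenkins drive the host Docker daemon
  sonarqube:
    image: sonarqube
    ports:
      - "9091:9000"
      - "9092:9092"
volumes:
  jenkins-data:
```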

Now, we are ready to build our CI/CD pipeline. You can manage the Jenkinsfile using git instead of using the Jenkins UI.

First, we need to set up credentials in Jenkins for GitHub, DockerHub and SSH access to the server where you plan to run your Docker containers.

Writing Your Jenkins Job

We will create nine stages: Preparation, Build & Unit Test, JaCoCo, SonarQube Analysis, SonarQube Quality Gate, Publish to DockerHub, Deploy, HealthCheck, Integration Test.


The Preparation stage will clone your code, the Build stage will build your code and run unit tests, the Publish stage will push the Docker image to your registry, and the Deploy stage will deploy your containers. That’s how it will look in Jenkins when you are done.
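Stitched together, the nine stages above form one scripted Jenkinsfile. This skeleton is a sketch only – the stage names come from the list above and the bodies are filled in stage by stage in the rest of the article:

```groovy
// Scripted-pipeline skeleton covering the nine stages; bodies elided.
node {
    stage('Get code from GitHub')              { /* clone the repository */ }
    stage('Build and Unit Test')               { /* gradle build + unit tests */ }
    stage('JaCoCo Report')                     { /* publish coverage */ }
    stage('SonarQube analysis')                { /* static analysis */ }
    stage('SonarQube Quality Gate')            { /* break the build on failure */ }
    stage('Publish Images to Hub')             { /* push Docker images */ }
    stage('Deploy Images with Docker-Compose') { /* docker-compose up on the target host */ }
    stage('HealthCheck')                       { /* probe the HTTP health endpoint */ }
    stage('Integration Test')                  { /* end-to-end tests */ }
}
```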

Preparation Stage: Clone the project’s code.

stage('Get code from GitHub') { 
      // Get code from a GitHub repository
      git 'https://github.com/sbruksha/Microservices-platform.git'
   }

Build and Unit Test Stage

stage('Build and Unit Test') {
      // Run build and test
      sh '''cd $WORKSPACE/account-service && ./gradlew clean build jacocoTestReport
    cd $WORKSPACE/appointment-service && ./gradlew clean build jacocoTestReport
    cd $WORKSPACE/auth-server && ./gradlew clean build jacocoTestReport
    ........
    '''
   }

Run JaCoCo Stage

stage('JaCoCo Report') {
      jacoco exclusionPattern: '**/test/**,**/lib/*', inclusionPattern: '**/*.class,**/*.java'
   }

After that step executes, you can see the JaCoCo coverage reports in the Jenkins job.

SonarQube Analysis Stage

stage('SonarQube analysis') { 
        withSonarQubeEnv('Sonar') { 
          sh '''cd $WORKSPACE/appointment-service && ./gradlew sonarqube \
 -Dsonar.host.url=http://dev.eodessa.com:9091 \
 -Dsonar.login=xxxxxxxxxxxxxxxxxxxxxxxxxxxx \
 -Dsonar.sources=src/main \
 -Dsonar.java.binaries=build/classes \
 -Dsonar.java.libraries=build/libs/*.jar
 ........'''
        }
   }

Quality cannot be injected after the fact; it must be part of the process from the very beginning. It is strongly recommended to inspect the code and make findings visible as soon as possible. As part of the pipeline, the code is inspected, and only if the code is fine and meets the quality gates are the built artifacts uploaded to the binary repository manager.

We can open the SonarQube web application and drill down to each finding:

As part of a Jenkins pipeline stage, SonarQube is configured to run and inspect the code. But this is just the first part, now we add the quality gate in order to break the build if code quality is not met.

SonarQube Quality Gate Stage

stage("SonarQube Quality Gate") { 
        timeout(time: 1, unit: 'HOURS') { 
           def qg = waitForQualityGate() 
           if (qg.status != 'OK') {
             error "Pipeline aborted due to quality gate failure: ${qg.status}"
           }
        }
   }

Specifically, the waitForQualityGate step pauses the pipeline until the SonarQube analysis is completed and returns the quality gate status. Note that this step relies on a webhook configured in SonarQube to notify Jenkins when the analysis is done.

Publish Images to DockerHub Stage

stage('Publish Images to Hub') {
   		withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: '5e1c35ab-1404-4165-b224-8894cc70', usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS'],]) {
        sh '''docker login -u ${DOCKER_USER} -p ${DOCKER_PASS}
    cd $WORKSPACE/account-service && ./gradlew build publishImage
    cd $WORKSPACE/appointment-service && ./gradlew build publishImage
    cd $WORKSPACE/auth-server && ./gradlew build publishImage
    ....................'''
    	}
   }

Make sure the credentialsId matches a username/password credential stored in Jenkins; the withCredentials block binds it to the DOCKER_USER and DOCKER_PASS variables used by docker login.

Deploy Images with Docker Compose Stage

stage('Deploy Images with Docker-Compose') {
        build 'EventDrivenPlatform-Dev-Deploy'
   }

The Deploy stage runs the “EventDrivenPlatform-Dev-Deploy” job, which executes a shell script on the remote host over SSH: sudo docker-compose -f docker-compose.yml up -d

With Docker Compose you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running. You can create different Jenkins jobs for different environments: dev, stage, prod. The main job of Docker Compose here is describing the microservice architecture – the containers and the links between them. After this stage is completed you can see:

  • All your containers are running
  • Eureka discovery server http://dev.eodessa.com:8761/
  • Hystrix Dashboard, Zuul Gateway, RabbitMQ, ConfigServer, AuthServer, etc..
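To give a flavor of that single file, here is a trimmed, hypothetical fragment for this platform. The service names follow the article, but the image names, ports and environment variable are assumptions – the real file lives in the linked repository:

```yaml
# Hypothetical fragment of the platform's docker-compose.yml (illustrative only).
version: "3"
services:
  eureka:
    image: sbruksha/eureka-server          # assumed image name
    ports:
      - "8761:8761"
  account-service:
    image: sbruksha/account-service        # assumed image name
    environment:
      - EUREKA_URI=http://eureka:8761/eureka   # discovery via compose DNS; assumed variable
    depends_on:
      - eureka
      - account-db
  account-db:
    image: mongo                           # one of the 5 MongoDB instances
```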

HealthCheck Stage

stage('HealthCheck') {
      httpRequest responseHandle: 'NONE', url: 'http://dev.eodessa.com/health'
   }
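The same probe can be reproduced outside Jenkins with curl – a minimal sketch, assuming the /health endpoint returns HTTP 200 when the platform is up (the URL and retry count below are placeholders, not part of the original pipeline):

```shell
#!/bin/sh
# Poll a health endpoint until it returns HTTP 200 or retries run out.
check_health() {
  url="$1"
  retries="${2:-5}"
  i=1
  while [ "$i" -le "$retries" ]; do
    # -s silent, -o discard body, -w print only the HTTP status code
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null)
    status="${status:-000}"                 # treat a failed curl as status 000
    if [ "$status" = "200" ]; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
  done
  echo "unhealthy"
  return 1
}
```

Usage: check_health http://dev.eodessa.com/health 5 || exit 1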

Integration Test Stage

stage('Integration Test') {
      sh '''cd $WORKSPACE/test-integration && ./gradlew clean test'''
   }

After creating the pipeline, save your script and hit Build Now on the project home of your Jenkins dashboard.

Each stage’s output is accessible from the build overview: hover over a stage cell and click the Logs button to see the log messages for that step.

You can also drill into the details of the code analysis: the JaCoCo report and the SonarQube dashboard. When you have a successful build, you can see the project running online.

All source code for this post is located in GitHub repository.

Source: https://www.linkedin.com/pulse/cicd-continuous-delivery-pipeline-docker-compose-jenkins-bruksha?trk=related_artice_CICD%2C%20Continuous%20delivery%20pipeline%20with%20Docker%20Compose%20and%20Jenkins_article-card_title