Posted on April 23, 2024 by cprime-admin
Part 1: GitLab Overview
- What is GitLab?
- Sequential DevOps vs. Concurrent DevOps
- Concurrent DevOps with GitLab
- GitLab Flows
- GitLab Recommended Process
- GitLab Workflow Components
- Demo Exercises: GitLab Features
Part 2: GitLab Components and Navigation
- GitLab Organization
- GitLab Epics
- Issue: The Starting Point for Your Workflow
- Issue Organization
- GitLab Workflow Example
- Demo Exercises: GitLab Navigation
- Hands-On Labs: Create a Project & Issue
Part 3: Git Basics
- What is Git?
- Git Key Terms
- Why Git is so popular
- Centralized vs. Distributed
- Basic Git workflow within GitLab
- Common Commands (see the sketch after this list)
- Demo Exercises: Working Locally with Git
- Hands-On Labs: Working Locally with Git
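A minimal sketch of the local workflow these demos and labs cover, assuming a GitLab remote; the project URL and branch name are placeholders:

```bash
# Clone the project and create a feature branch (URL is a placeholder)
git clone git@gitlab.com:example-group/example-project.git
cd example-project
git checkout -b feature/readme-note

# Stage and commit a change locally
echo "A note" >> README.md
git add README.md
git commit -m "Add a note to the README"

# Inspect state and history, then publish the branch to GitLab
git status
git log --oneline
git push -u origin feature/readme-note
```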
Part 4: Basic Code Creation in GitLab
- Code Review – Typical Workflow
- Code Review Workflow – GitLab Tools to Use
- Additional Tools for Code Review & Collaboration
- Demo Exercises: Merge Request in GitLab
- Demo Exercises: Assigning, Reviewing, and Approving in GitLab
- Demo Exercises: Additional Tools for working with code
- Hands-On Labs: Code Creation and Review
Part 5: GitLab's CI/CD Functions
- What is CI/CD?
- CI/CD Advantages
- Concurrent DevOps lifecycle
- CI/CD Features in GitLab
- CI/CD Automated tasks
- GitLab CI/CD Key Ingredients
- Anatomy of a CI/CD Pipeline (a minimal sketch follows this list)
- Demo Exercises: CI/CD Examples
- Hands-on Labs: CI/CD Pipelines
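As a companion to "Anatomy of a CI/CD Pipeline," here is a minimal sketch of a `.gitlab-ci.yml` with the conventional build/test/deploy stages; the script lines are placeholders for a real project's commands:

```bash
# Write a minimal pipeline definition at the repository root;
# GitLab runs it on every push once a runner is available.
cat > .gitlab-ci.yml <<'EOF'
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  script:
    - echo "Compile or package the application here"

test-job:
  stage: test
  script:
    - echo "Run the test suite here"

deploy-job:
  stage: deploy
  script:
    - echo "Deploy the artifact here"
  environment: production
EOF
git add .gitlab-ci.yml && git commit -m "Add a minimal CI/CD pipeline"
```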
Part 6: GitLab's Package and Release Features
- What are Package and Container Registries?
- Release Features in GitLab
- What is Auto DevOps?
- Demo Exercises: Auto DevOps and Interactive Web Terminal
Part 7: GitLab Security Scanning
- Demo Exercises: Using SAST Templates
- Hands-On Labs: How to run a SAST scan (see the sketch after this list)
- Hands-On Labs: View the scanning reports in the Security Dashboard
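For the SAST labs, enabling a scan is mostly a one-line template include. A sketch, assuming the project already has a `.gitlab-ci.yml`:

```bash
# Append GitLab's managed SAST template to the existing pipeline;
# findings then appear in the Security Dashboard after the pipeline runs.
cat >> .gitlab-ci.yml <<'EOF'

include:
  - template: Security/SAST.gitlab-ci.yml
EOF
git add .gitlab-ci.yml && git commit -m "Enable SAST scanning" && git push
```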
Posted on October 21, 2021 by cprime-admin
Part 1: GitLab Overview and Organization
- GitLab Workflow Components
- Planning a Project
- The Plan Stage: Project Management
- GitLab Organization – Groups, Projects, and Issues, oh my!
Part 2: GitLab Plan Stage
- Organizing the Work
- Planning the Work
- Doing the Work
- Document the Work
- GitLab Agile Planning Structures
Part 3: Planning Functions in GitLab
- Issues: The Starting Point for your Workflow
- Issues Keep Everyone Connected and Synchronized
- What Can you Expect on an Issue Page?
- Hands-On Lab: Issues and Labels
- Hands-On Lab: Quick Actions
- Why Milestones are Important
- The Use of Kanban Boards
- GitLab's Service Desk
- Utilizing Wikis
- Hands-On Lab: Create a Wiki Page
- Hands-On Lab: Epics and Kanban Boards
- Project Management Review Quiz
- Hands-On Lab: Practice Lab 1
- Hands-On Lab: Practice Lab 2
Posted on June 25, 2021 by cprime-admin
Part 1: Introduction
- Required knowledge
- Git: Committing code and creating pull requests
- Kubernetes: Deploying a service to Kubernetes and basic checks with kubectl
- Docker: Pushing an image to a Docker repository
- CI/CD: GitOps reverses the traditional understanding of continuous integration/continuous deployment.
- Core concepts: A quick introduction
- Immutable infrastructure
- Infrastructure as code
- Orchestration
- Convergence
- CI/CD
- What GitOps is not
- GitOps is not infrastructure as code.
- GitOps doesn't replace continuous integration (CI).
- The use case: Deploying a highly available microservice
- Deploying a microservice to Kubernetes, with all the surrounding infrastructure to make it available
Part 2: Setting up the Tools
- Kubernetes
- In advance: Setting up a cluster from scratch is time-consuming, even with a managed solution like EKS, so pre-allocate a cluster per person before the session.
- Preparation: Setting up kubectl
- Every participant should have credentials to connect to the cluster using kubectl.
- Preparation: Access a cluster through kubectl/k9s.
- Check running pods.
- Check deployments.
- Repository
- Preparation: Infrastructure repository
- An empty repository in GitHub/GitLab to use for deploying infrastructure
- Application repository
- Strictly speaking, you don't need to separate the application and infrastructure, but it's easier to understand what goes where this way.
- A sample application that serves a web server with a hello world response as the baseline (NodeJS-based, for instance)
- ArgoCD
- Why ArgoCD?
- ArgoCD is tightly integrated with Kubernetes and closely follows the GitOps mindset. Therefore, it's a good tool to showcase GitOps.
- Exercise: Add ArgoCD to the cluster. (A CLI sketch follows this list.)
- Create namespace.
- Deploy ArgoCD to the cluster.
- Access ArgoCD using the CLI.
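A sketch of this setup, using ArgoCD's upstream install manifest; version pinning and proper ingress are omitted for brevity, and the initial-password command can differ between ArgoCD versions:

```bash
# Create the namespace and install ArgoCD from the official manifests
kubectl create namespace argocd
kubectl apply -n argocd -f \
  https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Sanity checks: pods and deployments should be Ready within a few minutes
kubectl get pods -n argocd
kubectl get deployments -n argocd

# Reach the API server locally and log in with the CLI
kubectl port-forward svc/argocd-server -n argocd 8080:443 &
argocd admin initial-password -n argocd
argocd login localhost:8080 --username admin --insecure
```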
Part 3: Deploying a Microservice
- Exercise: Prepare a simple microservice to be deployed in k8s.
- Build the sample application as a Docker container (a Dockerfile can be provided in advance)
- Push the image to a Docker registry (your cloud provider's registry, docker.io, or quay.io)
- Exercise: Create a k8s deployment. (A combined CLI sketch appears at the end of this part.)
- Create a Kubernetes deployment definition in code for the application (here's a sample).
- Push code to infrastructure repository.
- Create an application in ArgoCD.
- This time, you'll use the ArgoCD CLI so you can see that side of the tool. From here on, you'll move to Git, which is more aligned with GitOps.
- Sync the application.
- Again, use the CLI.
- Test: Use kubectl to check that the deployment works.
- Automated synchronization
- Pull versus push: How ArgoCD can read from a repository and automatically apply the changes
- Exercise: Activate synchronization so that further changes happen when you push code to the infrastructure repository.
- Exercise: Create a k8s service. (A deployment alone doesn't expose the microservice, so let's build on that.)
- Create service definition.
- Pull request
- This is an opportunity to introduce the pull request aspect of the flow. You can extend pull requests so that extra checks are performed, using something like GitHub Actions.
- Implementing CI with something like GitHub Actions isn't part of the exercise, although it's something that you can complete as an extra exercise. (See the bonus section at the end of this post.)
- Test: Check with kubectl that the service was deployed.
- Exercise: Create a load balancer. (You still can't access the service from the outside.)
- Create a LoadBalancer k8s service definition for your cloud provider.
- Pull request
- Test: curl the load balancer address to ensure that the service is actually online.
- Exercise: Update the application.
- Change something in the application, such as the body of a route's response.
- Rebuild container with a new tag and push it to Docker registry.
- Update k8s deployment to use new tag.
- Pull request
- Test: New version of the app should be deployed.
- Exercise: Update the infrastructure. (Why do this? To demonstrate that the boundary between application changes and infrastructure changes is blurry.)
- Update k8s deployment to be highly available (more than one replica).
- Pull request
- Test: kubectl shows that there are multiple pods running.
- Wrap-up: This covers the workflow of deploying an application and then performing updates and changes on it.
- This is the core of GitOps!
- There are also other, more advanced use cases to cover.
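A condensed sketch of this part's exercises (the CLI sketch referenced above), assuming the container was pushed as `registry.example.com/hello:v1` and the manifests live in an infrastructure repository that ArgoCD can reach; every name and URL is a placeholder:

```bash
# In the infrastructure repository: a minimal Deployment plus Service
cat > hello.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels: {app: hello}
  template:
    metadata:
      labels: {app: hello}
    spec:
      containers:
        - name: hello
          image: registry.example.com/hello:v1   # placeholder image
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer    # exposed through the cloud provider
  selector: {app: hello}
  ports: [{port: 80, targetPort: 8080}]
EOF
git add hello.yaml && git commit -m "Deploy hello service" && git push

# Register the app in ArgoCD once via the CLI; later changes go through Git
argocd app create hello \
  --repo https://gitlab.com/example-group/infra.git \
  --path . \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated
argocd app sync hello

# Verify the rollout and the external endpoint
kubectl get deployment hello && kubectl get pods -l app=hello
curl "http://$(kubectl get svc hello -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
```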
Part 4: Promoting Changes Through Different Environments
- In advance: Prepare a second cluster.
- As with the first cluster, this is something to have prepared in advance.
- Preparation: Register the cluster in ArgoCD to allow deployments to it.
- From development to production
- Which options are there to represent different stages?
- This is an open discussion; there is no set recipe for environment promotion. Common options:
- Use different infrastructure repositories.
- Use different folders in the same infrastructure repository.
- Use branches.
- Exercise: Promotion of a version
- Set up a second cluster (production) to read from a different folder.
- Copy the infrastructure created in the first folder into this one.
- Pull request
- Test: Second cluster should have the service available as well.
- More advanced deployment scenarios (Controlled rollout is an important part of releasing changes, especially to production. It's worth discussing the options you have, which can be based on the exact same building blocks explained before.)
- Exercise: Blue/Green (a setup sketch follows at the end of this part)
- Enable Argo Rollouts in cluster.
- Install argo-rollouts plugin for kubectl.
- Create rollout to apply to existing microservice.
- Test: Observe the blue/green rollout with kubectl.
- Canary release
- Theory only (This can be a good lead-in to a discussion of the merits and tradeoffs of different deployment strategies.)
- Exercise: Error handling (This exercise shows that failure in infra deployment is expected and is handled through code changes—not panicked actions!)
- Introduce an error in the hello world application (this results in a thrown exception instead of starting the webserver).
- Rebuild the container with a new tag and push it to Docker registry.
- Update k8s deployment to use new tag.
- Pull request
- Test: Confirm with kubectl that deployment is failing.
- Revert a failed change through code.
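A setup sketch for the Blue/Green exercise, following the upstream Argo Rollouts install steps; converting the Deployment into a Rollout with a blueGreen strategy is left to the exercise itself:

```bash
# Install the Argo Rollouts controller into its own namespace
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f \
  https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

# Install the kubectl plugin (Linux binary shown; adjust for your platform)
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x kubectl-argo-rollouts-linux-amd64
sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts

# Once the Rollout exists, watch the new version come up beside the old one
kubectl argo rollouts get rollout hello --watch
```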
Part 5: Security in GitOps
- Accessing resources
- kubectl access shouldn't replace observability such as logging and monitoring, just as SSH into a production server shouldn't
- Secrets
- No plaintext secrets should ever be stored in Git.
- Vault
- This is theory only because it's probably too much to do for a practical exercise.
- Exercise: Sealed secrets (a CLI sketch follows this part)
- Depending on time, this can be treated as theory or as an exercise; you can also split it into two parts.
- Modify microservice to read the secret and make it available through a request.
- Provision secrets in the infrastructure repository.
- Use secrets from either the cluster or the application.
- Install the sealed secrets controller.
- Inject an encrypted secret in the infrastructure repository.
- Modify Kubernetes deployment to inject a secret into the microservice.
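A sketch of the sealed-secrets flow, assuming the Bitnami sealed-secrets controller and the kubeseal CLI; the release version in the URL is a placeholder, so pin whatever is current:

```bash
# Install the sealed-secrets controller (placeholder version; pin a real one)
kubectl apply -f \
  https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.27.0/controller.yaml

# Encrypt a secret locally; only the sealed form is committed to Git
kubectl create secret generic hello-secret \
  --from-literal=API_TOKEN=supersecret \
  --dry-run=client -o yaml \
  | kubeseal -o yaml > hello-sealed-secret.yaml

git add hello-sealed-secret.yaml
git commit -m "Add sealed secret for the hello service"
git push    # ArgoCD applies it; the controller decrypts it in-cluster
```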
Part 6: Recap
- Core concepts: Infra as code, Git as the source of truth, pull model, converging changes
- Core flow: declare infrastructure, commit it, pull request for review, merge to apply
- Next steps
- Automated promotion (If a deployment to a staging environment succeeds, then trigger a deployment to production.)
- Observability (microservices that export metrics, logging aggregator, and monitoring)
Bonus exercises
- Replace manual steps (push Docker container, build application code) with CI, such as GitHub Actions.
- Introduce templating (Jsonnet, Helm) to foster reuse of Kubernetes resources.
- Exercise: Parameterize deployment so that port can be defined as configuration. Deploy a second copy of the same service with a different name, running on a different port.
- Install more advanced resources in the cluster using the same mechanism, such as an ingress controller.
Posted on January 27, 2021 by cprime-admin
Part 1: Getting Started
- Introductions
- Course Goal
- Team Agreement
Part 2: Azure DevOps Overview
- Hierarchy
- List
- Board
- Backlog
- Work items
Part 3: Agile & Scrum Review
- Manifesto
- Principles
- Scrum Overview
Part 4: Accessing Azure DevOps
- Login
- Navigation
Part 5: Managing Iterations
- Configure the time box iteration
- Setting the iteration goal
- Configure Team Capacity
Part 6: Backlog Hierarchy
- Product backlog
- Attributes
- Epics, Features, Stories
- Managing Work Items
- Adding Stories
- Linking Epics to Features to Stories to Tasks
- Prioritization guidelines
- Adding Priority
- Estimating Guidelines
- Add Estimates
- Task Breakdown
- Adding Tasks
- Adding PBIs to Iterations
- Blocking Tasks
- Kanban Overview
Part 7: Queries
- Creating Queries
- Running Queries
Part 8: Wikis
- Creating
- Editing
Part 9: Dashboards
- Configure Widgets
- Creating the Dashboard
Posted on December 1, 2020 by cprime-admin
Part 1: Infrastructure as Code
In this section, we will introduce the benefits that Infrastructure as Code (IaC) can bring to organizations and how IaC fits within modern DevOps best practices.
- Motivation for Infrastructure as Code
- Applying Infrastructure as Code in DevOps
- Infrastructure as Code principles and best practices
- Benefits of Infrastructure as Code
- The case for Terraform
Part 2: Terraform Overview
This section provides an overview of Terraform concepts and vocabulary and explains how Terraform manages infrastructure configuration in cloud environments. A minimal CLI sketch follows the lab list below.
- Terraform architecture
- Terraform configuration language overview
- Terraform CLI
- The lifecycle of a configuration
- Managing configuration state
Hands-on Labs:
- Using the Terraform CLI
- Setting up a Terraform project
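A minimal sketch of that lifecycle, using the null provider so it runs without any cloud credentials:

```bash
# A one-resource configuration that needs no cloud account
mkdir tf-demo && cd tf-demo
cat > main.tf <<'EOF'
terraform {
  required_providers {
    null = { source = "hashicorp/null" }
  }
}

resource "null_resource" "example" {}
EOF

# The core lifecycle: initialize, preview, apply, inspect state, destroy
terraform init
terraform plan
terraform apply -auto-approve
terraform state list
terraform destroy -auto-approve
```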
Part 3: AWS Resources
In this section, participants will get hands-on practice using Terraform to create a simple application environment in AWS and learn the essential constructs in Terraform for defining resources. A sample resource configuration follows the lab list below.
- Resource types
- Best practices in declaring resources
- Network resources (VPC, subnet, security group)
- Compute resources (virtual machine)
- Storage resources (database)
- Local values in a configuration
- Augmenting a configuration with data sources
Hands-on Labs:
- Creating a VPC and subnets
- Adding a virtual machine into your VPC
- Adding a database to your VPC
- Using locals for replicated values
- Using a data source to read external configuration
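A sketch of the network resources from this part in Terraform's configuration language; the region and CIDR ranges are arbitrary examples:

```bash
cat > network.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "training-vpc" }
}

resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
EOF
terraform init && terraform plan
```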
Part 4: Terraform Programming
This section introduces programming constructs within Terraform that enable you to add more control and flexibility in defining resources. A short sketch combining several of these constructs follows the lab list below.
- Data structures (primitives, maps, lists, objects, etc.)
- Types of expressions to set values
- Creating multiples of a resource
- Dynamic blocks
- Parameterizing a configuration with variables
- Outputs from a configuration
- Functions
- Handling errors
Hands-on Labs:
- Using variables in a configuration
- Getting outputs from a configuration
- Creating a re-sizable cluster of virtual machines
- Creating multiple resources through iteration loops
- Leveraging functions in your code
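A short sketch combining variables, count, a function, and an output; the AMI ID is a placeholder:

```bash
cat > cluster.tf <<'EOF'
variable "instance_count" {
  type    = number
  default = 3
}

variable "name_prefix" {
  type    = string
  default = "web"
}

resource "aws_instance" "node" {
  count         = var.instance_count
  ami           = "ami-0123456789abcdef0"   # placeholder AMI
  instance_type = "t3.micro"
  tags = {
    # format() builds names like web-1, web-2, ...
    Name = format("%s-%d", var.name_prefix, count.index + 1)
  }
}

output "private_ips" {
  value = aws_instance.node[*].private_ip
}
EOF

# Resize the cluster without touching the code
terraform plan -var="instance_count=5"
```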
Part 5: Modules
This section shows how modules can be used to create reusable components in Terraform and teaches best practices in organizing Terraform code. A module-invocation sketch follows the lab list below.
- Purpose of modules
- Module structure and code organization
- Invoking modules
- Module sources and versioning
- Nested modules
- Publishing modules
Hands-on Labs:
- Using an external module in your configuration
- Refactoring your code to implement a module
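A sketch of invoking an external module with a pinned version; the registry module shown (terraform-aws-modules/vpc/aws) is just a commonly used example:

```bash
cat > vpc_module.tf <<'EOF'
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "training"
  cidr = "10.0.0.0/16"
}

output "vpc_id" {
  value = module.vpc.vpc_id
}
EOF
terraform init    # downloads the module and its provider
```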
Part 6: Wrapping Up
This section wraps up the course with reviews to reinforce what you have learned.
- Reference material to learn more
- Course review
- Next steps
Posted on October 27, 2020 by cprime-admin
Part 1 – Infrastructure Platform: AWS Cloud
- Installing and using the AWS CLI (Command Line Interface)
- AWS Networking
- VPCs (Virtual Private Clouds)
- Subnets
- Internet Gateways
- Route Tables
- Route Table Associations
- Creating AWS Networking Components (a CLI sketch follows this list)
- Launching VMs in AWS Cloud
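A sketch of those networking components built with the AWS CLI; each ID is captured into a shell variable as the resource is created, and the AMI ID is a placeholder:

```bash
# VPC and subnet
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query Vpc.VpcId --output text)
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --query Subnet.SubnetId --output text)

# Internet gateway, route table, route, and association
IGW_ID=$(aws ec2 create-internet-gateway \
  --query InternetGateway.InternetGatewayId --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"

# Launch a VM into the subnet (placeholder AMI)
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro --subnet-id "$SUBNET_ID"
```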
Part 2 – Git: Source Control Management: GitHub
- This course doesn't teach the basics of Git; Git experience is assumed (see the 'DevOps Pipeline' course if your team needs basic Git knowledge)
Part 3 – Infrastructure Deployment: Terraform
- Intro to Terraform
- Creating cloud buckets for storage
- Separating code: Multiple Terraform configuration files
- Storing state remotely
- Git branching
- Displaying resource outputs
- Creating cloud networking components with Terraform
- Configuring cloud Security groups
- Using SSH Public/Private Keys with Terraform
- Launching and Destroying cloud VM instances with Terraform
- Creating reusable code with modules
- Using Terraform variables
Part 4 – Configuration Management: Terraform with Ansible
- Ansible Provisioners in Terraform
- Integrating Terraform-managed instances with Ansible Control Nodes
- Launching multi-tiered architectures (web servers and load balancers) with Terraform and Ansible
Part 5 – Notifications: Slack
- Integrating CI/CD with Slack
- Using Slack for CI/CD approvals and notifications
Part 6 – Containerization: Docker
- Purpose and use case for Docker
- Docker Hub
- Basic Docker commands (a sketch follows this list)
- Docker Networking
- Launching and debugging NGINX containers
- Mounting Volumes to containers
- Docker mount points: Multiple containers, one shared code location
- Launching Docker hosts and Docker containers automatically
- Port mapping with containers
- Launching multi-tiered architectures (web servers and load balancers): an automated approach
- Customizing containers with Docker Hub and Dockerfiles
- Reducing infrastructure bloat: Buster-Slim Docker containers
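A sketch of the basic Docker workflow from this part: pulling NGINX from Docker Hub, mapping a port, mounting a volume, and debugging the running container:

```bash
# Run NGINX, mapping host port 8080 to container port 80
# and mounting a local directory as the (read-only) web root
docker run -d --name web -p 8080:80 \
  -v "$PWD/site:/usr/share/nginx/html:ro" nginx:stable

# Basic checks and debugging
docker ps
docker logs web
docker exec -it web /bin/sh    # poke around inside the container

curl http://localhost:8080

# Clean up
docker stop web && docker rm web
```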
Part 7 – Managed OS: Linux Only
- Management of Linux Servers only
Part 8 – Container Management: Kubernetes (Optional)
- Kubernetes (K8S) overview and use case
- K8S architecture
- Installation and configuration
- Master and node server components
- Creating K8S load-balanced clusters
- Deploying Apps with K8S (a kubectl sketch follows this list)
- Scaling Apps
- K8S monitoring and App repair
- Updating Apps with K8S
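A kubectl sketch covering the deploy, scale, and update topics above; image tags are illustrative:

```bash
# Deploy and expose an app
kubectl create deployment web --image=nginx:1.25
kubectl expose deployment web --port=80 --type=LoadBalancer

# Scale out
kubectl scale deployment web --replicas=3

# Rolling update to a new image; watch it, and roll back if needed
kubectl set image deployment/web nginx=nginx:1.27
kubectl rollout status deployment/web
kubectl rollout undo deployment/web
```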
Posted on October 17, 2020 by cprime-admin
Part 1: Course Introduction
- Azure Repos-Chef-Azure Pipelines: A DevOps Pipeline
- Course Purpose
- Agenda
- Introductions
- Lab Environments
Part 2: Technology Overview
- Git – Source Control Management
- Chef – Configuration Management
- Azure Pipelines – Continuous Integration
- An End-To-End CI/CD (Continuous Integration/Continuous Deployment) Pipeline
Part 3: Git/Azure Repos – Source Control Management
- Git purpose and Workflow
- Git configuration
- Getting help with git
- Basic git commands
- Remote, status, add, commit, push, log, diff
- Creating and checking out branches
- Creating a repository in Azure Repo
- Accessing a private repository with SSH keys
- Pull requests
- Merging and deleting branches
Part 4: Chef – Configuration Management
- Chef purpose and use cases
- Chef basics: Resources, recipes, and cookbooks (a recipe sketch follows this list)
- Chef policy files
- Integration testing with InSpec and Test Kitchen
- Chef variables: Attributes and Ohai
- Dynamic file creation with templates
- Using Chef Supermarket and community cookbooks
- Wrapper cookbooks
- Automating infrastructure with Chef Search
- Centralized management with Chef Infra Server
- Automating Chef convergence
- Managing nodes with policy groups
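A sketch of the Chef building blocks named above: three resources in one recipe, written into a cookbook layout; the attribute and template names are illustrative:

```bash
mkdir -p cookbooks/web/recipes
cat > cookbooks/web/recipes/default.rb <<'EOF'
package 'nginx'

# Dynamic file creation from an ERB template, driven by a node attribute
template '/usr/share/nginx/html/index.html' do
  source 'index.html.erb'
  variables(greeting: node['web']['greeting'])
end

service 'nginx' do
  action [:enable, :start]
end
EOF

# Test Kitchen can then converge and verify the recipe
kitchen test
```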
Part 5: Azure Pipelines
- CI/CD = Continuous Integration / Continuous Deployment
- Purpose
- Projects
- Jobs
- YAML scripting – CI/CD as Code (a minimal pipeline sketch follows this list)
- Managing credentials and secret files
- Integrating with Source Control Management: Azure Repos
- Triggers: Scheduled Polling and Webhooks
- Automated cookbook linting: Foodcritic and Cookstyle
- Automated cookbook testing with Test Kitchen
- Azure Pipelines Integration with Chef Server
- Creating Separate Build and Release Pipelines
- Continuous Deployment of Chef cookbooks with Azure Pipelines
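A minimal sketch of an `azure-pipelines.yml` for the cookbook-linting step, as a taste of CI/CD as Code; it assumes Ruby is available on the hosted agent:

```bash
cat > azure-pipelines.yml <<'EOF'
trigger:
  - main          # run on every push to main

pool:
  vmImage: ubuntu-latest

steps:
  - script: gem install cookstyle
    displayName: Install Cookstyle
  - script: cookstyle cookbooks/
    displayName: Lint cookbooks
EOF
```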
Posted on October 17, 2020 by cprime-admin
Please note: This is not a traditional training event. You will not experience periods of instruction followed by exercises. Rather, you will actively perform every step listed below, and the facilitator will provide any explanations and guidance that you require.
Step 1: Understand what you do in terms of Value Streams
- Create a working definition of value that is relevant to your context
- Determine how to measure Value in a Value Stream
- Establish common heuristics on value
- Determine how you provide value to customers
- Identify your customers and the needs they have that you satisfy
- Articulate what value means to your Customer/user
- Identify your services and how each satisfies customers’ needs
- Identify your Value Streams
- Identify the processes (activities) required for each service:
- Identify Value-add activities
- Identify Directing activities
- Identify Supporting activities
- Arrange services into Service Families based on similarity in processes (activities)
- Identify each Value Stream
- Explain how Value Streams relate to:
- Conventional supply chains
- Agile practices
- DevOps and IT services
- PMOs and project management
- Product life cycles
- Enterprise costs and revenues
- Other use cases
Step 2: Choose a Value Stream to improve
- Prioritize Value Streams
- Based on value for your customers
- Based on value to your organization
- Identify problematic Value Streams
- Issues with what is delivered to customers
- Issues with timeliness of value delivery
- Issues with cost to the organization
- Choose a high value & problematic Value Stream to improve
Step 3: Prepare for Value Stream Mapping
- Identify (or appoint) the Value Stream Manager
- Understand the role of Leadership
- Collect required data
- Customer data
- Process Data
- Inventory Data
- Supplier Data
- Lead Time for the total Value Stream
Step 4: Map the current (as is) Value Stream
- Visualize workflows
- Visualize functional areas of work and how they interact
- Flesh out how value-adding work flows through the organization
- Establish an accurate description of the environment’s current state
- Map the flow of work through functional groups:
- Business Teams
- Development
- Product Ownership
- Security and Governance
- Change Management
- Testing and QA
- Data Management
- Release Process
- Other IT Operations
- Map the Customer
- Map the Processes
- Map the Suppliers
- Map the Inventory
- Map the Service flow
- Trace handoffs for different phases of work
- Visualize Queues in your Value Stream Map
- Map the Information flow
- Map the Timeline
- Distinguish between Value-Add and non-Value-Add activities in a Value Stream
- Measure value-added vs. non-value-added time
- Trace waiting times for different phases of work
- Identify Wait times and total wait time in a Value Stream
- Measure waiting, frequency of deployments/releases/versions, lead times, MTTD & MTTR (mean time to detect and mean time to repair), change volume
- Establish common heuristics on waste
Step 5: Identify problems with the current (as is) Value Stream
- Find Wastes
- Find root causes for waiting and waste in workflows
- Find dependencies among teams
- Resolve misunderstandings and misperceptions across different departments
- Identify Overproduction (Excess Inventory)
- Note where work is not paced to “takt time”
- Identify impediments to flow among processes
- Opportunities to use continuous flow
- Push vs Pull relationships
- Note instances of ineffective flow management
- Scheduling processes independently
- Lack of a “Pacemaker” process
- Identify unevenness of flow
- Different services not distributed evenly over time
- Pitch (increments of work) too large and not related to takt
- Inability to do every activity every day (or every pitch)
- Make note of wasteful Processes
- Movement (including unnecessary searching)
- Over-processing
- Transportation
- Underutilized talent
- Defects
Step 6: Analyze the future (to be) Value Stream
- Principle: “At first, assume existing designs, facilities, and remote activities cannot be changed and make other improvements.”
- Plan and budget for future fixes to those bigger issues.
- Determine: What is the takt time? (A worked example follows this list.)
- Decide: Will you build to a finished-service supermarket from which the customer pulls, or respond directly to customer demand?
- Identify: Where can you use continuous flow processing?
- Identify: Where will you need to use pull systems to control upstream processes?
- Determine: At what single point in the Value Stream (the “pacemaker process”) will you schedule the work?
- Decide: How will you level the mix of work at the pacemaker process?
- Define: What increment of work (Pitch) will regularly release at the pacemaker process?
- Identify: What process improvements will be necessary for the value stream to flow as your future-state design specifies?
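To make the takt-time question concrete, here is the standard formula with invented numbers:

```
takt time = available working time / customer demand
          = 480 minutes per day / 24 requests per day
          = 20 minutes per request
```

If work leaves the Value Stream faster or slower than once per 20 minutes, flow is not matched to demand.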
Step 7: Map the future (to be) Value Stream with needed process improvements noted
- Draw the To-Be Value Stream
- Build a description of your desired future state
- Identify the value-stream loops
- Pacemaker loop
- Upstream loops
- Define Objectives and Goals for each loop
- Establish improvement priorities
- Plan for optimizing processes, overall flow, speed and value
- Establish common heuristics on priority
- Define the goals and objectives for a Value Stream
- Discover opportunities for automation and modernization
- Choose the relevant metrics to improve
- Define Improvement Targets (e.g. delivery frequency, product flows, projects & programs, mapping portfolios, end-to-end value)
- Prioritize improvement targets against each other
- Choose the Prioritization Heuristic to use
Step 8: Iteratively improve the Value Stream
- Map a path to get to your desired future state
- Plan one step toward attaining the future (to be) Value Stream
- Pick the starting point (which value-stream loop to improve first)
- Loop improvement pattern:
- (First!) Develop a continuous flow that operates based on takt time
- Establish a pull system to control work
- Introduce leveling
- (Last!) Practice kaizen to continually eliminate waste, reduce batch sizes, shrink inventory, and extend the range of continuous flow
- Determine how you will manage that one improvement step
- Define how to collect data on the improved Value Stream
- Determine how you will identify problems with the improved Value Stream
- Plan to update the future (to be) Value Stream Map
- Expect to Refine Value Stream Loops and their Objectives and Goals
- Plan to repeat until the Value Stream Loop Goals have been achieved
Posted on October 17, 2020 by cprime-admin
Part 1: Technology Overview
- Git – Source Control Management
- Ansible – Configuration Management
- Jenkins – Continuous Integration/Continuous Deployment
Part 2: Git – Source Control Management
- Purpose overview and use cases
- Git workflow
- Configuring git on your local machine
- Getting help with Git
- Local vs. Global vs. System configurations
- Basic Git Commands
- Creating local git repositories
- Branching and merging
- Using remote repositories
- Pushing code to Github using public and private SSH keys
Part 3: Ansible – Configuration Management
- Ansible purpose and use cases
- Architecture and call flow
- Ansible installation, configuration, and validation
- Control nodes and managed nodes
- Ansible managed hosts
- Host inventory; hosts and groups
- Repeatable code: Playbooks (a playbook sketch follows this list)
- Introduction to YAML
- Modularizing code: Roles
- Ansible variables
- Dynamic configuration with facts
- Finding errors: Ansible unit testing
- Ensuring code quality: Ansible integration testing
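A sketch of a minimal inventory and playbook for the topics above; hostnames are placeholders:

```bash
cat > inventory.ini <<'EOF'
[web]
web1.example.com
web2.example.com
EOF

cat > site.yml <<'EOF'
---
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
EOF

# Cheap unit-level check first, then run against the inventory
ansible-playbook -i inventory.ini site.yml --syntax-check
ansible-playbook -i inventory.ini site.yml
```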
Part 4: Jenkins – Continuous Integration / Continuous Deployment
- CI/CD overview, use cases and history
- Plugin architecture
- Initializing a Jenkins server
- Projects and jobs
- Freestyle jobs
- CI/CD as Code: Pipeline projects (a Jenkinsfile sketch follows this list)
- Declarative vs. scripted pipelines
- Jenkins Environment variables and parameters
- Distributed architecture: Master and agent nodes
- Views and Folders
- Managing credentials and secrets
- Integrating with git Source Control Management
- Triggers: Webhooks and Polling
- Notifications: Instant messaging and SMTP Email
- Approval inputs
- Testing Ansible playbooks in Jenkins
- Multibranch Pipelines: Reading entire repositories
- Conditional Logic
- Deploying Ansible playbooks with Jenkins: An automated end-to-end deployment pipeline
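A sketch of a declarative Jenkinsfile tying the Git and Ansible pieces together; the stage contents are placeholders for the course's real steps:

```bash
cat > Jenkinsfile <<'EOF'
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm    // pull the repository this job is bound to
            }
        }
        stage('Test') {
            steps {
                sh 'ansible-playbook -i inventory.ini site.yml --syntax-check'
            }
        }
        stage('Deploy') {
            steps {
                input message: 'Deploy to managed nodes?'    // approval gate
                sh 'ansible-playbook -i inventory.ini site.yml'
            }
        }
    }
}
EOF
```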
Posted on October 17, 2020 by cprime-admin
Part 1: Source Control Management with Git
- Purpose and overview of Git
- Use cases for Git
- Git flow
- Git providers
- Git configuration
- Finding help on Git
- Creating Local Git Repositories
- Basic Commands: add, commit, status, log
- Comparing commits: git diff
- Using a Repository: git push
- Branches: creating, merging and deleting
- Resolving merge conflicts
- Managing Pull Requests
- Using SSH keys with git platform private repositories
Part 2: Continuous Integration/Continuous Deployment with Jenkins
- Continuous Integration / Continuous Delivery (CI/CD): Jenkins
- CI/CD = Continuous Integration / Continuous Deployment
- Jenkins use case, purpose & history
- Architecture
- Using Plugins
- Initializing a Jenkins Master
- Projects / jobs
- Freestyle UI jobs
- CI/CD as Code: Pipeline Projects
- Declarative versus Scripted pipelines
- Views and folders
- Managing credentials and secrets
- Distributing workloads – Master and Agent nodes
- Integrating with Git: Source Control Management
- Triggers: Scheduled Polling and Webhooks
- Notifications: Instant Messaging Integration
- Requiring human input and approval
- Automated code linting and testing
- Jenkins Integration with managed nodes
- Continuous deployment through Jenkins
Part 3: Code Deployment and Release Management
- Java
- Building an artifact
- Storing Artifacts locally
- Python
- Building an artifact
- Storing Artifacts locally (see the sketch after this list)
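A sketch of both artifact builds, assuming a standard Maven project and a Python project with a pyproject.toml; the local artifact directory is arbitrary:

```bash
# Java: build a jar with Maven and keep a local copy
mvn -q package
mkdir -p /tmp/artifacts && cp target/*.jar /tmp/artifacts/

# Python: build a wheel and sdist, then keep local copies
python -m pip install build
python -m build
cp dist/* /tmp/artifacts/
```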
Part 4: Notifications with Slack
- Integration setup
- Using Slack for CI/CD notifications
Part 5: Linux Management