GitHub Actions Complete CI/CD Pipeline Guide for Linux Projects
Continuous integration and deployment pipelines used to require dedicated infrastructure: a Jenkins server you had to maintain, patch, and babysit; a GitLab runner you had to provision; a complex YAML stack that took days to get right. GitHub Actions changed that calculus. If your code is on GitHub, your CI/CD pipeline lives in the same repository, runs on GitHub’s infrastructure, costs nothing for public repositories, and requires nothing beyond a YAML file checked into your codebase. For Linux projects (applications deployed to Linux servers, containerized workloads, packages, scripts), GitHub Actions is now the de facto standard. This guide covers everything from your first workflow to production deployment.
Key Concepts
Before writing any YAML, get these terms straight. They appear throughout GitHub Actions documentation and are frequently confused by people coming from other CI systems.
Workflow: A YAML file stored in .github/workflows/ that defines automation. A repository can have multiple workflows: one for CI on pull requests, one for deployment on merge, one for scheduled tasks.
Event: The trigger that starts a workflow. Common events include push, pull_request, schedule, workflow_dispatch (manual trigger), and release. You can also trigger workflows from other workflows.
Job: A set of steps that runs on a single runner. Jobs in the same workflow run in parallel by default. You can define dependencies between jobs using needs.
Step: A single task within a job, either a shell command or an Action. Steps within a job run sequentially on the same runner and share the job’s filesystem.
Runner: The machine that executes a job. GitHub provides hosted runners for Ubuntu, macOS, and Windows. You can also register self-hosted runners on your own infrastructure, which is essential for deploying to private environments or for workloads with specific hardware requirements.
Action: A reusable unit of automation, essentially a pre-packaged step. Actions come from the GitHub Marketplace, from your own repository, or from third-party repositories. They save you from writing boilerplate shell commands for common operations like checking out code, setting up language runtimes, or caching dependencies.
Secrets: Encrypted values stored at the repository, environment, or organization level. Workflows reference them through the secrets context (often mapping them to environment variables), and GitHub automatically redacts their values from logs. This is where SSH keys, API tokens, and registry credentials live.
Artifacts: Files produced by a workflow that you want to persist beyond the job. Build outputs, test reports, and compiled binaries are typical artifacts.
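Several of these concepts fit together in a minimal sketch. This is illustrative only: the job names, make targets, artifact name, and cron schedule below are hypothetical, not prescriptive.

```yaml
# Illustrative skeleton tying the concepts together (names are hypothetical)
name: Concepts Demo

on:
  push:                    # event: runs on every push
  schedule:
    - cron: '0 3 * * 1'    # event: every Monday at 03:00 UTC
  workflow_dispatch:       # event: manual "Run workflow" button in the UI

jobs:
  build:                   # job: runs on its own runner
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4         # step using a Marketplace Action
      - run: make build                   # step running a shell command
      - uses: actions/upload-artifact@v4  # artifact: persist build output
        with:
          name: app-binary
          path: dist/

  test:
    needs: build           # job dependency: waits for build to finish
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app-binary
      - run: make test
```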
Your First Workflow
Create the workflows directory in your repository:
mkdir -p .github/workflows
Create a basic CI workflow at .github/workflows/ci.yml:
name: CI

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  build-and-test:
    runs-on: ubuntu-24.04
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt

      - name: Run tests
        run: pytest tests/ -v --tb=short

      - name: Run linter
        run: flake8 src/ --max-line-length=100
This workflow runs on every push to main or develop, and on pull requests targeting main. It checks out your code, sets up Python, installs dependencies, and runs your test suite. Push this file to your repository and GitHub will execute it automatically.
Matrix Builds
Testing across multiple versions of a language runtime or multiple operating systems is a common requirement. Matrix builds let you define a grid of variable values and run a job for each combination without duplicating YAML.
name: Matrix CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-22.04, ubuntu-24.04]
        python-version: ['3.10', '3.11', '3.12']
        exclude:
          - os: ubuntu-22.04
            python-version: '3.12'
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install and test
        run: |
          pip install -r requirements.txt
          pytest tests/
The fail-fast: false option ensures all matrix combinations run even if one fails, which is almost always what you want while debugging. The exclude block removes specific combinations from the grid when they are unsupported or redundant.
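The complement of exclude is include, which adds combinations beyond the generated grid. A short sketch (the extra Python version here is purely illustrative):

```yaml
strategy:
  matrix:
    os: [ubuntu-22.04, ubuntu-24.04]
    python-version: ['3.11', '3.12']
    # include adds combinations that the os × python-version grid
    # would not otherwise generate
    include:
      - os: ubuntu-24.04
        python-version: '3.13'
```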
Caching Dependencies
Without caching, every job installs dependencies from scratch. For large Python, Node, or Go projects, this can add minutes to every pipeline run. The actions/cache action stores directories between runs using a cache key based on a hash of your lockfile.
Python dependency caching:
- name: Cache pip packages
  uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
    restore-keys: |
      ${{ runner.os }}-pip-

- name: Install dependencies
  run: pip install -r requirements.txt
Node.js with npm:
- name: Cache node modules
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-

- name: Install dependencies
  run: npm ci
Go module caching:
- name: Cache Go modules
  uses: actions/cache@v4
  with:
    path: ~/go/pkg/mod
    key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
    restore-keys: |
      ${{ runner.os }}-go-
The cache key includes a hash of the lockfile. When the lockfile changes, the cache is invalidated and rebuilt. On subsequent runs with the same lockfile, the cached packages are restored in seconds.
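The key mechanics can be sketched in plain Python. This is a conceptual model, not GitHub's exact hashFiles implementation: the point is that the key is a pure function of the lockfile contents, so identical lockfiles reuse the cache and any edit produces a new key.

```python
import hashlib
from pathlib import Path

def lockfile_cache_key(base_dir: str, os_name: str,
                       pattern: str = "requirements*.txt") -> str:
    """Conceptual model of a hashFiles-style cache key: hash the
    contents of every matching lockfile under base_dir, so any
    change to any lockfile yields a new key (and a fresh cache)."""
    digest = hashlib.sha256()
    # Sort for a deterministic key regardless of filesystem order
    for path in sorted(Path(base_dir).rglob(pattern)):
        digest.update(path.read_bytes())
    return f"{os_name}-pip-{digest.hexdigest()}"
```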
Building and Pushing Docker Images
Building a Docker image and pushing it to a registry is a very common CI task. Here is a complete workflow that builds on every push to main and tags the image with both the commit SHA and latest:
name: Build and Push Docker Image

on:
  push:
    branches: [ main ]
  release:
    types: [ published ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-push:
    runs-on: ubuntu-24.04
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix=sha-
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
            type=semver,pattern={{version}}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
The cache-from: type=gha and cache-to: type=gha,mode=max lines enable GitHub Actions cache for Docker layer caching, which can dramatically speed up image builds when only a few layers change between commits.
Deploying to a Linux Server via SSH
This is where many tutorials fall short. Getting code or containers onto an actual Linux server from GitHub Actions requires SSH access, and that means managing keys securely. Here’s the complete setup:
First, generate a dedicated SSH keypair for deployment:
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/github_actions_deploy -N ""
Add the public key to your server’s authorized_keys:
cat ~/.ssh/github_actions_deploy.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Add the private key to GitHub Secrets. Go to your repository Settings > Secrets and variables > Actions, and create:
- DEPLOY_SSH_KEY: the contents of the private key file
- DEPLOY_HOST: your server’s IP or hostname
- DEPLOY_USER: the SSH user
Now write the deployment workflow:
name: Deploy to Production

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-24.04
    # Note: needs can only reference jobs defined in the same workflow file.
    # To deploy only when CI passes, either move your CI jobs into this file
    # (as in the multi-job pipeline example later in this guide) or trigger
    # this workflow with workflow_run.
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up SSH
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.DEPLOY_SSH_KEY }}

      - name: Add host to known_hosts
        run: |
          mkdir -p ~/.ssh
          ssh-keyscan -H ${{ secrets.DEPLOY_HOST }} >> ~/.ssh/known_hosts

      - name: Deploy application
        run: |
          ssh ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} << 'ENDSSH'
          cd /opt/myapp
          git pull origin main
          docker compose pull
          docker compose up -d --remove-orphans
          docker system prune -f
          ENDSSH
For container-based deployments, the workflow can also push a new image tag to a registry and then SSH in to pull and restart the service, keeping the SSH command minimal:
- name: Update container on server
  env:
    IMAGE_TAG: ${{ github.sha }}
  run: |
    ssh ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} \
      "IMAGE_TAG=${IMAGE_TAG} docker compose -f /opt/myapp/compose.yml up -d --no-build"
Secrets Management Best Practices
GitHub Actions has three levels of secrets:
- Repository secrets: available to all workflows in the repository
- Environment secrets: available only to workflows targeting a specific environment (e.g., production). Environments can require manual approval before deploying.
- Organization secrets: shared across multiple repositories, with configurable access control
For sensitive production credentials, always use Environment secrets with required reviewers. This means no code, no matter who pushes it, can trigger a production deployment without a human approving the deployment step.
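On the workflow side, gating a job on an environment is a one-line change; the expanded form also attaches a deployment URL to the run in the GitHub UI. A sketch (the URL is hypothetical):

```yaml
deploy:
  runs-on: ubuntu-24.04
  environment:
    name: production                # must match an environment configured
                                    # under Settings > Environments
    url: https://myapp.example.com  # hypothetical; shown in the run UI
```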
Never print secrets in workflow steps. GitHub Actions automatically redacts known secret values from logs, but if you base64-encode or otherwise transform a secret before printing it, the redaction may not catch it.
Rotate secrets regularly and after any team member leaves. Deleting and re-creating a secret in GitHub does not automatically revoke the underlying credential β you need to rotate it at the source (SSH key, API token, etc.) as well.
Complete Multi-Job Pipeline Example
name: Full Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  lint:
    runs-on: ubuntu-24.04
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - run: pip install flake8 black isort
      - run: black --check src/
      - run: isort --check-only src/
      - run: flake8 src/

  test:
    runs-on: ubuntu-24.04
    needs: lint
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_PASSWORD: testpass
          POSTGRES_DB: testdb
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - uses: actions/cache@v4
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('requirements*.txt') }}
      - run: pip install -r requirements.txt -r requirements-dev.txt
      - run: pytest tests/ --cov=src --cov-report=xml
        env:
          DATABASE_URL: postgresql://postgres:testpass@localhost/testdb
      - uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: coverage.xml

  build:
    runs-on: ubuntu-24.04
    needs: test
    if: github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write  # required to push to ghcr.io with GITHUB_TOKEN
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    runs-on: ubuntu-24.04
    needs: build
    environment: production
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.DEPLOY_SSH_KEY }}
      - run: |
          mkdir -p ~/.ssh
          ssh-keyscan -H ${{ secrets.DEPLOY_HOST }} >> ~/.ssh/known_hosts
      - run: |
          ssh ${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }} \
            "docker pull ghcr.io/${{ github.repository }}:${{ github.sha }} && \
             docker stop myapp || true && \
             docker rm myapp || true && \
             docker run -d --name myapp -p 80:8000 \
               ghcr.io/${{ github.repository }}:${{ github.sha }}"
This pipeline runs linting, then tests (with a real Postgres service container), then builds and pushes a Docker image, then deploys to production, with the production deployment gated by manual approval via GitHub Environments.
Key Takeaways
- Workflows live in .github/workflows/ and are triggered by events like push, pull_request, or schedule
- Jobs run in parallel by default; use needs to create sequential dependencies
- The actions/cache action can cut pipeline times significantly by caching package manager directories between runs
- Docker image builds support GitHub Actions cache for layer caching: use cache-from: type=gha and cache-to: type=gha,mode=max
- SSH deployments require a dedicated keypair, with the private key stored as a repository or environment secret
- GitHub Environments with required reviewers add a manual approval gate before production deployments
- Service containers let you spin up databases and other dependencies directly alongside your test jobs
- Never log secrets; use environment-level secrets for production credentials
About Ramesh Sundararamaiah
Red Hat Certified Architect
Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.