Most teams I work with already know what they want to ship. What slows them down is the first afternoon: the bit between “we have an idea” and “there’s a URL we can show people.” Once that path exists and a push to main deploys automatically, the work becomes building the product. Until it exists, every change feels heavier than it should.

This piece is a working starter path: create a GitHub repository, push a small application, stand up a Google Cloud project, and wire Cloud Build so that every push to main builds a container and deploys it to Cloud Run. No prior GCP experience assumed. It’s the path I use for new projects when I want a deploy pipeline running before anything more interesting gets built.

By the end you’ll have a public URL serving a tiny Node app, an automated deploy on every push, and a foundation that scales up to real services without rework.

What you’ll need before you start

Two accounts and two pieces of local tooling. If any are missing, fix them first. The rest of the guide assumes they exist.

  • A GitHub account with the ability to create repositories.
  • A Google account with billing enabled on Google Cloud. New accounts get a free trial credit; the workload in this guide costs pennies after that.
  • Local git and the gcloud CLI installed and authenticated. Run gcloud auth login once and gcloud auth application-default login once; the second sets up Application Default Credentials, which is what lets local tools and client libraries talk to GCP on your behalf.

Optional but useful: Docker Desktop if you want to test the container locally before pushing. The pipeline doesn’t require it.

Step 1: Create the repository

On GitHub, create a new repository. Keep it simple: a name, a description, the default branch as main, no .gitignore or licence (we’ll add those locally so we control what’s committed). Make it private if the work is sensitive; public is fine if it’s a learning project.
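
If you have the GitHub CLI installed, the same thing from the terminal (swap --private for --public as needed):

gh repo create <repo-name> --private --description "Starter app"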

Locally, in an empty directory:

git init
git branch -M main
git remote add origin git@github.com:<your-username>/<repo-name>.git

That’s the repo wired up. Nothing pushed yet.

Step 2: Write the minimum viable application

The point of this guide is the pipeline, not the app, so we’ll use the smallest thing that can prove the pipeline works: a Node Express server that returns “hello” on /.

Create package.json:

{
  "name": "starter",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.19.2"
  }
}

Create server.js:

import express from "express";

const app = express();
const port = process.env.PORT || 8080;

app.get("/", (_req, res) => {
  res.send("hello from cloud run");
});

app.listen(port, () => {
  console.log(`listening on ${port}`);
});

Two things to notice. First, the server reads PORT from the environment. Cloud Run sets this and your container must listen on it. Second, the default fallback is 8080, which is also Cloud Run’s default. Don’t hardcode a different port.

Create a Dockerfile:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
ENV PORT=8080
EXPOSE 8080
CMD ["npm", "start"]

And a .gitignore:

node_modules
.env
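
If you installed Docker Desktop (optional, per the prerequisites), you can smoke-test the container locally before pushing. A minimal check, assuming you're in the project root:

docker build -t starter .
docker run --rm -p 8080:8080 starter
# in another terminal:
curl http://localhost:8080    # should print "hello from cloud run"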

Run npm install locally to generate the package-lock.json. Commit everything:

git add .
git commit -m "initial commit: hello world express app"
git push -u origin main

You now have a repository on GitHub with a runnable application. The remaining work is on the GCP side.

Step 3: Create the GCP project

In a terminal:

PROJECT_ID=<your-project-id>     # globally unique, lowercase, hyphens
gcloud projects create $PROJECT_ID --name="Starter"
gcloud config set project $PROJECT_ID

Link billing. Cloud Run, Cloud Build and Artifact Registry all require it, even though the spend will be negligible. You’ll need your billing account ID (gcloud billing accounts list):

gcloud billing projects link $PROJECT_ID \
  --billing-account=<your-billing-account-id>

Enable the APIs the pipeline needs. The list looks longer than it is; one command covers all of them:

gcloud services enable \
  cloudbuild.googleapis.com \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  logging.googleapis.com \
  iam.googleapis.com

Wait a minute or two after enabling. IAM propagation isn’t instant and you’ll hit transient permission errors if you race ahead.
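
If you want to confirm the enables took before moving on, a quick check (the grep pattern is just illustrative):

gcloud services list --enabled | grep -E "cloudbuild|run|artifactregistry"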

Step 4: Create the container registry

Cloud Build needs somewhere to push images. Artifact Registry is the modern answer (Container Registry is deprecated; don’t use it for new work).

REGION=europe-west2          # or whichever region suits you
REPO=app-images

gcloud artifacts repositories create $REPO \
  --repository-format=docker \
  --location=$REGION \
  --description="Container images"

You’ll reference images as ${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO}/<image-name>:<tag>. Worth getting comfortable with the path shape. It’s used in every deploy.
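
For a concrete sense of the shape, here's the fully expanded reference for this guide's values (project ID and tag are placeholders), plus a quick check that the repository exists:

# europe-west2-docker.pkg.dev/<your-project-id>/app-images/starter:abc1234
gcloud artifacts repositories describe $REPO --location=$REGION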

Step 5: Grant the Cloud Build service account the roles it needs

This is the step that catches people out. By default, Cloud Build runs as a project-scoped service account that can build but can’t deploy. You need to grant it the deploy permissions explicitly.

Get the service account email:

PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
CB_SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

Note: as of 2024 Google moved the default Cloud Build identity from the legacy ${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com SA to the project’s default compute SA. The line above gets the right one for new projects. If you’re on an older project that still uses the legacy SA, its email shows up in your Cloud Build settings page; substitute it in below.

Grant the roles:

for ROLE in \
  roles/run.admin \
  roles/iam.serviceAccountUser \
  roles/artifactregistry.writer \
  roles/logging.logWriter; do
  gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:$CB_SA" \
    --role="$ROLE" --quiet
done

What each one does, in plain terms:

  • run.admin: deploy and update Cloud Run services.
  • iam.serviceAccountUser: attach the runtime service account to a Cloud Run revision (Cloud Run revisions run as an identity; the deploying SA needs permission to assign that identity).
  • artifactregistry.writer: push the built image.
  • logging.logWriter: write build logs (often missed; builds will fail with INVALID_ARGUMENT if this is absent).

If your build later needs to do more (deploy Cloud Functions, manage Firestore rules, write to Secret Manager) you grant additional roles to the same SA. The pattern is the same.
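
To confirm the grants landed, you can list the roles bound to the service account (a standard gcloud policy query; adjust the output format to taste):

gcloud projects get-iam-policy $PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:serviceAccount:${CB_SA}" \
  --format="table(bindings.role)"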

Step 6: Add a cloudbuild.yaml to the repo

This is the file that tells Cloud Build what to do on each trigger. Add it to the root of the repository:

steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - -t
      - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO}/${_SERVICE}:$SHORT_SHA
      - .

  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - ${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO}/${_SERVICE}:$SHORT_SHA

  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args:
      - run
      - deploy
      - ${_SERVICE}
      - --image=${_REGION}-docker.pkg.dev/$PROJECT_ID/${_REPO}/${_SERVICE}:$SHORT_SHA
      - --region=${_REGION}
      - --platform=managed
      - --allow-unauthenticated

substitutions:
  _REGION: europe-west2
  _REPO: app-images
  _SERVICE: starter

options:
  logging: CLOUD_LOGGING_ONLY

Three things worth understanding:

  1. $SHORT_SHA is the first seven characters of the commit SHA. Tagging the image with it means every build is uniquely identifiable and you can roll back by deploying an older tag (a sketch of that follows this list).
  2. --allow-unauthenticated makes the service publicly accessible. Fine for a hello-world starter; revisit before putting anything sensitive behind it.
  3. logging: CLOUD_LOGGING_ONLY is required when using a non-default service account that doesn’t have the legacy “logs bucket” permissions. Without it, builds fail with a confusing error about logs.
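
On the rollback point: rolling back is just redeploying an image you already built. A hedged sketch, assuming an earlier build was tagged abc1234 (substitute a real tag from Artifact Registry):

gcloud run deploy starter \
  --image=${REGION}-docker.pkg.dev/${PROJECT_ID}/${REPO}/starter:abc1234 \
  --region=$REGION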

Commit and push:

git add cloudbuild.yaml
git commit -m "add cloud build pipeline"
git push

The file is in the repo. Nothing builds yet. We still need to tell Cloud Build to watch the repo.

Step 7: Connect GitHub to Cloud Build

Open the Cloud Build console (console.cloud.google.com/cloud-build/triggers). Click “Create trigger.”

Configure as follows:

  • Name: deploy-main
  • Region: the same region as your Artifact Registry (matching regions keeps things predictable).
  • Event: Push to a branch.
  • Source: click “Connect new repository,” choose GitHub, authorise the GCP app on your GitHub account, and select your repository.
  • Branch: ^main$ (regex; exact match on main).
  • Configuration: Cloud Build configuration file. Path: cloudbuild.yaml.

Save the trigger.

The first time you connect GitHub, you’ll be asked to install the Cloud Build GitHub App and grant it access to the repository. Scope it to just this repo unless you have a strong reason to grant it organisation-wide.

Step 8: Trigger the first build

Make any change to the repo (change the response string in server.js, say) and push:

git commit -am "trigger first deploy"
git push

Open the Cloud Build history page. You should see a build running. It will take 60–120 seconds for this trivial application: Docker image build, push to Artifact Registry, Cloud Run deploy.

When it finishes successfully, the Cloud Run console will show a service named starter with a public URL, something like https://starter-<hash>-nw.a.run.app. Open it. You should see your hello string.
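
If you prefer to check from the terminal, the service URL can be fetched and curled in one go (service name and region as configured above):

URL=$(gcloud run services describe starter --region=$REGION --format='value(status.url)')
curl "$URL"    # hello from cloud run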

That’s the loop closed. Every subsequent push to main will build a new image, tag it with the commit, and deploy a new Cloud Run revision. You can roll back from the Cloud Run revisions tab in a couple of clicks.
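
The CLI equivalent of the revisions tab, if you'd rather stay in the terminal (the revision name below is hypothetical; copy a real one from the list output):

gcloud run revisions list --service=starter --region=$REGION
gcloud run services update-traffic starter --region=$REGION \
  --to-revisions=starter-00002-abc=100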

When something doesn’t work

Three failure modes account for most first-time problems:

The build fails with a permission error. Almost always a missing IAM role on the Cloud Build service account. The error message usually names the resource it couldn’t access. Match it to one of the roles in Step 5. After granting the role, re-run the build; you don’t need to push again, just click “Rebuild” in the Cloud Build UI.

The build succeeds, but Cloud Run shows the service in a failed state. Check the Cloud Run logs. The most common cause is the container not listening on the PORT environment variable: the app starts on a hardcoded port, Cloud Run can’t reach it, the health check fails. Fix the app to read process.env.PORT.

The trigger doesn’t fire on push. Check the trigger’s branch regex (typo on ^main$ is a classic) and confirm the Cloud Build GitHub App still has access to the repository. Revoked permissions silently disable triggers.
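
A useful sanity check here is firing the trigger by hand; if that works but pushes don't, the problem is on the GitHub connection side rather than in the build config. Assuming the trigger name and region from Step 7 (drop --region if your trigger lives in the global region):

gcloud builds triggers run deploy-main --branch=main --region=$REGION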

For anything else, the build logs in Cloud Build and the request logs in Cloud Run are usually enough to pin down the cause in two or three minutes. Both are available from the console without needing to leave the page.

What this gives you, and what to do next

You now have:

  • A GitHub repository connected to a continuous deploy pipeline.
  • A public URL serving a containerised application on Cloud Run, redeployed automatically on every push to main.
  • A foundation that scales up to real services: environment variables via --set-env-vars (or Secret Manager for sensitive values), a custom domain via a Cloud Run domain mapping, staging environments by triggering different services off different branches, structured logging via Cloud Logging.
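
As an example of the first of those extensions: environment variables and secrets bolt onto the existing service with one command each. The secret name my-api-key is hypothetical and would need to exist in Secret Manager first:

gcloud run services update starter --region=$REGION \
  --set-env-vars=NODE_ENV=production
gcloud run services update starter --region=$REGION \
  --set-secrets=API_KEY=my-api-key:latest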

The next steps depend on what you’re actually building. If it’s a static site or a frontend app, swap the Node Express server for whatever build toolchain you’re using. The pipeline shape doesn’t change. If it’s a real API, add a database. Cloud SQL or Firestore both connect cleanly to Cloud Run with a few extra IAM grants. If multiple environments matter, duplicate the trigger for a staging branch deploying to a separate Cloud Run service.
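
Once the GitHub connection exists, duplicating a trigger is quicker from the CLI than the console. A sketch for a staging branch, reusing the substitutions from cloudbuild.yaml (names are illustrative):

gcloud builds triggers create github \
  --name=deploy-staging \
  --repo-owner=<your-username> \
  --repo-name=<repo-name> \
  --branch-pattern='^staging$' \
  --build-config=cloudbuild.yaml \
  --substitutions=_SERVICE=starter-staging \
  --region=$REGION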

What you’ll find, once the pipeline exists and a git push is all that stands between you and a deploy, is that you build differently. Smaller commits, faster iteration, less ceremony around releases. The friction tax most teams pay on every change just isn’t there anymore.

That’s the actual value of getting this set up in an afternoon. The hello-world deploy isn’t the point. The point is that the next thing you ship is already wired.


Keith Biggin runs Biggin Insights, a senior technology leadership practice. Engagements take three shapes (Lead, Build and Guide) built around the same operator-grade credibility. About the practice · Services · Book a conversation.