GitHub Actions Bash Shell Commands · Duration: 6:49 · Language: EN

Learn to run Bash commands in GitHub Actions workflows with examples for error handling, outputs, and debugging.

If your workflow sometimes behaves like it drank bad coffee and forgot how to build things, this guide is for you. We will cover picking the right runner and shell, writing multi-line run blocks, enforcing fail-fast behavior, passing data between steps, and debugging like a proper human rather than a random guess generator. Keywords you care about include GitHub Actions, bash, shell, CI/CD, workflows, automation, DevOps, scripting, and debugging.

Pick the right runner and shell

Choose a runner that matches the platform you are targeting. If you want Linux behavior, use a Linux runner and bash. If you want Windows behavior, choose a Windows runner and a compatible shell. Different shells have different builtins and quirky parsing rules, so using the expected shell avoids surprise failures that feel like witchcraft.

Quick checks you can run inside a step

echo "Shell is ${SHELL:-unknown or not set in this environment}"
bash --version
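
A minimal sketch of how that looks in workflow YAML, assuming an Ubuntu runner and a job named build that are purely illustrative:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check the shell
        shell: bash
        run: |
          echo "Shell is ${SHELL:-unknown}"
          bash --version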

Write multi line run steps that keep state

Group related commands in one run block so environment changes persist between commands. That avoids the classic problem where you cd into a directory and then watch in horror as the next step runs in a different place.

set -e
set -o pipefail
# now a grouped set of commands
mkdir -p build
cd build
cmake ..
make -j2

Using multi-line steps also makes quoting and variable expansion simpler. Put related commands together rather than scattering them across steps like confetti.
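
As a sketch, the grouped commands above would sit inside a single multi-line run step like this; the step name is a placeholder and the build commands are just the ones from the example:

- name: Configure and build
  shell: bash
  run: |
    set -e
    set -o pipefail
    mkdir -p build
    cd build
    cmake ..
    make -j2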

Fail fast and catch pipeline errors

Start scripts with strict modes so failures stop early and point at the command that actually broke. These two lines will save you hours of staring at logs:

set -e
set -o pipefail

Combine that with a trap to print context when something goes wrong:

trap 'echo Failure in step at line $LINENO' ERR
# commands that might fail
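
Putting the strict modes and the trap together in one step might look like the following sketch; the script path and log file name are placeholders, not anything this workflow defines:

- name: Build with fail fast and error context
  shell: bash
  run: |
    set -e
    set -o pipefail
    trap 'echo "Failure at line $LINENO"' ERR
    # a failing command here stops the step and triggers the trap
    ./scripts/build.sh | tee build.log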

Pass data between steps safely

Do not parse random log lines. Use the runner-provided files to write outputs and environment variables. That is the stable interface that will not betray you at 2am.

# write a step output (give the step an id so later steps can read it)
echo "name=foo" >> "$GITHUB_OUTPUT"
# write an environment variable visible to later steps in the same job
echo "MY_VAR=some value" >> "$GITHUB_ENV"

Read back and reuse those values in later steps using the built-in workflow expression syntax rather than grepping logs, as in the sketch below.
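
Here is a minimal sketch of that round trip, assuming a step id of produce and an output named name chosen purely for illustration:

- name: Produce values
  id: produce
  shell: bash
  run: |
    echo "name=foo" >> "$GITHUB_OUTPUT"
    echo "MY_VAR=some value" >> "$GITHUB_ENV"

- name: Consume values
  shell: bash
  run: |
    echo "Output from the previous step: ${{ steps.produce.outputs.name }}"
    echo "Env var set earlier: $MY_VAR"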

Debug like a responsible adult

Logs are your friend that never sleeps. Print useful context and check exit codes instead of guessing.

echo 'Dumping selected environment variables'
env | sort | grep -i name || true
# check the status of the previous command right away, before anything else overwrites $?
echo "Last exit code $?"
# run a small check under the same shell to reproduce surprises
bash -lc 'env | sort | grep -i PATH'

If something weird happens add a diagnostic run block that lists files shows recent logs and prints the few environment variables you actually care about. It is faster than rewriting workflow YAML at random.
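
As a sketch, such a diagnostic step could look like this; the log path and the grep pattern are placeholders you would swap for whatever your build actually produces:

- name: Diagnostics
  if: failure()
  shell: bash
  run: |
    echo "Working directory: $(pwd)"
    ls -la
    # print only the variables you actually care about
    env | sort | grep -iE 'path|home|github_' || true
    # show the tail of a recent log if one exists
    tail -n 50 build/build.log 2>/dev/null || echo "No build log found"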

Recap and quick checklist

  • Pick a runner that matches your target platform
  • Use bash on Linux for predictable builtin behavior
  • Group related commands in one run block so state persists
  • Enable set -e and set -o pipefail and add a trap for better errors
  • Pass values with $GITHUB_OUTPUT and $GITHUB_ENV rather than parsing logs
  • Print env and exit codes when debugging rather than guessing

Follow these rules and your CI will stop surprising you with phantom failures and mysterious missing variables. It will not make your coffee but it will make your deployments less haunted.

I know how you can get Azure Certified, Google Cloud Certified and AWS Certified. It's a cool certification exam simulator site called certificationexams.pro. Check it out, and tell them Cameron sent ya!
