HURL Tests in GitLab CI

Jason Underhill
5 min read · Jun 29, 2023



Testing is an essential part of software development. API testing, especially, is crucial to ensure that the API works as expected and meets its requirements. One such tool for API testing is Hurl. It is a command-line HTTP client that can be used to test APIs by sending HTTP requests with various parameters and assertions. In this blog post, we will explore how you can incorporate Hurl tests into your GitLab CI workflow to test an API.

The example we will be using tests an API, written in Go, that uses DynamoDB as its persistent data store.

Hurl tests exercise the API end to end, including the database layer. They verify that the API's contract is met, and therefore that the API isn't broken by changes to the code. Automating these tests as part of the CI pipeline ensures that the API is tested every time the code changes.
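For illustration, here is what a Hurl test file might look like. The endpoint and fields are hypothetical; the `{{host}}` variable is the one we will pass on the command line later in this post:

```hurl
# Create an item, then assert on the response.
POST {{host}}/items
{
  "name": "widget"
}
HTTP 201
[Asserts]
header "Content-Type" contains "application/json"
jsonpath "$.name" == "widget"

# Fetch the collection back and check it is non-empty.
GET {{host}}/items
HTTP 200
[Asserts]
jsonpath "$" count >= 1
```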

Build a Docker Image from our Application

We’ll be running our API as a ‘service’ within the testing stage of the GitLab CI workflow, so the first step is to build a Docker image from our application. We can do this by creating a Dockerfile in the root of our project:

FROM golang:1.20

WORKDIR /app

# Note here: to avoid downloading dependencies every time we
# build the image, we cache them by copying go.mod and go.sum
# first and downloading the modules. That layer is reused on
# every build as long as the dependencies haven't changed.

# Copy go mod and sum files
COPY go.mod go.sum ./

# Download all dependencies.
RUN go mod download

COPY . .
# Build the application.
RUN GOOS=linux GOARCH=amd64 go build -o ./bin/main ./cmd/api/main.go

# Mark the API port as exposed so GitLab CI can health-check the service.
EXPOSE 8888

ENTRYPOINT [ "/bin/bash", "-c", "./bin/main" ]

Note that we are marking port 8888 as exposed, as this is the port our API listens on. This is required when using the resulting image as a service in GitLab CI: GitLab uses the exposed port to health-check the service and make sure it is running correctly before the tests start.

We can then build the image as part of the CI pipeline within our .gitlab-ci.yml file:

build_api_docker:
  stage: build
  image: 'docker:dind'
  needs:
    - job: 'dotenv'
      artifacts: true
  script:
    - docker build -t "${TAG}" -f Dockerfile .
    - docker push "${TAG}"

The ${TAG} variable is set in the dotenv job, which is run before this job. This job is responsible for setting environment variables that are used in the CI pipeline. We use the commit hash as the tag for the image, so that we can ensure that the image is unique for each commit. In my use case I pushed the image to a private container registry, but you can also push to Docker Hub or any other registry.
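As a sketch, such a dotenv job could use GitLab's dotenv artifact report so that later jobs listing it in needs inherit the variable. Only the TAG variable comes from this post; the stage, file name, and registry path here are assumptions:

```yaml
dotenv:
  stage: .pre
  script:
    # Tag the image with the commit hash so each commit gets a unique image.
    - echo "TAG=${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}" >> build.env
  artifacts:
    reports:
      dotenv: build.env
```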

We use the ${TAG} variable again in the testing stage to define the image to use to run our API.

Hurl Testing Stage

CI Stage Image

I created a custom Docker image for running the Hurl tests as I needed to also run some dotnet tooling to scaffold the test environment. This image is available to pull from

The Dockerfile for this image is as follows:

FROM ubuntu:latest

# Hurl release version to install; pass with --build-arg HURL_VERSION=x.y.z
ARG HURL_VERSION

RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    curl \
    jq \
    ca-certificates \
    # Hurl dependencies
    libcurl4 \
    libxml2 \
    # .NET dependencies
    libc6 \
    libgcc1 \
    libgssapi-krb5-2 \
    libstdc++6 \
    zlib1g \
 && rm -rf /var/lib/apt/lists/*

# Download the Hurl .deb from the GitHub releases page.
RUN curl -LO "https://github.com/Orange-OpenSource/hurl/releases/download/${HURL_VERSION}/hurl_${HURL_VERSION}_amd64.deb"
# Use apt install to resolve package dependencies instead of dpkg.
RUN apt-get update \
 && apt-get install -y "./hurl_${HURL_VERSION}_amd64.deb" \
 && rm -f "./hurl_${HURL_VERSION}_amd64.deb" \
 && rm -rf /var/lib/apt/lists/*

# Install the .NET 6 SDK via the official dotnet-install script.
RUN curl -sSL https://dot.net/v1/dotnet-install.sh | bash /dev/stdin -Channel 6.0 -InstallDir /usr/share/dotnet \
 && ln -s /usr/share/dotnet/dotnet /usr/bin/dotnet


If you don't need any extra tooling, you could instead use the Docker image provided by Hurl.

GitLab Stage

We can now define the testing stage in our .gitlab-ci.yml file:

hurl_tests:
  stage: test
  variables:
    DDB_ENDPOINT: 'http://dynamodb:8000'
  needs:
    - job: build_api_docker
      artifacts: true
    - job: 'dotenv'
      artifacts: true
  image: ''  # the custom test image described above
  services:
    - name: amazon/dynamodb-local:1.20.0
      alias: dynamodb
      command:
        - '-jar'
        - 'DynamoDBLocal.jar'
        - '-sharedDb'
    - name: "${TAG}"
      alias: api
      entrypoint:
        - "/bin/bash"
        - "-c"
        - "./bin/main"
  script:
    - scaffold-ddb-if-required
    - hurl --test --variable host=http://api:8888 --glob "./hurltests/*.hurl"

Adding the build_api_docker job in the needs section ensures that the API image is built before this stage is run. We also need to add the dotenv job to the needs section, as this job is responsible for setting the ${TAG} variable that is used to define the image to use for the API service.

You’ll notice we have defined two services. The first is the DynamoDB service, which is used by our API. We use the amazon/dynamodb-local:1.20.0 image, a local version of DynamoDB. We also define an alias for the service, which is used as the hostname in an environment variable consumed by the API. The stage’s variables are passed down into its services, hence we define DDB_ENDPOINT using the service alias. The second service is our API itself.

We also use the alias to define the hostname for the API service. This is used in the Hurl tests to define the host to use when making requests to the API.

Within the script section you can perform any scaffolding of the database as required for your tests.
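As a sketch of that scaffolding (assuming the AWS CLI were available in the test image; this post uses dotnet tooling instead, and the table name and attributes here are hypothetical), the script section could create a table against the local DynamoDB before running the tests:

```yaml
script:
  # DynamoDB Local accepts any non-empty credentials and region.
  - export AWS_ACCESS_KEY_ID=dummy AWS_SECRET_ACCESS_KEY=dummy AWS_DEFAULT_REGION=eu-west-1
  - |
    aws dynamodb create-table \
      --endpoint-url "${DDB_ENDPOINT}" \
      --table-name items \
      --attribute-definitions AttributeName=id,AttributeType=S \
      --key-schema AttributeName=id,KeyType=HASH \
      --billing-mode PAY_PER_REQUEST
  - hurl --test --variable host=http://api:8888 --glob "./hurltests/*.hurl"
```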

If this is all successful you should see output similar to this when you’re running the pipeline:

$ hurl --test --variable host=http://api:8888 --glob "./hurltests/*.hurl"
hurltests/tests1.hurl: Running [1/2]
hurltests/tests1.hurl: Success (7 request(s) in 364 ms)
hurltests/testsuite.hurl: Running [2/2]
hurltests/testsuite.hurl: Success (25 request(s) in 438 ms)
Executed files: 2
Succeeded files: 2 (100.0%)
Failed files: 0 (0.0%)
Duration: 809 ms

Cleaning up file based variables
Job succeeded

Originally posted on my personal blog.



Jason Underhill

Senior Systems Engineer at Serif / Affinity. Building scalable services with Dotnet, Golang and AWS, largely focusing on serverless.