Building Python Scripts as Containerized Applications with Podman Kube Play
Step from local development to remote execution.

Introduction:

This post focuses on how podman kube play can be used to take a locally running script to a remote k8s environment using containerization. A large number of concepts are touched upon in this post, and it would be impossible to cover them all in one article, so links to extra reading are added to the relevant terms.

Containerization is the packaging of software, together with the operating system libraries and dependencies required to run the code, into a lightweight executable called a container. These containers allow the software to run in a number of different environments. Containerization simplifies the deployment and scaling of applications.

Systems like OpenShift, Kubernetes and k3s are all orchestration platforms. These orchestration platforms are used to manage the deployment of containers; there can be hundreds of nodes in a system managing thousands of containers. This is where the scripts will finally run.

Understanding Containerization and Podman Kube Play:

Podman is an open source tool for building, inspecting and running containers. It supports Kubernetes resources, which are defined in YAML files. In the context of this article these files will be referred to as kubefiles.

The main commands that will be used are the kube and pod subcommands. These commands allow a container to be deployed locally, its logs to be checked and the container to be deleted.

Podman will also be used for building the images, with the build command.

Setting up the Development Environment:

For the development environment setup there is a repo that can be used to explain the concepts. The README goes into more detail on the different deployment methods supported by the repo and makes for a good read.

To set up the repo, poetry and podman should both be on the system PATH. For setting up podman this guide should be followed. For poetry this guide can be used. Any Python version of 3.9 or greater can be used.

Creating the Python Script:

What the scripts do does not matter as much as the process of containerizing and deploying the container, but it is good to have an understanding of the project layout.

All the code is added to the k8s_container package. __main__.py holds the entry points, or commands, used to execute the scripts. There are three scripts that can be executed:

  • basic
  • secret
  • config

The scripts relate to the k8s resources they consume. The basic script does not require any external resources to be on the cluster, while secret and config require resources of kind Secret and ConfigMap with the correct fields, respectively.

The Makefile has a number of commands, along with variables that can be configured, and is worth looking at. By default podman is the tool used to work with the code base. Method 5 will require podman.

Below is the project layout. The files that matter most here are the pyproject.toml, Containerfile and kubefile directory.

├── Containerfile
├── k8s_container
│   ├── __init__.py
│   ├── __main__.py
│   └── utils.py
├── kubefile
│   ├── cmd-basic.yaml
│   ├── cmd-config.yaml
│   └── cmd-secret.yaml
├── Makefile
├── poetry.lock
├── pyproject.toml
└── README.md

For this article the most important Python configuration is in pyproject.toml, and more specifically the tool.poetry.scripts section.

[tool.poetry.scripts]
basic = "k8s_container.__main__:basic"
config = "k8s_container.__main__:config"
secret = "k8s_container.__main__:secret"

In the scripts section an entry is added for each script, with a path to the Python function that should be executed. It is these functions that will be used later in the containers. In this example the functions only emit some log statements; what the scripts do is not the focus, but how they are called is.
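
As an illustration, a minimal version of such an entry point might look like the sketch below (the actual functions in the repo may differ); the basic function simply configures logging and emits a message, while config and secret follow the same pattern but also read environment variables, as shown later.

# k8s_container/__main__.py (illustrative sketch, not the exact repo code)
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s: %(message)s")
log = logging.getLogger(__name__)


def basic() -> None:
    # Invoked through the basic console script defined in [tool.poetry.scripts].
    log.info("hello world")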

Containerizing the Python Script:

The Containerfile is very standard; the one notable point is that there is no entry point created to execute the scripts. This will be defined later in the kubefiles, which means a number of different scripts can be defined within the same image. One downside of doing this is that when running the container with podman run the script name must always be passed in.

The Containerfile uses a multi-stage build process to create the final container image. This has the advantage of decreasing the size of the final image. More importantly, it reduces the number of tools in the final image, which helps with security as no build dependencies are included.

The first stage, the base stage, sets the image that will be used, does some updates and sets the working directory.

FROM python:3.11-slim AS base
RUN pip install --upgrade pip

WORKDIR /app

A builder stage is next. This stage is where the package is built, which means development dependencies, like poetry, need to be installed. Poetry is also configured to create the virtual environment in the working directory of the app.

FROM base AS builder
ENV POETRY_VERSION=1.3.1

RUN pip install "poetry==$POETRY_VERSION"

COPY pyproject.toml poetry.lock README.md ./

RUN poetry config virtualenvs.in-project true
RUN poetry install --only=main --no-root

COPY k8s_container/ ./k8s_container
RUN poetry build

The last stage is the runtime stage; this is the stage that makes up the final image. From the builder stage the virtual environment and package wheel are copied over, and the wheel is then installed into the virtual environment. Finally the virtual environment is added to the PATH of the container, which allows access to the scripts in the virtual environment.

FROM base AS runtime
COPY --from=builder /app/.venv .venv
COPY --from=builder /app/dist/*.whl ./
RUN .venv/bin/pip install --no-cache-dir *.whl && rm -rf *.whl

ENV PATH="/app/.venv/bin:${PATH}"

The finished Containerfile looks like the below.

# Containerfile

FROM python:3.11-slim AS base
RUN pip install --upgrade pip

WORKDIR /app

FROM base AS builder
ENV POETRY_VERSION=1.3.1

RUN pip install "poetry==$POETRY_VERSION"

COPY pyproject.toml poetry.lock README.md ./

RUN poetry config virtualenvs.in-project true
RUN poetry install --only=main --no-root

COPY k8s_container/ ./k8s_container
RUN poetry build

FROM base AS runtime
COPY --from=builder /app/.venv .venv
COPY --from=builder /app/dist/*.whl ./
RUN .venv/bin/pip install --no-cache-dir *.whl && rm -rf *.whl

ENV PATH="/app/.venv/bin:${PATH}"

Building and Running the Container:

With the Containerfile created the image can be built. podman build is used to build the image. Podman kube play does have the ability to build images, but the structure of the project would need to be quite different; it would be suited to a monorepo where there are multiple projects within the same repo, which is well outside the scope of this article.

podman build --tag hello:latest .

This will build the required image. The first time doing this can take some time, as all the base image layers must be pulled before the image layers we defined can be built. Later runs will be much faster.

It is good practice to check that the image can be run at this stage. The basic command can be run without passing in any environment variables.

podman run --rm hello:latest basic

This should give a log message of hello world. In the run command the --rm flag is passed; this flag removes the container once it has completed.

Running the container with Podman Kube Play:

With the image built and known to be runnable, it is time to create the kubefiles. The files are very similar to each other, with only small changes between them, starting with the most basic file, which runs the basic command.

In the kubefile directory there is a file called cmd-basic.yaml. This file defines a Kubernetes resource of kind Pod. A Pod is a collection of containers. The metadata.name states the name for the Pod, while spec.containers defines the different containers. The container has a command set, which is the basic script, and the image that is used. In this case it is important to set the restartPolicy to Never; if not, the container would never finish. This kubefile defines, as configuration, the same action carried out by the podman run command from earlier.

# kubefile/cmd-basic.yaml

apiVersion: v1
kind: Pod
metadata:
  name: basic
spec:
  containers:
    - command:
        - basic
      name: container
      image: hello:latest
  restartPolicy: Never

Now it is time to run this kubefile with podman kube play.

$ podman kube play kubefile/cmd-basic.yaml

Resource limits are not supported and ignored on cgroups V1 rootless systems
Pod:
bc5a0add2f4be5e1765bf697e82644fb8c6305cb76cea3c4b318b823252f3f4f
Container:
e4f4db4f9c706f1db9d2c19eaa7afd09914cae5003ed0f4bfc24c9e7e9d8dada

The message about cgroups can be ignored for now, and if you don't get the message that is also ok. What is shown is an ID for the Pod and the Container that were created. What is not shown is the log message from running the basic command. This is where interacting with pods differs from interacting with normal containers.

The logs for the Pod can be viewed using podman pod logs.

$ podman pod logs basic
e4f4db4f9c70 2023-06-02 10:02:57,804 INFO: hello world

The basic in the command is the name set in the metadata of the kubefile. Each log line also has a shortened container ID at the start of the message; remember, a Pod can have multiple containers.

To list the pods the command podman pod ls is used.

$ podman pod ls
POD ID        NAME        STATUS      CREATED        INFRA ID      # OF CONTAINERS
bc5a0add2f4b  basic       Exited      9 minutes ago  df5527534d0d  2

The two interesting points to note here are that the number of containers listed in the Pod is 2 and that there is an infra ID. The second, infra, container is used to manage resources like ConfigMaps and Secrets and the connections between containers within the Pod. If the kubefile were deployed to a Kubernetes cluster this infra container would not be there.

The containers in the pods can be listed using podman container ls.

$ podman container ls --all --pod
CONTAINER ID  IMAGE                                    COMMAND     CREATED         STATUS                     PORTS       NAMES               POD ID        PODNAME
df5527534d0d  localhost/podman-pause:4.5.0-1681486942              16 minutes ago  Exited (0) 16 minutes ago              bc5a0add2f4b-infra  bc5a0add2f4b  basic
e4f4db4f9c70  localhost/hello:latest                               16 minutes ago  Exited (0) 16 minutes ago              basic-container     bc5a0add2f4b  basic

Unlike the podman run command, which used the --rm flag to remove the container after exiting, the pods need to be removed manually. This can be done with the podman kube down command.

$ podman kube down kubefile/cmd-basic.yaml
Pods stopped:
bc5a0add2f4be5e1765bf697e82644fb8c6305cb76cea3c4b318b823252f3f4f
Pods removed:
bc5a0add2f4be5e1765bf697e82644fb8c6305cb76cea3c4b318b823252f3f4f
Secrets removed:
Volumes removed:

Those are the basics of using podman kube play. The next step is to look at the kubefiles for the config and secret commands/scripts. These scripts both need environment variables configured in order to run. Kubernetes has resources of kind ConfigMap and Secret, and these kinds will be configured in the kubefiles to set the values for the environment variables.

Looking at kubefile/cmd-config.yaml, there is now a second kind configured, a ConfigMap with a name of foo. This ConfigMap has two entries in its data. The container in the Pod spec now has a new section, env, which defines the environment variables for the container and where those values are taken from. The container command has also been changed to config.

# kubefile/cmd-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: foo
data:
  loop: "5"
  delay: "1"

---
apiVersion: v1
kind: Pod
metadata:
  name: config
spec:
  containers:
    - command:
        - config
      name: container
      image: hello:latest
      env:
        - name: LOOP
          valueFrom:
            configMapKeyRef:
              key: loop
              name: foo
        - name: DELAY
          valueFrom:
            configMapKeyRef:
              key: delay
              name: foo

  restartPolicy: Never

Running and interacting with the Pod is done using the same podman commands used to interact with the basic Pod. The only difference this time is that the Pod name is config and the kubefile being used is kubefile/cmd-config.yaml.
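
To make the flow concrete, below is a hedged sketch of how the config function could consume the LOOP and DELAY values injected from the ConfigMap; the actual implementation in the repo may differ.

# Possible shape of the config entry point (an assumption, not the repo's exact code)
import logging
import os
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s: %(message)s")
log = logging.getLogger(__name__)


def config() -> None:
    loop = int(os.environ["LOOP"])    # "5", from the ConfigMap key loop
    delay = int(os.environ["DELAY"])  # "1", from the ConfigMap key delay
    for i in range(loop):
        log.info("iteration %d of %d", i + 1, loop)
        time.sleep(delay)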

The kubefile/cmd-secret.yaml is very much the same, but it defines a kind of Secret and the command has changed to use the secret script. Note that the data values in a Secret must be base64 encoded; a short example of producing these values follows the YAML below.

# kubefile/cmd-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: foo
data:
  count: NQ==
  sleep: MQ==

---
apiVersion: v1
kind: Pod
metadata:
  name: secret
spec:
  containers:
    - command:
        - secret
      name: container
      image: hello:latest
      env:
        - name: COUNT
          valueFrom:
            secretKeyRef:
              key: count
              name: foo
        - name: SLEEP
          valueFrom:
            secretKeyRef:
              key: sleep
              name: foo

  restartPolicy: Never
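
The values NQ== and MQ== above are simply the base64 encoded forms of "5" and "1". These values can be produced, or checked, with a few lines of Python:

import base64

print(base64.b64encode(b"5").decode())    # NQ==
print(base64.b64encode(b"1").decode())    # MQ==
print(base64.b64decode("NQ==").decode())  # 5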

Those are the basics of using podman kube play. There are more things that can be done, like attaching volumes, not starting the pods automatically and even passing in ConfigMaps that are not defined directly in the kubefile, but these are all things to look at another time.

Deploying the Containerized Python Script:

This next part is one of the main reasons for using kube play: getting the workload onto a remote Kubernetes cluster. The main assumption here is that kubectl is installed locally and logged into an active cluster with a namespace of default.

The first thing that needs to change is the name of the image used in the kubefiles. The image should point to a publicly accessible image; in this case it will be changed to quay.io/jfitzpat/hello:latest. Below is the kubefile for the config command after the update; all the kubefiles need the same update.

# kubefile/cmd-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: foo
data:
  loop: "5"
  delay: "1"
---
apiVersion: v1
kind: Pod
metadata:
  name: config
spec:
  containers:
    - command:
        - config
      name: container
      image: quay.io/jfitzpat/hello:latest
      env:
        - name: LOOP
          valueFrom:
            configMapKeyRef:
              key: loop
              name: foo
        - name: DELAY
          valueFrom:
            configMapKeyRef:
              key: delay
              name: foo
  restartPolicy: Never

To deploy the workload on the remote cluster the kubectl apply command is used.

kubectl apply --filename kubefile/cmd-config.yaml --namespace default

This will deploy both the Pod and ConfigMap to the cluster. The resources can be shown with the following commands.

kubectl get pods --namespace default

kubectl get configmap --namespace default

Cleaning these resources up can be done with the kubectl delete command.

kubectl delete -f kubefile/cmd-config.yaml --namespace default

All the kubefiles work in the same manner; just remember the images need to point to publicly accessible images. Of course private images could be used, but that would depend on the configuration of the cluster, which is far beyond the scope of this article.

Conclusion:

This article has shown how to configure a Python script to be executable as a command on the system PATH. This was then used to build an image which the kubefiles consume and extend with the inclusion of ConfigMaps and Secrets. This setup makes the local deployment as close to a Kubernetes deployment as possible without having Kubernetes deployed locally. The kubefiles then allow the workload to be moved to a cluster once the development work is complete.

This process allows for fast development while also giving ease of deployment.


Jim Fitzpatrick 2 June, 2023