Get Ready with Walrus: From DevOps and Developers' Perspectives
Walrus, a brand-new application deployment and management platform, has now been officially released. Walrus embraces the concept of platform engineering: it provides a user-friendly, consistent application management and deployment experience for both development and operations teams by reducing the complexity of infrastructure operations. With Walrus, DevOps engineers can focus on the infrastructure, while developers can deploy and manage applications without needing extensive knowledge of it.
In this article, we will elaborate on how to build a Java Web service from source code and deploy it to Kubernetes with Walrus from the perspectives of a DevOps engineer and a developer.
From a DevOps Engineer's Perspective
Peter is a DevOps engineer at Alpha Company, and he now needs to set up a self-service platform for the development team, who may not be familiar with containers and Kubernetes.
He performs the following preparations:
Prepare a Linux server with at least 4 CPUs, 8GB of memory, and 50GB of free disk space.
Install Docker following the instructions in the official Docker documentation.
Open ports 80 and 443 on the server.
Install a Kubernetes cluster and obtain its Kubeconfig file.
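The hardware requirements above can be sanity-checked with a short script. This is a convenience sketch, assuming a standard Linux host with GNU coreutils; the thresholds simply restate the minimums listed above (memory is rounded down to whole GB, so a machine with exactly 8GB may report slightly less):

```shell
# Preflight check for the minimums above (4 CPUs, 8GB RAM, 50GB free disk).
cpus=$(nproc)
# MemTotal is reported in kB; integer division rounds down to whole GB.
mem_gb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 1024 ))
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')

[ "$cpus" -ge 4 ]     && echo "CPU ok ($cpus)"          || echo "CPU below minimum ($cpus)"
[ "$mem_gb" -ge 8 ]   && echo "memory ok (${mem_gb}GB)" || echo "memory below minimum (${mem_gb}GB)"
[ "$disk_gb" -ge 50 ] && echo "disk ok (${disk_gb}GB)"  || echo "disk below minimum (${disk_gb}GB)"
```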
Deployment
Execute the following command to start the Walrus service:
sudo docker run -d --privileged --restart=always -p 80:80 -p 443:443 --name walrus sealio/walrus:v0.3.1
Access
Access Walrus's UI through https://<server-address>.
Upon the first login, follow the UI instructions to run the following command on the server to obtain the initial admin password:
sudo docker logs walrus 2>&1 | grep "Bootstrap Admin Password"
Log in to Walrus with the username admin and the initial administrator password, then set a new password and Walrus's access address as prompted by the UI.
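If you want to script the initial login, the password can be captured into a variable. A sketch, assuming the log line ends with the password as its last field (the sample line below is illustrative; the exact prefix printed by Walrus may differ between versions):

```shell
# Extract the password as the last field of the matching log line.
# In practice, pipe from:  sudo docker logs walrus 2>&1
sample='... Bootstrap Admin Password: N7xQ2mKpL9'
password=$(printf '%s\n' "$sample" | grep "Bootstrap Admin Password" | awk '{print $NF}')
echo "$password"
```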
Configure Image Repository Authentication Key
Configure a test account for the image repository for developers' use:
Go to the Operations Center -> Global Keys menu, click the New Key button.
Fill in REGISTRY_USERNAME in the Name field and the image repository authentication username in the Content field, then click Save.
Click the New Key button again.
Fill in REGISTRY_PASSWORD in the Name field and the image repository authentication password in the Content field, then click Save.
Configure Kubernetes and Environment
Add the Kubernetes cluster as the deployment target for the application:
Go to the Operations Center -> Connectors menu, click the New Connector button.
Enter test-k8s in the Name field and paste the prepared cluster Kubeconfig file into the Kubeconfig field, then click Save.
Go to the Operations Center -> Environments menu, click the New Environment button.
Enter development in the Name field.
Click the Add Connector button, select the test-k8s connector, and click Save.
Note:
Connectors are abstract objects that integrate with various infrastructure and services, such as Kubernetes, public/private clouds, virtual machines, version control systems, and more.
Environments are the deployment targets for applications, and they can be associated with multiple connectors.
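For reference, the file pasted into the Kubeconfig field is a standard Kubernetes Kubeconfig. A minimal illustrative skeleton is shown below; the server address and credential data are placeholders, not real values:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: test-k8s
  cluster:
    server: https://10.0.0.10:6443           # placeholder API server address
    certificate-authority-data: <base64 CA>  # placeholder
contexts:
- name: test-k8s
  context:
    cluster: test-k8s
    user: test-user
current-context: test-k8s
users:
- name: test-user
  user:
    client-certificate-data: <base64 cert>   # placeholder
    client-key-data: <base64 key>            # placeholder
```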
Now Peter has completed the infrastructure setup. He can integrate various infrastructures and add application modules that encode DevOps best practices for the development team to use in Walrus. The tasks described here can all be accomplished with Walrus's built-in modules.
From a Developer's Perspective
John is a developer at Alpha Company who is not deeply familiar with Kubernetes. He would like to quickly set up a development test environment on his own, without submitting tickets to the DevOps team.
The demo repository is available at https://github.com/seal-demo/spring-boot-docker-sample
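The demo is a Spring Boot application. For context, such applications typically declare their listening port in application.properties; the snippet below is illustrative — check the repository for how it actually configures its port (it may use code or environment variables instead):

```properties
# application.properties (illustrative)
server.port=8888
```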
Creating the Application
John logs into the Walrus platform and performs the following steps:
Go to the Application Management -> "Applications" menu, click the New Application button.
Enter myapp in the "Name" field, click the + button in the module configuration section.
Enter s2i in the module name field, choose build-container-image from the module list, and enter the Git URL https://github.com/seal-demo/spring-boot-docker-sample
Click the "Build" tab, enter the image name registry.alpha.org/myproject/myimage:latest (note: this is Alpha Company's repository address; replace it with your own).
Check Registry Authentication and enter ${secret.REGISTRY_USERNAME} in the "Username" field and ${secret.REGISTRY_PASSWORD} in the "Password" field; the Walrus UI will guide you in filling in references to the configured keys. Click OK to save the configuration for the image-building module.
Click the + button in the Module Configuration section again.
Enter web in the module name field, choose webservice from the module list, and enter ${module.s2i.image} in the "Image Name" field; the Walrus UI will guide you in filling in references to other modules' outputs.
Change Ports to 8888 (the port John's code listens on), then click "OK" to save the configuration for the web service module.
Click the Save button to save the application configuration.
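The ${secret.NAME} and ${module.NAME.output} notations are references that Walrus resolves at deploy time, so credentials and build outputs never need to be hard-coded in the application configuration. Conceptually this works like template substitution; the toy shell sketch below illustrates only the idea, not Walrus internals:

```shell
# Toy illustration of reference substitution (not Walrus code).
REGISTRY_USERNAME='demo-user'                   # hypothetical resolved value
config='username: ${secret.REGISTRY_USERNAME}'  # reference as written in the UI
resolved=$(printf '%s\n' "$config" | sed "s/\${secret.REGISTRY_USERNAME}/$REGISTRY_USERNAME/")
echo "$resolved"
```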
Deploying Application
Now John can deploy the test environment on Walrus with a single click:
Go to the details page of the myapp application.
Click the "+" button next to Application Information to add an instance.
Enter dev1 in the Name field and select the development environment provided by Peter from the DevOps team. Click "OK" to create the application instance.
Wait for the deployment to complete, and the application instance's access address will appear on the UI.
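Once the access address appears, a quick check from any machine confirms the service responds. A minimal sketch, assuming curl is installed; the URL below is a placeholder for the address shown in the Walrus UI:

```shell
# Returns success when the URL answers with an HTTP success status.
check_url() { curl -fsS -o /dev/null "$1"; }

# Placeholder address; substitute the one shown in the Walrus UI.
if check_url "http://myapp.example.com:8888/"; then
  echo "service is up"
else
  echo "service not reachable yet"
fi
```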
Ta-da! John can now access his test environment. He can share the application with other development and testing team members, and create multiple application instances.
Conclusion
This article has illustrated how Walrus achieves the separation of concerns between development and operations by delineating the responsibilities of two distinct roles. It has also showcased Walrus's application model abstraction through a deployment process spanning from source code to Kubernetes. Note that an application module can encompass more than this, ranging from build logic and cloud-native workloads to traditional deployment payloads and other resource abstractions.