Cloud Native Catalog
Easily import any catalog item into Meshery. Have a design pattern to share? Add yours to the catalog.
Meshery CLI
Import using mesheryctl; visit the docs for steps.
1. Apply a pattern file.
mesheryctl pattern apply -f [file | URL]
2. Onboard an application.
mesheryctl app onboard -f [file-path]
3. Apply a WASM filter file.
mesheryctl exp filter apply --file [GitHub Link]



AWS cloudfront controller

MESHERY481b

AWS CLOUDFRONT CONTROLLER
Description
This YAML file defines a Kubernetes Deployment for the ack-cloudfront-controller, a component responsible for managing AWS CloudFront resources in a Kubernetes environment. The Deployment specifies that one replica of the pod should be maintained (replicas: 1). Metadata labels are provided for identification and management purposes, such as app.kubernetes.io/name, app.kubernetes.io/instance, and others, to ensure proper categorization and management by Helm. The pod template section within the Deployment spec outlines the desired state of the pods, including the container's configuration. The container, named controller, uses the ack-cloudfront-controller:latest image and will run a binary (./bin/controller) with specific arguments to configure its operation, such as AWS region, endpoint URL, logging level, and resource tags. Environment variables are defined to provide necessary configuration values to the container. The container exposes an HTTP port (8080) and includes liveness and readiness probes to monitor and manage its health, ensuring the application is running properly and is ready to serve traffic.
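A minimal sketch of the Deployment shape described above (the probe paths, region value, and argument list are assumptions for illustration, not the canonical manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ack-cloudfront-controller
  labels:
    app.kubernetes.io/name: ack-cloudfront-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ack-cloudfront-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ack-cloudfront-controller
    spec:
      containers:
        - name: controller
          image: ack-cloudfront-controller:latest
          command: ["./bin/controller"]
          args: ["--aws-region", "$(AWS_REGION)"]
          env:
            - name: AWS_REGION
              value: us-west-2      # assumption: set to your AWS region
            - name: ACK_LOG_LEVEL
              value: info
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz        # assumption: probe path
              port: 8080
          readinessProbe:
            httpGet:
              path: /readyz         # assumption: probe path
              port: 8080
```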
Caveats and Considerations
1. Environment Variables: Verify that the environment variables such as AWS_REGION, AWS_ENDPOINT_URL, and ACK_LOG_LEVEL are correctly set according to your AWS environment and logging preferences. Incorrect values could lead to improper functioning or failure of the controller.
2. Secrets Management: If AWS credentials are required, make sure the AWS_SHARED_CREDENTIALS_FILE and AWS_PROFILE environment variables are correctly configured and the referenced Kubernetes secret exists. Missing or misconfigured secrets can prevent the controller from authenticating with AWS.
3. Resource Requests and Limits: Review and adjust the resource requests and limits to match the expected workload and available cluster resources. Insufficient resources can lead to performance issues, while overly generous requests can waste cluster resources.
4. Probes Configuration: The liveness and readiness probes are configured with specific paths and ports. Ensure that these endpoints are correctly implemented in the application. Misconfigured probes can result in the pod being killed or marked as unready.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
AWS rds controller

MESHERY46e4

AWS RDS CONTROLLER
Description
This YAML manifest defines a Kubernetes Deployment for the ACK RDS Controller application. It orchestrates the deployment of the application within a Kubernetes cluster, ensuring its availability and scalability. The manifest specifies various parameters such as the number of replicas, pod template configurations including container settings, environment variables, resource limits, and security context. Additionally, it includes probes for health checks, node selection preferences, tolerations, and affinity rules for optimal scheduling. The manifest encapsulates the deployment requirements necessary for the ACK RDS Controller application to run effectively in a Kubernetes environment.
Caveats and Considerations
1. Resource Allocation: Ensure that resource requests and limits are appropriately configured based on the expected workload of the application to avoid resource contention and potential performance issues.
2. Security Configuration: Review the security context settings, including privilege escalation, runAsNonRoot, and capabilities, to enforce security best practices and minimize the risk of unauthorized access or privilege escalation within the container.
3. Probe Configuration: Validate the configuration of liveness and readiness probes to ensure they accurately reflect the health and readiness of the application. Incorrect probe settings can lead to unnecessary pod restarts or deployment issues.
4. Environment Variables: Double-check the environment variables provided to the container, ensuring they are correctly set and necessary for the application's functionality. Incorrect or missing environment variables can cause runtime errors or unexpected behavior.
5. Volume Mounts: Verify the volume mounts defined in the deployment, especially if the application requires access to specific data or configuration files. Incorrect volume configurations can result in data loss or application malfunction.
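Container settings of the kind the caveats above refer to can be sketched as follows (the values and the probe endpoint are illustrative assumptions, not the chart's defaults):

```yaml
securityContext:
  allowPrivilegeEscalation: false
  runAsNonRoot: true
  capabilities:
    drop: ["ALL"]
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
livenessProbe:
  httpGet:
    path: /healthz   # assumption: adjust to the controller's actual health endpoint
    port: 8080
```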
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Accelerated mTLS handshake for Envoy data planes

MESHERY4421

ACCELERATED MTLS HANDSHAKE FOR ENVOY DATA PLANES
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
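The Istio-side Private Key Provider configuration mentioned above is set through ProxyConfig in the mesh config; a minimal sketch (the pollDelay value is illustrative):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      privateKeyProvider:
        cryptomb:
          # how long Envoy waits for the RSA request buffer to fill
          # before processing it anyway
          pollDelay: 10ms
```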
Caveats and Considerations
Ensure networking is set up properly and that the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
prometheus-operator-crd-cluster-roles

MESHERY4571
Acme Operator

MESHERY4627

ACME OPERATOR
Description
Let’s Encrypt uses the ACME protocol to verify that you control a given domain name and to issue you a certificate. To get a Let’s Encrypt certificate, you’ll need to choose a piece of ACME client software to use.
Caveats and Considerations
We recommend that most people start with the Certbot client. It can simply get a cert for you or also help you install, depending on what you prefer. It’s easy to use, works on many operating systems, and has great documentation.
Technologies
Related Patterns
Amazon Web Services IoT Architecture Diagram

MESHERY449f

AMAZON WEB SERVICES IOT ARCHITECTURE DIAGRAM
Description
This comprehensive IoT architecture harnesses the power of Amazon Web Services (AWS) to create a robust and scalable Internet of Things (IoT) ecosystem
Caveats and Considerations
It cannot be deployed because the nodes used to create the diagram are shapes and not components.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Apache Airflow

MESHERY41d4

APACHE AIRFLOW
Description
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed. Airflow works best with workflows that are mostly static and slowly changing. When the DAG structure is similar from one run to the next, it clarifies the unit of work and continuity. Other similar projects include Luigi, Oozie and Azkaban. Airflow is commonly used to process data, but has the opinion that tasks should ideally be idempotent (i.e., results of the task will be the same, and will not create duplicated data in a destination system), and should not pass large quantities of data from one task to the next (though tasks can pass metadata using Airflow's XCom feature). For high-volume, data-intensive tasks, a best practice is to delegate to external services specializing in that type of work. Airflow is not a streaming solution, but it is often used to process real-time data, pulling data off streams in batches.
Principles:
Dynamic: Airflow pipelines are configuration as code (Python), allowing for dynamic pipeline generation. This allows for writing code that instantiates pipelines dynamically.
Extensible: Easily define your own operators and executors, and extend the library so that it fits the level of abstraction that suits your environment.
Elegant: Airflow pipelines are lean and explicit. Parameterizing your scripts is built into the core of Airflow using the powerful Jinja templating engine.
Scalable: Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers.
Caveats and Considerations
Make sure to fill in your own Postgres username, password, host, port, etc. so that Airflow works against your database. Pass them as environment variables, or create Secrets for passwords and a ConfigMap for hosts and ports.
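One way to supply those Postgres values, as suggested above, is a Secret for the password and a ConfigMap for connection details (the resource names and values here are illustrative assumptions; match them to whatever your Airflow deployment references):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: airflow-postgres-credentials   # assumption: name your Airflow config points at
type: Opaque
stringData:
  password: change-me                  # replace with your real password
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: airflow-postgres-config
data:
  host: postgres.default.svc.cluster.local
  port: "5432"
  user: airflow
```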
Technologies
Related Patterns
My first k8s app

MESHERY496d
Apache ShardingSphere Operator

MESHERY4803

APACHE SHARDINGSPHERE OPERATOR
Description
The ShardingSphere Kubernetes Operator automates provisioning, management, and operations of ShardingSphere Proxy clusters running on Kubernetes. Apache ShardingSphere is an ecosystem to transform any database into a distributed database system, and enhance it with sharding, elastic scaling, encryption features & more.
Caveats and Considerations
Ensure Apache ShardingSphere and the Knative Service are registered as MeshModels.
Technologies
Related Patterns
Delay Action for Chaos Mesh

MESHERY4dcc
App-graph

MESHERY4f74

APP-GRAPH
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Argo CD w/Dex

MESHERY4c82

ARGO CD W/DEX
Description
The Argo CD server component exposes the API and UI. The operator creates a Service to expose this component and can be accessed through the various methods available in Kubernetes.
Caveats and Considerations
Dex can be used to delegate authentication to external identity providers like GitHub, SAML and others. SSO configuration of Argo CD requires updating the Argo CD CR with Dex connector settings.
Technologies
Related Patterns
Delay Action for Chaos Mesh

MESHERY4dcc
ArgoCD application controller

MESHERY48a9

ARGOCD APPLICATION CONTROLLER
Description
This YAML configuration describes a Kubernetes Deployment for the ArgoCD Application Controller. It includes metadata defining labels for identification purposes. The spec section outlines the deployment's details, including the desired number of replicas and a pod template. Within the pod template, there's a single container named argocd-application-controller, which runs the ArgoCD Application Controller binary. This container is configured with various environment variables sourced from ConfigMaps, defining parameters such as reconciliation timeouts, repository server details, logging settings, and affinity rules. Port 8082 is specified for readiness probes, and volumes are mounted for storing TLS certificates and temporary data. Additionally, the deployment specifies a service account and defines pod affinity rules for scheduling. These settings collectively ensure the reliable operation of the ArgoCD Application Controller within Kubernetes clusters, facilitating efficient management of applications within an ArgoCD instance.
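The container configuration described above can be sketched as below (the ConfigMap key and image tag are assumptions drawn from the description; pin a concrete ArgoCD version in practice):

```yaml
containers:
  - name: argocd-application-controller
    image: quay.io/argoproj/argocd:latest   # assumption: use a pinned release tag
    env:
      # reconciliation timeout sourced from the argocd-cm ConfigMap
      - name: ARGOCD_RECONCILIATION_TIMEOUT
        valueFrom:
          configMapKeyRef:
            name: argocd-cm
            key: timeout.reconciliation
            optional: true
    ports:
      - containerPort: 8082   # port used by the readiness probe
    readinessProbe:
      httpGet:
        path: /healthz        # assumption: probe path
        port: 8082
```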
Caveats and Considerations
1. Environment Configuration: Ensure that the environment variables configured for the application controller align with your deployment requirements. Review and adjust settings such as reconciliation timeouts, logging levels, and repository server details as needed.
2. Resource Requirements: Depending on your deployment environment and workload, adjust resource requests and limits for the container to ensure optimal performance and resource utilization.
3. Security: Pay close attention to security considerations, especially when handling sensitive data such as TLS certificates. Ensure that proper encryption and access controls are in place for any secrets used in the deployment.
4. High Availability: Consider strategies for achieving high availability and fault tolerance for the ArgoCD Application Controller. This may involve running multiple replicas of the controller across different nodes or availability zones.
5. Monitoring and Alerting: Implement robust monitoring and alerting mechanisms to detect and respond to any issues or failures within the ArgoCD Application Controller deployment. Utilize tools such as Prometheus and Grafana to monitor key metrics and set up alerts for critical events.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
ArgoCD-Application [Components added for Network, Storage and Orchestration]

MESHERY41e0

ARGOCD-APPLICATION [COMPONENTS ADDED FOR NETWORK, STORAGE AND ORCHESTRATION]
Description
This design deploys an ArgoCD application that includes an Nginx virtual service, an Nginx server, a Kubernetes pod autoscaler, OpenEBS's Jiva volume, and a sample ArgoCD application listening on 127.0.0.4.
Caveats and Considerations
Ensure networking is set up properly.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Autogenerated

MESHERY4102

AUTOGENERATED
Description
This YAML manifest defines a Kubernetes Deployment for the Thanos Operator, named "thanos-operator," with one replica. The deployment's pod template is labeled "app: thanos-operator" and includes security settings to run as a non-root user with specific user (1000) and group (2000) IDs. The main container, also named "thanos-operator," uses the "thanos-io/thanos:latest" image, runs with minimal privileges, and starts with the argument "--log.level=info." It listens on port 8080 for HTTP traffic and has liveness and readiness probes set to check the "/metrics" endpoint. Resource requests and limits are defined for CPU and memory. Additionally, the pod is scheduled on Linux nodes with specific node affinity rules and tolerations for certain node taints, ensuring appropriate node placement and scheduling.
Caveats and Considerations
1. Security Context:
   1.1 The runAsUser: 1000 and fsGroup: 2000 settings are essential for running the container with non-root privileges. Ensure that these user IDs are correctly configured and have the necessary permissions within your environment.
   1.2 Dropping all capabilities (drop: - ALL) enhances security but may limit certain functionalities. Verify that the Thanos container does not require any additional capabilities.
2. Image Tag: The image tag is set to "latest," which can introduce instability since it pulls the most recent image version that might not be thoroughly tested. Consider specifying a specific, stable version tag for better control over updates and rollbacks.
3. Resource Requests and Limits: The defined resource requests and limits (memory: "64Mi"/"128Mi", cpu: "250m"/"500m") might need adjustment based on the actual workload and performance characteristics of the Thanos Operator in your environment. Monitor resource usage and tweak these settings accordingly to prevent resource starvation or over-provisioning.
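A fragment illustrating the non-root IDs and a pinned image tag the caveats above call for (the version tag shown is an illustrative assumption; choose a release you have tested):

```yaml
spec:
  securityContext:
    runAsUser: 1000      # non-root user ID from the manifest
    fsGroup: 2000
  containers:
    - name: thanos-operator
      image: thanos-io/thanos:v0.34.0   # assumption: pin a tested release instead of latest
      securityContext:
        capabilities:
          drop: ["ALL"]
      resources:
        requests: { memory: 64Mi, cpu: 250m }
        limits: { memory: 128Mi, cpu: 500m }
```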
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Autoscaling based on Metrics in GKE

MESHERY400b

AUTOSCALING BASED ON METRICS IN GKE
Description
This design demonstrates how to automatically scale your Google Kubernetes Engine (GKE) workloads based on Prometheus-style metrics emitted by your application. It uses the [GKE workload metrics](https://cloud.google.com/stackdriver/docs/solutions/gke/managing-metrics#workload-metrics) pipeline to collect the metrics emitted from the example application and send them to [Cloud Monitoring](https://cloud.google.com/monitoring), and then uses the [HorizontalPodAutoscaler](https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler) along with the [Custom Metrics Adapter](https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/custom-metrics-stackdriver-adapter) to scale the application.
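The HorizontalPodAutoscaler side of this pipeline can be sketched as follows (the metric and object names are illustrative assumptions; use the metric your application actually emits):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: custom-metrics-hpa          # assumption: illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: custom-metrics-example    # assumption: your workload's name
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: custom_prometheus_metric   # assumption: metric exported by the app
        target:
          type: AverageValue
          averageValue: "20"        # scale up when the per-pod average exceeds 20
```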
Caveats and Considerations
Add your own custom Prometheus deployment to GKE for better scaling of workloads.
Technologies
Related Patterns
HorizontalPodAutoscaler

MESHERY41d1
Bank of Anthos

MESHERY48be

BANK OF ANTHOS
Description
Bank of Anthos is a sample HTTP-based web app that simulates a bank's payment processing network, allowing users to create artificial bank accounts and complete transactions.
Caveats and Considerations
Ensure enough resources are available on the cluster.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
BookInfo App w/o Kubernetes

MESHERY47b4

BOOKINFO APP W/O KUBERNETES
Description
The Bookinfo application is a collection of microservices that work together to display information about a book. The main microservice is called productpage, which fetches data from the details and reviews microservices to populate the book's page. The details microservice contains specific information about the book, such as its ISBN and number of pages. The reviews microservice contains reviews of the book and also makes use of the ratings microservice to retrieve ranking information for each review. The reviews microservice has three different versions: v1, v2, and v3. In v1, the microservice does not interact with the ratings service. In v2, it calls the ratings service and displays the rating using black stars, ranging from 1 to 5. In v3, it also calls the ratings service but displays the rating using red stars, again ranging from 1 to 5. These different versions allow for flexibility and experimentation with different ways of presenting the book's ratings to users.
Caveats and Considerations
Users need to ensure that their cluster is properly configured with Istio, including the installation of the necessary components and enabling sidecar injection for the microservices. Ensure that Meshery Adapter for Istio service mesh is installed properly for easy installation/registration of Istio's MeshModels with Meshery Server. Another consideration is the resource requirements of the application. The Bookinfo application consists of multiple microservices, each running as a separate container. Users should carefully assess the resource needs of the application and ensure that their cluster has sufficient capacity to handle the workload. This includes considering factors such as CPU, memory, and network bandwidth requirements.
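Enabling sidecar injection, as noted above, is typically done with a namespace label; a sketch (the namespace name is an assumption):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo               # assumption: deploy Bookinfo into its own namespace
  labels:
    istio-injection: enabled   # tells Istio to inject sidecar proxies automatically
```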
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Browserless Chrome

MESHERY4c4b

BROWSERLESS CHROME
Description
Chrome as a service container. Bring your own hardware or cloud. Homepage: https://www.browserless.io

## Configuration

Browserless can be configured via environment variables:

```yaml
env:
  PREBOOT_CHROME: "true"
```
Caveats and Considerations
Check out the [official documentation](https://docs.browserless.io/docs/docker.html) for the available options.

## Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| replicaCount | int | `1` | Number of replicas (pods) to launch. |
| image.repository | string | `"browserless/chrome"` | Name of the image repository to pull the container image from. |
| image.pullPolicy | string | `"IfNotPresent"` | [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#updating-images) for updating already existing images on a node. |
| image.tag | string | `""` | Image tag override for the default value (chart appVersion). |
| imagePullSecrets | list | `[]` | Reference to one or more secrets to be used when [pulling images](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret) (from private registries). |
| nameOverride | string | `""` | A name in place of the chart name for `app:` labels. |
| fullnameOverride | string | `""` | A name to substitute for the full names of resources. |
| volumes | list | `[]` | Additional storage [volumes](https://kubernetes.io/docs/concepts/storage/volumes/). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#volumes-1) for details. |
| volumeMounts | list | `[]` | Additional [volume mounts](https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#volumes-1) for details. |
| envFrom | list | `[]` | Additional environment variables mounted from [secrets](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables) or [config maps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables) for details. |
| env | object | `{}` | Additional environment variables passed directly to containers. See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#environment-variables) for details. |
| serviceAccount.create | bool | `true` | Enable service account creation. |
| serviceAccount.annotations | object | `{}` | Annotations to be added to the service account. |
| serviceAccount.name | string | `""` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template. |
| podAnnotations | object | `{}` | Annotations to be added to pods. |
| podSecurityContext | object | `{}` | Pod [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context) for details. |
| securityContext | object | `{}` | Container [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#security-context-1) for details. |
| service.annotations | object | `{}` | Annotations to be added to the service. |
| service.type | string | `"ClusterIP"` | Kubernetes [service type](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types). |
| service.loadBalancerIP | string | `nil` | Only applies when the service type is LoadBalancer. The load balancer will be created with the IP specified in this field. |
| service.loadBalancerSourceRanges | list | `[]` | If specified (and supported by the cloud provider), traffic through the load balancer will be restricted to the specified client IPs. Valid values are IP CIDR blocks. |
| service.port | int | `80` | Service port. |
| service.nodePort | int | `nil` | Service node port (when applicable). |
| service.externalTrafficPolicy | string | `nil` | Route external traffic to node-local or cluster-wide endpoints. Useful for [preserving the client source IP](https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip). |
| resources | object | No requests or limits. | Container resource [requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/). See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#resources) for details. |
| autoscaling | object | Disabled by default. | Autoscaling configuration (see [values.yaml](values.yaml) for details). |
| nodeSelector | object | `{}` | [Node selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) configuration. |
| tolerations | list | `[]` | [Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) for node taints. See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) for details. |
| affinity | object | `{}` | [Affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) configuration. See the [API reference](https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling) for details. |
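A small values override putting several of these keys together (the tag shown is a hypothetical example; pick a tag published for the chart):

```yaml
# values.yaml override, e.g. `helm install browserless <chart> -f values.yaml`
replicaCount: 2
image:
  tag: "1.61.0"        # assumption: pin an explicit published tag
service:
  type: ClusterIP
  port: 80
env:
  PREBOOT_CHROME: "true"   # pre-boot Chrome so sessions start faster
```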
Technologies
Related Patterns
Dapr OAuth Authorization to External Service

MESHERY4ce9
Busybox (single)

MESHERY4c98

BUSYBOX (SINGLE)
Description
This design deploys a simple BusyBox app inside the Layer5-test namespace.
Caveats and Considerations
None
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Busybox (single) (fresh)

MESHERY4db7

BUSYBOX (SINGLE) (FRESH)
Description
This design deploys a simple BusyBox app inside the Layer5-test namespace.
Caveats and Considerations
None
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Catalog Design2

MESHERY41bb

CATALOG DESIGN2
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Consul on kubernetes

MESHERY429d

CONSUL ON KUBERNETES
Description
Consul is a tool for discovering, configuring, and managing services in distributed systems. It provides features like service discovery, health checking, key-value storage, and distributed coordination. In Kubernetes, Consul can be useful in several ways:
1. Service Discovery: Kubernetes already has built-in service discovery through DNS and environment variables. However, Consul provides more advanced features such as service registration, DNS-based service discovery, and health checking. This can be particularly useful if you have services deployed both within and outside of Kubernetes, as Consul can provide a unified service discovery mechanism across your entire infrastructure.
2. Configuration Management: Consul includes a key-value store that can be used to store configuration data. This can be used to configure applications dynamically at runtime, allowing for more flexible and dynamic deployments.
3. Health Checking: Consul can perform health checks on services to ensure they are functioning correctly. If a service fails its health check, Consul can automatically remove it from the pool of available instances, preventing traffic from being routed to it until it recovers.
4. Service Mesh: Consul can also be used as a service mesh in Kubernetes, providing features like traffic splitting, encryption, and observability. This can help you to manage communication between services within your Kubernetes cluster more effectively.
Overall, Consul can complement Kubernetes by providing additional features and capabilities for managing services in distributed systems. It can help to simplify and streamline the management of complex microservices architectures, providing greater visibility, resilience, and flexibility.
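A common way to run Consul on Kubernetes is HashiCorp's official Helm chart; a minimal values sketch (field names follow the chart's documented values; the replica count is illustrative):

```yaml
# values for the hashicorp/consul Helm chart
global:
  name: consul
server:
  replicas: 3          # a 3-node server cluster for quorum
connectInject:
  enabled: true        # enables the service-mesh sidecar injector
ui:
  enabled: true        # exposes the Consul web UI
```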
Caveats and Considerations
Customize the design according to your requirements. The image is pulled from Docker Hub.
Technologies
Related Patterns
Apache Airflow

MESHERY41d4
CryptoMB

MESHERY441b

CRYPTOMB
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
CryptoMB-TLS-handshake-acceleration-for-Istio

MESHERY4f96

CRYPTOMB-TLS-HANDSHAKE-ACCELERATION-FOR-ISTIO
Description
Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.

Envoy uses BoringSSL as the default TLS library. BoringSSL supports setting private key methods for offloading asynchronous private key operations, and Envoy implements a private key provider framework to allow creation of Envoy extensions which handle TLS handshake private key operations (signing and decryption) using the BoringSSL hooks.

CryptoMB private key provider is an Envoy extension which handles BoringSSL TLS RSA operations using Intel AVX-512 multi-buffer acceleration. When a new handshake happens, BoringSSL invokes the private key provider to request the cryptographic operation, and then control returns to Envoy. The RSA requests are gathered in a buffer. When the buffer is full or the timer expires, the private key provider invokes Intel AVX-512 processing of the buffer. When processing is done, Envoy is notified that the cryptographic operation is done and that it may continue with the handshakes.
Caveats and Considerations
None
Technologies
Related Patterns
Istio Operator

MESHERY4a76
CryptoMB-TLS-handshake-acceleration-for-Istio

MESHERY42b7

CRYPTOMB-TLS-HANDSHAKE-ACCELERATION-FOR-ISTIO
Description
Envoy uses BoringSSL as the default TLS library. BoringSSL supports setting private key methods for offloading asynchronous private key operations, and Envoy implements a private key provider framework to allow creation of Envoy extensions which handle TLS handshakes private key operations (signing and decryption) using the BoringSSL hooks.\\
\\
CryptoMB private key provider is an Envoy extension which handles BoringSSL TLS RSA operations using Intel AVX-512 multi-buffer acceleration. When a new handshake happens, BoringSSL invokes the private key provider to request the cryptographic operation, and then the control returns to Envoy. The RSA requests are gathered in a buffer. When the buffer is full or the timer expires, the private key provider invokes Intel AVX-512 processing of the buffer. When processing is done, Envoy is notified that the cryptographic operation is done and that it may continue with the handshakes.\\
Caveats and Considerations
None
Technologies
Related Patterns
Istio Operator

MESHERY4a76
CryptoMB.yml

MESHERY4c1f

RELATED PATTERNS
Pod Readiness

MESHERY4b83
CRYPTOMB.YML
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
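In Istio, the Private Key Provider is enabled through ProxyConfig, as mentioned above; a minimal mesh-wide sketch (the `pollDelay` value is illustrative):

```yaml
# Illustrative IstioOperator fragment enabling the CryptoMB private key
# provider for all proxies via the mesh-wide default ProxyConfig.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      privateKeyProvider:
        cryptomb:
          pollDelay: 10ms
```

The same `privateKeyProvider` block can alternatively be scoped to individual workloads via the `proxy.istio.io/config` pod annotation rather than applied mesh-wide.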
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Dapr OAuth Authorization to External Service

MESHERY4ce9

RELATED PATTERNS
Browserless Chrome

MESHERY4c4b
DAPR OAUTH AUTHORIZATION TO EXTERNAL SERVICE
Description
This design walks you through the steps of setting up the OAuth middleware to enable a service to interact with external services requiring authentication. This design separates the authentication/authorization concerns from the application. Check out https://github.com/dapr/samples/tree/master/middleware-oauth-microsoftazure for more information, and try it out in your own environment.
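The middleware is declared as a Dapr Component; a hedged sketch of the shape such a component takes, using the `msgraphsp` name and placeholder values from the Azure sample referenced above:

```yaml
# Sketch of a Dapr OAuth2 client-credentials middleware component;
# all values are placeholders to be replaced for your environment.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: msgraphsp
spec:
  type: middleware.http.oauth2clientcredentials
  version: v1
  metadata:
  - name: clientId
    value: "YOUR_APPLICATION_ID"
  - name: clientSecret
    value: "YOUR_CLIENT_SECRET"
  - name: scopes
    value: "https://graph.microsoft.com/.default"
  - name: tokenURL
    value: "https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token"
```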
Caveats and Considerations
Replace the placeholders with actual values before applying the configuration to your Kubernetes cluster:

1. Replace `"YOUR_APPLICATION_ID"`, `"YOUR_CLIENT_SECRET"`, and `"YOUR_TENANT_ID"` with your actual values in the `msgraphsp` component metadata:

```yaml
metadata:
  # OAuth2 ClientID; for the Microsoft Identity Platform it is the AAD Application ID
  - name: clientId
    value: "your_actual_application_id"
  # OAuth2 Client Secret
  - name: clientSecret
    value: "your_actual_client_secret"
  # Application Scope for the Microsoft Graph API (vs. User Scope)
  - name: scopes
    value: "https://graph.microsoft.com/.default"
  # Token URL for the Microsoft Identity Platform; TenantID is the Tenant
  # (also sometimes called Directory) ID of the AAD
  - name: tokenURL
    value: "https://login.microsoftonline.com/your_actual_tenant_id/oauth2/v2.0/token"
```

2. Apply the modified YAML configuration to your Kubernetes cluster using `kubectl apply -f your_file.yaml`. Ensure you've replaced `"your_actual_application_id"`, `"your_actual_client_secret"`, and `"your_actual_tenant_id"` with the values for your Microsoft Graph application and Azure Active Directory configuration before applying.
Technologies
Related Patterns
Browserless Chrome

MESHERY4c4b
Dapr with Kubernetes events

MESHERY4215

RELATED PATTERNS
Robot Shop Sample App

MESHERY4c4e
DAPR WITH KUBERNETES EVENTS
Description
This design shows an example of running Dapr with a Kubernetes events input binding. You'll deploy the Node application, which requires a component definition with a Kubernetes event binding component. Check out https://github.com/dapr/samples/tree/master/read-kubernetes-events#read-kubernetes-events for more information.
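The event binding itself is a Dapr Component; a hedged sketch of its likely shape (the component name and values are illustrative, following the sample linked above):

```yaml
# Sketch of a Dapr Kubernetes-events input binding component;
# the name, namespace, and resync period are illustrative.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kubeevents
spec:
  type: bindings.kubernetes
  version: v1
  metadata:
  - name: namespace          # namespace to watch for events
    value: "default"
  - name: resyncPeriodInSec  # how often the watch is re-established
    value: "5"
```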
Caveats and Considerations
Make sure to replace items such as Docker images and credentials before trying this out on your local cluster.
Technologies
Related Patterns
Robot Shop Sample App

MESHERY4c4e
Datadog agent on k8's

MESHERY465c

RELATED PATTERNS
Robot Shop Sample App

MESHERY4c4e
DATADOG AGENT ON K8'S
Description
The Datadog Agent is a lightweight software component deployed within Kubernetes clusters to collect metrics, traces, and logs. It automatically monitors Kubernetes resources, including pods and nodes, providing visibility into system performance and application behavior. With features like autodiscovery, tracing, log collection, and extensive integrations, the Datadog Agent helps teams efficiently monitor, troubleshoot, and optimize their Kubernetes-based applications and infrastructure.
Caveats and Considerations
This is a basic example of deploying the Datadog Agent on Kubernetes. For more, please refer to the official docs: https://docs.datadoghq.com/containers/kubernetes/installation/?tab=operator
Technologies
Related Patterns
Robot Shop Sample App

MESHERY4c4e
Delay Action for Chaos Mesh

MESHERY4dcc

RELATED PATTERNS
Postgres Deployment

MESHERY49ba
DELAY ACTION FOR CHAOS MESH
Description
A simple example
Caveats and Considerations
An example the delay action
Technologies
Related Patterns
Postgres Deployment

MESHERY49ba
Deployment Web

MESHERY477c

RELATED PATTERNS
Pod Readiness

MESHERY4b83
DEPLOYMENT WEB
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Distributed Database w/ ShardingSphere

MESHERY4ba3

RELATED PATTERNS
Pod Readiness

MESHERY4b83
DISTRIBUTED DATABASE W/ SHARDINGSPHERE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
ELK stack

MESHERY4b16

RELATED PATTERNS
Robot Shop Sample App

MESHERY4c4e
ELK STACK
Description
An ELK stack in Kubernetes, deployed with a simple Python app using Logstash, Kibana, Filebeat, and Elasticsearch.
Caveats and Considerations
Technologies included are Kubernetes, Elasticsearch, Logstash, Kibana, Python, etc.
Technologies
Related Patterns
Robot Shop Sample App

MESHERY4c4e
Edge Permission Relationship

MESHERY4ce5

RELATED PATTERNS
Istio Operator

MESHERY4a76
EDGE PERMISSION RELATIONSHIP
Description
A relationship that binds permission between components. Eg: ClusterRole defines a set of permissions, ClusterRoleBinding binds those permissions to subjects like service accounts.
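A minimal example of the pair of components such a relationship binds (names are illustrative):

```yaml
# A ClusterRole defining a set of permissions...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# ...and a ClusterRoleBinding granting those permissions to a ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```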
Caveats and Considerations
NA
Technologies
Related Patterns
Istio Operator

MESHERY4a76
ElasticSearch

MESHERY4654

RELATED PATTERNS
ELASTICSEARCH
Description
Kubernetes makes it trivial for anyone to build and scale Elasticsearch clusters. Here, you'll find out how to do so. The current Elasticsearch version is 5.6.2.
Caveats and Considerations
Elasticsearch for Kubernetes: Current pod descriptors use an emptyDir for storing data in each data node container. This is meant to be for the sake of simplicity and should be adapted according to your storage needs.
Technologies
Related Patterns
Emojivoto Application

MESHERY4c01

RELATED PATTERNS
Pod Readiness

MESHERY4b83
EMOJIVOTO APPLICATION
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Envoy using BoringSSL

MESHERY447c
ENVOY USING BORINGSSL
Description
Envoy uses BoringSSL as the default TLS library. BoringSSL supports setting private key methods for offloading asynchronous private key operations, and Envoy implements a private key provider framework to allow creation of Envoy extensions which handle TLS handshakes private key operations (signing and decryption) using the BoringSSL hooks.
CryptoMB private key provider is an Envoy extension which handles BoringSSL TLS RSA operations using Intel AVX-512 multi-buffer acceleration. When a new handshake happens, BoringSSL invokes the private key provider to request the cryptographic operation, and then the control returns to Envoy. The RSA requests are gathered in a buffer. When the buffer is full or the timer expires, the private key provider invokes Intel AVX-512 processing of the buffer. When processing is done, Envoy is notified that the cryptographic operation is done and that it may continue with the handshakes.
Caveats and Considerations
test
Technologies
Example Edge-Firewall Relationship

MESHERY490f

RELATED PATTERNS
Istio Operator

MESHERY4a76
EXAMPLE EDGE-FIREWALL RELATIONSHIP
Description
A relationship that acts as a firewall for ingress and egress traffic from Pods.
Caveats and Considerations
NA
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Example Edge-Network Relationship

MESHERY4ee9

RELATED PATTERNS
Istio Operator

MESHERY4a76
EXAMPLE EDGE-NETWORK RELATIONSHIP
Description
The design showcases the operational dynamics of the Edge-Network Relationship. There are two ways you can use this design in your architecture: 1. Clone this design by clicking the clone button. 2. Start from scratch by creating an edge-network relationship on your own. How to create an Edge-Network relationship on your own: 1. Navigate to MeshMap. 2. Click on the Kubernetes icon inside the dock; it will open a Kubernetes drawer from which you can select any component that Kubernetes supports. 3. Search for the Ingress and Service components from the search bar provided in the drawer. 4. Drag and drop both components onto the canvas. 5. Hover over the Ingress component; handlebars will show up on its four sides. 6. Move the cursor close to any of the handlebars; an arrow will show up. Click on that arrow. This opens up two options: 1. Question mark: opens the Help Center. 2. Arrow (edge handle): used for creating the edge relationship. 7. Click on the edge handle and move your cursor close to the Service component. An edge will appear going from the Ingress to the Service component, representing the edge relationship between the two components. 8. Congratulations! You just created a relationship between Ingress and Service.
Caveats and Considerations
No Caveats or Considerations
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Example Edge-Permission Relationship

MESHERY4f9f

RELATED PATTERNS
Istio Operator

MESHERY4a76
EXAMPLE EDGE-PERMISSION RELATIONSHIP
Description
The design showcases the operational dynamics of the Edge-Permission relationship. To engage with its functionality, follow the steps below: 1. Duplicate this design by cloning it. 2. Modify the name of the service account. Upon completion, you'll notice that the connection visually represented by the edge vanishes, and the ClusterRoleBinding (CRB) is disassociated from both the ClusterRole (CR) and Service Account (SA). To restore this relationship, you can either: 1. Drag the CRB from the CR to the SA, then release the mouse click. This triggers the recreation of the relationship, as the relationship constraints are satisfied. 2. Revert the name of the SA. This automatically recreates the relationship, as the relationship constraints are satisfied. These are a few of the ways to experience this relationship.
Caveats and Considerations
NA
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Example Labels and Annotations

MESHERY4649

RELATED PATTERNS
Pod Readiness

MESHERY4b83
EXAMPLE LABELS AND ANNOTATIONS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Exploring Kubernetes Pods With Meshery

MESHERY44f5

RELATED PATTERNS
Istio Operator

MESHERY4a76
EXPLORING KUBERNETES PODS WITH MESHERY
Description
This design maps to the "Exploring Kubernetes Pods with Meshery" tutorial and is the end result of the design. It can be used to quickly deploy an nginx pod exposed through a service.
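The end result can be approximated by a manifest along these lines (names are illustrative; the service uses NodePort, as noted in the caveats):

```yaml
# An nginx Pod exposed through a NodePort Service; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```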
Caveats and Considerations
Service type is NodePort.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
External-Dns for Kubernetes

MESHERY4db0

RELATED PATTERNS
Service Internal Traffic Policy

MESHERY41b6
EXTERNAL-DNS FOR KUBERNETES
Description
ExternalDNS synchronizes exposed Kubernetes Services and Ingresses with DNS providers. In contrast to KubeDNS, Kubernetes' cluster-internal DNS server, ExternalDNS makes Kubernetes resources discoverable via public DNS servers. Like KubeDNS, it retrieves a list of resources (Services, Ingresses, etc.) from the Kubernetes API to determine a desired list of DNS records. Unlike KubeDNS, however, it's not a DNS server itself, but merely configures other DNS providers accordingly (e.g., AWS Route 53 or Google Cloud DNS). In a broader sense, ExternalDNS allows you to control DNS records dynamically via Kubernetes resources in a DNS provider-agnostic way.
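A hedged sketch of how ExternalDNS is typically pointed at a provider; the flags come from the upstream project, while the image tag, domain, and provider choice are illustrative:

```yaml
# Container fragment of an ExternalDNS Deployment; tag and values are placeholders.
containers:
- name: external-dns
  image: registry.k8s.io/external-dns/external-dns:v0.14.0
  args:
  - --source=service        # watch Services for DNS annotations
  - --source=ingress        # also watch Ingresses
  - --provider=aws          # e.g. Route 53; swap for your DNS provider
  - --domain-filter=example.com
  - --policy=upsert-only    # never delete records ExternalDNS didn't create
```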
Caveats and Considerations
For more information and considerations, check out the repo: https://github.com/kubernetes-sigs/external-dns/?tab=readme-ov-file
Technologies
Related Patterns
Service Internal Traffic Policy

MESHERY41b6
Fault-tolerant batch workloads on GKE

MESHERY4b55

RELATED PATTERNS
Istio Operator

MESHERY4a76
FAULT-TOLERANT BATCH WORKLOADS ON GKE
Description
A batch workload is a process typically designed to have a start and a completion point. You should consider batch workloads on GKE if your architecture involves ingesting, processing, and outputting data instead of using raw data. Areas like machine learning, artificial intelligence, and high performance computing (HPC) feature different kinds of batch workloads, such as offline model training, batched prediction, data analytics, simulation of physical systems, and video processing. By designing containerized batch workloads, you can leverage the following GKE benefits: An open standard, broad community, and managed service. Cost efficiency from effective workload and infrastructure orchestration and specialized compute resources. Isolation and portability of containerization, allowing the use of cloud as overflow capacity while maintaining data security. Availability of burst capacity, followed by rapid scale down of GKE clusters.
Caveats and Considerations
Ensure proper networking of components for efficient functioning
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Fortio Server

MESHERY4614

RELATED PATTERNS
Istio Operator

MESHERY4a76
FORTIO SERVER
Description
This infrastructure design defines a service and a deployment for a component called Fortio-server **Service: fortio-server-service**- Type: Kubernetes Service - Namespace: Default - Port: Exposes port 8080 - Selector: Routes traffic to pods with the label app: fortio-server - Session Affinity: None - Service Type: ClusterIP - MeshMap Metadata: Describes its relationship with Kubernetes and its category as Scheduling & Orchestration. - Position: Positioned within a graphical representation of infrastructure. **Deployment: fortio-server-deployment** - Type: Kubernetes Deployment - Namespace: Default - Replicas: 1 - Selector: Matches pods with the label app: fortio-server - Pod Template: Specifies a container image for Fortio-server, its resource requests, and a service account. - Container Image: Uses the fortio/fortio:1.32.1 image - MeshMap Metadata: Specifies its parent-child relationship with the fortio-server-service and provides styling information. - Position: Positioned relative to the service within the infrastructure diagram. This configuration sets up a service and a corresponding deployment for Fortio-server in a Kubernetes environment. The service exposes port 8080, while the deployment runs a container with the Fortio-server image. These components are visualized using MeshMap for tracking and visualization purposes.
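Stripped of the MeshMap metadata, the configuration described above corresponds roughly to this sketch:

```yaml
# Fortio server Service and Deployment as described in this design.
apiVersion: v1
kind: Service
metadata:
  name: fortio-server-service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: fortio-server
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio-server-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio-server
  template:
    metadata:
      labels:
        app: fortio-server
    spec:
      containers:
      - name: fortio-server
        image: fortio/fortio:1.32.1
        ports:
        - containerPort: 8080
```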
Caveats and Considerations
Ensure networking is set up properly and enough resources are available.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Gerrit operator

MESHERY4f6c

RELATED PATTERNS
Istio Operator

MESHERY4a76
GERRIT OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment named "gerrit-operator-deployment" for managing a containerized application called "gerrit-operator". It specifies that one replica of the application should be deployed. The Deployment ensures that the application is always running by managing pod replicas based on the provided selector labels. The template section describes the pod specification, including labels, service account, security context, and container configuration. The container named "gerrit-operator-container" is configured with an image from a container registry, with resource limits and requests defined for CPU and memory. Environment variables are set for various parameters like the namespace, pod name, and platform type. Additionally, specific intervals for syncing Gerrit projects and group members are defined. Further configuration options can be added as needed, such as volumes and initContainers.
Caveats and Considerations
1. Resource Requirements: Ensure that the resource requests and limits specified for CPU and memory are appropriate for the workload and the cluster's capacity to prevent performance issues or resource contention. 2. Image Pull Policy: The imagePullPolicy set to "Always" ensures that the latest image version is always pulled from the container registry. This may increase deployment time and consume more network bandwidth, so consider the trade-offs based on your deployment requirements. 3. Security Configuration: The security context settings, such as runAsNonRoot and allowPrivilegeEscalation: false, enhance pod security by enforcing non-root user execution and preventing privilege escalation. Verify that these settings align with your organization's security policies. 4. Environment Variables: Review the environment variables set for WATCH_NAMESPACE, POD_NAME, PLATFORM_TYPE, GERRIT_PROJECT_SYNC_INTERVAL, and GERRIT_GROUP_MEMBER_SYNC_INTERVAL to ensure they are correctly configured for your deployment environment and application requirements.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
GlusterFS Service

MESHERY4aa9

RELATED PATTERNS
Pod Readiness

MESHERY4b83
GLUSTERFS SERVICE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
GuestBook App

MESHERY4a54

RELATED PATTERNS
Istio Operator

MESHERY4a76
GUESTBOOK APP
Description
The GuestBook App is a cloud-native application designed using Kubernetes as the underlying orchestration and management system. It consists of various services and components deployed within Kubernetes namespaces. The default namespace represents the main environment where the application operates. The frontend-cyrdx service is responsible for handling frontend traffic and is deployed as a Kubernetes service with a selector for the guestbook application and frontend tier. The frontend-fsfct deployment runs multiple replicas of the frontend component, which utilizes the gb-frontend image and exposes port 80. The guestbook namespace serves as a logical grouping for components related to the GuestBook App. The redis-follower-armov service handles follower Redis instances for the backend, while the redis-follower-nwlew deployment manages multiple replicas of the follower Redis container. The redis-leader-fhxla deployment represents the leader Redis container, and the redis-leader-vjtmi service exposes it as a Kubernetes service. These components work together to create a distributed and scalable architecture for the GuestBook App, leveraging Kubernetes for container orchestration and management.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
GuestBook App

MESHERY4b31

RELATED PATTERNS
Istio Operator

MESHERY4a76
GUESTBOOK APP
Description
The GuestBook App is a cloud-native application designed using Kubernetes as the underlying orchestration and management system. It consists of various services and components deployed within Kubernetes namespaces. The default namespace represents the main environment where the application operates. The frontend-cyrdx service is responsible for handling frontend traffic and is deployed as a Kubernetes service with a selector for the guestbook application and frontend tier. The frontend-fsfct deployment runs multiple replicas of the frontend component, which utilizes the gb-frontend image and exposes port 80. The guestbook namespace serves as a logical grouping for components related to the GuestBook App. The redis-follower-armov service handles follower Redis instances for the backend, while the redis-follower-nwlew deployment manages multiple replicas of the follower Redis container. The redis-leader-fhxla deployment represents the leader Redis container, and the redis-leader-vjtmi service exposes it as a Kubernetes service. These components work together to create a distributed and scalable architecture for the GuestBook App, leveraging Kubernetes for container orchestration and management.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
GuestBook App (Copy)

MESHERY4263

RELATED PATTERNS
Istio Operator

MESHERY4a76
GUESTBOOK APP (COPY)
Description
The GuestBook App is a cloud-native application designed using Kubernetes as the underlying orchestration and management system. It consists of various services and components deployed within Kubernetes namespaces. The default namespace represents the main environment where the application operates. The frontend-cyrdx service is responsible for handling frontend traffic and is deployed as a Kubernetes service with a selector for the guestbook application and frontend tier. The frontend-fsfct deployment runs multiple replicas of the frontend component, which utilizes the gb-frontend image and exposes port 80. The guestbook namespace serves as a logical grouping for components related to the GuestBook App. The redis-follower-armov service handles follower Redis instances for the backend, while the redis-follower-nwlew deployment manages multiple replicas of the follower Redis container. The redis-leader-fhxla deployment represents the leader Redis container, and the redis-leader-vjtmi service exposes it as a Kubernetes service. These components work together to create a distributed and scalable architecture for the GuestBook App, leveraging Kubernetes for container orchestration and management.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Guestbook App (All-in-One)

MESHERY4b20

RELATED PATTERNS
Pod Readiness

MESHERY4b83
GUESTBOOK APP (ALL-IN-ONE)
Description
This is a sample guestbook app to demonstrate distributed systems
Caveats and Considerations
1. Ensure networking is setup properly. 2. Ensure enough disk space is available
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
HAProxy_Ingress_Controller

MESHERY45dd

RELATED PATTERNS
Service Internal Traffic Policy

MESHERY41b6
HAPROXY_INGRESS_CONTROLLER
Description
HAProxy Ingress is a Kubernetes ingress controller: it configures a HAProxy instance to route incoming requests from an external network to the in-cluster applications. The routing configurations are built reading specs from the Kubernetes cluster. Updates made to the cluster are applied on the fly to the HAProxy instance.
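Once the controller is running, routing is declared with standard Ingress resources; a sketch (host, service name, and port are illustrative):

```yaml
# Ingress routed by the HAProxy ingress controller; values are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: haproxy
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 8080
```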
Caveats and Considerations
Make sure that the paths in the Ingress are configured correctly. For more caveats and considerations, check out the docs: https://haproxy-ingress.github.io/docs/
Technologies
Related Patterns
Service Internal Traffic Policy

MESHERY41b6
Hello WASM

MESHERY4255

RELATED PATTERNS
Pod Readiness

MESHERY4b83
HELLO WASM
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Hierarchical Parent Relationship

MESHERY4a65

RELATED PATTERNS
Istio Operator

MESHERY4a76
HIERARCHICAL PARENT RELATIONSHIP
Description
A relationship that defines whether a component can be a parent of other components. E.g., Namespace is the parent, and Role and ConfigMap are children.
Caveats and Considerations
""
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Hierarchical Inventory Relationship

MESHERY4c5a

RELATED PATTERNS
Istio Operator

MESHERY4a76
HIERARCHICAL INVENTORY RELATIONSHIP
Description
A hierarchical inventory relationship in which the configuration of the (parent) component is patched with the configuration of the (child) component. E.g., the configuration of the Deployment (parent) component is patched with the configuration received from the ConfigMap (child) component.
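For instance, a Deployment's pod template can pull its configuration from a ConfigMap via `envFrom` (names are illustrative):

```yaml
# Deployment (parent) whose container environment is populated from a
# ConfigMap (child); names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:latest
        envFrom:
        - configMapRef:
            name: web-config  # configuration received from the ConfigMap (child)
```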
Caveats and Considerations
NA
Technologies
Related Patterns
Istio Operator

MESHERY4a76
HorizontalPodAutoscaler

MESHERY41d1

RELATED PATTERNS
Autoscaling based on Metrics in GKE

MESHERY400b
HORIZONTALPODAUTOSCALER
Description
A HorizontalPodAutoscaler (HPA for short) automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand. Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example, memory or CPU) to the Pods that are already running for the workload. If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.
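A representative manifest (the target name, replica bounds, and CPU threshold are illustrative):

```yaml
# HPA scaling a Deployment between 2 and 10 replicas on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```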
Caveats and Considerations
Modify deployments and names according to your requirements.
Technologies
Related Patterns
Autoscaling based on Metrics in GKE

MESHERY400b
Install-Traefik-as-ingress-controller

MESHERY4796

RELATED PATTERNS
Service Internal Traffic Policy

MESHERY41b6
INSTALL-TRAEFIK-AS-INGRESS-CONTROLLER
Description
This design creates a ServiceAccount, DaemonSet, Service, ClusterRole, and ClusterRoleBinding resources for Traefik. The DaemonSet ensures that a single Traefik instance is deployed on each node in the cluster, facilitating load balancing and routing of incoming traffic. The Service allows external traffic to reach Traefik, while the ClusterRole and ClusterRoleBinding provide the necessary permissions for Traefik to interact with Kubernetes resources such as services, endpoints, and ingresses. Overall, this setup enables Traefik to efficiently manage ingress traffic within the Kubernetes environment, providing features like routing, load balancing, and SSL termination.
Caveats and Considerations
- Resource Utilization: Ensure monitoring and scalability to manage resource consumption across nodes, especially in large clusters.
- Security Measures: Implement strict access controls and firewall rules to protect Traefik's admin port (8080) from unauthorized access.
- Configuration Complexity: Understand Traefik's configuration intricacies for routing rules and SSL termination to avoid misconfigurations.
- Compatibility Testing: Regularly test Traefik's compatibility with Kubernetes and other cluster components before upgrading versions.
- High Availability Setup: Employ strategies like pod anti-affinity rules to ensure Traefik's availability and uptime.
- Performance Optimization: Conduct performance tests to minimize latency and overhead introduced by Traefik in the data path.
Technologies
Related Patterns
Service Internal Traffic Policy

MESHERY41b6
Istio BookInfo Application

MESHERY4bda

RELATED PATTERNS
Pod Readiness

MESHERY4b83
ISTIO BOOKINFO APPLICATION
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Istio Control Plane

MESHERY4a09

RELATED PATTERNS
Istio Operator

MESHERY4a76
ISTIO CONTROL PLANE
Description
This design includes an Istio control plane, which will deploy to the istio-system namespace by default.
Caveats and Considerations
No namespaces are annotated for sidecar provisioning in this design.
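To opt a namespace into sidecar injection for this control plane, label it accordingly (the namespace name is illustrative):

```yaml
# Namespace labeled for automatic Istio sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    istio-injection: enabled
```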
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Istio HTTP Header Filter (Clone)

MESHERY4bfd

RELATED PATTERNS
Pod Readiness

MESHERY4b83
ISTIO HTTP HEADER FILTER (CLONE)
Description
This is a test design
Caveats and Considerations
NA
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Istio Operator

MESHERY4a76

RELATED PATTERNS
Fault-tolerant batch workloads on GKE

MESHERY4b55
ISTIO OPERATOR
Description
This YAML defines a Kubernetes Deployment for the Istio Operator within the istio-operator namespace. The deployment ensures a single replica of the Istio Operator pod is always running, which is managed by a service account named istio-operator. The deployment's metadata includes the namespace and the deployment name. The pod selector matches pods with the label name: istio-operator, ensuring the correct pods are managed. The pod template specifies metadata and details for the containers, including the container name istio-operator and the image gcr.io/istio-testing/operator:1.5-dev, which runs the istio-operator command with the server argument.
Caveats and Considerations
1. Namespace Configuration: Ensure that the istio-operator namespace exists before applying this deployment. If the namespace is not present, the deployment will fail. 2. Image Version: The image specified (gcr.io/istio-testing/operator:1.5-dev) is a development version. It is crucial to verify the stability and compatibility of this version for production environments. Using a stable release version is generally recommended. 3. Resource Allocation: The resource limits and requests are set to specific values (200m CPU, 256Mi memory for limits; 50m CPU, 128Mi memory for requests). These values should be reviewed and adjusted based on the actual resource availability and requirements of your Kubernetes cluster to prevent resource contention or overallocation. 4. Leader Election: The environment variables include LEADER_ELECTION_NAMESPACE which is derived from the pod's namespace. Ensure that the leader election mechanism is properly configured and that only one instance of the operator becomes the leader to avoid conflicts. 5. Security Context: The deployment does not specify a security context for the container. It is advisable to review and define appropriate security contexts to enhance the security posture of the deployment, such as running the container as a non-root user.
Technologies
Related Patterns
Fault-tolerant batch workloads on GKE

MESHERY4b55
JAX 'Hello World' using NVIDIA GPUs A100-80GB on GKE

MESHERY4cfd

RELATED PATTERNS
Istio Operator

MESHERY4a76
JAX 'HELLO WORLD' USING NVIDIA GPUS A100-80GB ON GKE
Description
JAX is a rapidly growing Python library for high-performance numerical computing and machine learning (ML) research. With applications in large language models, drug discovery, physics ML, reinforcement learning, and neural graphics, JAX has seen incredible adoption in the past few years. JAX offers numerous benefits for developers and researchers, including an easy-to-use NumPy API, auto differentiation and optimization. JAX also includes support for distributed processing across multi-node and multi-GPU systems in a few lines of code, with accelerated performance through XLA-optimized kernels on NVIDIA GPUs. We show how to run JAX multi-GPU-multi-node applications on GKE (Google Kubernetes Engine) using the A2 ultra machine series, powered by NVIDIA A100 80GB Tensor Core GPUs. It runs a simple Hello World application on 4 nodes with 8 processes and 8 GPUs each.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Jaeger operator

MESHERY4ab9

RELATED PATTERNS
Istio Operator

MESHERY4a76
JAEGER OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment for the Jaeger Operator. This Deployment, named "jaeger-operator," specifies that a container will be created using the jaegertracing/jaeger-operator:master image. The container runs with the argument "start," which initiates the operator's main process. Additionally, the container is configured with an environment variable, LOG-LEVEL, set to "debug," enabling detailed logging for troubleshooting and monitoring purposes. This setup allows the Jaeger Operator to manage Jaeger tracing instances within the Kubernetes cluster, ensuring efficient deployment, scaling, and maintenance of distributed tracing components.
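Only the fields stated above are shown in this sketch of the Deployment; the selector labels are illustrative, and the replica count is left to the Kubernetes default of one, as the caveats note:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger-operator
spec:
  selector:
    matchLabels:
      name: jaeger-operator
  template:
    metadata:
      labels:
        name: jaeger-operator
    spec:
      containers:
        - name: jaeger-operator
          image: jaegertracing/jaeger-operator:master
          args: ["start"]
          env:
            - name: LOG-LEVEL
              value: debug
```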
Caveats and Considerations
1. Image Tag: The image tag master indicates that the latest, potentially unstable version of the Jaeger Operator is being used. For production environments, it's safer to use a specific, stable version to avoid unexpected issues. 2. Resource Limits and Requests: The deployment does not specify resource requests and limits for the container. It's crucial to define these to ensure that the Jaeger Operator has enough CPU and memory to function correctly, while also preventing it from consuming excessive resources on the cluster. 3. Replica Count: The spec section does not specify the number of replicas for the deployment. By default, Kubernetes will create one replica, which might not provide high availability. Consider increasing the replica count for redundancy. 4. Namespace: The deployment does not specify a namespace. Ensure that the deployment is applied to the appropriate namespace, particularly if you have a multi-tenant cluster. 5. Security Context: There is no security context defined. Adding a security context can enhance the security posture of the container by restricting permissions and enforcing best practices like running as a non-root user.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Jenkins operator

MESHERY42f3

RELATED PATTERNS
Istio Operator

MESHERY4a76
JENKINS OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment for the Jenkins Operator, ensuring the deployment of a single instance within the cluster. It specifies metadata including labels and annotations for identification and description purposes. The deployment is set to run one replica of the Jenkins Operator container, configured with security settings to run as a non-root user and disallow privilege escalation. Environment variables are provided for dynamic configuration within the container, such as the namespace and Pod name. Resource requests and limits are also defined to manage CPU and memory allocation effectively. Overall, this Deployment aims to ensure the smooth and secure operation of the Jenkins Operator within the Kubernetes environment.
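A sketch of the Deployment described above, using the image reference and environment variables named in the caveats; the resource figures are placeholders, since the entry does not state them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins-operator
  template:
    metadata:
      labels:
        name: jenkins-operator
    spec:
      containers:
        - name: jenkins-operator
          image: myregistry/jenkins-operator:latest
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: jenkins-operator
          resources:
            requests:
              cpu: 100m       # placeholder figures; tune per cluster
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
```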
Caveats and Considerations
1. Resource Allocation: The CPU and memory requests and limits defined in the configuration should be carefully adjusted based on the workload and available resources in the Kubernetes cluster to avoid resource contention and potential performance issues. 2. Image Repository Access: Ensure that the container image specified in the configuration (myregistry/jenkins-operator:latest) is accessible from the Kubernetes cluster. Proper image pull policies and authentication mechanisms should be configured to allow the Kubernetes nodes to pull the image from the specified registry. 3. Security Context: The security settings configured in the security context of the container (runAsNonRoot, allowPrivilegeEscalation) are essential for maintaining the security posture of the Kubernetes cluster. Ensure that these settings align with your organization's security policies and best practices. 4. Environment Variables: The environment variables defined in the configuration, such as WATCH_NAMESPACE, POD_NAME, OPERATOR_NAME, and PLATFORM_TYPE, are used to dynamically configure the Jenkins Operator container. Ensure that these variables are correctly set to provide the necessary context and functionality to the operator.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Key cloak operator

MESHERY4f70

RELATED PATTERNS
Istio Operator

MESHERY4a76
KEY CLOAK OPERATOR
Description
This YAML snippet describes a Kubernetes Deployment for a Keycloak operator, ensuring a single replica. It specifies labels and annotations for metadata, including a service account. The pod template defines a container running the Keycloak operator image, with environment variables set for namespace and pod name retrieval. Security context settings prevent privilege escalation. Probes are configured for liveness and readiness checks on port 8081, with resource requests and limits ensuring proper resource allocation for the container.
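The probe and security settings described above correspond to a pod template along these lines (the image reference is a placeholder and resource figures are omitted, since the entry does not state them):

```yaml
    spec:
      serviceAccountName: keycloak-operator
      containers:
        - name: keycloak-operator
          image: keycloak-operator:latest   # placeholder image reference
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
          env:
            - name: WATCH_NAMESPACE
              value: ""                     # empty: the operator watches all namespaces
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8081
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8081
```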
Caveats and Considerations
1. Single Replica: The configuration specifies only one replica, which means there's no built-in redundancy or high availability. Consider adjusting the replica count based on your availability requirements. 2. Resource Allocation: Resource requests and limits are set for CPU and memory. Ensure these values are appropriate for your workload and cluster capacity to avoid performance issues or resource contention. 3. Security Context: The security context is configured to run the container as a non-root user and disallow privilege escalation. Ensure these settings align with your security policies and container requirements. 4. Probes Configuration: Liveness and readiness probes are set up to check the health of the container on port 8081. Ensure that the specified endpoints (/healthz and /readyz) are correctly implemented in the application code. 5. Namespace Configuration: The WATCH_NAMESPACE environment variable is set to an empty string, potentially causing the operator to watch all namespaces. Ensure this behavior aligns with your intended scope of operation and namespace isolation requirements.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Kubernetes Deployment with Azure File Storage

MESHERY487a

RELATED PATTERNS
Pod Readiness

MESHERY4b83
KUBERNETES DEPLOYMENT WITH AZURE FILE STORAGE
Description
This design sets up a Kubernetes Deployment deploying two NGINX containers. Each container utilizes an Azure File storage volume for shared data. The NGINX instances serve web content while accessing an Azure File share, enabling scalable and shared storage for the web servers.
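A hedged sketch of the shared-volume arrangement described above; the secret name, share name, and mount path are illustrative, not taken from the catalog item:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-azurefile
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-azurefile
  template:
    metadata:
      labels:
        app: nginx-azurefile
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: shared-data
          azureFile:
            secretName: azure-secret   # Secret holding the storage account credentials
            shareName: web-content     # illustrative Azure File share name
            readOnly: false
```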
Caveats and Considerations
1. Azure Configuration: Ensure that your Azure configuration, including secrets, is correctly set up to access the Azure File share.
2. Data Sharing: Multiple NGINX containers share the same storage. Be cautious when handling write operations to avoid conflicts or data corruption.
3. Scalability: Consider the scalability of both NGINX and Azure File storage to meet your application's demands.
4. Security: Safeguard the secrets used to access Azure resources and limit access to only authorized entities.
5. Pod Recovery: Ensure that the pod recovery strategy is well-defined to handle disruptions or node failures.
6. Azure Costs: Monitor and manage costs associated with Azure File storage, as it may incur charges based on usage.
7. Maintenance: Plan for regular maintenance and updates of both NGINX and Azure configurations to address security and performance improvements.
8. Monitoring: Implement monitoring and alerts for both the NGINX containers and Azure File storage to proactively detect and address issues.
9. Backup and Disaster Recovery: Establish a backup and disaster recovery plan to safeguard data stored in Azure File storage.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Kubernetes Engine Training Example

MESHERY40f1

RELATED PATTERNS
Pod Readiness

MESHERY4b83
KUBERNETES ENGINE TRAINING EXAMPLE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Kubernetes Metrics Server Configuration

MESHERY4892

RELATED PATTERNS
Kubernetes cronjob

MESHERY4483
KUBERNETES METRICS SERVER CONFIGURATION
Description
This design configures the Kubernetes Metrics Server for monitoring cluster-wide resource metrics. It defines a Kubernetes Deployment, Role-Based Access Control (RBAC) rules, and other resources for the Metrics Server's deployment and operation.
Caveats and Considerations
This design configures the Kubernetes Metrics Server for resource monitoring. Ensure that RBAC and ServiceAccount configurations are secure to prevent unauthorized access. Adjust Metrics Server settings for specific metrics and monitor resource usage regularly to prevent resource overuse. Implement probes for reliability and maintain correct API service settings. Plan for scalability and choose the appropriate namespace. Set up monitoring for issue detection and establish data backup and recovery plans. Regularly update components for improved security and performance.
Technologies
Related Patterns
Kubernetes cronjob

MESHERY4483
Kubernetes Service for Product Page App

MESHERY4c57

RELATED PATTERNS
Istio Operator

MESHERY4a76
KUBERNETES SERVICE FOR PRODUCT PAGE APP
Description
This design installs a namespace, a deployment, and a service. Both the deployment and the service are deployed in the my-bookinfo namespace. The service is exposed on port 9081.
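The namespace and service portions described above can be sketched as follows (the service name and selector labels are illustrative; only port 9081 is stated in the entry):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-bookinfo
---
apiVersion: v1
kind: Service
metadata:
  name: productpage
  namespace: my-bookinfo
spec:
  selector:
    app: productpage   # must match the deployment's pod labels
  ports:
    - port: 9081
```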
Caveats and Considerations
Ensure sufficient resources are available in the cluster and networking is exposed properly.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Kubernetes cronjob

MESHERY4483

RELATED PATTERNS
Kubernetes Metrics Server Configuration

MESHERY4892
KUBERNETES CRONJOB
Description
This design contains a single Kubernetes Cronjob.
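A minimal CronJob of this kind might look like the sketch below; all values (schedule, image, command) are illustrative, since the entry does not state them:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/5 * * * *"   # every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "date"]
```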
Caveats and Considerations
This design is for learning purposes and may be freely copied and distributed.
Technologies
Related Patterns
Kubernetes Metrics Server Configuration

MESHERY4892
Limit Range

MESHERY4cb9

RELATED PATTERNS
Pod Readiness

MESHERY4b83
LIMIT RANGE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Litmus Chaos Operator

MESHERY4844

RELATED PATTERNS
Istio Operator

MESHERY4a76
LITMUS CHAOS OPERATOR
Description
This YAML file defines a Kubernetes Deployment for the Litmus Chaos Operator. It creates a single replica of the chaos-operator pod within the litmus namespace. The deployment is labeled for organization and management purposes, specifying details like the version and component. The container runs the litmuschaos/chaos-operator:ci image with a command to enable leader election and sets various environment variables for operation. Additionally, it uses the litmus service account to manage permissions, ensuring the operator runs with the necessary access rights within the Kubernetes cluster.
Caveats and Considerations
1. Namespace Watch: The WATCH_NAMESPACE environment variable is set to an empty string, which means the operator will watch all namespaces. This can have security implications and might require broader permissions. Consider restricting it to specific namespaces if not required. 2. Image Tag: The image is set to litmuschaos/chaos-operator:ci, which uses the latest code from the continuous integration pipeline. This might include unstable or untested features. For production environments, it's recommended to use a stable and tagged version of the image. 3. Leader Election: The -leader-elect=true argument ensures high availability by allowing only one active instance of the operator at a time. Ensure that this behavior aligns with your high-availability requirements. 4. Resource Limits and Requests: There are no resource requests or limits defined for the chaos-operator container. It's good practice to specify these to ensure the container has the necessary resources and to prevent it from consuming excessive resources.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Load Balanced AWS Architecture

MESHERY4079

RELATED PATTERNS
Pod Readiness

MESHERY4b83
LOAD BALANCED AWS ARCHITECTURE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Mattermost Cluster Install

MESHERY41c2

RELATED PATTERNS
Istio Operator

MESHERY4a76
MATTERMOST CLUSTER INSTALL
Description
The cluster-installation service is based on the Mattermost Operator model and operates at version 0.3.3. It is responsible for managing the installation and configuration of the Mattermost operator in the default namespace.
Caveats and Considerations
Ensure sufficient resources are available in the cluster
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Meshery v0.6.73

MESHERY4b52

RELATED PATTERNS
Istio Operator

MESHERY4a76
MESHERY V0.6.73
Description
A self-service engineering platform, Meshery is the open source, cloud native manager that enables the design and management of all Kubernetes-based infrastructure and applications. Among other features, the extensible platform offers visual and collaborative GitOps, freeing you from the chains of YAML while managing Kubernetes multi-cluster deployments.
Caveats and Considerations
Not for Production deployment. Does not include Meshery Cloud.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Minecraft App

MESHERY48dd

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MINECRAFT APP
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Minimal Nginx Ingress

MESHERY4d2c

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MINIMAL NGINX INGRESS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Mount(Pod -> PersistentVolume)

MESHERY429b

RELATED PATTERNS
Istio Operator

MESHERY4a76
MOUNT(POD -> PERSISTENTVOLUME)
Description
A relationship that represents volume mounts between components, e.g. the Pod component is bound to the PersistentVolume component via the PersistentVolumeClaim component.
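The Pod side of this relationship can be sketched as follows (names and mount path are illustrative; the referenced claim in turn binds to a PersistentVolume):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim   # the PVC binds the Pod to a PersistentVolume
```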
Caveats and Considerations
NA
Technologies
Related Patterns
Istio Operator

MESHERY4a76
My first k8s app

MESHERY496d

RELATED PATTERNS
Apache Airflow

MESHERY41d4
MY FIRST K8S APP
Description
This is a simple Kubernetes workflow application that has a Deployment, Pods, and a Service. It is the first design used for exploring the Meshery Cloud platform.
Caveats and Considerations
No caveats; Free to reuse
Technologies
Related Patterns
Apache Airflow

MESHERY41d4
MySQL Deployment

MESHERY492d

RELATED PATTERNS
Kubernetes cronjob

MESHERY4483
MYSQL DEPLOYMENT
Description
This is a simple MySQL deployment that installs a Kubernetes Deployment, a volume, and a Service.
Caveats and Considerations
No caveats. Ensure the ports are exposed accurately.
Technologies
Related Patterns
Kubernetes cronjob

MESHERY4483
MySQL installation with cinder volume plugin

MESHERY4693

RELATED PATTERNS
Istio Operator

MESHERY4a76
MYSQL INSTALLATION WITH CINDER VOLUME PLUGIN
Description
Cinder is a Block Storage service for OpenStack. It can be used as an attachment mounted to a pod in Kubernetes.
Caveats and Considerations
Currently the Cinder volume plugin is designed to work only on Linux hosts and supports ext4 and ext3 as filesystem types. Make sure that the kubelet host machine has the required executables installed.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
NGINX deployment

MESHERY4981

RELATED PATTERNS
Pod Readiness

MESHERY4b83
NGINX DEPLOYMENT
Description
This design contains an NGINX deployment.
Caveats and Considerations
This design is for learning purposes and may be freely copied and distributed.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
NGINX with init container and vhost

MESHERY4135

RELATED PATTERNS
Pod Readiness

MESHERY4b83
NGINX WITH INIT CONTAINER AND VHOST
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Namespace

MESHERY4f8c

RELATED PATTERNS
Pod Readiness

MESHERY4b83
NAMESPACE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Network policy

MESHERY4da3

RELATED PATTERNS
Service Internal Traffic Policy

MESHERY41b6
NETWORK POLICY
Description
If you want to control traffic flow at the IP address or port level for TCP, UDP, and SCTP protocols, then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network. NetworkPolicies apply to a connection with a pod on one or both ends, and are not relevant to other connections.
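A sketch of a NetworkPolicy with both ingress and egress rules of the kind described above; the labels, ports, and protocols are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend        # the pods this policy applies to
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: db
      ports:
        - protocol: TCP
          port: 5432
```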
Read moreCaveats and Considerations
This is a sample network policy with ingress and egress rules defined; adjust it according to your requirements.
Technologies
Related Patterns
Service Internal Traffic Policy

MESHERY41b6
Network(Service -> Endpoint)

MESHERY440f

RELATED PATTERNS
Istio Operator

MESHERY4a76
NETWORK(SERVICE -> ENDPOINT)
Description
A relationship that defines network edges between components. In this design, the Edge network relationship defines a network configuration for managing services and endpoints in a Kubernetes environment, showing the relationship between two Kubernetes components: Endpoint and Service.
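The Service-to-Endpoints edge can be sketched as below; the names, IP, and port are illustrative, and the shared name is what binds the two objects:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # matching the Service name links the two objects
subsets:
  - addresses:
      - ip: 10.0.0.5
    ports:
      - port: 3306
```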
Caveats and Considerations
NA
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Nginx Deployment

MESHERY4c89

RELATED PATTERNS
Pod Readiness

MESHERY4b83
NGINX DEPLOYMENT
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Nodejs-kubernetes-microservices

MESHERY496f

RELATED PATTERNS
Pod Readiness

MESHERY4b83
NODEJS-KUBERNETES-MICROSERVICES
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Online Boutique

MESHERY498b

RELATED PATTERNS
Pod Readiness

MESHERY4b83
ONLINE BOUTIQUE
Description
Google's Microservices sample app is named Online Boutique. Docs - https://docs.meshery.io/guides/sample-apps#online-boutique Source - https://github.com/GoogleCloudPlatform/microservices-demo
Caveats and Considerations
N/A
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Persistence-volume-claim

MESHERY4671

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PERSISTENCE-VOLUME-CLAIM
Description
Defines a Kubernetes PersistentVolumeClaim (PVC) requesting 10Gi storage with 'manual' storage class. Supports both ReadWriteMany and ReadWriteOnce access modes, with optional label-based PV selection. Carefully adjust storage size for specific storage solutions, and consider annotations, security, monitoring, and scalability needs.
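The PVC described above corresponds to a manifest along these lines (the claim name and selector label are illustrative; the storage class, access modes, and size come from the description):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
    - ReadWriteOnce
  selector:              # optional label-based PV selection
    matchLabels:
      type: local        # illustrative label
  resources:
    requests:
      storage: 10Gi
```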
Caveats and Considerations
Ensure that the chosen storageClassName is properly configured and available in your cluster. Be cautious about the ReadWriteMany and ReadWriteOnce access modes, as they impact compatibility with PersistentVolumes (PVs). The selector should match existing PVs in your cluster if used. Adjust the storage size to align with your storage solution, keeping in mind the AWS EFS special case. Review the need for annotations, confirm the namespace, and implement security measures. Monitor and set up alerts for your PVC, and plan for backup and disaster recovery. Lastly, ensure scalability to meet your application's storage requirements.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Persistent Volume

MESHERY4f33

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PERSISTENT VOLUME
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Persistent Volume Claims

MESHERY4a28

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PERSISTENT VOLUME CLAIMS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Life Cycle

MESHERY437a

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD LIFE CYCLE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Liveness

MESHERY4a7e

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD LIVENESS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Multi Containers

MESHERY436c

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD MULTI CONTAINERS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Node Affinity

MESHERY4134

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD NODE AFFINITY
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Priviledged Simple

MESHERY4568

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD PRIVILEDGED SIMPLE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Readiness

MESHERY4b83

RELATED PATTERNS
Example Labels and Annotations

MESHERY4649
POD READINESS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Example Labels and Annotations

MESHERY4649
Pod Resource Limit

MESHERY41a3

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD RESOURCE LIMIT
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Resource Memory Request Limit

MESHERY45d9

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD RESOURCE MEMORY REQUEST LIMIT
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Resource Request

MESHERY4a23

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD RESOURCE REQUEST
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Service Account Token

MESHERY4756

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD SERVICE ACCOUNT TOKEN
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Volume Mount SubPath

MESHERY4e52

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD VOLUME MOUNT SUBPATH
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Volume Mount SubPath-expr

MESHERY4fde

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD VOLUME MOUNT SUBPATH-EXPR
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pod Volumes Projected

MESHERY4d18

RELATED PATTERNS
Pod Readiness

MESHERY4b83
POD VOLUMES PROJECTED
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Pods Image Pull Policy

MESHERY4c85

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PODS IMAGE PULL POLICY
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Postgres Deployment

MESHERY49ba

RELATED PATTERNS
Delay Action for Chaos Mesh

MESHERY4dcc
POSTGRES DEPLOYMENT
Description
The combination of PostgreSQL and Kubernetes provides a scalable and highly available (HA) database solution that’s well suited for modern application development and deployment practices. While creating a HA solution is out of the scope of this article, you’ll learn how to set up a simple container with PostgreSQL, which offers a number of benefits.
Caveats and Considerations
It’s important to remember that this needs to be configured to store data in node-local memory.
Technologies
Related Patterns
Delay Action for Chaos Mesh

MESHERY4dcc
Prometheus Sample

MESHERY4bea

RELATED PATTERNS
Thanos Query Design

MESHERY4034
PROMETHEUS SAMPLE
Description
This is a simple Prometheus monitoring design.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Thanos Query Design

MESHERY4034
Prometheus adapter

MESHERY406f

RELATED PATTERNS
Istio Operator

MESHERY4a76
PROMETHEUS ADAPTER
Description
This YAML configuration defines a Kubernetes Deployment for the prometheus-adapter, a component of the kube-prometheus stack within the monitoring namespace. The deployment manages two replicas of the prometheus-adapter pod to ensure high availability. Each pod runs a container using the prometheus-adapter image from the Kubernetes registry, configured with various command-line arguments to specify settings like the configuration file path, metrics re-list interval, and Prometheus URL.
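A sketch of the Deployment's core, reconstructed from the description; the image tag, config path, interval, and Prometheus URL are illustrative examples of the command-line settings mentioned above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-adapter
  namespace: monitoring
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-adapter
  template:
    metadata:
      labels:
        app.kubernetes.io/name: prometheus-adapter
    spec:
      serviceAccountName: prometheus-adapter
      containers:
        - name: prometheus-adapter
          image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.11.1   # illustrative tag
          args:
            - --config=/etc/adapter/config.yaml
            - --metrics-relist-interval=1m
            - --prometheus-url=http://prometheus-k8s.monitoring.svc:9090/
```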
Caveats and Considerations
1. Namespace: Ensure that the monitoring namespace exists before deploying this configuration. 2. ConfigMap: Verify that the adapter-config ConfigMap is created and contains the correct configuration data required by the prometheus-adapter. 3. TLS Configuration: The deployment includes TLS settings with specific cipher suites; ensure these align with your security policies and requirements. 4. Resource Allocation: The specified CPU and memory limits and requests should be reviewed to match the expected load and cluster capacity. 5. Service Account: Ensure that the prometheus-adapter service account has the necessary permissions to operate correctly within the cluster
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Prometheus dummy exporter

MESHERY487c

RELATED PATTERNS
Robot Shop Sample App

MESHERY4c4e
PROMETHEUS DUMMY EXPORTER
Description
A simple prometheus-dummy-exporter container exposes a single Prometheus metric with a constant value. The metric name, value, and the port on which it is served can be passed as flags. This container is then deployed in the same pod as another container, prometheus-to-sd, configured to use the same port; it scrapes the metric and publishes it to Stackdriver. This adapter isn't part of the sample code, but a standard component used by many Kubernetes applications. You can learn more about it at https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd
Caveats and Considerations
It is only developed for Google Kubernetes Engine to collect metrics from system services in order to support Kubernetes users. We designed the tool to be lean when deployed as a sidecar in your pod. It's intended to support only the metrics the Kubernetes team at Google needs and is not meant for end-users.
Technologies
Related Patterns
Robot Shop Sample App

MESHERY4c4e
Prometheus-monitoring-ns

MESHERY420f

RELATED PATTERNS
Istio Operator

MESHERY4a76
PROMETHEUS-MONITORING-NS
Description
This is a simple Prometheus monitoring design.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
QAT-TLS-handshake-acceleration-for-Istio.yaml

MESHERY4baf

RELATED PATTERNS
Pod Readiness

MESHERY4b83
QAT-TLS-HANDSHAKE-ACCELERATION-FOR-ISTIO.YAML
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for the custom Intel configuration.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
RBAC for ElasticSearch

MESHERY4af2

RELATED PATTERNS
Istio Operator

MESHERY4a76
RBAC FOR ELASTICSEARCH
Description
This infrastructure design defines resources related to Role-Based Access Control (RBAC) for Elasticsearch in a Kubernetes environment. Here's a brief description of the components: 1.) zk (ZooKeeper StatefulSet): - A StatefulSet named zk with 3 replicas is defined to manage ZooKeeper instances. - It uses the ordered pod management policy, ensuring that pods are started in order. - ZooKeeper is configured with specific settings, including ports, data directories, and resource requests. - It has affinity settings to avoid running multiple ZooKeeper instances on the same node. - The configuration includes liveness and readiness probes to ensure the health of the pods. 2.) zk-cs (ZooKeeper Service): - A Kubernetes Service named zk-cs is defined to provide access to the ZooKeeper instances. - It exposes the client port (port 2181) used to connect to ZooKeeper. 3.) zk-hs (ZooKeeper Headless Service): - Another Kubernetes Service named zk-hs is defined as headless (with cluster IP set to None). - It exposes ports for the ZooKeeper server (port 2888) and leader election (port 3888). - This headless service is typically used for direct communication with individual ZooKeeper instances. 4.) zk-pdb (ZooKeeper PodDisruptionBudget): - A PodDisruptionBudget named zk-pdb is defined to limit the maximum number of unavailable ZooKeeper pods to 1. - This ensures that at least one ZooKeeper instance remains available during disruptions.
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Redis Leader Deployment

MESHERY48cc

RELATED PATTERNS
Istio Operator

MESHERY4a76
REDIS LEADER DEPLOYMENT
Description
This is a simple deployment of the Redis leader app. It includes one replica using the image docker.io/redis:6.0.5, requesting cpu: 100m and memory: 100Mi, and exposing containerPort: 6379.
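The stated values map onto a Deployment along these lines (only the labels are illustrative; the image, resources, and port come from the description):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: leader
  template:
    metadata:
      labels:
        app: redis
        role: leader
    spec:
      containers:
        - name: leader
          image: docker.io/redis:6.0.5
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 6379
```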
Caveats and Considerations
None
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Redis master deployment

MESHERY4357

RELATED PATTERNS
Pod Readiness

MESHERY4b83
REDIS MASTER DEPLOYMENT
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Redis_using_configmap

MESHERY447e

RELATED PATTERNS
Pod Readiness

MESHERY4b83
REDIS_USING_CONFIGMAP
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Relationship Master Design

MESHERY43e0

RELATED PATTERNS
Pod Readiness

MESHERY4b83
RELATIONSHIP MASTER DESIGN
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Resilient Web App

MESHERY4e64

RELATED PATTERNS
Istio Operator

MESHERY4a76
RESILIENT WEB APP
Description
This is a simple app that uses NGINX as a web proxy to improve the resiliency of the web app.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Robot Shop Sample App

MESHERY4c4e

RELATED PATTERNS
Prometheus dummy exporter

MESHERY487c
ROBOT SHOP SAMPLE APP
Description
Stan's Robot Shop is a sample microservice application you can use as a sandbox to test and learn containerised application orchestration and monitoring techniques. It is not intended to be a comprehensive reference example of how to write a microservices application, although you will better understand some of those concepts by playing with Stan's Robot Shop. To be clear, the error handling is patchy and there is no security built into the application.
Caveats and Considerations
This sample microservice application has been built using these technologies: NodeJS (Express), Java (Spring Boot), Python (Flask), Golang, PHP (Apache), MongoDB, Redis, MySQL (Maxmind data), RabbitMQ, Nginx, AngularJS (1.x)
Technologies
Related Patterns
Prometheus dummy exporter

MESHERY487c
Run DaemonSet on GKE Autopilot

MESHERY4bf8

RELATED PATTERNS
Istio Operator

MESHERY4a76
RUN DAEMONSET ON GKE AUTOPILOT
Description
GKE uses the total size of your deployed workloads to determine the size of the nodes that Autopilot provisions for the cluster. If you add or resize a DaemonSet after Autopilot provisions a node, GKE won't resize existing nodes to accommodate the new total workload size. DaemonSets with resource requests larger than the allocatable capacity of existing nodes, after accounting for system pods, also won't get scheduled on those nodes. Starting in GKE version 1.27.6-gke.1248000, clusters in Autopilot mode detect nodes that can't fit all DaemonSets and, over time, migrate workloads to larger nodes that can fit all DaemonSets. This process takes some time, especially if the nodes run system Pods, which need extra time to gracefully terminate so that there's no disruption to core cluster capabilities. In GKE version 1.27.5-gke.200 or earlier, we recommend cordoning and draining nodes that can't accommodate DaemonSet Pods.
Caveats and Considerations
For all GKE versions, we recommend the following best practices when deploying DaemonSets on Autopilot: Deploy DaemonSets before any other workloads. Set a higher PriorityClass on DaemonSets than regular Pods. The higher PriorityClass lets GKE evict lower-priority Pods to accommodate DaemonSet pods if the node can accommodate those pods. This helps to ensure that the DaemonSet is present on each node without triggering node recreation.
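The PriorityClass recommendation above can be sketched as follows (the class name, priority value, and DaemonSet image are hypothetical):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-priority           # hypothetical name
value: 1000000                       # higher than regular workload Pods
preemptionPolicy: PreemptLowerPriority
description: "Lets GKE evict lower-priority Pods to make room for DaemonSet pods."
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent                   # hypothetical example DaemonSet
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      priorityClassName: daemonset-priority   # reference the PriorityClass above
      containers:
      - name: agent
        image: example/node-agent:1.0         # placeholder image
```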
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Running ZooKeeper, A Distributed System Coordinator

MESHERY4339

RELATED PATTERNS
Istio Operator

MESHERY4a76
RUNNING ZOOKEEPER, A DISTRIBUTED SYSTEM COORDINATOR
Description
This cloud native design defines a Kubernetes configuration for a ZooKeeper deployment. It includes a Service, PodDisruptionBudget, and StatefulSet. It defines a Service named zk-hs with labels indicating it is part of the zk application. It exposes two ports, 2888 and 3888, and has a clusterIP of None, meaning it is only accessible within the cluster. The Service selects Pods with the zk label. The next part defines another Service named zk-cs with similar labels and a single port, 2181, used for client connections. It also selects Pods with the zk label. Following that, a PodDisruptionBudget named zk-pdb is defined. It sets the selector to match Pods with the zk label and allows a maximum of 1 Pod to be unavailable during disruptions. Finally, a StatefulSet named zk is defined. It selects Pods with the zk label and uses the zk-hs Service for the headless service. It specifies 3 replicas, a RollingUpdate update strategy, and the OrderedReady pod management policy. The Pod template includes affinity rules for pod anti-affinity, resource requests for CPU and memory, container ports for ZooKeeper, a command to start ZooKeeper with specific configurations, and readiness and liveness probes. It also defines a volume claim template for data storage.
Caveats and Considerations
You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB of memory.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
RuntimeClass

MESHERY4c6c

RELATED PATTERNS
Istio Operator

MESHERY4a76
RUNTIMECLASS
Description
This pattern establishes and visualizes the relationship between RuntimeClass (a Kubernetes component) and other Kubernetes components.
Caveats and Considerations
The name of the RuntimeClass is referenced by the other Kubernetes components.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Serve an LLM using multi-host TPUs on GKE

MESHERY4813

RELATED PATTERNS
Pod Readiness

MESHERY4b83
SERVE AN LLM USING MULTI-HOST TPUS ON GKE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Serve an LLM with multiple GPUs in GKE

MESHERY4d06

RELATED PATTERNS
Istio Operator

MESHERY4a76
SERVE AN LLM WITH MULTIPLE GPUS IN GKE
Description
Serve a large language model (LLM) with GPUs in Google Kubernetes Engine (GKE). Create a GKE Standard cluster that uses multiple L4 GPUs and prepare the GKE infrastructure to serve either of the following models: 1. Falcon 40b. 2. Llama 2 70b.
Caveats and Considerations
Depending on the data format of the model, the number of GPUs varies. In this design, each model uses two L4 GPUs.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Service Internal Traffic Policy

MESHERY41b6

RELATED PATTERNS
Install-Traefik-as-ingress-controller

MESHERY4796
SERVICE INTERNAL TRAFFIC POLICY
Description
Service Internal Traffic Policy enables internal traffic restrictions so that internal traffic is routed only to endpoints within the node the traffic originated from. The "internal" traffic here refers to traffic originating from Pods in the current cluster. This can help to reduce costs and improve performance. How it works: kube-proxy filters the endpoints it routes to based on the spec.internalTrafficPolicy setting. When it is set to Local, only node-local endpoints are considered. When it is Cluster (the default), or is not set, Kubernetes considers all endpoints.
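A minimal sketch of the setting described above (the Service name, selector, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service           # hypothetical name
spec:
  selector:
    app: my-app              # hypothetical selector
  ports:
  - port: 80
    targetPort: 8080
  internalTrafficPolicy: Local   # route only to endpoints on the originating node
```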
Caveats and Considerations
Note: For pods on nodes with no endpoints for a given Service, the Service behaves as if it has zero endpoints (for Pods on this node) even if the service does have endpoints on other nodes.
Technologies
Related Patterns
Install-Traefik-as-ingress-controller

MESHERY4796
Serving T5 Large Language Model with TorchServe

MESHERY40e7

RELATED PATTERNS
Istio Operator

MESHERY4a76
SERVING T5 LARGE LANGUAGE MODEL WITH TORCHSERVE
Description
Deploy a TorchServe inference server with a prepared T5 model and a client application. Manifests were tested against a GKE Autopilot Kubernetes cluster.
Caveats and Considerations
To configure HPA based on metrics from TorchServe you need to: enable Google Managed Prometheus or install OSS Prometheus, install the Custom Metrics Adapter, and apply pod-monitoring.yaml and hpa.yaml.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Simple DaemonSet

MESHERY40f0

RELATED PATTERNS
Pod Readiness

MESHERY4b83
SIMPLE DAEMONSET
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Simple Kubernetes Pod

MESHERY4e04

RELATED PATTERNS
Istio Operator

MESHERY4a76
SIMPLE KUBERNETES POD
Description
This cloud-native design consists of a Kubernetes Pod running an Nginx container and a Kubernetes Service named service. The Pod uses the image nginx with an image pull policy of Always. The Service defines two ports: one with port 80 and target port 8080, and another with port 80. The Service allows communication between the Pod and external clients on port 80.
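A minimal sketch matching the description (the Pod name and selector labels are assumptions; only the port 80 to 8080 mapping is shown):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx                # name assumed
  labels:
    app: nginx               # label assumed for the Service selector
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 8080
```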
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Simple Kubernetes Pod

MESHERY4fc5

RELATED PATTERNS
Istio Operator

MESHERY4a76
SIMPLE KUBERNETES POD
Description
This cloud-native design consists of a Kubernetes Pod running an Nginx container and a Kubernetes Service named service. The Pod uses the image nginx with an image pull policy of Always. The Service defines two ports: one with port 80 and target port 8080, and another with port 80. The Service allows communication between the Pod and external clients on port 80.
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Simple Kubernetes Pod

MESHERY454a

RELATED PATTERNS
Apache Airflow

MESHERY41d4
SIMPLE KUBERNETES POD
Description
Just an example of how to use a Kubernetes Pod.
Caveats and Considerations
None
Technologies
Related Patterns
Apache Airflow

MESHERY41d4
Simple Kubernetes Pod and Service

MESHERY4266

RELATED PATTERNS
Pod Readiness

MESHERY4b83
SIMPLE KUBERNETES POD AND SERVICE
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Simple Kubernetes Pod and Service

MESHERY4200

RELATED PATTERNS
Istio Operator

MESHERY4a76
SIMPLE KUBERNETES POD AND SERVICE
Description
This cloud-native design consists of a Kubernetes Pod running an Nginx container and a Kubernetes Service named service. The Pod uses the image nginx with an image pull policy of Always. The Service defines two ports: one with port 80 and target port 8080, and another with port 80. The Service allows communication between the Pod and external clients on port 80.
Caveats and Considerations
Networking should be properly configured to enable communication between pods and services. Ensure sufficient resources are available in the cluster.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Simple MySQL Pod

MESHERY472e

RELATED PATTERNS
Istio Operator

MESHERY4a76
SIMPLE MYSQL POD
Description
Testing patterns
Caveats and Considerations
None
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Single Pods

MESHERY4aea

RELATED PATTERNS
Pod Readiness

MESHERY4b83
SINGLE PODS
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Thanos Query Design

MESHERY4034

RELATED PATTERNS
Prometheus Sample

MESHERY4bea
THANOS QUERY DESIGN
Description
This is a sample app for testing k8s deployment and Thanos.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource.
Technologies
Related Patterns
Prometheus Sample

MESHERY4bea
Untitled Design

MESHERY411e

RELATED PATTERNS
Pod Readiness

MESHERY4b83
UNTITLED DESIGN
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Vault operator

MESHERY4c0a

RELATED PATTERNS
Istio Operator

MESHERY4a76
VAULT OPERATOR
Description
This YAML configuration defines a Kubernetes Deployment for the vault-operator using the apps/v1 API version. It specifies that a single replica of the vault-operator pod should be maintained by Kubernetes. The deployment's metadata sets the name of the deployment to vault-operator. The pod template within the deployment includes metadata labels that tag the pod with name: vault-operator, which helps in identifying and managing the pod. The pod specification details a single container named vault-operator that uses the image quay.io/coreos/vault-operator:latest. This container is configured with two environment variables: MY_POD_NAMESPACE and MY_POD_NAME, which derive their values from the pod's namespace and name respectively using the Kubernetes downward API. This setup ensures that the vault-operator container is aware of its deployment context within the Kubernetes cluster.
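The downward-API wiring described above looks roughly like this (the selector labels are inferred from the name: vault-operator pod label in the description):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: vault-operator
  template:
    metadata:
      labels:
        name: vault-operator
    spec:
      containers:
      - name: vault-operator
        image: quay.io/coreos/vault-operator:latest
        env:
        - name: MY_POD_NAMESPACE       # populated from the pod's own namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_NAME            # populated from the pod's own name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
```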
Caveats and Considerations
1. Single Replica: The deployment is configured with a single replica. This might be a single point of failure. Consider increasing the number of replicas for high availability and fault tolerance. 2. Image Tagging: The container image is specified as latest, which can lead to unpredictable deployments because latest may change over time. It's recommended to use a specific version tag to ensure consistency and repeatability in deployments. 3. Environment Variables: The deployment uses environment variables (MY_POD_NAMESPACE and MY_POD_NAME) obtained from the downward API. Ensure these variables are correctly referenced and required by your application. 4. Resource Requests and Limits: The deployment does not specify resource requests and limits for CPU and memory. This could lead to resource contention or overcommitment issues. It’s good practice to define these to ensure predictable performance and resource usage.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
WordPress and MySQL with Persistent Volume on Kubernetes

MESHERY4d8b

RELATED PATTERNS
Istio Operator

MESHERY4a76
WORDPRESS AND MYSQL WITH PERSISTENT VOLUME ON KUBERNETES
Description
This design includes a WordPress site and a MySQL database using Minikube. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
Caveats and Considerations
Warning: This deployment is not suitable for production use cases, as it uses single instance WordPress and MySQL Pods. Consider using WordPress Helm Chart to deploy WordPress in production.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
Wordpress Deployment

MESHERY4c81

RELATED PATTERNS
Pod Readiness

MESHERY4b83
WORDPRESS DEPLOYMENT
Description
This is a sample WordPress deployment.
Caveats and Considerations
No caveats. Feel free to reuse or distribute.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
Wordpress and MySql on Kubernetes

MESHERY4c7e

RELATED PATTERNS
Pod Readiness

MESHERY4b83
WORDPRESS AND MYSQL ON KUBERNETES
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
ZooKeeper Cluster

MESHERY4f53

RELATED PATTERNS
Kubernetes cronjob

MESHERY4483
ZOOKEEPER CLUSTER
Description
This StatefulSet will create three Pods, each running a ZooKeeper server container. The Pods will be named my-zookeeper-cluster-0, my-zookeeper-cluster-1, and my-zookeeper-cluster-2. The volumeMounts section of the spec tells the Pods to mount the PersistentVolumeClaim my-zookeeper-cluster-pvc to the /zookeeper/data directory. This will ensure that the ZooKeeper data is persistent and stored across restarts.
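A minimal sketch of the StatefulSet described above (the image, storage size, labels, and headless Service name are assumptions; in a StatefulSet the claim is declared via volumeClaimTemplates, which generates one PVC per Pod):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-zookeeper-cluster
spec:
  replicas: 3                              # pods -0, -1, -2
  serviceName: my-zookeeper-cluster        # headless Service assumed
  selector:
    matchLabels:
      app: zookeeper                       # label assumed
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      containers:
      - name: zookeeper
        image: zookeeper:3.8               # image assumed
        volumeMounts:
        - name: my-zookeeper-cluster-pvc
          mountPath: /zookeeper/data       # persistent ZooKeeper data directory
  volumeClaimTemplates:
  - metadata:
      name: my-zookeeper-cluster-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi                    # size assumed
```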
Caveats and Considerations
1. The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin. 2. Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources. 3. StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service. 4. StatefulSets do not provide any guarantees on the termination of pods when a StatefulSet is deleted. To achieve ordered and graceful termination of the pods in the StatefulSet, it is possible to scale the StatefulSet down to 0 prior to deletion. 5. When using Rolling Updates with the default Pod Management Policy (OrderedReady), it's possible to get into a broken state that requires manual intervention to repair.
Technologies
Related Patterns
Kubernetes cronjob

MESHERY4483
api-backend

MESHERY4e4a

RELATED PATTERNS
Pod Readiness

MESHERY4b83
API-BACKEND
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
default-ns

MESHERY490b

RELATED PATTERNS
Pod Readiness

MESHERY4b83
DEFAULT-NS
Description
This is a sample default namespace that can be used for testing.
Caveats and Considerations
No caveats. Feel free to reuse.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
deployment

MESHERY4579

RELATED PATTERNS
Istio Operator

MESHERY4a76
DEPLOYMENT
Description
This is a sample design used for exploring kubernetes deployment
Caveats and Considerations
No caveats. Free to reuse and distribute.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
doks-nginx-deployment

MESHERY4bf7

RELATED PATTERNS
Istio Operator

MESHERY4a76
DOKS-NGINX-DEPLOYMENT
Description
This is a sample design used for exploring kubernetes deployment and service
Caveats and Considerations
No caveats. Free to reuse and distribute.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
fluentd deployment

MESHERY4f28

RELATED PATTERNS
Robot Shop Sample App

MESHERY4c4e
FLUENTD DEPLOYMENT
Description
This configuration sets up Fluentd-ES to collect and forward logs from Kubernetes pods to Elasticsearch for storage and analysis. Ensure that Elasticsearch is properly configured and accessible by Fluentd-ES for successful log aggregation and visualization. Additionally, adjust resource requests and limits according to your cluster's capacity and requirements.
Caveats and Considerations
1. Resource Utilisation: Fluentd can consume significant CPU and memory resources, especially in environments with high log volumes. Monitor resource usage closely and adjust resource requests and limits according to your cluster's capacity and workload requirements. 2. Configuration Complexity: Fluentd's configuration can be complex, particularly when configuring input, filtering, and output plugins. Thoroughly test and validate the Fluentd configuration to ensure it meets your logging requirements and effectively captures relevant log data. 3. Security Considerations: Secure the Fluentd deployment by following best practices for managing secrets and access control. Ensure that sensitive information, such as credentials and configuration details, are properly encrypted and protected.
Technologies
Related Patterns
Robot Shop Sample App

MESHERY4c4e
gitlab runner deployment

MESHERY4170

RELATED PATTERNS
Istio Operator

MESHERY4a76
GITLAB RUNNER DEPLOYMENT
Description
This configuration ensures that a single instance of the GitLab Runner is deployed within the gitlab-runner namespace. The GitLab Runner is configured with a specific ServiceAccount, CPU resource requests and limits, and is provided with a ConfigMap containing the configuration file config.toml. The deployment is designed to continuously restart the pod (restartPolicy: Always) to ensure the GitLab Runner remains available for executing jobs.
Caveats and Considerations
1. Resource Allocation: Ensure that the CPU resource requests and limits specified in the configuration are appropriate for the workload of the GitLab Runner. Monitor resource usage and adjust these values as necessary to prevent resource contention and ensure optimal performance. 2. Image Pull Policy: The configuration specifies imagePullPolicy: Always, which causes Kubernetes to pull the Docker image (gitlab/gitlab-runner:latest) every time the pod is started. While this ensures that the latest image is always used, it may increase deployment time and consume additional network bandwidth. Consider whether this policy aligns with your deployment requirements and constraints. 3. Security: Review the permissions granted to the gitlab-admin ServiceAccount to ensure that it has appropriate access rights within the Kubernetes cluster. Limit the permissions to the minimum required for the GitLab Runner to perform its tasks to reduce the risk of unauthorized access or privilege escalation. 4. ConfigMap Management: Ensure that the gitlab-runner-config ConfigMap referenced in the configuration contains the correct configuration settings for the GitLab Runner. Monitor and manage changes to the ConfigMap to ensure that the GitLab Runner's configuration remains up-to-date and consistent across deployments.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
gke-online-serving-single-gpu

MESHERY481f

RELATED PATTERNS
Pod Readiness

MESHERY4b83
GKE-ONLINE-SERVING-SINGLE-GPU
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
grafana deployment

MESHERY4f2a

RELATED PATTERNS
Istio Operator

MESHERY4a76
GRAFANA DEPLOYMENT
Description
The provided YAML configuration defines a Kubernetes Deployment named "grafana" within the "monitoring" namespace. This Deployment ensures the availability of one instance of Grafana, a monitoring and visualization tool. It specifies resource requirements, including memory and CPU limits, and mounts volumes for persistent storage and configuration. The container runs the latest version of the Grafana image, exposing port 3000 for access. The configuration also includes a Pod template with labels for Pod identification and a selector to match labels for managing Pods.
Caveats and Considerations
1. Container Image Version: While the configuration uses grafana/grafana:latest for the container image, it's important to note that relying on the latest tag can introduce instability if Grafana publishes a new version that includes breaking changes or bugs. Consider specifying a specific version tag for more predictable behavior. 2. Resource Limits: Resource limits (memory and cpu) are specified for the container. Ensure that these limits are appropriate for your deployment environment and the expected workload of Grafana. Adjust these limits based on performance testing and monitoring. 3. Storage: The configuration uses an emptyDir volume for Grafana's storage. This volume is ephemeral and will be deleted if the Pod restarts or is rescheduled to a different node. Consider using a persistent volume (e.g., PersistentVolumeClaim) for storing Grafana data to ensure data persistence across Pod restarts. 4. Configurations: Configuration for Grafana's data sources is mounted using a ConfigMap. Ensure that the ConfigMap (grafana-datasources) is properly configured with the required data source configurations. Verify that changes to the ConfigMap are propagated to the Grafana Pod without downtime.
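Caveat 3 above suggests replacing the emptyDir volume with persistent storage; a minimal PersistentVolumeClaim sketch (the claim name and size are assumptions) that could back Grafana's storage volume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage      # hypothetical name
  namespace: monitoring
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi          # size assumed
```

The Deployment's storage volume would then reference this claim via persistentVolumeClaim.claimName instead of emptyDir, so dashboards and settings survive Pod restarts and rescheduling.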
Technologies
Related Patterns
Istio Operator

MESHERY4a76
guest_book

MESHERY4b71

RELATED PATTERNS
Pod Readiness

MESHERY4b83
GUEST_BOOK
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
hello-app

MESHERY4089

RELATED PATTERNS
Pod Readiness

MESHERY4b83
HELLO-APP
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
istio-ingress-service-web-api-v1-only

MESHERY48d4

RELATED PATTERNS
Service Internal Traffic Policy

MESHERY41b6
ISTIO-INGRESS-SERVICE-WEB-API-V1-ONLY
Description
Requests with the URI prefix kiali are routed to the kiali.istio-system.svc.cluster.local service on port 20001. Requests with URI prefixes like /web-api/v1/getmultiple, /web-api/v1/create, and /web-api/v1/manage are routed to the web-api service with the subset v1. Requests with URI prefixes openapi/ui/ and /openapi are routed to the web-api service on port 9080. Requests with URI prefixes like /loginwithtoken, /login, and /callback are routed to different services, including web-app and authentication. Requests with any other URI prefix are routed to the web-app service on port 80.
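A partial sketch of the routing described above as an Istio VirtualService (the resource name, hosts, and gateway are assumptions; the v1 subset additionally requires a matching DestinationRule, and only a few of the listed routes are shown):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api-v1-only      # hypothetical name
spec:
  hosts:
  - "*"                      # assumed
  gateways:
  - istio-ingressgateway     # gateway name assumed
  http:
  - match:
    - uri:
        prefix: /kiali
    route:
    - destination:
        host: kiali.istio-system.svc.cluster.local
        port:
          number: 20001
  - match:
    - uri:
        prefix: /web-api/v1/create
    route:
    - destination:
        host: web-api
        subset: v1           # subset defined in a DestinationRule (assumed)
  - route:                   # catch-all: everything else to web-app on port 80
    - destination:
        host: web-app
        port:
          number: 80
```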
Caveats and Considerations
Ensure Istio control plane is up and running
Technologies
Related Patterns
Service Internal Traffic Policy

MESHERY41b6
jaegar

MESHERY4186

RELATED PATTERNS
Robot Shop Sample App

MESHERY4c4e
JAEGAR
Description
Distributed tracing observability platforms, such as Jaeger, are essential for modern software applications that are architected as microservices. Jaeger maps the flow of requests and data as they traverse a distributed system. These requests may make calls to multiple services, which may introduce their own delays or errors. Jaeger connects the dots between these disparate components, helping to identify performance bottlenecks, troubleshoot errors, and improve overall application reliability.
Caveats and Considerations
Technologies used in this design: Jaeger for distributed tracing, along with sample services and deployments to demonstrate distributed tracing in Kubernetes.
Technologies
Related Patterns
Robot Shop Sample App

MESHERY4c4e
k8s Deployment-2

MESHERY4d32

RELATED PATTERNS
Pod Readiness

MESHERY4b83
K8S DEPLOYMENT-2
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
knative-service

MESHERY4ce7

RELATED PATTERNS
Istio Operator

MESHERY4a76
KNATIVE-SERVICE
Description
This YAML configuration defines a Kubernetes Deployment for a Knative service. This Deployment, named "knative-service," specifies that a container will be created using a specified container image, which should be replaced with the actual image name. The container is configured to listen on port 8080. The Deployment ensures that a single replica of the container is maintained within the "knative-serving" namespace. The Deployment uses labels to identify the pods it manages. Additionally, a Kubernetes Service is defined to expose the Deployment. This Service, named "knative-service," is also created within the "knative-serving" namespace. It uses a selector to match the pods labeled with "app: knative-service" and maps the Service port 80 to the container port 8080, facilitating external access to the deployed application. Furthermore, a Knative Service resource is configured to manage the Knative service. This Knative Service, also named "knative-service" and located in the "knative-serving" namespace, is configured with the same container image and port settings. The Knative Service template includes metadata labels and container specifications, ensuring consistent deployment and management within the Knative environment. This setup allows the Knative service to handle HTTP requests efficiently and leverage Knative's autoscaling capabilities.
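The Knative Service portion of the description can be sketched as follows (the image placeholder must be replaced with a real image, as the description notes):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-service
  namespace: knative-serving
spec:
  template:
    metadata:
      labels:
        app: knative-service
    spec:
      containers:
      - image: <your-container-image>   # replace with the actual image name
        ports:
        - containerPort: 8080           # the port the container listens on
```

Knative derives Routes and Revisions from this single resource and autoscales the underlying pods, which is why the plain Deployment and Service in the description are often unnecessary when the Knative Service is used.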
Caveats and Considerations
1. Image Pull Policy: Ensure the image pull policy is appropriately set, especially if using a custom or private container image. You may need to configure Kubernetes to access private image repositories by setting up image pull secrets. 2. Resource Requests and Limits: Define resource requests and limits for CPU and memory to ensure that the Knative service runs efficiently without exhausting cluster resources. This helps in resource allocation and autoscaling. 3. Namespace Management: Deploying to the knative-serving namespace is typical for Knative components, but for user applications, consider using a separate namespace for better organization and access control. 4. Autoscaling Configuration: Knative supports autoscaling based on metrics like concurrency or CPU usage. Configure autoscaling settings to match your application's load characteristics. 5. Networking and Ingress: Ensure your Knative service is properly exposed via an ingress or gateway if external access is required. Configure DNS settings and TLS for secure access. 6. Monitoring and Logging: Implement monitoring and logging to track the performance and health of your Knative service. Use tools like Prometheus, Grafana, and Elasticsearch for this purpose.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
mTLS-handshake-acceleration-for-Istio

MESHERY4d09

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MTLS-HANDSHAKE-ACCELERATION-FOR-ISTIO
Description
Cryptographic operations are among the most compute-intensive and critical operations when it comes to secured connections. Istio uses Envoy as the “gateways/sidecar” to handle secure connections and intercept the traffic. Depending upon use cases, when an ingress gateway must handle a large number of incoming TLS and secured service-to-service connections through sidecar proxies, the load on Envoy increases. The potential performance depends on many factors, such as size of the cpuset on which Envoy is running, incoming traffic patterns, and key size. These factors can impact Envoy serving many new incoming TLS requests. To achieve performance improvements and accelerated handshakes, a new feature was introduced in Envoy 1.20 and Istio 1.14. It can be achieved with 3rd Gen Intel® Xeon® Scalable processors, the Intel® Integrated Performance Primitives (Intel® IPP) crypto library, CryptoMB Private Key Provider Method support in Envoy, and Private Key Provider configuration in Istio using ProxyConfig.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for the custom Intel configuration.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
mattermost operator

MESHERY4eab

RELATED PATTERNS
Istio Operator

MESHERY4a76
MATTERMOST OPERATOR
Description
This YAML file defines a Kubernetes Deployment for the mattermost-operator in the mattermost-operator namespace. The deployment is configured to run a single replica of the Mattermost operator, which manages Mattermost instances within the Kubernetes cluster. The pod template specifies the container details for the operator. The container, named mattermost-operator, uses the image mattermost/mattermost-operator:latest and is set to pull the image if it is not already present (IfNotPresent). The container runs the /mattermost-operator command with arguments to enable leader election and set the metrics address to 0.0.0.0:8383. Several environment variables are defined to configure the operator's behaviour, such as MAX_RECONCILING_INSTALLATIONS (set to 20), REQUEUE_ON_LIMIT_DELAY (set to 20 seconds), and MAX_RECONCILE_CONCURRENCY (set to 10). These settings control how the operator handles the reconciliation process for Mattermost installations. The container also exposes a port (8383) for metrics, allowing monitoring and observation of the operator's performance. The deployment specifies that the pods should use the mattermost-operator service account, ensuring they have the appropriate permissions to interact with the Kubernetes API and manage Mattermost resources.
Caveats and Considerations
1. Resource Allocation: The deployment specifies no resource limits or requests for the mattermost-operator container. It is crucial to define these to ensure the operator has sufficient CPU and memory to function correctly without affecting other workloads in the cluster. 2. Image Tag: The latest tag is used for the Mattermost operator image. This practice can lead to unpredictability in deployments, as the latest tag may change and introduce unexpected changes or issues. It is recommended to use a specific version tag to ensure consistency. 3. Security Context: The deployment does not specify a detailed security context for the container. Adding constraints such as runAsNonRoot, readOnlyRootFilesystem, and dropCapabilities can enhance security by limiting the container’s privileges. 4. Environment Variables: The environment variables like MAX_RECONCILING_INSTALLATIONS, REQUEUE_ON_LIMIT_DELAY, and MAX_RECONCILE_CONCURRENCY are set directly in the deployment. If these values need to be adjusted frequently, consider using a ConfigMap to manage them externally. 5. Metrics and Monitoring: The metrics address is exposed on port 8383. Ensure that appropriate monitoring tools are in place to capture and analyse these metrics for performance tuning and troubleshooting.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
meshery-cilium-deployment

MESHERY4267

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MESHERY-CILIUM-DEPLOYMENT
Description
This is a sample app for testing k8s deployments and Cilium.
Caveats and Considerations
Ensure networking is set up properly and the correct annotations are applied to each resource for custom Intel configuration.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
minIO Deployment

MESHERY4c90

RELATED PATTERNS
Istio Operator

MESHERY4a76
MINIO DEPLOYMENT
Description
This configuration sets up a single MinIO instance with specific environment variables, health checks, and life cycle actions, utilising a PersistentVolumeClaim for data storage within a Kubernetes cluster. It ensures that MinIO is deployed and managed according to the specified parameters.
Caveats and Considerations
1. Replication and High Availability: The configuration specifies only one replica (replicas: 1). For production environments requiring high availability and fault tolerance, consider increasing the number of replicas and configuring MinIO for distributed mode to ensure data redundancy and availability.
2. Security Considerations: The provided configuration includes hard-coded access and secret keys (MINIO_ACCESS_KEY and MINIO_SECRET_KEY) within the YAML file. It is crucial to follow best practices for secret management in Kubernetes, such as using Kubernetes Secrets or external secret management solutions, to securely manage sensitive information.
3. Resource Requirements: Resource requests and limits for CPU, memory, and storage are not defined in the configuration. Assess and adjust these resource specifications according to the expected workload and performance requirements to ensure optimal resource utilisation and avoid resource contention.
4. Storage Provisioning: The configuration relies on a PersistentVolumeClaim (PVC) named minio to provide storage for MinIO. Ensure that the underlying storage provisioner and PersistentVolume (PV) configuration meet the performance, capacity, and durability requirements of the MinIO workload.
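A condensed sketch of such a MinIO Deployment wired to the minio PVC. The image tag, probe path, and the Secret indirection for credentials are assumptions; the original hard-codes the keys, which caveat 2 advises against.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1          # caveat 1: single instance; use distributed mode for HA
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio:RELEASE.2024-01-01T00-00-00Z   # hypothetical pinned tag
          args: ["server", "/data"]
          env:
            - name: MINIO_ACCESS_KEY     # caveat 2: source from a Secret, not a literal
              valueFrom:
                secretKeyRef:
                  name: minio-creds
                  key: access-key
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio-creds
                  key: secret-key
          ports:
            - containerPort: 9000
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: 9000
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio
```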
Technologies
Related Patterns
Istio Operator

MESHERY4a76
minimalistiobookinfo.yaml

MESHERY4377

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MINIMALISTIOBOOKINFO.YAML
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
my first app

MESHERY4191

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MY FIRST APP
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
my first app design

MESHERY46a2

RELATED PATTERNS
Istio Operator

MESHERY4a76
MY FIRST APP DESIGN
Description
This infrastructure design defines two services within a system:
1. Customer Service: Type: Customer; Version: 0.0.50; Model: Jira Service Desk Operator. This service is configured with specific settings, including an email address, legacy customer mode, and a name. It is categorized as a tool within the system.
2. Notebook Service: Type: Notebook; Version: 1.6.1; Model: Kubeflow. This service is categorized as a machine learning tool and has metadata related to its source URI and appearance.
These services are components within a larger system or design, each serving a distinct purpose. The Customer Service is associated with customer-related operations, while the Notebook Service is related to machine learning tasks.
Caveats and Considerations
Make sure to use the correct credentials for the Jira Service Desk operator.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
my-sql-with-cinder-vol-plugin

MESHERY40de

RELATED PATTERNS
Pod Readiness

MESHERY4b83
MY-SQL-WITH-CINDER-VOL-PLUGIN
Description
Cinder is a Block Storage service for OpenStack. This example shows how it can be used as an attachment mounted to a pod in Kubernetes.
1. Start kubelet with the cloud provider set to openstack and a valid cloud config. Sample cloud_config:
[Global]
auth-url=https://os-identity.vip.foo.bar.com:5443/v2.0
username=user
password=pass
region=region1
tenant-id=0c331a1df18571594d49fe68asa4e
2. Create a cinder volume, e.g. cinder create --display-name=test-repo 2
3. Use the id of the cinder volume created to create a pod definition.
4. Create a new pod with the definition: cluster/kubectl.sh create -f examples/mysql-cinder-pd/mysql.yaml
This should now:
1. Attach the specified volume to the kubelet's host machine
2. Format the volume if required (only if the volume specified is not already formatted to the fstype specified)
3. Mount it on the kubelet's host machine
4. Spin up a container with this volume mounted to the path specified in the pod definition
Caveats and Considerations
Currently the cinder volume plugin is designed to work only on Linux hosts and offers ext4 and ext3 as supported fs types. Make sure that the kubelet host machine has the required executables installed.
Ensure Cinder is installed and configured properly in the region in which the kubelet is spun up.
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
mysql operator

MESHERY4367

RELATED PATTERNS
Istio Operator

MESHERY4a76
MYSQL OPERATOR
Description
This YAML file defines a Kubernetes Deployment for the mysql-operator in the mysql-operator namespace. The deployment specifies a single replica of the operator to manage MySQL instances within the cluster. The operator container uses the image container-registry.oracle.com/mysql/community-operator:8.4.0-2.1.3 and runs the mysqlsh command with specific arguments for the MySQL operator.
Caveats and Considerations
1. Single Replica: Running a single replica of the operator can be a single point of failure. Consider increasing the number of replicas for high availability if supported.
2. Image Version: The image version 8.4.0-2.1.3 is specified, ensuring consistent deployments. Be mindful of updating this version in line with operator updates and testing compatibility.
3. Security Context: The security context is configured to run as a non-root user (runAsUser: 2), with no privilege escalation (allowPrivilegeEscalation: false), and a read-only root filesystem (readOnlyRootFilesystem: true). This enhances the security posture of the deployment.
4. Environment Variables: Sensitive information should be handled securely. Environment variables such as credentials should be managed using Kubernetes Secrets if necessary.
5. Readiness Probe: The readiness probe uses a file-based check, which is simple, but ensure that the mechanism creating the /tmp/mysql-operator-ready file is reliable.
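The security context and probe described above can be sketched like this (abridged and illustrative; the mysqlsh arguments and probe timings are assumptions, the image tag is the one the description names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-operator
  namespace: mysql-operator
spec:
  replicas: 1            # caveat 1: single point of failure
  selector:
    matchLabels:
      name: mysql-operator
  template:
    metadata:
      labels:
        name: mysql-operator
    spec:
      containers:
        - name: mysql-operator
          image: container-registry.oracle.com/mysql/community-operator:8.4.0-2.1.3
          command: ["mysqlsh", "--pym", "mysqloperator", "operator"]  # illustrative arguments
          securityContext:           # caveat 3: non-root, no escalation, read-only rootfs
            runAsUser: 2
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
          readinessProbe:            # caveat 5: file-based readiness check
            exec:
              command: ["cat", "/tmp/mysql-operator-ready"]
            initialDelaySeconds: 1
            periodSeconds: 3
```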
Technologies
Related Patterns
Istio Operator

MESHERY4a76
nginx ingress

MESHERY4d83

RELATED PATTERNS
Istio Operator

MESHERY4a76
NGINX INGRESS
Description
Creates a Kubernetes deployment with two replicas running NGINX containers and a service to expose these pods internally within the Kubernetes cluster. The NGINX containers are configured to listen on port 80, and the service routes traffic to these containers.
Caveats and Considerations
ImagePullPolicy: In the Deployment spec, the imagePullPolicy is set to Never. This means that Kubernetes will never attempt to pull the NGINX image from a container registry, assuming it's already present on the node where the pod is scheduled. This can be problematic if the image is not present or if you need to update to a newer version. Consider setting the imagePullPolicy to Always or IfNotPresent depending on your deployment requirements.
Resource Allocation: The provided manifest doesn't specify resource requests and limits for the NGINX container. Without resource limits, the container can consume excessive resources, impacting other workloads on the same node. It's recommended to define resource requests and limits based on the expected workload characteristics to ensure stability and resource efficiency.
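A sketch of the Deployment/Service pair described above, with imagePullPolicy and resources adjusted per the caveats. The names, image tag, and limit figures are assumptions to adapt.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          imagePullPolicy: IfNotPresent   # instead of Never
          ports:
            - containerPort: 80
          resources:                      # define limits per the caveat
            requests:
              cpu: 100m
              memory: 64Mi
            limits:
              cpu: 250m
              memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```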
Technologies
Related Patterns
Istio Operator

MESHERY4a76
nginx-deployment

MESHERY4817

RELATED PATTERNS
Pod Readiness

MESHERY4b83
NGINX-DEPLOYMENT
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
node-feature-discovery

MESHERY4481

RELATED PATTERNS
Apache Airflow

MESHERY41d4
NODE-FEATURE-DISCOVERY
Description
Node Feature Discovery (NFD) is a Kubernetes add-on for detecting hardware features and system configuration. Detected features are advertised as node labels. NFD provides flexible configuration and extension points for a wide range of vendor and application specific node labeling needs.
Read moreCaveats and Considerations
See these docs for caveats and considerations: https://kubernetes-sigs.github.io/node-feature-discovery/v0.16/get-started/introduction.html
Technologies
Related Patterns
Apache Airflow

MESHERY41d4
postgreSQL cluster

MESHERY4d4f

RELATED PATTERNS
Istio Operator

MESHERY4a76
POSTGRESQL CLUSTER
Description
This YAML configuration defines a PostgreSQL cluster deployment tailored for Google Kubernetes Engine (GKE) utilizing the Cloud Native PostgreSQL (CNPG) operator. The cluster, named "gke-pg-cluster," is designed to offer a standard PostgreSQL environment, featuring three instances for redundancy and high availability. Each instance is provisioned with 2Gi of premium storage, ensuring robust data persistence. Resource allocations are specified, with each instance requesting 1Gi of memory and 1000m (milliCPU) of CPU, and limits set to the same values. Additionally, the cluster is configured with pod anti-affinity, promoting distribution across nodes for fault tolerance. Host-based authentication is enabled for security, permitting access from IP range 10.48.0.0/20 using the "md5" method. Monitoring capabilities are integrated, facilitated by enabling pod monitoring. The configuration also includes tolerations and additional pod affinity rules, enhancing scheduling flexibility and optimizing resource utilization within the Kubernetes environment. This deployment exemplifies a robust and scalable PostgreSQL infrastructure optimized for cloud-native environments, aligning with best practices for reliability, performance, and security.
Caveats and Considerations
1. Resource Requirements: The specified resource requests and limits (memory and CPU) should be carefully evaluated to ensure they align with the expected workload demands. Adjustments may be necessary based on actual usage patterns and performance requirements.
2. Storage Class: The choice of storage class ("premium-rwo" in this case) should be reviewed to ensure it meets performance, availability, and cost requirements. Depending on the workload characteristics, other storage classes may be more suitable.
3. Networking Configuration: The configured host-based authentication rules may need adjustment based on the network environment and security policies in place. Ensure that only authorized entities have access to the PostgreSQL cluster.
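The description maps onto a CNPG Cluster resource roughly as follows. This is a sketch assuming the CloudNativePG v1 API; the tolerations and additional pod affinity rules mentioned in the description are omitted.

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: gke-pg-cluster
spec:
  instances: 3                     # three instances for redundancy and HA
  storage:
    size: 2Gi
    storageClass: premium-rwo      # premium storage on GKE
  resources:
    requests:
      memory: 1Gi
      cpu: 1000m
    limits:
      memory: 1Gi
      cpu: 1000m
  affinity:
    enablePodAntiAffinity: true    # spread instances across nodes
  postgresql:
    pg_hba:
      - host all all 10.48.0.0/20 md5   # host-based authentication rule
  monitoring:
    enablePodMonitor: true
```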
Technologies
Related Patterns
Istio Operator

MESHERY4a76
prometheus-operator-crd-cluster-roles

MESHERY4571
PROMETHEUS-OPERATOR-CRD-CLUSTER-ROLES
Description
prometheus operator crd cluster roles
Caveats and Considerations
prometheus operator crd cluster roles
Technologies
prometheus-postgres-exporter

MESHERY4dd9

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PROMETHEUS-POSTGRES-EXPORTER
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
prometheus-versus-3

MESHERY48bb

RELATED PATTERNS
Istio Operator

MESHERY4a76
PROMETHEUS-VERSUS-3
Description
This is a simple Prometheus monitoring design.
Caveats and Considerations
Networking should be properly configured to enable communication between the frontend and backend components of the app.
Technologies
Related Patterns
Istio Operator

MESHERY4a76
prometheus.yaml

MESHERY46c3

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PROMETHEUS.YAML
Description
prometheus
Caveats and Considerations
prometheus
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
prometheus_kubernetes

MESHERY4a71

RELATED PATTERNS
Pod Readiness

MESHERY4b83
PROMETHEUS_KUBERNETES
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
rabbitmq-cluster-operator

MESHERY4f09

RELATED PATTERNS
Pod Readiness

MESHERY4b83
RABBITMQ-CLUSTER-OPERATOR
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
replication controller

MESHERY4849

RELATED PATTERNS
Apache Airflow

MESHERY41d4
REPLICATION CONTROLLER
Description
A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available. If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes.
Caveats and Considerations
This example ReplicationController config runs three copies of the nginx web server. You can add deployments, config maps, and services to this design as per your requirements.
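The three-replica nginx example reads roughly as follows (essentially the upstream Kubernetes documentation example, reproduced here as a sketch):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3               # keep three nginx pods running at all times
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```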
Technologies
Related Patterns
Apache Airflow

MESHERY41d4
the-new-stack

MESHERY4705

RELATED PATTERNS
Pod Readiness

MESHERY4b83
THE-NEW-STACK
Description
The New Stack (TNS) is a simple three-tier demo application, fully instrumented with the three pillars of observability: metrics, logs, and traces. It offers insight into what a modern observability stack looks like and lets you experience what it's like to pivot among different types of observability data. The TNS app is an example three-tier web app built by Weaveworks. It consists of a data layer, application logic layer, and load-balancing layer. To learn more about it, see How To Detect, Map and Monitor Docker Containers with Weave Scope from Weaveworks. The instrumentation for the TNS app is as follows:
Metrics: Each tier of the TNS app exposes metrics on /metrics endpoints, which are scraped by the Grafana Agent. Additionally, these metrics are tagged with exemplar information. The Grafana Agent then writes these metrics to Mimir for storage.
Logs: Each tier of the TNS app writes logs to standard output or standard error. These are captured by Kubernetes and then collected by the Grafana Agent, which forwards them to Loki for storage.
Traces: Each tier of the TNS app sends traces in Jaeger format to the Grafana Agent, which converts them to OTel format and forwards them to Tempo for storage.
Visualization: A Grafana instance configured to talk to the Mimir, Loki, and Tempo instances makes it possible to query and visualize the metrics, logs, and traces data.
Caveats and Considerations
Ensure enough resources are available on the k8s cluster
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
voting_app

MESHERY49d5

RELATED PATTERNS
Pod Readiness

MESHERY4b83
VOTING_APP
Description
""
Caveats and Considerations
""
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
webserver

MESHERY457a

RELATED PATTERNS
Pod Readiness

MESHERY4b83
WEBSERVER
Description
This design runs a simple Python web server on port 8000. It also contains a Kubernetes Service that connects to the deployment.
Caveats and Considerations
Ensure the ports are not already occupied.
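One way such a design could look (a sketch, not the catalog item itself: the image, labels, and the use of python -m http.server to serve on port 8000 are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: webserver
          image: python:3.12-slim
          command: ["python", "-m", "http.server", "8000"]  # stdlib web server on port 8000
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: webserver        # Service connecting to the deployment
spec:
  selector:
    app: webserver
  ports:
    - port: 8000
      targetPort: 8000
```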
Technologies
Related Patterns
Pod Readiness

MESHERY4b83
HTTP Auth

FILTER001

RELATED PATTERNS
auth2

MESHERY47f2
HTTP AUTH
What this filter does
Simulates handling authentication of requests at the proxy level. Requests with a token header whose value is hello are accepted as authorized, while the rest are rejected as unauthorized. The actual authentication is handled by the upstream server: whenever the proxy receives a request, it extracts the token header and makes a request to the upstream server, which validates the token and returns a response.
Caveats and Considerations
Test:
curl -H "token: hello" 0.0.0.0:18000 -v # Authorized
curl -H "token: world" 0.0.0.0:18000 -v # Unauthorized
Technologies
Related Patterns
auth2

MESHERY47f2
TCP Metrics

FILTER002

RELATED PATTERNS
auth2

MESHERY47f2
TCP METRICS
What this filter does
Collects simple metrics for every TCP packet and logs it.
Caveats and Considerations
Test: curl 0.0.0.0:18000 -v -d "request body"
Check the logs for the metrics.
Technologies
Related Patterns
auth2

MESHERY47f2
TCP Packet Parse

FILTER003

RELATED PATTERNS
auth2

MESHERY47f2
TCP PACKET PARSE
What this filter does
Parses the contents of every TCP packet the proxy receives and logs it.
Caveats and Considerations
Test: curl 0.0.0.0:18000 -v -d "request body"
Check the logs for the packet contents.
Technologies
Related Patterns
auth2

MESHERY47f2
Singleton HTTP Call

FILTER004

RELATED PATTERNS
auth2

MESHERY47f2
SINGLETON HTTP CALL
What this filter does
The filter is responsible for intercepting HTTP requests, authorizing them based on the stored cache, and performing rate limiting. In the context of the envoy, this component is an HTTP filter and gets executed in the worker threads. For each request, a context object gets created.
Caveats and Considerations
""
Technologies
Related Patterns
auth2

MESHERY47f2
Metrics Store

FILTER005

RELATED PATTERNS
auth2

MESHERY47f2
METRICS STORE
What this filter does
This example showcases communication between a WASM filter and a service via a shared queue. It combines the `Singleton-HTTP-Call` and `TCP-Metrics` examples. The filter collects metrics and enqueues them onto the queue, while the service dequeues them and sends them to the upstream server, where they are stored.
Caveats and Considerations
Test: curl 0.0.0.0:18000 -v -d "request body" # make a few of these calls
curl 0.0.0.0:8080/retrieve -v # Retrieves the stored stats
# x | y | z === x : downstream bytes, y : upstream bytes, z: the latency for application server to respond
Technologies
Related Patterns
auth2

MESHERY47f2
Singleton Queue

FILTER006

RELATED PATTERNS
auth2

MESHERY47f2
SINGLETON QUEUE
What this filter does
An example depicting a singleton HTTP WASM service that makes an HTTP call once every 2 seconds.
Caveats and Considerations
Check the logs for the response of the request.
Technologies
Related Patterns
auth2

MESHERY47f2
JWT Filter

FILTER007

RELATED PATTERNS
auth2

MESHERY47f2
JWT FILTER
What this filter does
Sample configuration to be passed:
{
  "add_header": [
    ["header1", "value1"],
    ["header2", "value2"]
  ],
  "del_header": ["header1"],
  "add_payload": [
    ["payload1", "value1"],
    ["payload2", "value2"]
  ],
  "del_payload": ["payload1"],
  "payload_to_header": ["payload2"],
  "header_to_payload": ["header2"]
}
Caveats and Considerations
DISCLAIMER: This filter doesn't regenerate the signature of the modified JWT, and provides no protections. Proceed with caution.
Technologies
Related Patterns
auth2

MESHERY47f2
Using Envoy metrics
Coming Soon...
