Architecture

Basic Install Architecture

After installing Workload Manager from the Suite Admin, if your CloudCenter Suite Kubernetes cluster can receive connections from public internet addresses, you have what you need to use all of Workload Manager's core features with VM-based public clouds. This includes deploying and managing workloads, and importing and managing VMs launched outside of Workload Manager. Once you deploy a workload in a public cloud, or import VMs, you will have deployed all of the components needed to create the basic install architecture shown in the figure below.

The basic install architecture consists of four components:

  • Manager
  • Agent
  • Cisco-hosted bundle store
  • Cisco-hosted package store

The manager component is the main component of Workload Manager. It consists of services running within pods in the CloudCenter Suite cluster. Some of these services are common framework services used by all modules, some are specific to Workload Manager alone, and some are shared between Workload Manager and Cost Optimizer.

One function of the manager is communicating with the API endpoint of the target cloud region where your workloads will be launched. This communication is used to launch and control the VMs or pods running your workloads, and to extract data regarding cloud resource consumption.
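
To make this first function concrete, here is a minimal sketch of the kind of calls the manager issues against a cloud API endpoint. It assumes an AWS EC2 region and the boto3 SDK purely for illustration; neither is named in this document, and this is not Workload Manager's actual implementation.

    # Illustrative only: the sort of cloud-endpoint calls the manager makes.
    # AWS EC2 and boto3 are assumptions; all IDs and names are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # target cloud region

    # Launch a VM to run a workload.
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical image ID
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]

    # Query the same endpoint for state and resource-consumption data.
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    print(instance_id, desc["Reservations"][0]["Instances"][0]["State"]["Name"])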

A second function of the manager is to communicate directly with the VMs running your workloads. This is only possible when those VMs have the second Workload Manager component, the agent, installed. VMs with the agent installed are called worker VMs.

The agent gives you additional control of your VMs by allowing you to execute commands or run scripts from within the VM. These can be scripts that run at certain points in the VM's lifecycle, such as a script to install and launch a service at startup, or they can be actions that are executed on demand via the Workload Manager UI. See Actions Library for more info. 

Before the agent can be downloaded and started on a VM, a set of scripts and installer packages must first be installed on the VM. This set of prerequisite software is collectively known as the worker. Once installation of the worker is complete, a script in the worker downloads and starts the agent executable the first time the VM starts.

The worker can be installed on a VM in one of three ways:

  • When you launch a VM-based workload to a public cloud using Workload Manager, installation of the worker happens automatically in a process called dynamic bootstrapping. When Workload Manager issues the API call to the cloud endpoint to launch the VM, it passes as user data a bootstrap script that downloads and installs prerequisite software for the agent, and then downloads and starts the agent executable (see the sketch after this list).
  • If your target VM-based cloud does not support dynamic bootstrapping, or if you prefer not to use dynamic bootstrapping, the alternative is to use "pre-bootstrapped" images for your VM-based services. These are OS images with worker software pre-installed. See Management Agent (Worker) for more details. 
  • You can also install the agent on VMs in your cloud that were not launched through Workload Manager, in a process called brownfield VM import. See Virtual Machine Management for more details.
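
To make dynamic bootstrapping (the first method above) concrete, the sketch below again assumes AWS EC2 and boto3; the bootstrap script body and file name are stand-ins, not Workload Manager's real bootstrap script, and <version> remains a placeholder for your release.

    # Sketch of dynamic bootstrapping (AWS/boto3 assumed).
    import boto3

    # User data runs at first boot: fetch and run the worker installer,
    # which in turn downloads and starts the agent.
    bootstrap = """#!/bin/bash
    curl -sO http://cdn.cliqr.com/cloudcenter-<version>/bundle/worker-install.sh  # hypothetical file name
    bash worker-install.sh
    """

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical image ID
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        UserData=bootstrap,  # the bootstrap script is passed with the launch call
    )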

The last two components of the basic install architecture are the Cisco-hosted software repositories: the bundle store at http://cdn.cliqr.com/cloudcenter-<version>/bundle (do not add a slash at the end of the URL) and the package store at http://repo.cliqrtech.com. The bundle store contains the scripts used to install the worker software, the latest version of the agent, and scripts that run within the worker VM for launching and controlling the service that should run in that VM. The package store contains the install packages for the worker software and for the Workload Manager OOB services, as well as the public cloud instance types, storage types, and image mappings.
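
If you need to confirm that your environment can reach these repositories, a quick check along the following lines may help; the requests library is an assumption, and <version> must be replaced with your release.

    # Reachability check for the Cisco-hosted stores named above.
    import requests

    stores = [
        "http://cdn.cliqr.com/cloudcenter-<version>/bundle",  # bundle store: no trailing slash
        "http://repo.cliqrtech.com",                          # package store
    ]
    for url in stores:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        print(url, resp.status_code)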

For Kubernetes target clouds, there are no worker VMs and all control of the container-based workloads is through the Kubernetes API. The basic install architecture for Kubernetes target clouds is summarized in the figure below.

Since your workloads are deployed in Kubernetes containers, there are no workers and no need to access the Cisco-hosted bundle store and package store. Instead, your target Kubernetes cloud must allow access to the public Docker Hub for downloading the public Docker image files referenced in your containerized workloads.
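
As an illustration of control through the Kubernetes API with an image pulled from the public Docker Hub, the sketch below uses the kubernetes Python client; the deployment and image names are examples, not Workload Manager internals.

    # Sketch: driving a container workload through the Kubernetes API.
    # Assumes the 'kubernetes' Python client and a reachable kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()  # credentials for the target Kubernetes cloud
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-workload"),  # example name
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo"}),
                spec=client.V1PodSpec(containers=[client.V1Container(
                    name="web",
                    image="docker.io/library/nginx:1.21",  # pulled from the public Docker Hub
                )]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)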


Full Install Architecture

The basic install architecture has a key limitation: it assumes that the manager and all of the target cloud regions can initiate connections to, and receive connections from, public internet addresses. If either condition does not hold, or if you want to restrict internet access for security reasons, you will need to install additional components to ensure full functionality of Workload Manager. For VM-based clouds you will need to install two additional components: Cloud Remote and the local repo appliance.

The full install architecture for VM-based cloud regions is shown in the figure below.

The Cloud Remote component is delivered as a virtual appliance that you import into your target VM-based cloud region. It is a CentOS 7 image with Docker Swarm, which manages a collection of containerized services. As such, it can be deployed as a single VM and later scaled to a cluster of VMs.

For VM-based cloud regions, Cloud Remote performs the following functions:

  • Proxies communications between the manager and the cloud API endpoint (also used by Cost Optimizer).
  • Executes external scripts on the workload VMs (even those without the management agent) to support external lifecycle actions.
  • Proxies communications for user SSH/RDP sessions with worker VMs. 
  • Proxies communication between the manager and the worker VMs to support internal lifecycle actions, internal on demand actions, and reporting of workload status.

Note: If the manager component cannot accept inbound connections from public addresses, you will need to install Cloud Remote in all VM-based target regions that are not within the same network as your manager.

The local repo appliance is likewise delivered as a virtual appliance that you import into your target VM-based cloud region. It can be configured to support both a local bundle store and a local package store, and it must have periodic internet access in order to sync with the master bundle store and package store hosted by Cisco. Cisco also provides scripts for creating your local repo appliance on a Linux VM that you provide.

The full install architecture for Kubernetes target clouds is shown in the following figure.

For Kubernetes target clouds, you would install the Cloud Remote appliance in an environment on the same network as the target Kubernetes cloud. In this case, Cloud Remote performs two functions:

  • Proxies the API calls from the manager to the cloud API endpoint.
  • Executes external scripts on the workload pods to support external lifecycle actions.

For Kubernetes clouds that do not have outbound internet access, you will also need to install your own Docker registry on a VM in the same network as your target Kubernetes cloud. You will need to populate that registry with all of the public and private Docker images used by the containerized services in your workload. (See the Docker user documentation for more on setting up your own Docker registry.)
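
A minimal sketch of populating such a registry follows, assuming the docker SDK for Python and a registry already running at the hypothetical host registry.local:5000.

    # Mirror a public Docker Hub image into a private registry for an
    # air-gapped Kubernetes cloud. registry.local:5000 is hypothetical.
    import docker

    client = docker.from_env()
    image = client.images.pull("nginx", tag="1.21")      # public image referenced by your workload
    image.tag("registry.local:5000/nginx", tag="1.21")   # retag for the local registry
    client.images.push("registry.local:5000/nginx", tag="1.21")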


