   <h2>Business Brief</h2>
 
   <p>Application containers are increasingly used in the infrastructure behind cloud-native and microservice applications, with Docker being the engine most commonly used to create them. Many application developers value containers for dependency management: an application is packaged together with its dependencies, libraries, and other binaries, abstracting away differences in OS distribution and underlying infrastructure. This resolves many of the issues caused by differences in runtime environments when software moves from one environment to another.</p>
   <p class="inline">Containers improve applications’ portability and scalability that enable applications to be released and updated in an easy and fast way without downtime. However, there still exits a demand  in the management of containers when the services provided need to be deployed across a cluster of host servers to achieve high-availability and disaster recovery. This is where container deployment in cluster and management  tools like Kubernetes provide their value. Developers can now begin to  deploy and orchestrate services  as a collection of containers across a cluster of servers. Container resource requirements can be explicitly declared that allows  developers to bundle application code with an environment configurations. </p><p class="expand inline mw-collapsible-content">Also by increasing container density, resources can be used more efficiently and thus it in turn improves hardware usage  [1]. Containers provide applications with isolations, so that  a development team can be made responsible for specific containers.</p>
+
   <p class="inline">Containers improve applications’ portability and scalability that enable applications to be released and updated in an easy and fast way without downtime. However, there still exits a demand  in the management of containers when the services provided need to be deployed across a cluster of host servers to achieve high-availability and disaster recovery. This is where container deployment in cluster and management  tools like Kubernetes provide their value. Developers can now begin to  deploy and orchestrate services  as a collection of containers across a cluster of servers. Container resource requirements can be explicitly declared that allows  developers to bundle application code with an environment configurations. </p><p class="expand inline mw-collapsible-content">Also by increasing container density, resources can be used more efficiently and thus it in turn improves hardware usage  <ref>Clayton, T. and Watson, R. (2018). Using Kubernetes to Orchestrate Container-Based Cloud and Microservices Applications. [online] Gartner.com. Available at: <i>[https://www.gartner.com/doc/3873073/using-kubernetes-orchestrate-containerbased-cloud] </i></ref>. Containers provide applications with isolations, so that  a development team can be made responsible for specific containers.</p>
   <p>Kubernetes, also known as K8s, is a portable, extensible open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. Kubernetes provides a container-centric management environment and orchestrates computing, networking, and storage infrastructure on behalf of user workloads. It also offers self-healing and automated rollout and rollback features, which greatly improve operational availability and flexibility. One of the biggest advantages of Kubernetes is the flexibility it provides: many PaaS offerings dictate specific frameworks, are catered towards specific workloads, or impose limitations on which language runtimes can be used <ref>Clayton, T. and Watson, R. (2018). Using Kubernetes to Orchestrate Container-Based Cloud and Microservices Applications. [online] Gartner.com. Available at: <i>[https://www.gartner.com/doc/3873073/using-kubernetes-orchestrate-containerbased-cloud]</i></ref>. These issues are all eliminated with Kubernetes. Therefore, if an organization’s application can run in a container, Kubernetes is a viable option for container orchestration.</p>
    
   <h2>Technology Brief</h2>
 
<p>The Kubernetes cluster, or deployment, can be broken down into several components. The Kubernetes “master” is the machine in charge of managing the other “node” machines, while a “node” is a machine in charge of actually running the tasks fed to it by the user or the master. The master and nodes can be either physical or virtual machines, and each Kubernetes cluster has one master and multiple nodes. The main goal of Kubernetes is to achieve “Desired State Management”: the master is fed a specific configuration through the RESTful API it exposes to the user, and is then responsible for running that configuration across its set of nodes. The nodes can be thought of as hosts for containers; they communicate with the master through the agent running on each node, the “Kubelet” process.</p>
<p>To establish a specific configuration, the master is fed a deployment file with the “.yaml” extension. This file contains a variety of configuration information, including “Pods” and “replicas”. A Pod is a Kubernetes concept that can be described as a logical collection of containers managed as a single application. Resources can be shared within a Pod, including shared storage (Volumes), a unique cluster IP address, and information about how to run each container. A Pod can be thought of as the basic unit of the Kubernetes object model; it represents the deployment of a single instance of an application in Kubernetes <ref>Kubernetes.io. (2018). Kubernetes Basics - Kubernetes. [online] Available at: <i>[https://kubernetes.io/docs/tutorials/kubernetes-basics/]</i></ref>. A Pod can encapsulate one or more application containers, and two models exist for how Pods are deployed within a cluster: in the “one-container-per-Pod” model a single Pod is associated with a single container, while in the second model multiple containers that need to communicate and share resources run within a single Pod. In either model, the Pod can be thought of as a wrapper around the application containers, and Kubernetes manages Pod instances rather than managing the containers directly. The Pods run on the node machines to perform tasks. Replicas are simply instances of a Pod. Within the “.yaml” deployment file, specifications instruct the master how many instances/replicas of each Pod to run, which is handled by a replication controller <ref>Kubernetes.io. (2018). Kubernetes Basics - Kubernetes. [online] Available at: <i>[https://kubernetes.io/docs/tutorials/kubernetes-basics/]</i></ref>. When a node dies or a running Pod terminates unexpectedly, the replication controller takes note and takes care of this by creating the appropriate number of replacement Pods <ref>Kubernetes.io. (2018). Kubernetes Basics - Kubernetes. [online] Available at: <i>[https://kubernetes.io/docs/tutorials/kubernetes-basics/]</i></ref>.</p>
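<p>As an illustrative, hypothetical example of such a “.yaml” deployment file (the names, labels, image, and port below are placeholders), the manifest asks the master to keep three replicas of a single-container Pod running; in current clusters this is typically expressed as a Deployment, which manages its replicas through a ReplicaSet rather than a bare replication controller:</p>
<pre>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical name, for illustration only
spec:
  replicas: 3                    # desired number of Pod instances (replicas)
  selector:
    matchLabels:
      app: example-app           # must match the labels in the Pod template below
  template:                      # Pod template: each replica is created from this specification
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-container
        image: example/app:1.0   # hypothetical container image
        ports:
        - containerPort: 8080    # port the application is assumed to listen on
</pre>
<p>A file like this is typically submitted with the kubectl command-line tool (for example, “kubectl apply -f deployment.yaml”), after which the master continuously works to reconcile the actual state of the cluster with this declared desired state.</p>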
 
   <h2>Industry Usage</h2>
 
<p class="inline">Kubernetes is an open source system and many companies have begun to adopt it into their existing architecture as well as adapt it to their specific needs. It was originally developed by Google and was made an open source project in 2014. The Cloud Native Computing Foundation is a project of the Linux Foundation providing a community for different companies who are seeking to develop Kubernetes and other container orchestration projects. Several major cloud providers and platforms including Google Cloud Compute, HP Helion Cloud, RedHat Openshift, VMware Cloud, and Windows Azure all support the use of Kubernetes [7]. A survey, performed by iDatalabs in 2017, found 2,867 companies are currently using Kubernetes. These companies are generally located in the United States and are also most the computer software industry. Companies on the list hire between 50 and 200 employees, and accumulate 1M-100M in revenue per year. Some of the major companies on this list include GoDaddy inc, Pivotal Software inc, Globant SA, and Splunk inc</p><p class="expand inline mw-collapsible-content">. Kubernetes own approximately 8.6% of the market share within the virtualization management software category [9]. </p>
+
<p class="inline">Kubernetes is an open source system and many companies have begun to adopt it into their existing architecture as well as adapt it to their specific needs. It was originally developed by Google and was made an open source project in 2014. The Cloud Native Computing Foundation is a project of the Linux Foundation providing a community for different companies who are seeking to develop Kubernetes and other container orchestration projects. Several major cloud providers and platforms including Google Cloud Compute, HP Helion Cloud, RedHat Openshift, VMware Cloud, and Windows Azure all support the use of Kubernetes<ref>CENGN. (2018). CENGN and CloudOps Collaborate to Train Industry on Docker and Kubernetes.<i>[Available at: https://www.cengn.ca/docker-kubernetes-training-jan18/ ]</i></ref>. A survey, performed by iDatalabs in 2017, found 2,867 companies are currently using Kubernetes. These companies are generally located in the United States and are also most the computer software industry. Companies on the list hire between 50 and 200 employees, and accumulate 1M-100M in revenue per year. Some of the major companies on this list include GoDaddy inc, Pivotal Software inc, Globant SA, and Splunk inc</p><p class="expand inline mw-collapsible-content">. Kubernetes own approximately 8.6% of the market share within the virtualization management software category <ref>Idatalabs.com. (2018). Kubernetes commands 8.62% market share in Virtualization Management Software<i>[https://idatalabs.com/tech/products/kubernetes] </i></ref>. </p>
 
   <h2>Canadian Government Use</h2>
 
 
<p>There is a lack of documented Government of Canada (GC) initiatives and programs promoting the current and future use of Kubernetes technology. As a GC strategic IT item, Kubernetes is absent from both the GC’s Digital Operations Strategic Plan: 2018-2022 and the GC Strategic Plan for Information Management and Information Technology 2017 to 2021. This may be due to the fact that the GC is currently grappling with the implementation of Cloud Services, and the majority of resources and efforts are occupied with implementation challenges, as well as security concerns related to the protection of the information of Canadians.</p>
 
<p class="expand mw-collapsible-content">However, the inception of containers into the market has shown that large-scale organizations, who are involved in cloud-native application development as well as networking, can benefit greatly from the use of containers [7]. Although the infrastructure applications providing cloud services can be based solely on Virtual Machines (VMs), the maintenance costs associated with running different operating systems on individual VMs outweighs the benefit [6]. Containers and Containerization is a replacement and/or complimentary architecture for VMs. As the GC moves toward cloud services and development of cloud-native applications, the use of containers and orchestrating them with Kubernetes can become an integral part the GC IT architecture. </p>
+
<p class="expand mw-collapsible-content">However, the inception of containers into the market has shown that large-scale organizations, who are involved in cloud-native application development as well as networking, can benefit greatly from the use of containers <ref>CENGN. (2018). CENGN and CloudOps Collaborate to Train Industry on Docker and Kubernetes<i>[Available at: https://www.cengn.ca/docker-kubernetes-training-jan18/]</i></ref>. Although the infrastructure applications providing cloud services can be based solely on Virtual Machines (VMs), the maintenance costs associated with running different operating systems on individual VMs outweighs the benefit <ref>Heron, P. (2018). Experimenting with containerised infrastructure for GOV.UK - Inside GOV.UK. [online] Insidegovuk.blog.gov.uk<i>[https://insidegovuk.blog.gov.uk/2017/09/15/experimenting-with-containerised-infrastructure-for-gov-uk/ ]</i></ref>. Containers and Containerization is a replacement and/or complimentary architecture for VMs. As the GC moves toward cloud services and development of cloud-native applications, the use of containers and orchestrating them with Kubernetes can become an integral part the GC IT architecture. </p>
    
   <h2>Implications for Government Agencies</h2>
 