Annex F: Compute and Storage Services Security
Overview
The CSS ESFA, as depicted in the image, describes the information technology (IT) resources required to provide the GC Enterprise with compute and storage services. These services include compute, enterprise storage, file services, and database capabilities, and range from a static, single-purpose server instance to a software-defined environment within a large data centre or cloud environment.
The CSS ESFA technical components consist of:
- Compute Services
- Enterprise Storage
- File Services
- Database Services
Context
The CSS ESFA interacts with the other ESFAs, as well as the external entities, identified in the image. The CSS software-defined architecture interacts with APP to provide CSS services for application compute and data services. The CSS architecture relies on OPS, DAT, END, and ICA services to secure CSS services and resources.
Perspectives
This section provides perspectives, context, and additional information to support the CSS target and transitional architectures presented in sections 3 and 4. The CSS perspectives are developed with regard to the final CSS target architecture, which is composed of virtualized CSS services operating within a Software-Defined Environment (SDE), as described in the next section.
The Software-Defined Environment
The CSS target architecture transition is focused on the virtualization of compute, storage, file, and database services within the framework of a Software-Defined Environment. The image below describes the SDE, wherein all the virtual compute, storage, server, networking, and security resources required by an application or data service can be defined by software and automatically provisioned based on user demand.
The next image depicts the SDE operating across a unified fabric supported by software-defined compute, storage (including file and database services), and network services across a virtualized fabric of IT/IS environments. The SDE does not formally define environmental borders; the SDE border is malleable, driven by user demand and by sourcing the services required to meet user needs in the most cost-effective manner.
Software-Defined CSS Service Architecture
In a software-defined environment, CSS services are managed by software. Software components are logically and securely separated into three distinct layers: a CSS service delivery plane, a control plane, and a resource management plane. The image depicts the logical separation of CSS services across a software-defined environment. Security policies and controls are implemented to protect and isolate each CSS management plane. In particular, the control plane consists of orchestration and resource management services that must be protected within a restricted zone. Control plane services, if compromised, may allow an attacker to exploit CSS services regardless of where a service is located.
CSS Service Plane
The service plane provides the interface to users and software developers. Users request services and the service plane directs users to the services of interest. From the user perspective, services are presented through application portals and the user is unaware of where these services are located. From the service delivery perspective, services are provided based on the user's identity credentials and authorization to access the requested services.
To meet secure continuous software delivery expectations, the service plane provides standard templates that define how software services are delivered (monolithic, SOA, microservices), how the services are orchestrated, and which resources are available to services at runtime. Once a service is deployed, the control plane manages the service delivery and associated resources to ensure the software is available to all users.
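To make the flow concrete, the following Python sketch models service-plane request handling. It is illustrative only: the catalogue, role names, and URLs are hypothetical and do not represent an actual GC service catalogue or API.
<syntaxhighlight lang="python">
# A minimal sketch of service-plane request handling, assuming a simple
# in-memory catalogue and authorization map; all names are illustrative.
from dataclasses import dataclass

@dataclass
class ServiceEntry:
    name: str
    portal_url: str          # where the portal directs the user
    required_role: str       # authorization needed to access the service

CATALOGUE = {
    "hr-payroll": ServiceEntry("hr-payroll", "https://portal.example/hr", "hr-user"),
    "gis-maps":   ServiceEntry("gis-maps",   "https://portal.example/gis", "gis-user"),
}

def request_service(user_id: str, user_roles: set[str], service_name: str) -> str:
    """Direct an authenticated user to a service without exposing its location."""
    entry = CATALOGUE.get(service_name)
    if entry is None:
        raise LookupError(f"unknown service: {service_name}")
    if entry.required_role not in user_roles:
        raise PermissionError(f"{user_id} is not authorized for {service_name}")
    # The user sees only the portal URL, never the backing resources.
    return entry.portal_url

print(request_service("jdoe", {"hr-user"}, "hr-payroll"))
</syntaxhighlight>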
CSS Control Plane
The control plane has two primary responsibilities. The first is to manage the orchestration of software services and ensure the availability of services to users and developers. The second responsibility is to ensure that pooled CSS resources are managed in terms of resource security, performance and utilization.
Orchestration services launch compute resources and application objects using template scripts that identify a collection of resources, their dependencies, and runtime parameters. Orchestration uses these templates to collectively create, delete, and manage the dependencies of required resources. Templates allow infrastructure to be defined once, instantiated in multiple copies, and shared between sites (infrastructure as code). Other orchestration functions defined by templates include the ability to define the characteristics of elasticity services and to tune performance settings and thresholds based on utilization and performance metrics.
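The following Python sketch illustrates the template concept described above (infrastructure as code). The class names, fields, and thresholds are hypothetical; they simply mirror the elements named in the text: resources, dependencies, runtime parameters, and elasticity settings.
<syntaxhighlight lang="python">
# A minimal sketch of template-driven orchestration, assuming an
# in-memory resource model. Names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Template:
    name: str
    resources: list[str]                      # e.g., ["vm.small", "volume.20g"]
    dependencies: dict[str, list[str]] = field(default_factory=dict)
    runtime_params: dict[str, str] = field(default_factory=dict)
    min_instances: int = 1                    # elasticity settings
    max_instances: int = 4
    cpu_scale_up_threshold: float = 0.80      # scale out above 80% CPU

def instantiate(template: Template, copies: int = 1) -> list[dict]:
    """Create `copies` deployments from one template definition."""
    deployments = []
    for i in range(copies):
        deployments.append({
            "name": f"{template.name}-{i}",
            "resources": list(template.resources),
            "params": dict(template.runtime_params),
        })
    return deployments

web_tier = Template(
    name="web-tier",
    resources=["vm.small", "volume.20g"],
    dependencies={"vm.small": ["volume.20g"]},   # volume created before VM
    runtime_params={"port": "8443"},
)

# Defined once, instantiated in multiple copies (and shareable between sites).
for d in instantiate(web_tier, copies=3):
    print(d["name"], d["resources"])
</syntaxhighlight>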
CSS Resource Plane
The resource plane is logically separated from the control plane by secure CSS resource controllers. Resource controllers mediate requests for resources and report back the performance, utilization, and health and status of physical and virtualized CSS resources.
CSS Services and DevOps
Once DevOps requests the instantiation of an application (consisting of SOA services, microservices, APIs, etc.) for the user community, the orchestrator must decide whether to use already-deployed application services or to deploy the services the application requires. Because the orchestrator works in a dynamic provisioning environment, it uses the service registry to discover the needed services and determine where those services are deployed. Utilization and performance metrics are evaluated to determine whether a currently deployed service can handle the load or whether another instance of the service must be deployed.
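The reuse-or-deploy decision can be sketched as follows. The registry contents, service names, and utilization threshold are hypothetical and for illustration only.
<syntaxhighlight lang="python">
# A minimal sketch of the reuse-or-deploy decision, assuming a service
# registry that tracks deployed instances and their utilization.
REGISTRY = {
    "invoice-svc": [
        {"location": "dc-east", "cpu_utilization": 0.91},
        {"location": "dc-west", "cpu_utilization": 0.42},
    ],
}

def resolve_service(name: str, max_utilization: float = 0.75) -> dict:
    """Reuse a deployed instance if one can take the load; else deploy anew."""
    for instance in REGISTRY.get(name, []):
        if instance["cpu_utilization"] < max_utilization:
            return {"action": "reuse", "location": instance["location"]}
    # No instance has headroom: ask the orchestrator for another copy.
    new_instance = {"location": "dc-east", "cpu_utilization": 0.0}
    REGISTRY.setdefault(name, []).append(new_instance)
    return {"action": "deploy", "location": new_instance["location"]}

print(resolve_service("invoice-svc"))   # reuses the dc-west instance
</syntaxhighlight>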
Additionally, DevOps and container orchestration services may themselves be compromised, providing opportunities for exploitation. Sophisticated attackers will look to exploit new container services and the tools that support those services.
Software-Defined Compute Infrastructure Transformation
The software-defined computing infrastructure is transforming in stages: from a hardware-based set of standardized and consolidated infrastructures, to virtualized services with full lifecycle management, to an on-demand infrastructure focused on service delivery independent of the location of the underlying services and physical assets. Because the control plane infrastructure is isolated from service requests and hardware assets, the computing infrastructure is implementable in software across cloud provider solutions and is well suited to a hybrid cloud implementation. The image illustrates the increasing convergence of infrastructure services and the establishment of on-demand computing and data services.
Virtualized computing platforms became increasingly popular because growing server processing power was not being efficiently utilized. A collection of server platforms dedicated to single applications may run at 40% of capacity; before virtualization, there was no way to make effective use of the remaining 60% of processing capacity.
The image below depicts the two types of hypervisor technologies currently available. A Type 1 hypervisor runs directly on, and interfaces directly with, the underlying hardware. A Type 2 hypervisor runs on top of a host OS and interfaces with the underlying hardware indirectly. Type 1 hypervisors normally outperform Type 2 hypervisors because the pathway to the hardware is shorter. The primary benefit of virtualized platforms is the ability for applications to share processing power, ensuring compute power is optimized.
Each application on a virtualized platform is supported by a guest OS that interacts with the hypervisor (in some cases applications may share a guest OS's services).
With the introduction of cloud services and the adoption of “continuous deployment” of software services, the movement of applications between environments (Data Centre <-> Public Cloud) and within an environment must be agile and predictable. Container technology (OS virtualization) enables software to deploy quickly and run predictably when moved from one environment to another.
As depicted in the image above, containers sit on top of a physical or virtualized server and its OS. Each container shares the host OS kernel and the OS binaries and libraries. Shared components are read-only, with each container writable through a unique mount. This makes containers exceptionally “light”: containers are only megabytes in size and take just seconds to start, versus minutes for a VM. The table below lists quality attributes associated with virtualization and container technologies in a modern data centre environment.
Quality Attribute | Virtualization Technology | Container Technology |
---|---|---|
Technology code base size | 2-3 GB | 20-90 MB |
Provisioning time | 2-3 minutes | 2-3 seconds |
Cost | More costly than container technology | Fewer servers and less staff required |
Resource overhead | High | Low |
DevOps integration | Difficult, time consuming | Easy |
Microservices | No advantage | Lightweight containers are well suited to microservices |
Continuous deployment | Difficult, time consuming | Containers deploy in seconds |
The benefits of containers derive largely from their speed and lightweight nature; many more containers can be put onto a server than traditional VMs. Containers are “shareable” and can be used on a variety of public and private cloud deployments, accelerating DevOps by quickly packaging software services along with their dependencies. Containers also reduce management overhead: because they share a common operating system, only a single operating system needs care and feeding (bug fixes, patches, etc.). Note, however, that because of the shared kernel a container cannot run a guest operating system that differs from the host OS: no Windows guest OS containers can interact with a Linux-based host.
VMs and containers differ on quite a few dimensions, but primarily in that containers provide a way to virtualize an OS so that multiple workloads can run on a single OS instance, whereas with VMs the hardware is virtualized to run multiple OS instances. Containers' speed, agility, and portability make them yet another tool to help streamline software development and continuous deployment. Distinguishing characteristics include:
- Virtual machines contain a complete operating system and applications.
- Virtual machines use hypervisors to share and manage hardware while containers share the kernel of the host OS to access the hardware.
- Virtual machines have their own kernel and do not use or share the kernel of the host OS; hence VMs are isolated from each other at a deep level.
- Virtual machines residing on the same server can run different operating systems. One VM can run Windows while the VM next door might be running Ubuntu.
- Containers are bound to the host OS; containers on the same server use the same OS.
- Containers virtualize the underlying operating system, while virtual machines virtualize the underlying hardware.
OS containers are virtual environments that share the kernel of the host operating system but provide user space isolation. For all practical purposes, you can think of OS containers as VMs: you can install, configure, and run different applications, libraries, etc., just as you would on any OS, and, as with a VM, anything running inside a container can only see resources that have been assigned to that container.
OS containers are beneficial when a fleet of identical (or differently flavoured) software distributions is required. Containers are created from templates (or images) that determine the structure and contents of the container, allowing you to create containers with identical environments: the same package versions and configurations across all containers. A sketch of this follows.
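The following Python sketch illustrates how a single template (image) yields identical container environments across a fleet. The package names and versions are invented for illustration.
<syntaxhighlight lang="python">
# A minimal sketch of containers created from a template (image): every
# container stamped from the same template gets identical package versions
# and configuration. Package names and versions are illustrative.
from copy import deepcopy

TEMPLATE = {
    "base_os": "linux",
    "packages": {"openssl": "3.0.13", "python3": "3.11.9"},
    "config": {"log_level": "info"},
}

def create_container(name: str, template: dict) -> dict:
    container = deepcopy(template)     # identical environment per container
    container["name"] = name
    return container

fleet = [create_container(f"node-{i}", TEMPLATE) for i in range(3)]
for c in fleet:
    print(c["name"], c["packages"])
</syntaxhighlight>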
Container Security
The introduction of container technology adds another layer of abstraction that must be secured. Malicious container images must not be introduced to the operational environment. As an example, a Docker vulnerability was identified that allowed remote attackers to execute arbitrary code with root privileges via a crafted image in a Dockerfile. The Common Vulnerabilities and Exposures (CVE) list currently includes 45 Docker vulnerabilities. Container technology is relatively immature and security lags behind technology development. From InformationWeek:
“Not much research has been published about the security of running, say, 1,200 containers side-by-side on a single server. One running container can't intrude upon or snoop on another's assigned memory space.
But what if two containers were allowed to talk to each other, and one of them was loaded with malicious code that snoops for encryption keys in the data that it's allowed to see? With so many things going on around it in shared memory, it might be only a matter of time before something valuable -- a user ID, a password, an encryption key -- fell into the malware's net.
Malicious code could also build up a general picture of what the linked container or containers were up to. Theoretically, this can't happen, because containers are designed to ensure the isolation of each application. But no one is sure whether computer scientists have envisioned and eliminated every circumstance where some form of malware snooping can occur.
Containers share CPU, memory, and disk in close proximity to each other, and that sort of thing worries security pros. It's likely, even though no one has done so on the record yet, that someone will find a way for code in one container to snoop on or steal data from another container.”
The bulk of container vulnerabilities exist within the base operating system. Accordingly, a patch to one base O/S can protect as many containers as run on that O/S without impacting the containers. This benefit makes vulnerability management significantly easier in container-based deployments.
Software-Defined Storage (SDS)
Software-defined storage in an SDE abstracts the software that controls storage solutions from the hardware itself. While virtualized storage solutions in use today separate capacity, software-defined storage involves separating storage capabilities from storage services. By separating the hardware from the software that controls the storage, heterogeneous, commodity-level storage solutions may be used. The following principles apply when developing software-defined storage within an SDE (a sketch follows the list):
- Leverage commodity storage hardware
- Run on commodity server hardware
- Unify disparate storage technologies
- Pool storage resources
- Automate core storage functions
- Expose open APIs for storage access
- Augment storage architecture without disruption
- Improve data availability
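The sketch below illustrates these principles with a hypothetical storage API: the caller requests capabilities (capacity, replication, backup schedule) and receives a logical volume handle, while the SDS layer maps the request onto pooled commodity hardware. The API shape and names are assumptions, not an actual SDS product interface.
<syntaxhighlight lang="python">
# A minimal sketch of an "open API for storage access" consistent with the
# principles above. All names and pool sizes are illustrative.
from dataclasses import dataclass

@dataclass
class StorageRequest:
    capacity_gb: int
    replicas: int = 2              # improve data availability
    backup_schedule: str = "daily" # automated core storage function

class SDSController:
    def __init__(self) -> None:
        self.pool_free_gb = 10_000   # pooled, heterogeneous commodity storage

    def provision(self, req: StorageRequest) -> dict:
        needed = req.capacity_gb * req.replicas
        if needed > self.pool_free_gb:
            raise RuntimeError("storage pool exhausted")
        self.pool_free_gb -= needed
        # The caller receives a logical volume handle, not hardware details.
        return {"volume_id": f"vol-{self.pool_free_gb}", "replicas": req.replicas}

sds = SDSController()
print(sds.provision(StorageRequest(capacity_gb=500, replicas=3)))
</syntaxhighlight>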
The image illustrates a conceptual SDS architecture in an SDE. DevOps developers building a new application source data and storage capabilities by selecting the appropriate APIs to meet the application's needs. DevOps developers include these APIs in container templates. These templates may include authentication (user or administrator), backup scheduling, and other automated data services that the orchestrator will commit either to the data services or to the SDS controller.
Pooled Resource Management
Resource pooling in an SDE provides abstraction from the hardware-based components through Resource Management Controllers (RMCs). An RMC accepts resource requests together with pre-defined security orchestration policies and provisions the resources based on policies for data isolation, availability, cyber resiliency, etc. At runtime the RMC controls the following (a sketch follows the list):
- Resource Pooling. Resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand and security policies. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
- Rapid Elasticity. Resource capabilities are elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
- Measured Service. Resource controllers automatically control and optimize resource usage by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
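The following Python sketch models an RMC exhibiting the three runtime behaviours above: pooled multi-tenant allocation under policy, rapid elasticity, and measured service. All class names, pool sizes, and policy fields are hypothetical.
<syntaxhighlight lang="python">
# A minimal sketch of a Resource Management Controller, assuming in-memory
# pools. Names, capacities, and policy fields are illustrative only.
class RMC:
    def __init__(self) -> None:
        self.pools = {"compute": 100, "storage_gb": 5_000}
        self.allocations: dict[str, dict] = {}
        self.usage_log: list[tuple[str, str, int]] = []   # measured service

    def provision(self, tenant: str, policy: dict, compute: int, storage_gb: int) -> None:
        # Pooling: tenants share the pools; policy can pin a broad location.
        if policy.get("region") not in ("canada", None):
            raise ValueError("policy violation: data must stay in-region")
        if compute > self.pools["compute"] or storage_gb > self.pools["storage_gb"]:
            raise RuntimeError("insufficient pooled capacity")
        self.pools["compute"] -= compute
        self.pools["storage_gb"] -= storage_gb
        self.allocations[tenant] = {"compute": compute, "storage_gb": storage_gb}
        self.usage_log.append((tenant, "provision", compute))

    def scale(self, tenant: str, delta_compute: int) -> None:
        # Rapid elasticity: grow or shrink an allocation with demand.
        self.pools["compute"] -= delta_compute
        self.allocations[tenant]["compute"] += delta_compute
        self.usage_log.append((tenant, "scale", delta_compute))

rmc = RMC()
rmc.provision("dept-a", {"region": "canada"}, compute=10, storage_gb=200)
rmc.scale("dept-a", delta_compute=5)    # scale out under demand
rmc.scale("dept-a", delta_compute=-3)   # scale back in as demand falls
print(rmc.usage_log)                    # metering record for provider and consumer
</syntaxhighlight>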
Security in a Software-Defined Environment
One of the major benefits derived from transitioning to a Software-Defined Environment is the capability to manage network and software components in an agile manner while getting the maximum utilization from each available resource. As depicted in the image, the control plane manages software orchestration and resource management. These software-defined capabilities will be a target for attackers, who will probe for vulnerabilities in the software that provides these new capabilities. Listed below are some of the security strengths and weaknesses of a software-defined environment.
- Security Strengths
- By decomposing applications into SOA services and microservices with small code footprints, the code can be readily evaluated for vulnerabilities
- High availability can be achieved by replicating software services and automating failover
- Business continuity and disaster recovery timelines can be significantly shortened by using pooled resources across geographic locations
- The security objective of continuous monitoring can be realized by actively monitoring each service container and its resources
- Configuration management is improved as each software component (application, SOA service, microservice) is registered in a service registry documenting its secure configuration state
- Security Weaknesses
- The control plane will be a target for attackers as it provides an opportunity for launching exploits across all pooled resources
- Software-defined controllers manage networks, perimeters, and the orchestration of software and data services; if compromised, they can cause significant harm
- If controller communications with CSS resources are compromised, the resource can be attacked, and that attack may not be confined to only that resource
- System administrators will be responsible for tools and capabilities that can cause significant damage to information and information systems; security training must be updated and system administrators' actions must be audited
- Continuously deploying and moving short-lived software services (apps, SOA services, microservices) broadens the potential for integrity compromise through out-of-date or corrupt services
- Significantly more objects (apps, SOA services, microservices) must be authenticated and authorized within the environment
Software-Defined Security (SDSec)
Software-Defined Security (SDSec) is a policy-driven security model in which information security in software-defined environments is implemented, controlled, and managed by security software. SDSec policies must assure that the appropriate security controls automatically remain enforceable regardless of where a software service is executed or where data is moved and stored.
SDSec policies are enforced automatically regardless of where software or CSS resources are instantiated. The image above depicts the process flow of instantiating a software service, such as a microservice: from DevOps development, through orchestration and resource management, to continuous monitoring until the software is removed from service. The critical aspect is that each policy is bound to the subject of the policy. The image below depicts a microservice bounded and encapsulated by SDSec policies. Each of these policies is automated in order to meet the security demands of delivering services in an on-demand infrastructure. SDSec policies may be dynamically assigned and/or updated at any time, as each subject of a security policy is registered and discoverable independent of its location.
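The binding of policies to their subject can be sketched as follows: the policy set travels with the microservice object wherever it is instantiated and can be updated dynamically. The policy names and rules are illustrative.
<syntaxhighlight lang="python">
# A minimal sketch of policies "bound to the subject". Names and policy
# fields are hypothetical, not an actual SDSec product model.
from dataclasses import dataclass, field

@dataclass
class SDSecPolicy:
    name: str
    rule: str                       # e.g., "encrypt-at-rest", "mutual-tls"

@dataclass
class Microservice:
    name: str
    location: str                   # may change; policies follow the subject
    policies: list[SDSecPolicy] = field(default_factory=list)

    def enforce(self) -> None:
        for p in self.policies:
            print(f"[{self.location}] {self.name}: enforcing {p.rule}")

svc = Microservice("pay-calc", "dc-east",
                   [SDSecPolicy("p1", "mutual-tls"), SDSecPolicy("p2", "encrypt-at-rest")])
svc.enforce()

svc.location = "public-cloud"       # the service moves...
svc.policies.append(SDSecPolicy("p3", "log-all-access"))  # ...policy updated dynamically
svc.enforce()                       # same controls enforced at the new location
</syntaxhighlight>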
Implementation of SDSec policy-driven security models provides the following benefits:
- Enabling security to protect work flows and information regardless of location,
- Aligning security controls to the risk profile of the data and information,
- Enterprise-wide insight into security reporting and notification,
- Enabling automated provisioning and orchestration of security controls by policy,
- Removing time-consuming and error-prone human interactions via higher levels of automation and secure APIs,
- Enabling security to scale to protect dynamic and dispersed workloads,
- Re-focusing security administration from the configuration and management of IT devices to automated security policies and the detection of advanced threats through abnormal container activity.
In addition to the benefits derived from an SDSec model, continuous monitoring must be agile enough to match or outpace the delivery of CSS resources. Recent research has focused on the concept of an Adaptive Security Architecture (ASA). The image below depicts the continuous monitoring and analytics cycle of Predict, Prevent, Detect, and Respond activities performed to enable cyber resiliency. A sketch of the cycle follows.
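The cycle can be sketched as a simple loop over the four phases. The phase functions below are placeholders; in practice each phase would be a continuous monitoring and analytics service, so this is illustrative only.
<syntaxhighlight lang="python">
# A minimal sketch of the Adaptive Security Architecture cycle named above.
# Each phase is a stub standing in for a real analytics service.
from itertools import cycle

def predict(state):  state["posture"] = "baseline risk assessed";  return state
def prevent(state):  state["controls"] = "policies hardened";      return state
def detect(state):   state["alerts"] = [];                         return state
def respond(state):  state["actions"] = "none required";           return state

PHASES = cycle([predict, prevent, detect, respond])

state: dict = {}
for _, phase in zip(range(8), PHASES):   # two full turns of the cycle
    state = phase(state)
    print(phase.__name__, "->", state)
</syntaxhighlight>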
From a cyber resiliency perspective, the introduction of software-defined services through agile containers provides an opportunity to adopt advanced cyber resiliency techniques. Cyber resiliency techniques are designed to make the cyber resiliency goals of anticipate, withstand, recover, and evolve realizable. The software-defined data centre and containerized software services provide the agility, composability, and fine-grained management of pooled resources that make the techniques described in the table achievable.
The image below provides an example of an adaptive security response to the detection of malicious activity. The software-defined environment allows legitimate users to be redirected via dynamic provisioning to secured software services while the malicious connection is maintained in an isolated network segment. Incident response teams are then able to observe the malicious behavior. The malicious code may reveal unknown vulnerabilities and/or zero-day exploits. Subsequently, security policies can be implemented and/or security controls enhanced to prevent the recurrence of the malicious intrusion.
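The response logic can be sketched as follows: on detection, legitimate sessions are redirected to a newly provisioned clean instance while the suspect connection remains pinned to an isolated segment for incident response observation. The session data and action strings are hypothetical.
<syntaxhighlight lang="python">
# A minimal sketch of the adaptive response described above; all
# identifiers and actions are illustrative.
def handle_detection(sessions: list[dict]) -> list[dict]:
    actions = []
    for s in sessions:
        if s["malicious"]:
            actions.append({"session": s["id"],
                            "action": "pin to isolated segment for IR observation"})
        else:
            actions.append({"session": s["id"],
                            "action": "redirect to newly provisioned clean instance"})
    return actions

sessions = [{"id": "s-101", "malicious": False},
            {"id": "s-102", "malicious": True}]
for a in handle_detection(sessions):
    print(a)
</syntaxhighlight>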
Additional information concerning cyber resiliency functions, goals, and techniques can be found in Enterprise Security Architecture Description Document (ESADD) ANNEX D – Security Operations (OPS).
CSS Target Security Architecture
The CSS target architecture is defined as a composable set of CSS resources that is implementable within a Software-Defined Environment (SDE) independent of the deployment-level architecture (GC data centre, hybrid cloud, public cloud, etc.). The image below depicts Software-Defined Environment instances within three deployment level architectures supported by unified orchestration and resource management capabilities. CSS services are resilient, scalable, load balanced, secure, and available on-demand to all GC users independent of the location of the SDE instance.
Each SDE instance provides virtualized compute, storage, and data services (database, file services, etc.) supported by data warehouse capabilities. SDE instances collectively represent a distributed set of CSS services. Table (Virtualization and Container Quality Attributes) provides a list of CSS quality attributes realized by unified management, compute, and storage services supported by a unified fabric. The unified fabric may itself be software defined using SDN, SDP, and SD-WAN technologies.
Service portals manage software and data requests that directly map to underlying IT hardware. Virtual resources are predictively allocated based on business/mission needs. Compute configurations are dynamically allocated based on changing business needs and pre-defined Service Level Agreements (SLAs). Resources are lifecycle managed by policy and cost justification. These capabilities and the relationships among the components of CSS provide the basis for the patterns and use cases described in section 5.
CSS Architecture Transition Strategy
CSS services have traditionally been treated as individual IT/IS components connected by a local network infrastructure. As new mission/business objectives require additional CSS services, more IT/IS components are deployed and managed on local networks to meet user demand. This process of adding CSS resources leads to resource inefficiencies and the stove-piping of data within network silos. The image below depicts the current state of CSS resource utilization as opposed to the desired state of a unified software-defined environment for CSS services.
The transition to software-defined CSS services is inevitable as cost and resource inefficiencies increase. The transition is technology-driven: by the introduction of continuous integration and continuous delivery of software and data services, by software applications that require increasingly unconstrained resources, and by the demand for software and data services capable of responding quickly to evolving mission/business needs. The image below depicts the future state of CSS resource utilization, including unified management capabilities spread across a unified fabric.
Current CSS Architecture Description
The current GC CSS architecture (across data centres and isolated networks) is composed of traditional hardware-based platforms and virtualized compute and storage platforms. Hardware-based platforms rely on dedicated CSS resources for each application, an approach that leads to the stove-piping of CSS resources as each application enters service. This has led to severe underutilization of some CSS resources, while others may suffer performance issues due to the overloading of service requests.
The advent of virtualized compute (Type 1 and 2 hypervisor platforms) and storage has provided a means to share compute and storage services among applications, increasing the utilization of compute and storage services. The image below depicts typical hardware-based and virtualized CSS platforms.
Although virtualized platforms greatly increase the utilization of compute and storage services, applications are not completely abstracted from the compute services, hampering portability and the re-allocation of CSS services based on service demands. An important differentiator between virtualized platforms and the target CSS architecture is the use of containers to mount software and data services. Containers can be mounted without shutting down the underlying compute services. Mounting an application service on a virtualized platform requires the O/S to re-boot, which may take 2-3 minutes; mounting an application within a container is near-instantaneous. This agility allows for:
- Dynamic positioning of resources (a cyber resiliency factor)
- Elasticity and scalability of CSS resources
- Re-allocation of resources based on utilization and performance factors
- Actionable “courses of action” upon intrusion detection
- Implementation of advanced cyber resiliency techniques (Table of SDE Cyber Resiliency Techniques)
CSS Transition Strategy
The CSS architecture transition is driven by GC business/mission needs. GC business/mission cases drive the demand for compute and/or data services (database, file, storage, etc.). Satisfaction of a business case may be realized through the extension of GC data centre capabilities or through the acquisition of cloud services. The decision to extend GC data centre capabilities or to acquire cloud services depends on applicable GC regulatory requirements and on the level of protection required for the data, considering its privacy and classification levels. Business case costs must also be considered when determining an approach: the cost evaluation may indicate that using two cloud providers (one for compute services, one for data storage) is the most economical approach. The next sections describe transitional approaches for extending CSS services based on business needs and how these needs are met in a software-defined environment.
Transition to a Software-Defined Environment
Transitioning to a SDE within GC data centres increases the utilization and availability of CSS resources within and across GC data centres and introduces the software-defined technical capabilities to integrate GC data centre resources with cloud-based CSS resources. The image depicts the four technical capability areas that must be addressed when transitioning to a software-defined environment with a unified fabric.
The image below illustrates the transition of software-defined compute, network, and storage services with the introduction of defined service, control, and resource planes to manage and deliver software-defined CSS resources. The service plane aggregates the service delivery expectations of the user community and provides for policy-based dynamic provisioning and discovery of CSS resources. Aggregating service requests allows service provisioning to maximize the use and availability of CSS resources.
The control plane provides software-defined service orchestration and resource management across the set of data center resources while providing isolation of the pooled resources. SDN and SDP technologies provide the capability to create software-defined networks and secure perimeters in order to create virtualized security zones with controlled access across zones.
Introduction of Cloud Services
The introduction of cloud services allows the GC to utilize CSS resources as commodity-driven resources. Cloud-based resources are used to ensure on-demand resources are available when needed, and resources may be released when demand falls. GC data centres can burst CSS resources to the cloud when demand levels increase and release the cloud resources when demand falls. The introduction of cloud-based CSS resources extends the SDE fabric with the introduction of software-defined Wide Area Network (SD-WAN) capabilities and secure VPN communications to cloud providers. SD-WAN technologies allow two or more GC data centres to extend the GC SDE fabric and consolidate CSS resources across data centres. The image depicts an example of the unified fabric with two GC data centres and two cloud providers (public and hybrid).
The image below illustrates the request for CSS services independent of the location of the CSS resources. The service plane accepts requests for CSS services and resources are assigned based on demand and availability. CSS resources are pooled and cloud based services may be added or released based on demand.
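The bursting behaviour can be sketched as follows: demand is served from data centre capacity first, bursts to cloud capacity when demand exceeds it, and cloud capacity is released as demand falls. The capacities and demand values are illustrative assumptions.
<syntaxhighlight lang="python">
# A minimal sketch of demand-driven cloud bursting as described above.
# Capacities, units, and thresholds are illustrative only.
class BurstingPool:
    def __init__(self, dc_capacity: int) -> None:
        self.dc_capacity = dc_capacity
        self.cloud_units = 0          # commodity units rented on demand

    def serve(self, demand: int) -> None:
        if demand > self.dc_capacity:
            self.cloud_units = demand - self.dc_capacity   # burst to cloud
        else:
            self.cloud_units = 0                           # release on low demand
        print(f"demand={demand:4d}  dc={min(demand, self.dc_capacity):4d}  "
              f"cloud={self.cloud_units:4d}")

pool = BurstingPool(dc_capacity=100)
for demand in (60, 140, 90):
    pool.serve(demand)
</syntaxhighlight>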