Revision as of 11:01, 7 April 2021
Secure Application Development
Software delivery has changed over the years. Initially, software was developed to meet the functional requirements of users, and much of the security practice centred on ensuring that data was accurate and that controls were in place to secure it. It didn’t take long for systems to have to accommodate not only users, but misusers: those who probe an application or system to see what they can make it do.

A goal of the GC Enterprise Security Architecture (ESA) program is to promote a whole-of-government approach and ensure that security is integrated into current business practices from the outset. The GC Enterprise IT/IS Infrastructure exists solely to deliver application functionality to users, which in turn implies that the entire ESA exists to enable application security. Thus, to support the delivery of a consistent approach to security across the GC, an application security framework is required.
Secure Coding Principles
Many secure coding principles are derived from general security best practices but have been adapted to relate specifically to software development. They are summarized in the table below.
Principle | Description |
---|---|
Minimize Attack Surface Area | The attack surface area can be minimized in two key ways: reducing or redesigning functionality to remove non-essential or potentially vulnerable components; keeping code small, simple and maintainable |
Establish secure defaults | Access and configurations should be set at the most restrictive levels. Reductions in security should be by exception or by user permitted configuration changes. |
Principle of Least privilege | The principle of least privilege requires the developer to be aware of the minimum privileges required by accounts, components and processes in the systems they deliver and setting privileges accordingly. This same principle applies to user rights and resource permissions such as CPU limits, memory, network, and file system permissions |
Principle of Defense in depth | Build and layer security controls into development efforts that anticipate the possibility of compromise. For example, a flawed administrative interface is unlikely to be vulnerable to anonymous attack if it correctly gates access to production management networks, checks for administrative user authorization, and logs all access. |
Fail Securely | Failing securely is the principle of ensuring that error conditions or failures do not compromise the integrity of the system or data. Examples of failing securely are rollbacks of transactions, reduced functionality operating modes and ensuring processes or accounts have not been inadvertently left in an escalated privilege mode. |
Don’t trust services | All external systems and services should be treated as untrusted. The interface to external services must be well documented, and data obtained from services must be examined and validated. |
Separation of duties | Systems should be developed with attention to roles and the separation of privileges and permissions within those roles. At a minimum, developers should code systems with separate administrator and user roles. |
Avoid security by obscurity | Security is best applied by implementing the practices identified in this guide and applying appropriate security controls. Covert coding practices and obscure storage locations may be known only to the developer, but this should not be relied upon: it is an insecure development technique and a maintenance nightmare. |
Keep security simple | Code simplicity facilitates comprehensibility and traceability. Comprehensible code makes security analysis and code review significantly easier. Simple code also allows for easier traceability to security and functional requirements. |
Fix security issues correctly | “Stovepiped” applications and delivery streams are quickly becoming a thing of the past. Code, object and image reuse are key to rapid and efficient application and system delivery. When a security issue is detected in one system or application, it is quite likely that the issue has permeated others as well; identify the root cause, fix it in the shared code, and retest every system that reuses it. |
Use compiler security checking and enforcement | Leverage compilers that support static code analysis, and enable compiler warnings and error checking. Different compilers are developed for different purposes. Security issues identified early in the development process are significantly easier to address than those found after deployment. |
Avoid security conflicts arising between native and non-native, passive and dynamic code | Applications are not typically written in just one language. Developers must be aware of vulnerabilities in one language that may impact another when delivering applications. Additionally, developers of applications that generate dynamic code must validate any assumptions made about code versions and application state. |
Conduct automated testing | Testing should be automated and repeatable to the greatest extent possible, including regression testing. Developers should also adopt and explore newer techniques such as fuzz testing. |
Conduct ongoing code inspection | Even with the use of static code analysis tools, ongoing manual code inspections should be performed. |
Use configuration management | Source code generation and maintenance should always be performed under managed configuration control. |
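As one illustration of the "Fail Securely" principle above, the sketch below uses SQLite transactions so that a failed operation leaves the data in its prior state rather than partially updated. The table, account IDs, and amounts are illustrative, not from the source.

```python
# "Fail Securely" sketch: a failed transfer rolls the database back
# instead of leaving a partial write behind.
import sqlite3

def transfer(conn, src, dst, amount):
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
            cur = conn.execute("SELECT balance FROM accounts WHERE id = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
        return True
    except ValueError:
        return False  # fail securely: state unchanged, no partial update

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

assert transfer(conn, 1, 2, 60) is True
assert transfer(conn, 1, 2, 60) is False  # would overdraw; rolled back
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
```

The second transfer fails its funds check and the `with conn:` block rolls the whole transaction back, so the balances remain exactly as the first transfer left them.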
Secure Software Development LifeCycle
A secure Software Development Lifecycle (SDLC), regardless of the deployment model, should strive to follow ISO/IEC 27034. Best practices have been defined that apply regardless of the approach taken (agile or waterfall).
Secure DevOps Coding Principles and Best Practices
As organizations move to deliver functionality more quickly, they can impair stability and security in the process. If DevOps security principles and associated practices are not followed, vulnerabilities are produced as fast as the desired functionality. Secure DevOps is about building security into DevOps tools and practices: mapping out how changes to code and infrastructure are made, and finding places to add security checks, tests and gates without introducing unnecessary cost or delay. Because security is applied at the code and infrastructure levels, secure DevOps is often also referred to as Security as Code or Infrastructure as Code.
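The "checks, tests and gates" idea above can be sketched as a minimal pipeline gate that refuses to promote a build unless every security check passes. The check names and pass/fail results here are illustrative, not from any real pipeline.

```python
# Minimal "security as code" gate: run each check, block promotion on failure.
def run_gate(checks):
    """Return (ok, failures) for a list of (name, check_fn) pairs."""
    failures = [name for name, check in checks if not check()]
    return (len(failures) == 0, failures)

checks = [
    ("static-analysis", lambda: True),
    ("dependency-scan", lambda: True),
    ("secret-scan", lambda: False),  # simulated finding: hard-coded secret
]
ok, failures = run_gate(checks)
assert not ok and failures == ["secret-scan"]
```

In a real pipeline each lambda would invoke a scanner; the gate logic itself stays this simple, which is what makes it cheap to add without delaying delivery.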
Secure DevOps Coding Principles
Secure DevOps Practices
Rapid/Continuous Development Models
Most large-scale application developers employ some method of rapid or continuous development in order to maintain a standard of quality and organization. The two main models are trunk-based development and Git flow.
Trunk-Based Development
Trunk-Based Development (TBD) is a model in which all developers (for a particular deployable unit) commit to one shared branch under source control, referred to as the trunk. Developers may work on changes in short-lived local branches, but these must be merged back to the trunk promptly; no other branches off the trunk are permitted. Formal branches are made only for releases. Only release engineers create and commit to release branches, and they may cherry-pick individual commits onto a release branch if there is a desire to do so. After a release has been superseded by another, its branch is typically deleted.
Git Flow
In the Git flow development model, there is one main development branch with strict access controls. Developers create feature branches from this main branch and work on them. When a feature is done, the developer opens a pull request, where other developers review and comment on the changes. Once the changes are approved, the pull request is accepted and merged to the main branch. When the main branch is judged mature enough to release, a separate branch is created to prepare the final version. The application on this branch is tested, and bug fixes are applied until it is ready to be published to end users. The final product is then merged to the master branch and tagged with its release version. In the meantime, new features continue to be developed on the development branch.
Secure Application Programming Interfaces
APIs are widely used by all development teams, regardless of the service or application being developed, so it is critically important to maintain their integrity and security. To keep APIs secure, consider the following best practices.
- Establish Canonical Data Model - A comprehensive canonical data model is required to define a common syntax and semantics for API functions and data.
- Keep APIs Simple - One means of simplifying APIs is to logically separate services for enterprise, web and presentation. Logical separation enables the presentation services to support different devices without having to modify the back-end services they are accessing.
- API Attack Resistance - APIs must follow the same coding development guidance as previously presented in this guide to resist common API attacks such as the following:
- Cross-site scripting
- Code injection
- Business logic attacks
- Parameter pollution attacks
- API Security Protocols - Choose the right API security protocol to meet integrity and confidentiality needs. Basic API authentication is the easiest to implement, but the username and password are only Base64 encoded and must never be transmitted without TLS encryption. OAuth 2.0 relies on TLS for encryption and is easier to implement than OAuth 1.0, but its security is less robust and it is not suitable for sensitive data exchanges. OAuth 1.0a is the most secure of the three common protocols and the only one that can be used without TLS, although that practice is strongly discouraged. OAuth 1.0 also provides a more robust authorization model within the protocol.
- Use API Keys - API keys should be used instead of traditional username/password authentication whenever possible. API keys:
- provide better entropy, making them less vulnerable to brute-force and dictionary attacks
- are not subject to password-reset issues
- do not require hashing of passwords stored in a database, which provides better throughput
Ensure that the API key/secret is adequately protected. Security frameworks such as OAuth 1.0 provide access control enforcement, restricting the list of APIs that each specific API key can access.
- ID Anonymity - When using IDs, create them using random generators so that they are not easily guessed. This may also increase throughput as the generation can happen without relying on auto-increment functionality of databases.
- Security Framework - Develop APIs within existing, secure API frameworks that include desired security features.
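Several of the API practices above can be sketched with Python's standard library. All names and values below are illustrative, not from the source: the first snippet shows why Basic authentication needs TLS (credentials are only encoded, not encrypted), the second generates and verifies a high-entropy API key stored only as a hash, and the third creates non-guessable resource IDs.

```python
import base64
import hashlib
import secrets
import uuid

# 1. Basic authentication is encoding, not encryption: anyone who sees
#    the header on a non-TLS connection recovers the credentials.
header = "Basic " + base64.b64encode(b"alice:s3cret").decode()
recovered = base64.b64decode(header.split(" ", 1)[1]).decode()
assert recovered == "alice:s3cret"

# 2. High-entropy API keys: hand the key to the client once, store only
#    a digest so a database leak does not expose usable keys. A plain
#    hash suffices here because the key, unlike a password, is random.
def issue_api_key():
    key = secrets.token_urlsafe(32)  # ~256 bits of entropy
    return key, hashlib.sha256(key.encode()).hexdigest()

def verify_api_key(presented, stored_digest):
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_digest)  # constant time

key, digest = issue_api_key()
assert verify_api_key(key, digest)
assert not verify_api_key("guessed-key", digest)

# 3. ID anonymity: random UUIDs instead of guessable auto-increment IDs.
resource_id = str(uuid.uuid4())
assert len(resource_id) == 36
```

`secrets.compare_digest` is used for verification because a naive string comparison can leak information through timing differences.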
Secure Containers
The increased use of containers for rapid application delivery has changed the security landscape. Even where a development culture has changed to include security from the start, the sheer number of containers, their lifecycles, and the dependencies between them present new challenges. The NIST SP 800-190 Application Container Security Guide [38] provides guidance on identifying and addressing these issues across the enterprise. The practices that follow are those for which development teams hold a primary or shared responsibility.
Address Image Vulnerabilities
Vulnerabilities cannot be addressed within the image alone. Apply code-level scanning during image development and leverage a centralized vulnerability and configuration management capability with visibility into the entire CI/CD pipeline. Incorporate vulnerability scanning checkpoints in the delivery lifecycle to prevent vulnerable containers from being deployed.
Implement Secure Configurations
Images should only be built from trusted sources and deployed on container-specific operating systems such as Alpine Linux or Windows Nano Server, hardened with security mechanisms such as AppArmor, SELinux, grsecurity and seccomp. All implementations should be hardened following the Center for Internet Security (CIS) hardening guidelines. For Docker-based container deployment, enable Docker Content Trust. Do not create local/personal instances of trusted sources; use the capabilities and security associated with the trusted sources themselves.
Do not Embed Secrets
Leverage orchestration capabilities to dynamically inject secrets at runtime. The secret(s) should only be accessible to relevant containers and should be removed when the container stops. Secrets should not be stored on disk or be exposed at the host level.
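A minimal sketch of this practice, assuming the orchestrator populates an environment variable before the process starts (`DB_PASSWORD` is an illustrative name, not from the source). The application never ships the secret inside its image; it refuses to start if the secret is absent.

```python
import os

def load_secret(name):
    """Read a secret injected at runtime; refuse to start without it."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name} not injected; refusing to start")
    return value

os.environ["DB_PASSWORD"] = "injected-at-runtime"  # simulates the orchestrator
assert load_secret("DB_PASSWORD") == "injected-at-runtime"
```

In production the same pattern applies to secrets mounted as in-memory files by the orchestrator; the key property is that the value exists only in the running container, never in the image or on disk.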
Implement Least Privilege
Containers should be run with the least privileges possible. Docker containers should not be run with the "--privileged" switch; where the switch is genuinely needed, formal approval documenting its use should be obtained before any container runs with it. Least privilege applies not only to how containers run, but to the privileges and capabilities they use.
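One way to enforce this before deployment is a simple lint over the container run options. The flag strings below match the Docker CLI, but the checker itself is an illustrative sketch, not a real tool.

```python
# Flags that grant a container host-level privileges or capabilities.
RISKY_FLAGS = {"--privileged", "--cap-add=ALL", "--pid=host", "--net=host"}

def audit_run_args(args):
    """Return the risky flags present in a `docker run` argument list."""
    return sorted(RISKY_FLAGS.intersection(args))

violations = audit_run_args(["docker", "run", "--privileged", "--net=host", "app:1.0"])
assert violations == ["--net=host", "--privileged"]
```

A CI gate can reject any deployment whose run arguments produce a non-empty violation list, turning the least-privilege policy into an automated check.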
Set Container Resource Limits
Reduce the threat of vulnerabilities such as denial-of-service attacks, and performance impacts due to misconfigured or overzealous containers by putting limits on the system resources that individual containers can consume. Leverage orchestration tools to inform when thresholds are being reached rather than just terminating processes or containers when limits are exceeded.
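The threshold-alerting idea above can be sketched as follows. The stats dictionary shape and the 80% warning ratio are illustrative assumptions, not a real orchestrator API.

```python
def check_memory(stats, warn_ratio=0.8):
    """Return containers using at least warn_ratio of their memory limit."""
    alerts = []
    for name, (used, limit) in stats.items():
        if limit and used / limit >= warn_ratio:
            alerts.append(name)
    return alerts

stats = {"web": (410, 512), "db": (200, 1024)}  # (used MiB, limit MiB)
alerts = check_memory(stats)
assert alerts == ["web"]  # 410/512 crosses the 80% threshold
```

Raising an alert at a threshold gives operators time to investigate a misbehaving container before the runtime hard-kills it at the limit.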
Leverage Tools
Tools such as Docker Bench Security provide easy mechanisms to assist in running checks against containers that support implementing the practices outlined in this guide. For example, Docker Bench Security runs checks against the CIS Docker 1.13 Benchmark in the following six areas:
- Host configuration
- Docker daemon configuration
- Docker daemon configuration files
- Container images and build files
- Container runtime
- Docker security operations
Implement an Audit Trail
The rapid deployment of containers makes it imperative that developers keep track of their deployments. In the absence of a tool that supports this activity, developers should record the following information as a minimum:
- When an application was deployed
- Who deployed it
- Why it was deployed
- What its intent is
- When it should be deprecated
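The minimum audit-trail fields above can be captured as one JSON line per deployment, which keeps records easy to grep and to ingest into a log system. The field names and sample values are illustrative.

```python
import json
from datetime import datetime, timezone

def deployment_record(app, deployer, reason, intent, deprecate_by):
    """Serialize one deployment's audit fields as a single JSON line."""
    return json.dumps({
        "deployed_at": datetime.now(timezone.utc).isoformat(),  # when
        "app": app,
        "deployed_by": deployer,                                # who
        "reason": reason,                                       # why
        "intent": intent,                                       # what for
        "deprecate_by": deprecate_by,                           # until when
    })

line = deployment_record("billing-api", "jdoe", "CVE patch", "hotfix", "2025-12-31")
record = json.loads(line)
assert record["deployed_by"] == "jdoe" and "deployed_at" in record
```

Appending these lines to a write-once log gives an audit trail even before a dedicated deployment-tracking tool is in place.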
Sensitivity Isolation
Containers should be deployed within the context of their data sensitivity. Do not co-locate containers handling highly sensitive data and applications in the same VM or node as containers providing non-sensitive data and applications.
Remove unused containers and images
Repositories that contain unused images and containers consume valuable resources and lead to container sprawl. Perform regular audits to identify unused, obsolete containers and images and eliminate them from your systems.
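Such an audit can be as simple as comparing each image's last-use date against a retention window. The image names, dates, and 90-day window below are illustrative assumptions.

```python
from datetime import date, timedelta

def stale_images(last_used, today, max_age_days=90):
    """Return image names whose last use is older than max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, used in last_used.items() if used < cutoff)

last_used = {"app:1.0": date(2021, 1, 5), "app:2.3": date(2021, 3, 30)}
stale = stale_images(last_used, today=date(2021, 4, 7))
assert stale == ["app:1.0"]  # only the image idle past the 90-day window
```

The resulting list feeds the cleanup step of the audit; images still in use stay untouched.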
Secure Network-Facing Services
Containers are intended to serve a single purpose, and logins should be avoided unless necessary; Secure Shell (SSH) should not be run in containers. Maintain isolation between containers by giving each container its own network, allowing the applications within them to communicate with each other via API only.
Avoid Mounting to the Host Filesystem
Mounting a host directory can be useful for testing, but it exposes the host filesystem to compromise from within the container. The best practice is to avoid mounting the host filesystem, or to mount it read-only. If data must be shared between containers, use data volume containers and mount those, which also provides continuity between them.
Consider Hardware Based Trusted Computing
Hardware-based trusted computing provides a verified system platform and builds a chain of trust, rooted in the hardware, for containers. Leverage a software-based (virtual) Trusted Platform Module (TPM), called a vTPM, in addition to hardware TPMs where possible.
Implement Lists
Applying whitelists and blacklists to container system calls provides an additional layer of security. Whitelist considerations include the type of application hosted in the container, the deployment situation, and the container size. Blacklists should include high-risk calls, such as those that load loadable kernel modules (LKMs), reboot the host, or trigger mount operations.
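A whitelist of this kind is typically expressed as a seccomp profile. The sketch below builds a Docker-style profile that denies every syscall by default and allows only a named set, so the high-risk calls mentioned above (module loading, reboot, mount) stay blocked implicitly. The syscall selection is illustrative, not sufficient for a real application.

```python
import json

def seccomp_profile(allowed_syscalls):
    """Build a deny-by-default seccomp profile allowing only the named calls."""
    return {
        "defaultAction": "SCMP_ACT_ERRNO",  # deny everything by default
        "syscalls": [{
            "names": sorted(allowed_syscalls),
            "action": "SCMP_ACT_ALLOW",
        }],
    }

profile = seccomp_profile({"read", "write", "exit_group", "futex"})
assert "init_module" not in profile["syscalls"][0]["names"]  # LKM load stays denied
profile_json = json.dumps(profile, indent=2)  # pass to docker run --security-opt seccomp=...
```

Deny-by-default whitelisting is generally preferred over blacklisting alone, since a blacklist can never enumerate every dangerous call.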
Leverage Operational Resources
Container deployments should not happen in isolation, and their security impacts can be felt organization-wide. Leverage IT operations resources to support efforts around enterprise scalability, security and compliance.