June 1, 2022
DX UIM Team Practices DevSecOps for Secure Development, Delivery, and Deployment
Written by: Seshasai Koduru
DevOps is a set of enhanced engineering practices that reduce lead time and increase the frequency of delivery. The primary goal of DevOps is to ensure that operations team members are engaged and collaborating with development from the very beginning of a project or product development effort.
Within many enterprises, teams are being compelled to reassess the security of their DevOps implementations. Recent news about vulnerabilities like SUNBURST and Log4j underscores why this is so critical. To keep their DevOps environments secure, organizations must remediate such vulnerabilities quickly when they arise.
DevOps + Security = DevSecOps
There’s a growing movement, called DevSecOps, to incorporate security into the coding process. Quite simply, the focus of DevSecOps is on producing more secure software, and it includes processes for secure development and delivery, as well as recommendations for securing deployment. Its primary focus is to ensure loopholes and weaknesses are exposed early on through monitoring and analytics, so that remediation actions can be implemented efficiently.
Given our goal of providing secure, enterprise-grade software, Broadcom’s DX UIM development organization started to transition from DevOps to DevSecOps.
As part of this transition, DX UIM’s product development team reviewed DevOps best practices for conventional builds as well as for agents, practices that can be extended to emerging container-based approaches, such as Docker.
As teams move to DevSecOps, there are a plethora of potential concepts and approaches to apply. In this blog, we highlight several key approaches we have adopted that we believe form a solid foundation for DevSecOps.
Secure Development
Standard and Reliable Build System
Like many enterprise software solutions, DX UIM implementations span multiple languages (including C, C++, Java, and Python) and work on various operating systems (OSs) and hardware platforms. In our move to DevSecOps, we sought to ensure that we have standard and reliable build systems. Following are the core systems we use for DevSecOps:
Jenkins
Jenkins is our build automation system of choice. Jenkins is an open-source server that helps automate various software development efforts, including activities associated with builds, testing, and deployment. With these capabilities, Jenkins facilitates continuous integration and continuous delivery. It is a widely accepted system that enables teams to create freestyle and pipeline-based build jobs. Using pipeline-based build jobs, Jenkins gives developers the ability to customize automation code to their specific needs. To streamline the release process, Jenkins also comes with multiple plugins that support a range of actions, such as parameter-based execution and job sequencing.
These capabilities give us complete control over our builds, which are taking place constantly across multiple feature and mainstream code branches. In addition, our continuous integrations are automated to ensure that various levels of jobs are executed as soon as code changes are introduced.
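The stage-gating behavior of pipeline-based build jobs can be sketched in plain Python. This is an illustrative model only, not an actual Jenkinsfile (Jenkins pipelines are written in Groovy), and the stage names are hypothetical:

```python
# Minimal illustration of pipeline-style job sequencing: stages run in
# order, and a failure stops the pipeline before later stages execute.
# Stage names are hypothetical, not taken from a real Jenkinsfile.

def run_pipeline(stages):
    """Run (name, callable) stages in order; stop at the first failure."""
    completed = []
    for name, step in stages:
        if not step():
            return {"status": "FAILED", "at": name, "completed": completed}
        completed.append(name)
    return {"status": "SUCCESS", "at": None, "completed": completed}

if __name__ == "__main__":
    result = run_pipeline([
        ("checkout", lambda: True),
        ("build",    lambda: True),
        ("scan",     lambda: False),  # a failing security scan gates the release
        ("deploy",   lambda: True),   # never reached
    ])
    print(result)
```

The key design point this models is that a failed stage (for example, a security scan) prevents every downstream stage from running, which is what makes pipeline sequencing a security control rather than just an automation convenience.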
JFrog Artifactory
JFrog Artifactory is our artifact hosting system. JFrog Artifactory provides end-to-end automation and management of binaries and artifacts through the application delivery process.
This system helps us manage third-party and DX UIM artifacts. We also leverage built-in security features of JFrog, such as Xray, for additional safeguards.
Git and GitHub
We’ve chosen to employ Git and GitHub as our source code control system and central repository. Git and GitHub are the de facto standards for source code management. These tools help us with managing the various development and release states of our source code.
Build Agents
Build agents are the systems in which the actual build of the software happens. These include physical, virtual, and containerized systems. Depending on build needs, the required compilers and development software are installed, managed, and audited on these systems. This can include C/C++ compilers, Java Development Kits, and so on.
To safeguard access, all these systems are configured with applicable permissions and audit capabilities. Build agents are fortified with firewalls, and anti-virus signatures and OS security patches are regularly updated.
Integration with Tools like BlackDuck and Coverity
We have adopted a shift-left approach to find vulnerabilities during the development or build phase. We use the following tools to support these efforts:
- Coverity. Coverity provides static analysis at the source code level and identifies security and code quality issues. Through continuous integration in Jenkins, we can do constant Coverity scans, which helps us meet our goal of eliminating issues before they get to production.
- BlackDuck. BlackDuck is another tool that helps us identify third-party components used in the software and flag risks, including operational, security, and licensing risks. Our regular BlackDuck scans of binaries and signatures help us find vulnerabilities and risks before the software is delivered to production.
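The shift-left gate these scans implement can be sketched as a severity threshold applied to a scan report: if any finding meets or exceeds the threshold, the build fails before it can reach production. This is a simplified sketch; the report shape (a list of dicts with "id" and "severity" fields) is a hypothetical stand-in for real Coverity or BlackDuck output formats:

```python
# Sketch of a shift-left security gate: fail a build when a scan report
# contains findings at or above a chosen severity. The report format here
# is illustrative, not a real Coverity/BlackDuck schema.

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate_build(findings, fail_at="high"):
    """Return (passed, blocking), where blocking lists findings at/above fail_at."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings
                if SEVERITY_RANK[f["severity"]] >= threshold]
    return (len(blocking) == 0, blocking)

if __name__ == "__main__":
    report = [
        {"id": "CVE-2021-44228", "severity": "critical"},  # e.g., Log4j
        {"id": "STYLE-101", "severity": "low"},
    ]
    passed, blocking = gate_build(report)
    print("build passed:", passed)  # False: the critical finding blocks the build
```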
Secure Delivery
Build Options and Code Signing
On the Windows platform, we have enabled several options that help secure the code that gets built. Following are a few of these options:
- Address Space Layout Randomization (ASLR). ASLR is a computer security technique that randomly positions the base address of an executable and the locations of libraries, the heap, and the stack within a process's address space. By randomizing these memory addresses, ASLR prevents an attacker from reliably determining where required code (such as functions or ROP gadgets) is actually located. Rather than removing vulnerabilities from the system, ASLR makes existing vulnerabilities more challenging to exploit. We use the linker option /DYNAMICBASE to enable ASLR.
- Authenticode. This is a Microsoft code-signing technology that identifies the publisher of Authenticode-signed software. Authenticode also verifies that the software has not been tampered with since it was signed and published. We use Sign Tool with a digital certificate to certify the binaries that are shipped.
- Safe structured exception handling (SafeSEH). With this approach, the handlers' entry points are collected in a designated read-only table, and each entry point is verified against this table before control is passed to the handler. For an executable to be created with a safe exception handler table, every object file on the linker command line must contain a special symbol named @feat.00. If any object file passed to the linker lacks this symbol, the exception handler table is omitted from the executable and the run-time checks are not performed for the application. By default, the table is omitted silently in this case, so the problem can easily be overlooked. The /SAFESEH command-line option instructs the linker to refuse to produce an executable without this table. SafeSEH applies to i386 binaries but not to AMD64 binaries; we enable it with the linker option /SAFESEH.
- High-entropy virtual addresses (HighEntropyVA). This option specifies that the executable image supports high-entropy 64-bit address space layout randomization (ASLR). It applies to AMD64 binaries but not to i386 binaries; we enable it with the linker option /HIGHENTROPYVA.
- Control Flow Guard (CFG). CFG is a highly optimized platform security feature that was created to combat memory corruption vulnerabilities. By placing tight restrictions on where an application can execute code from, it makes it much harder for exploits to execute arbitrary code through vulnerabilities such as buffer overflows. CFG extends previous exploit mitigation technologies such as /GS, DEP, and ASLR. To achieve this, we use the /guard:cf code generation option.
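Each of these linker options is recorded as a flag in the DllCharacteristics field of the PE optional header, so the presence of the mitigations can be verified on the built binaries. Below is a minimal Python sketch of such a check; the flag values come from the PE/COFF specification, and the synthetic header in the usage example is illustrative only (a real check would read a built .exe or .dll from disk):

```python
import struct

# DllCharacteristics flags from the PE/COFF specification.
IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA = 0x0020  # /HIGHENTROPYVA
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE    = 0x0040  # /DYNAMICBASE (ASLR)
IMAGE_DLLCHARACTERISTICS_GUARD_CF        = 0x4000  # /guard:cf (CFG)

def dll_characteristics(pe_bytes):
    """Read the DllCharacteristics field from a PE image's optional header."""
    (e_lfanew,) = struct.unpack_from("<I", pe_bytes, 0x3C)
    if pe_bytes[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("not a PE image")
    optional_header = e_lfanew + 4 + 20  # skip PE signature + COFF header
    # DllCharacteristics sits at offset 70 in both PE32 and PE32+ optional headers.
    (chars,) = struct.unpack_from("<H", pe_bytes, optional_header + 70)
    return chars

def check_mitigations(pe_bytes):
    chars = dll_characteristics(pe_bytes)
    return {
        "aslr":            bool(chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE),
        "high_entropy_va": bool(chars & IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA),
        "cfg":             bool(chars & IMAGE_DLLCHARACTERISTICS_GUARD_CF),
    }

if __name__ == "__main__":
    # Synthetic header for illustration only.
    fake = bytearray(0x100)
    struct.pack_into("<I", fake, 0x3C, 0x80)              # e_lfanew
    fake[0x80:0x84] = b"PE\x00\x00"
    struct.pack_into("<H", fake, 0x80 + 24 + 70, 0x4060)  # all three flags set
    print(check_mitigations(bytes(fake)))
    # → {'aslr': True, 'high_entropy_va': True, 'cfg': True}
```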
For Linux packages like Red Hat Package Manager (RPM), we use GNU Privacy Guard (GPG) signing, so that when the RPMs are imported into local repositories, consumers can validate that the packages come from an authentic source.
Standard and Advanced Penetration Testing
Though we follow security principles at the development and build phases, it is also necessary to ensure that software is tested at run time. Toward this end, we use a tool from Qualys to do penetration testing. With this tool, we can do web application vulnerability scanning that highlights the vulnerabilities present in the deployment. Periodically, we perform advanced penetration testing to identify and fix vulnerabilities that may exist at deeper levels within our implementation.
Secure Deployment
DX UIM offers various secure deployment options, such as Tunnels and Secure Bus. This topic will be discussed in detail in a separate blog.
Summary
With the above secure development and delivery processes, DX UIM was able to address Log4j 2.x and other vulnerabilities very quickly. With our strong DevSecOps processes, we’ve been able to deliver software that gives customers enterprise-grade reliability and security. With these approaches, we can ensure DX UIM remains current with enterprise security guidelines and react quickly when new vulnerabilities are discovered.
Seshasai Koduru
Seshasai is an engineering leader and architect for infrastructure management and has 23 years of experience in building enterprise-grade products. He has led engineering product development and architecture in the domains of fingerprint analysis, DBMS, automation, DevOps, and infrastructure monitoring. His passion...