This is a guest blog post by Christian Pedersen, CTO and co-founder of SD Elements partner OneLogin.
One of the exciting things about being the CTO of OneLogin is that I’m constantly learning about fantastic cloud applications to boost IT performance and security. There are now cloud applications for IT teams of every size and focus.
For example, in the world of software development, there are now cloud-based apps to help build security into your app from day one. Take SD Elements, which is kind of like tax planning software for application security: you describe the technology stack, features and compliance requirements of your system and SD Elements presents you with a series of tailored, prescriptive requirements.
Like many software companies, we rely on the Agile Project Management methodology. Agile is great, but sometimes there are challenges stemming from ‘islands of activities’ and that’s where a cloud-based work and project management solution like Clarizen helps get everyone on the same page. Clarizen centralizes all of your release backlog data, task updates, bugs, documents and communications into a single system.
Once your app goes live, New Relic is the killer app for preventing downtime due to application errors. New Relic helps you make sense of the noise by allowing you to monitor the metrics that really matter. For example, you can quickly identify application errors, fix the underlying problem, and improve your mean time to resolution (MTTR).
For those IT departments juggling dozens of applications and resources, Innotas provides an integrated platform for both Project Portfolio Management (PPM) and Application Portfolio Management (APM). With so many projects competing for resources, Innotas gives you that holistic view so you can prioritize projects, optimize shared resources, govern core processes, and ensure alignment across your entire IT landscape.
And when things go wrong or end-users need help with a new service they often end up at the IT help desk. That’s where BMC Remedyforce can be a real life saver. Remedyforce is a comprehensive IT service management solution built on the force.com platform. It helps you connect with customers to provide fast, accurate services in a way that optimizes IT efficiency. Of course, with so many services moving to the cloud, many help desks find themselves burdened with password-related tickets, and that’s where OneLogin comes in.
OneLogin reduces password sprawl, and automates password resets and user management so you can free up IT resources for higher value activities. Of course, when it’s time to off board members of your team, it’s important to make sure your sensitive IT and project data remains within your company. OneLogin’s real-time Active Directory and LDAP integration acts as an effective “kill switch” for when IT team members move on.
We’re looking to hire great developers. We know you have your choice of employers, and I’d like to explain why being an SD Elements developer may be the right match for you. If you’re unfamiliar with us, SD Elements is a product of Security Compass. It’s a security requirements solution that helps development teams build security into their software from the start. You can learn more about us from our website: www.sdelements.com. You can see some of our open source code such as integration plugins here: https://github.com/sdelements.
We’re not a venture-backed company. We’re boot-strapped, which means all of our money comes from selling products and services to customers rather than outside investors.
We offer our employees the ability to have a meaningful impact on the world. We believe it’s a fantastic time to be in technology, but insecure software threatens to dampen innovation. Major advances in fields like healthcare and utilities can greatly improve our lives, but not if those advances leave us vulnerable. So many of the security incidents we hear about can be traced back to insecure software: vulnerable web applications, malware exploiting vulnerabilities in desktop applications, and software with flaws on network devices and operating systems. Despite over a decade of best practices and tools to improve the security of software, there are still too many incidents because of known, preventable defects like dynamic SQL and use of insecure string operations.
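As a concrete illustration of the first defect, here is a minimal Python sketch (with a made-up users table) contrasting dynamic SQL with a parameterized query:

```python
import sqlite3

# Hypothetical schema and data, for illustration only
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Defect: dynamic SQL built by string concatenation
vulnerable_rows = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input
).fetchall()  # the injected OR clause matches every row

# Fix: a parameterized query treats the input strictly as data
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()  # no user is literally named "alice' OR '1'='1"
```

The parameterized version costs nothing extra to write, which is exactly why defects like this are preventable rather than inevitable.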
We want to change that. While it’s no silver bullet, we believe security requirements will make a big difference and SD Elements is leading the way here. We’re not alone. Our clients range from large technology companies and global financial institutions, to medical technology companies, utilities, telecom, media, public sector and transportation companies. Last year we grew over 300% and we’re on track to continue that growth rate. It’s an exciting time to be here and we want you to be part of that excitement. Apart from the exciting growth and opportunity to really have an impact on improving the state of security, there are other reasons you may want to work here:
- Developers work with an incredibly talented group of people. I am fortunate and humbled to work with people who are much smarter than I am. There is something exhilarating about being part of a group of people who could collectively solve almost any problem. The chance to work with them is, in my opinion, the most rewarding part of the job.
- Customer satisfaction is paramount. Everyone is focused on goal #1, which is to super-please our customers. We know finances are important, but we really believe that ecstatic customers eventually lead to all other goals including revenue growth. Because we are boot-strapped, we can devote all our attention to customers and not investor requirements.
- We don’t believe in top-down management for software development. While there is some organizational structure in the company, we think the people closest to the ground make the best decisions. Decisions are either made in groups or by the person most qualified to make the decision. Although you may not always get your way I can promise your opinion will always be heard and respected.
- The role of management is to coach, listen, and be compassionate. We sit with team members individually to find out what the company as a whole can be doing better, and how we can reduce impediments to getting work done. In addition, we believe in transparency throughout our operations. We have an open-door policy at our meetings: You have the ability to get involved with any part of the business that you find interesting.
- Team members have flexibility in their schedules. While we ask that everyone comes into the office three days a week to help build a strong team dynamic, nobody monitors when people come into or leave the office. We are only concerned with consistently delivering value to customers.
- Developers have 10% time, which means spending one day every two weeks on a project that helps you learn. We actively encourage open source development during 10% time, and our developers have used the time to build awesome tools like Let’s Chat. In addition, we have a monthly hackathon where developers work together on tools that improve our work. These sessions give developers a chance to learn about new technologies that interest them and can enhance their careers.
- We believe in writing good software. Every commit has test coverage, follows coding guidelines, and is peer reviewed. We understand what technical debt is and take it seriously. As you might expect, we take software security more seriously than most development teams and regularly eat our own dog food by using SD Elements as part of our development process. Since everyone in management has a development background, we understand the value of good software engineering practices and we don’t believe in recklessly building unmaintainable software.
- We are focused on fixing and solving, not pointing fingers. We believe that blaming people leads to risk aversion, a CYA-attitude, and a culture that breeds politics. Team members feel safe in admitting to mistakes, because they know our focus is on learning from those mistakes rather than penalizing people for them.
Because we’re boot-strapped, we can’t yet offer some of the office perks you’ve come to expect from venture-backed start-ups, like free dry-cleaning, daily catered lunches, or unlimited yoga classes. If you’re judging a workplace primarily by its office perks, this isn’t the right job for you. If, on the other hand, our values speak to you, then drop us a line at firstname.lastname@example.org. Anyone can make claims about their corporate values, so I invite you to ask any of our current employees about their candid opinion as part of the interview process.
When we ask security contacts at our enterprise clients “What software development methodology does your company use?”, they usually pause for a moment and answer “Everything”. Individual development teams tend to adopt processes that work best for them. Heterogeneous development processes wreak havoc on plans for adopting enterprise-wide secure SDLC efforts. There are at least three reasons why development teams within the same company have different development styles, including:
- Business needs: Large companies are often composed of units in different lines of business. For example, a large media conglomerate could have Internet providers, movie studios, and a retail store division. The customers, employees and supply chain of each business unit also differ, often impacting the way software is developed or procured. Developers at the brokerage department of a bank might work at warp speed to get incremental improvements on trading times, whereas the retail group might be very careful about the pace of change.
- Growth through acquisition: Many corporate acquisitions include the acquired company’s software and development teams. Each company likely had a different corporate culture that impacted the way their respective teams worked. For example, a small start-up software shop may value developer autonomy and lack of process, while a larger software vendor may value risk management and accountability. The former may lean towards a self-organized team without formal project management, while the latter may have a central Project Management Office (PMO).
- Software type: Teams that ship software on embedded devices are often very careful about requirements analysis because the cost of shipping an update is sky-high. On the other hand, teams that build eCommerce web portals may deploy hundreds of changes every day and spend very little time on requirements planning.
Security practitioners should keep this in mind when designing a secure SDLC effort. Forcing a security process on development teams that doesn’t take into account the way they develop software is a recipe for disaster. A good goal to have for secure SDLC is to minimize the impact on the team’s existing software development practice, which may mean investing more time up front to give development teams options on how to bake security in a way that works for them.
Every year Verizon, in conjunction with many other organizations such as the United States Secret Service, releases a report that analyzes trends from reported security incidents and verified data breaches over the last year. The report covers many different kinds of data breaches, including everything from ATM skimming to malware to hacking.
The authors note the presence of a sample bias. In particular, several of the organizations that contributed data to the report are national cyber/computer crime agencies who may handle a disproportionate number of espionage-related cases. Perhaps more importantly, several application security vulnerabilities affect a single user at a time (e.g. reflected cross-site scripting) and may not appear at all in an analysis of data breaches.
Still, the report has some important data for software development teams, particularly when considering the likelihood of certain threats to your system.
- 76% of network intrusions exploited weak or stolen credentials. Basic controls around authentication, and more broadly identity management, are incredibly important.
- 92% of confirmed data breaches were perpetrated by outsiders (i.e. not employees / contractors / other stakeholders who work within an organization). This highlights the need to first secure yourself against external-facing threats.
- However, a majority of incidents – potential security events where a data breach was not confirmed – were led by insiders. This is an important point to keep in mind if your organization ignores high risk vulnerabilities in internal applications.
- 75% of breaches were driven by financial motives. When analyzing attacker motivations (like we do in threat modeling express), it makes sense to focus a disproportionate amount of your time on attack vectors that enable financial gain.
- 75% of breaches were opportunistic, while 25% were targeted. It makes sense to prioritize high impact domain-agnostic vulnerabilities over high impact domain-specific ones. However, ignoring targeted attacks altogether is unwise as they represented 1 in 4 of all breaches in the report.
- 22% of hacking incidents used web applications as a vector. While this number may seem low, the total number of web incidents continued to increase this year. The report also states that the finance and insurance industry saw a much higher proportion of attacks in web applications than all other sectors.
- 80% of vulnerabilities involved user interaction, where the user was deceived (e.g. visited a malicious website) but did not have intent to run an exploit. We often see people dismiss or downgrade the risk of threats that require user interaction, but in fact the vast majority of incidents in this report did require user interaction.
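On the first point, the cheapest basic control is to never store passwords in a recoverable form. A minimal Python sketch using the standard library’s PBKDF2 support; the iteration count and parameters here are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune upward for production hardware

def hash_password(password: str):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; store (salt, digest), never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
```

Salting and key stretching make stolen credential databases far less useful to the attackers the report describes.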
One of the report’s conclusions is very telling:
“The most common threat actions have realized some shifts over the years, but we have failed to see any cutting-edge methods introduced.”
Attackers don’t need to introduce cutting-edge methods because 78% of initial intrusions were rated as low difficulty. They’re succeeding with the basics. Despite the hype around Advanced Persistent Threats, organizations still struggle with getting the basics right. For software development teams, remember that relying exclusively on scanners to find security vulnerabilities is ill-advised. Know your security requirements, prioritize them appropriately, know how to verify them, and apply them consistently.
Methods for assessing software security risk fall into two broad types:
- Modeling: Understanding the risks the application may conceptually be vulnerable to. It includes threat modeling, threat risk analysis, architectural assessments, framework-level analysis, and software security requirements gathering.
- Vulnerability Assessments: Finding specific instances of those risks in the software, often as a mechanism to “prove” to an auditor or customer that the software is secure. This usually involves manual and/or automated static and dynamic testing. Craig Wright has a detailed discussion of assessment types in his paper here.
We sometimes hear organizations say they aren’t ready for a software security requirements program because they are still struggling with deploying vulnerability assessments. The thinking goes that an organization needs to first find out where its real vulnerabilities are before it can focus on modeling exercises that will prevent defects later on.
We don’t agree with this logic. Penetration tests are time-consuming and do not necessarily provide sufficient coverage of software weaknesses.
Assessments, and testing in general, are imperative to building secure applications. However, they’re more efficient when they follow a modeling exercise. By knowing potential security weaknesses in your software ahead of time, you have a chance to address the issues early on and use assessments as a validation tool. The de-facto way organizations perform assessments like penetration testing today is to learn about their security issues after building the application, which is inefficient.
One common criticism is the perceived overhead of modeling activities. Development teams are often worried about spending days coming up with generic “best practice” information to secure their applications, when they could instead spend time finding real vulnerabilities to fix. This is a fair criticism.
Security requirements are different. With a good security requirements system, you can determine the relevant threats to your application in 15 minutes. We’ve had several customers accurately predict the results of manual and automated penetration testing and code review with just 15 minutes of modeling (i.e. answering questions about how their application works). In other words, they were able to identify all of the vulnerabilities found by the assessments, and even some the assessments could not catch, with just 15 minutes of work. Over a broad sample of data, we’ve seen that security requirements can accurately predict 97% of application security vulnerabilities found in penetration testing.
If you can spare 15 minutes for each application release to know your security vulnerabilities ahead of time, it’s worth getting started with a security requirements program.
Suppose you hire a consultancy to perform a black-box assessment of your software. After executing the test, the firm produces a report outlining several vulnerabilities with your application. You remediate the vulnerabilities, submit the application for re-testing, and the next report comes back “clean” – i.e. without any vulnerabilities. At best, this simply tells you that your application can’t be broken into by the same testers in the same time frame. On the other hand, it doesn’t tell you:
- What are the potential threats to your application?
- Which threats is your application “not vulnerable” to?
- Which threats did the testers not assess your application for? Which threats were not possible to test from a runtime perspective?
- How did time and other constraints on the test affect the reliability of results? For example, if the testers had 5 more days, what other security tests would they have executed?
- What was the skill level of the testers and would you get the same set of results from a different tester or another consultancy?
In our experience, organizations aren’t able to answer most of these questions. The tester doesn’t understand application internals, and the organization requesting the test doesn’t know much about the security posture of its own software. We’re not the only ones who acknowledge this issue: Haroon Meer discussed the challenges of penetration testing at 44con. Most of these issues apply to every form of verification: automated dynamic testing, automated static testing, manual penetration testing, and manual code review. In fact, a recent paper describes similar challenges in source code review.
The opaque nature of verification means effective management of software security requirements is essential. With requirements listed, testers can specify both whether they have assessed a particular requirement and the techniques they used to do so. Critics argue that penetration testers shouldn’t follow a “checklist approach to auditing” because no checklist can cover the breadth of obscure and domain-specific vulnerabilities. Yet the flexibility to find unique issues does not obviate the need to verify well understood requirements. The situation is very similar for standard software Quality Assurance (QA): good QA testers both verify functional requirements AND think outside the box about creative ways to break functionality. Simply testing blindly and reporting defects without verifying functional requirements would dramatically reduce the utility of quality assurance. Why accept a lower standard from security testing?
Before you perform your next security verification activity, make sure you have software security requirements to measure against and that you define which requirements are in-scope for the verification. If you engage manual penetration testers or source code reviewers, it should be relatively simple for them to specify which requirements they tested for. If you use an automated tool or service, work with your vendor to find out what requirements their tool or service cannot reliably test for. Your tester/product/service is unlikely to guarantee an absence of false negatives (i.e. certify that your application is not vulnerable to SQL injection), but knowing what they did and did not test for can dramatically help increase the confidence that your system does not contain known, preventable security flaws.
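The bookkeeping involved can be as simple as a list of requirement records with a verification field. A minimal sketch; the IDs, titles, and fields are illustrative, not the actual SD Elements data model:

```python
# Hypothetical requirement records for one application release
requirements = [
    {"id": "R1", "title": "Parameterize SQL queries",         "in_scope": True,  "verified_by": "static analysis"},
    {"id": "R2", "title": "Encode output to prevent XSS",     "in_scope": True,  "verified_by": None},
    {"id": "R3", "title": "Throttle authentication attempts", "in_scope": False, "verified_by": None},
]

def unverified_in_scope(reqs):
    """Return IDs of in-scope requirements with no recorded verification technique."""
    return [r["id"] for r in reqs if r["in_scope"] and r["verified_by"] is None]

print(unverified_in_scope(requirements))  # ['R2']
```

Even a spreadsheet with these three columns answers the questions above far better than a “clean” report alone.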
Abridged from this article on InfoQ
The March 24th public disclosure of a MongoDB zero-day vulnerability (CVE-2013-1892) has been raising eyebrows and initiating discussion among IT security and developers alike. Here’s why we think it stands out:
- Sometimes developers think that NoSQL databases like MongoDB are more secure because they are not vulnerable to SQL Injection, which is one of the most dangerous web application vulnerabilities.
- MongoDB usage has been growing and many people consider it to be the leading NoSQL engine.
- The disclosed vulnerability is extremely high impact and very much exploitable. Shell exploit templates surfaced within days of the disclosure.
The main lesson for developers and architects is that out-of-sight, out-of-mind features are still a major source of vulnerabilities: “If it is not documented, it should not be accessible.”
Generic, powerful interfaces such as the exploited nativeHelper are like wild beasts. No matter how confident you get in taming them, they can always come back and bite you. The only way to protect yourself is to lock them down and prevent access except where they are needed.
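The disclosed vulnerability itself was a flaw in the nativeHelper server-side JavaScript bridge, but the same lesson applies to a more everyday MongoDB pitfall: passing decoded JSON straight into a query filter. A minimal Python sketch, with no database required and illustrative field names:

```python
import json

def build_filter_unsafe(raw_username):
    # Pitfall: decoded JSON passed straight into a query filter. If the
    # client sends {"$gt": ""} instead of a string, the resulting filter
    # matches every document in the collection.
    return {"username": raw_username}

def build_filter(raw_username):
    # Locking the interface down: accept only the input shape you documented.
    if not isinstance(raw_username, str):
        raise TypeError("username must be a plain string")
    return {"username": raw_username}

# A request body an attacker might send instead of a plain string:
payload = json.loads('{"username": {"$gt": ""}}')
print(build_filter_unsafe(payload["username"]))  # {'username': {'$gt': ''}}
```

For the nativeHelper issue specifically, the most direct lockdown is disabling server-side JavaScript entirely (e.g. starting mongod with `--noscripting`) unless you genuinely need it.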
It is worth pointing out that the Rails YAML vulnerability we discussed back in February had very similar root causes. It was due to a generic and powerful interface left accessible by default.
On a side note, check out this interesting post I dug up from October 2011! It seems the possibility of code execution through this interface was raised a long time ago. Perhaps because no proof of concept was attached, it didn’t get the traction it needed and nobody picked it up. One might wonder how many issues are out there without a public disclosure, and how much damage could be done if one were picked up by an attacker with malicious intent.
We’re really excited about our working integration with Veracode. For the first time, a development team can automatically create a set of tailored security requirements and automatically test the requirements. That’s a huge boost for application security. Here’s how it works:
Start by modeling your application in SD Elements:
Generate a set of tailored tasks (i.e. requirements) in SD Elements:
Use requirements during development:
Run the application through Veracode and import the scanning results:
Review the verification status of requirements in SD Elements:
You now know:
- Which requirements have failed verification: A vulnerability was discovered
- Which requirements have passed verification: A vulnerability was not discovered, and Veracode can generally find this kind of vulnerability in supported languages / frameworks
- Which requirements have partially passed verification: Veracode can find some but not all instances of a vulnerability
- Which requirements were not covered by Veracode: These need to be manually tested
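The classification above can be sketched as a simple decision function. The coverage model and field values here are assumptions for illustration, not the actual SD Elements or Veracode data model:

```python
def verification_status(found_vuln: bool, tool_coverage: str) -> str:
    """Classify one requirement after importing scan results.

    tool_coverage: 'full', 'partial', or 'none' -- how reliably the tool
    can detect this class of vulnerability (an assumed three-level model).
    """
    if tool_coverage == "none":
        return "not covered"   # needs manual testing
    if found_vuln:
        return "failed"        # a vulnerability was discovered
    return "passed" if tool_coverage == "full" else "partially passed"
```

Mapping every requirement through a rule like this is what turns a raw scan report into the release-level picture described above.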
Use SD Elements test cases to manually test areas not covered by Veracode:
For the first time, you can have a comprehensive set of potential risks and the countermeasures that protect against them, and understand which specific risks need to be manually verified after using an automated tool. The integration substantially improves development teams’ ability to understand application risk and build secure applications.
Development teams rarely define specific software security requirements. This is not surprising: many software teams struggle to define non-functional requirements (NFRs). This problem is particularly severe for agile teams because most agile process guidance does not acknowledge the complexity of NFRs in real production environments.
There are two types of NFRs:
- Non-functional requirement user stories: Blocks of testable functionality written in user story format. The actors in these user stories may be internal IT staff. For example: “As a security analyst I want the system to throttle unsuccessful authentication attempts so that the application is not vulnerable to brute force attacks”.
- Non-functional requirement constraints: These are cross-cutting concerns that may have an effect on several other user stories. They are a sort of “tax” on all relevant development efforts. For example, requiring that all developers validate data from HTTP form fields in a web application is a constraint.
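The throttling user story above can be sketched in a few lines. This is an in-memory illustration only; a real system would persist counters server-side and likely add exponential back-off:

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5       # illustrative policy values
WINDOW_SECONDS = 300

_failures = defaultdict(list)  # username -> timestamps of failed logins

def record_failure(username, now=None):
    _failures[username].append(now if now is not None else time.time())

def is_locked_out(username, now=None):
    """True if the account exceeded MAX_ATTEMPTS failures within the window."""
    now = now if now is not None else time.time()
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent  # drop expired entries
    return len(recent) >= MAX_ATTEMPTS

for _ in range(5):
    record_failure("alice", now=1000.0)
```

Note that the story form is testable end to end, while the constraint form (validate every form field) has to be checked across many other stories.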
Last year I wrote an article on InfoQ about a generalized method of managing security in agile projects. The process also applies to other non-functional domains: accessibility, scalability, regulatory compliance, etc., but not domain-specific requirements. It works by building filterable libraries of reusable non-functional requirements: one library for user stories and another library for constraints. The libraries themselves can be as simple as Excel spreadsheets with filters, or as complex as SharePoint sites or commercial Secure Application Lifecycle Management systems. Here’s a graphical representation of the process in three steps:
Step 1: Build non-functional requirements libraries
Step 2: Use non-functional requirements user story library in backlog
Step 3: Use non-functional requirements constraint library in iterations
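The three steps above can be sketched with a small filterable library. The entries and tags are illustrative, not drawn from any real library:

```python
# Step 1: a reusable NFR user-story library, tagged by domain (illustrative)
library = [
    {"story": "Throttle unsuccessful authentication attempts", "tags": {"security", "web"}},
    {"story": "Provide keyboard navigation for all controls",  "tags": {"accessibility", "web"}},
    {"story": "Sustain 1,000 concurrent sessions",             "tags": {"scalability"}},
]

def select_stories(library, project_tags):
    """Step 2: pull the stories relevant to this project into its backlog."""
    return [e["story"] for e in library if e["tags"] & project_tags]

backlog = select_stories(library, {"security", "accessibility"})
print(backlog)
```

A constraint library works the same way, except the selected entries attach to iteration checklists (step 3) rather than to backlog items.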
Every IT worker I’ve met in the past month has heard me rave about The Phoenix Project. The book uses an all-too-realistic fictional scenario to discuss the behaviors of a high performing IT organization, with a particular emphasis on the convergence of development and operations (i.e. DevOps). One lesson from the book that really resonated with me was breaking down IT work into four types:
- Business projects
- IT projects
- Changes
- Unplanned work
For as long as I can remember, most software security work has fallen under the scope of the last bullet: unplanned work. The book illustrates how unplanned work is one of the most destructive forces in IT. A security defect detected through verification activities such as static analysis or penetration testing is unplanned work by definition: developers need to stop working on some planned piece of functionality in order to remediate the issue. Simply building and then fixing recurring security defects does not hold true to the spirit of continuous process improvement illustrated by high performing IT organizations. It means higher costs for development and often creates tension between security and development teams.
To me, unplanned work is the most important reason to have a repeatable, scalable process for security and other non-functional requirements. Specifying the right requirements up front – either in a waterfall requirements phase or as part of agile iteration planning – allows project teams to explicitly agree on which security risks they will prevent and which ones they will accept into production. Explicitly identifying security requirements allows developers to home in on potential false negatives from security scanning solutions. They can then use scanning tool customization or automated front-end / unit testing to detect potential vulnerabilities not covered by their scanning solution. In addition, they can build controls into development frameworks to eliminate the risk entirely, such as context-aware output encoding for cross-site scripting.
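On that last point, a minimal Python sketch of context-aware output encoding using only the standard library; the strings and contexts are illustrative:

```python
import html
from urllib.parse import quote

# The correct encoding depends on where untrusted data is emitted.
name = "<script>alert(1)</script>"  # attacker-controlled input

# HTML body context: entity-encode markup characters
html_body = "<p>Hello, %s</p>" % html.escape(name)

# URL parameter context: percent-encode everything unsafe
url_param = "/profile?user=%s" % quote(name, safe="")
```

A framework that applies the right encoder per context by default turns a recurring class of unplanned XSS fixes into a non-event.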
IT teams that practice high performance DevOps activities lean heavily on automation technologies like static analysis, configuration management, and continuous integration. Adding non-functional requirements to the mix will help round out coverage gaps from other automation techniques, and turn unplanned work into planned work.