
Agile AppSec Review

An application security review can be performed at any point in the application lifecycle, provided the design is feature complete and the system or a significant feature is nearing completion. All new applications and major changes to existing software should undergo a review. The purpose of the review is to ensure that the proper security controls are present, that they work as intended, and that they are invoked in all the right places.

Any appsec review should be a timeboxed exercise. The goal of timeboxing is to define and limit the amount of time dedicated to an activity. I never run reviews longer than one hour, regardless of the size of the application; research shows that engagement starts to drop off quite rapidly after about the first 30 minutes. It is vital to have the right people in the room: a lead developer, or anyone who understands the architecture and can efficiently navigate the code, is essential. If one hour is not enough, the review can be repeated, but there is one rule: the next review must be re-scoped to focus on the critical parts of the application only. Use the first review to identify those critical parts.

The agile appsec review aims to apply the 80/20 rule and focus on the core activities that deliver the greatest risk reduction. Here are my top 10 things to check (no, it doesn't contain every piece of advice for reviewers, but it is an excellent start, especially for web apps):

1. Review the high-level architecture and understand the data flows

At a minimum, a high-level architectural diagram is required that clearly defines the boundaries of the system, or the part of the system under review, and shows the entities it interacts with. The components of the application must be identified and have a reason for being in the application. Data flows should be indicated. This diagram must be documented and updated on every significant change. For high-risk applications, a threat model to determine key risks is required.

2. Check whether input validation is being applied whenever input is processed

Look for all input parameters (POST parameters, query strings, cookies, HTTP headers) and ensure they all go through validation routines. Whitelisting input is the preferred approach: only accept data that meets specific criteria. Data must be canonicalized before input validation is performed; a lack of canonicalization can allow encoded attack strings to bypass the validation functions you have implemented. When accepting file uploads from the user, make sure to validate the size of the file, the file type and the file contents, and ensure that it is not possible to override the destination path of the file. SQL queries should be crafted with user content passed in as bind variables. Input validation must also check minimum and maximum lengths for all data types received and processed.
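
As a rough illustration (Python, standard library only; the field name and pattern are mine, not from any particular framework), whitelist validation with canonicalization and explicit length limits might look like this:

import re
import unicodedata

# Allow-list pattern with explicit minimum and maximum lengths.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def validate_username(raw: str) -> str:
    # Canonicalise first so encoded variants cannot bypass the check.
    value = unicodedata.normalize("NFKC", raw)
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value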

3. Check that appropriate encoding has been applied to all data being output by the application

Identify all code paths that return data and ensure this data goes through common encoding functions (gatekeepers) before being returned. This is especially important when output is embedded in an HTML page. The type of encoding must be specific to the context of the page where the user-controlled data is inserted. For example, HTML entity encoding is appropriate for data placed into the HTML body, e.g. <script> is returned as &lt;script&gt;. However, user data placed into a script requires JavaScript-specific output encoding, and data placed into a URL requires URL encoding. A consistent encoding approach for all output produced, regardless of whether it is user-controlled or not, reduces the overall risk of issues like Cross-Site Scripting. You should also check that the application sets the response encoding using HTTP headers or meta tags within the HTML. This ensures that the browser doesn't need to determine the encoding on its own.
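
For illustration only, here is a minimal Python sketch of context-specific encoders built from the standard library. In a real application you would normally rely on the encoders provided by your templating framework:

import html
import json
import urllib.parse

def encode_for_html(value: str) -> str:
    # HTML entity encoding for data placed into the HTML body.
    return html.escape(value, quote=True)

def encode_for_js(value: str) -> str:
    # JSON encoding produces a safely quoted JavaScript string literal.
    return json.dumps(value)

def encode_for_url(value: str) -> str:
    # Percent-encoding for data placed into a URL component.
    return urllib.parse.quote(value, safe="")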

4. Verify that authentication credentials, session tokens and personal and sensitive data are transmitted over secure connections

Find all routines that transmit data and ensure that secure transport encryption is used. The best practice to protect network traffic against eavesdropping is to deploy TLS everywhere, regardless of the sensitivity of the data transmitted. Review the effective TLS configuration, especially if the app does not specify one explicitly; TLS settings may be inherited from the operating system and require hardening. Legacy protocols like SSLv3 or TLSv1.0 have known weaknesses and are not considered secure. Additionally, some ciphers are cryptographically weak and should be disabled. Finally, the encryption keys, certificates and trust stores must be secured appropriately at rest, as explained in the next section.
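
A minimal sketch of what an explicit, hardened client-side TLS configuration can look like in Python (using the standard ssl module), rather than inheriting whatever the platform defaults to:

import ssl

# Require TLS 1.2 or later and full certificate verification.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED
context.load_default_certs()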

5. Enumerate all privileged credentials and secrets used by the application, check how these are protected at rest

Compile a list of credentials used by the application; this includes passwords, certificates, API keys, tokens and encryption keys. Check that secrets are not stored in source code repositories or left sitting unprotected on disk. While it can be convenient to test application code with hardcoded credentials during development, this significantly increases risk and must be avoided. Secrets should be protected at rest, ideally stored in a central vault where strong access controls and complete traceability and auditability are enforced. User passwords must be stored using secure hashing techniques with robust algorithms like Argon2, PBKDF2, scrypt or SHA-512. Simply hashing the password a single time does not sufficiently protect it; use iterative hashing, combined with a random salt for each user, to make the hash strong.
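
As a simple sketch (the variable name PAYMENT_API_KEY is made up for illustration), secrets can be injected at runtime by a vault or the deployment pipeline instead of being hardcoded in the source tree:

import os

# Read the API key from the environment rather than from the source code.
API_KEY = os.environ.get("PAYMENT_API_KEY")
if not API_KEY:
    raise RuntimeError("PAYMENT_API_KEY is not configured")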

6. Review that authentication is implemented for all pages, API calls, resources, except those specifically intended to be public

All authentication controls should be centralised and self-contained, including libraries that call external authentication services. Check that session tokens are regenerated when the user or service authenticates to the application and when the user's privilege level changes. All types of interaction, human-to-machine and machine-to-machine, internal and external, service to service, frontend to backend, must be authenticated. A modern application must not rely on trust boundaries defined by firewalls and network segmentation: never trust, always verify.
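
One way to express "authenticated unless explicitly public" is a deny-by-default dispatcher. A minimal Python sketch (the paths and helpers are illustrative, not tied to any framework):

PUBLIC_PATHS = {"/", "/health", "/login"}  # endpoints explicitly intended to be public

def dispatch(path: str, session: dict):
    # Deny by default: anything not explicitly public requires an authenticated session.
    if path not in PUBLIC_PATHS and not session.get("authenticated"):
        return 401, "authentication required"
    return route(path, session)

def route(path: str, session: dict):
    ...  # hand off to the actual request handlers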

7. Review how authorisation has been implemented and ensure that its logic gets executed for each request at the first available opportunity and based on clearly defined roles

Check that all authorisation controls are enforced on a trusted system (server-side). Make use of a Mandatory Access Control (MAC) approach. All access decisions must be based on the principle of least privilege: if access is not explicitly allowed, it should be denied. Additionally, after an account is created, rights must be explicitly added to that account to grant access to resources. Establish and use standard, tested authentication and authorisation services whenever possible. Always apply the principle of complete mediation, forcing all requests through a common security "gatekeeper". This ensures that access control checks are triggered whether or not the user is authenticated. To prevent Cross-Site Request Forgery attacks, you must embed a random value that is not known to third parties into the HTML form. This CSRF protection token must be unique to each request. This prevents a forged CSRF request from being submitted because the attacker does not know the value of the token.
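
A bare-bones sketch of CSRF token handling in Python (session handling is assumed to exist; regenerate the token per request if you need per-request uniqueness rather than per-session):

import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate a random token, keep it server-side and embed the same value in the form.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    expected = session.get("csrf_token", "")
    # Constant-time comparison to avoid leaking the token via timing.
    return bool(expected) and hmac.compare_digest(expected, submitted)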

8. Review the application’s logging approach to ensure relevant information is logged, allowing for a detailed investigation of the timeline when an event happens

The primary objective of error handling and logging is to provide useful information to users, administrators and incident response teams. Check that all authentication activities, whether successful or not, are logged. Any activity where the user's privilege level changes should be logged, as should any administrative activity on the application or any of its components, and any access to sensitive data. While logging errors and auditing access is important, sensitive data should never be logged in unencrypted form. Logs should be stored and maintained appropriately to avoid information loss or tampering by an intruder. If logs contain private or sensitive data, the definition of which varies from country to country, they become some of the most sensitive information held by the application and thus very attractive to attackers in their own right.
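
For illustration, a small Python audit-logging helper (logger name and fields are my own choices) that records the outcome and context of every authentication attempt without ever logging the credentials themselves:

import logging

audit = logging.getLogger("audit")
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(name)s %(levelname)s %(message)s")

def log_auth_event(user_id: str, source_ip: str, success: bool) -> None:
    # Who, from where, and whether it worked; never the password or token itself.
    audit.info("authentication user=%s ip=%s success=%s", user_id, source_ip, success)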

9. Review how error and exceptions are handled by the application to ensure that sensitive debugging information is not exposed

Review the exception handling mechanisms employed by the application. Does the application prevent the exposure of sensitive information in error responses, including system details, session identifiers and software versions? Error messages should not reveal details about the internal state of the application; for example, file system paths and stack traces should not be exposed to the user through error messages. Given the languages and frameworks in use for web application development, never allow an unhandled exception to occur. The development framework or platform may generate default error messages; these should be explicitly suppressed or replaced with customised error messages, as framework-generated messages may reveal sensitive information to the user.
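
A minimal, framework-agnostic Python sketch of a catch-all handler: the full stack trace is logged server-side only, and the client receives a generic message with a correlation id (process() is a stand-in for the real application logic):

import logging
import uuid

log = logging.getLogger("errors")

def handle_request(request):
    try:
        return process(request)
    except Exception:
        # Keep internal details in the server log; return only a reference to the user.
        error_id = uuid.uuid4().hex
        log.exception("unhandled exception, error_id=%s", error_id)
        return 500, f"An unexpected error occurred (reference {error_id})"

def process(request):
    ...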

10. Check if the application takes advantage of the security HTTP response headers

This is important for web applications and APIs consumed by browsers or HTTP clients. Security-related HTTP headers give the browser more information about how you want it to behave and can protect clients from common client-side attacks like clickjacking, TLS stripping and XSS. Security headers can be used to deliver security policies, set configuration options and disable browser features you don't want enabled for your site. Look for the X-Content-Type-Options: nosniff header, as explained earlier, to ensure that browsers do not try to guess the data type. Use the X-Frame-Options header to prevent content from being loaded by a foreign site in a frame. The Content-Security-Policy (CSP) and X-XSS-Protection headers help defend against many common reflected Cross-Site Scripting (XSS) attacks. HTTP Strict-Transport-Security (HSTS) enforces secure (HTTP over SSL/TLS) connections to the server. This reduces the impact of bugs in web applications leaking session data through cookies (section 4) and defends against man-in-the-middle attacks.
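
A small Python sketch of a baseline header set applied to every response; the CSP value shown is a deliberately restrictive starting point and will need tuning for your application:

SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}

def apply_security_headers(response_headers: dict) -> dict:
    # Merge the baseline security headers into the outgoing response.
    response_headers.update(SECURITY_HEADERS)
    return response_headers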

The output of the review is a written record of what transpired: a list of the issues identified and their corresponding risk levels, along with a list of actions that named individuals have agreed to perform to mitigate them. This is a crucial part of an effective review that often gets overlooked.

The Language of Security Risk Management

Risk management is fundamental to securing information, systems, and critical business processes. You can't effectively and consistently manage what you can't measure, and you can't measure what you haven't defined. Basic risk management becomes much more effective with clear and concise definitions. Here are some of the most important ones:

Control is a measure that modifies risk. Controls can be split into strategic, tactical and operational.

Strategic controls are usually high level, such as risk avoidance, transfer, reduction and acceptance.

Tactical controls determine a course of action such as preventative, corrective and directive.

Operational controls determine the actual treatment such as technical, logical, procedural or people and physical or environmental.

Likelihood is the chance of something happening. It should be used instead of possibility: many things are possible, and possibility gives no indication of whether a particular security event is actually likely to take place.

Probability is the measure of the chance of occurrence as a number between zero and one.

Resilience is the adaptive capacity of an organisation in a complex and changing environment.

Qualitative risk assessments are subjective and generally expressed in terms such as 'high', 'medium' and 'low'. This method should be avoided as it renders risk assessments unreliable.

Quantitative risk assessments are generally expressed in numerical terms such as financial values or percentages of revenue. They provide a more accurate measurement of risk but are usually more time-consuming to undertake.

Residual risk is the risk remaining after risk treatment, once all other risk treatment options have been explored. It is normal to accept or tolerate this risk, since further treatment might be prohibitively expensive or have no effect.

Risk is the effect of uncertainty on objectives. Risk is the product of consequence or impact and likelihood or probability.

Risk acceptance or risk tolerance is the informed decision to take a particular risk.

Risk analysis is the process to comprehend the nature of risk and to determine the level of risk.

Risk appetite is the amount and type of risk that an organisation is willing to pursue or retain.

Risk avoidance is an informed decision not to be involved in, or to withdraw from an activity in order not to be exposed to a particular risk.

Risk management is the coordinated activity to direct and control an organisation with regard to risk.

Risk modification is the process of treating risk by the use of controls to reduce either the consequence/impact or the likelihood/probability.

Risk register is a record of information about identified risks.

Risk transference is a form of risk treatment involving the agreed distribution of risk with other parties. One of the risk treatment options is to transfer the risk to, or share it with, a third party. This does not, however, change the ownership of the risk, which remains with the organisation itself.

Risk treatment is the process to modify risk. Treatment may involve risk transference or sharing, risk avoidance or termination.

Stakeholder is a person or organisation that can be affected by a decision or activity.

Threat is a potential cause of an unwanted incident which may result in harm to a system or organisation. Threats are usually man-made (whether accidental or deliberate) and are distinct from hazards or natural events.

Threat vector is a method or mechanism by which an attack is launched against an information asset.

Threat actor is a person or organisation that wishes to benefit from attacking an information asset; threat actors mount the attacks. Threat sources often pressure threat actors to attack information assets on their behalf.

Vulnerability is the intrinsic property of something resulting in susceptibility to a risk source that can lead to an event with a consequence. Vulnerabilities or weaknesses leave an asset open to attack from a threat or hazard.

Coaching to Develop Security Talent

Adopting a coaching mindset in most circumstances adds significant benefits for any security leader and, even more importantly, helps build successful security teams. These benefits include improved engagement, motivation and team morale, along with greater accountability and better communication. Security projects and tasks can be performed faster, freeing up the valuable time leaders require to operate at the right strategic level.

Coaching is most effective when an employee has the skills and ability to complete the task at hand but, for some reason, is struggling with the confidence, focus, motivation, drive or bandwidth to be at their best.

I use the below process in my coaching sessions with great success. I steer the conversation towards the distinctive phases shown below and look for the meaning behind responses provided. In reality, a coach is there to guide you toward your own solutions, and hold you accountable for taking action.

The most important way to achieve this is by asking your coachee the right questions. These great questions may force someone to look at their situation from another perspective, thereby encouraging the breakthrough they need to succeed.

Here is a list of effective questions I use at each stage of the coaching process:

Now

  • What will be useful for us to talk about today?
  • How long have you been thinking of this?
  • What is most important to you about this?
  • On a scale of 1 - 10 how important is this to you?
  • Do you notice any patterns?

Future

  • If you had this the way you wanted it, what would that look like?
  • What do you really want here?
  • What is your definition of success?
  • How will you know you have what you want?
  • Can you start and maintain this?
  • Are you in control of the outcome?

Blocks

  • What stops you from achieving this?
  • How can you do it anyway?
  • What do you believe is currently stopping you?
  • Is it possible to do it?
  • Is it right for you to do it?
  • Where are you either too flexible or too uncompromising about this issue?
  • Is there anything else stopping you?

Resources

  • What will help you achieve this?
  • What resources do you need?
  • What resources do you already have?
  • Are there any similar situations you have resolved in the past?
  • At your best, what would you do right now?

Action

  • What will you do about this?
  • What practical actions can you take right now?
  • What will your first step be?
  • When will you start?
  • How would you like to let me know you have achieved these actions?

Password Storage

Passwords in cleartext

Storing cleartext passwords is a bad idea. Passwords should be irreversible when in storage. When passwords are stored in their original cleartext, anyone who has access to the underlying storage system also gets access to all the accounts. Cryptographic hashes should be used to protect such sensitive authentication data at rest.

Hashing

The process of hashing takes the original password and transforms it, using a one-way function, into data of a fixed size. A few different types of hashing are out there; the most common are MD5, SHA-1 and SHA-256. In recent years a number of cryptographic weaknesses have been identified, making some of these algorithms unusable as message digests. However, not all of those weaknesses apply in the context of password hashing; the main problem here is that these algorithms are too fast. You can address this by implementing iterative hashing, where you hash the password multiple times. You should pick the number of iterations based on time; I suggest a minimum of 200 ms for the function to complete.
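
As a sketch of picking the iteration count based on time, the following Python snippet keeps doubling the PBKDF2 work factor until a single hash takes at least about 200 ms on the hardware the application runs on (the starting value and target are illustrative):

import hashlib
import os
import time

def calibrate_iterations(target_seconds: float = 0.2) -> int:
    iterations = 100_000
    salt = os.urandom(16)
    while True:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark-password", salt, iterations)
        if time.perf_counter() - start >= target_seconds:
            return iterations
        iterations *= 2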

Rather than building bespoke iterative hashing, it's a much better idea to use a hashing method that is considered a de facto standard for password storage. There are many robust algorithms out there that come with a configurable work factor. To name the most popular:

  • Bcrypt
  • PBKDF2
  • Scrypt
  • Argon2

Argon2 is the latest and was selected as the winner of the Password Hashing Competition in July 2015. This should be the default choice for any new application.
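
As an example of what using Argon2 looks like in practice, here is a minimal Python sketch assuming the third-party argon2-cffi package (its defaults are sensible; raise the work factor as your hardware allows):

from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher()

def store_password(password: str) -> str:
    # Returns an encoded string containing algorithm, parameters, salt and hash.
    return ph.hash(password)

def check_password(stored: str, password: str) -> bool:
    try:
        return ph.verify(stored, password)
    except VerifyMismatchError:
        return False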

Storage format

The evolution of parallel computing enables new and faster attacks against password hashing, so you should carefully choose the format used to store hashes. Cryptographic algorithms change over time, and even if an algorithm is still secure, you should increase the work factor every year to keep up with Moore's law.

Here is an example of a format used by GNU C library:

$id$rounds=N$salt$hash

where $id is one of:

id  | Method
─────────────────────────────────────────────────────────
1   | MD5
2   | Blowfish
5   | SHA-256
6   | SHA-512

Example:

$6$rounds=6000$gFakm/qCJ77dPS.I$E5FDu2k7zTeehOC2uZ1AsUGqjO1G4Fdn8Lv0sK8iAw86gER7hPRxjDFayVBTW6inT4mlFpfaE/W7fz9jVXkqR/

rounds=N is the number of hashing rounds actually used, $salt stands for up to 16 characters of random salt, and $hash is the actual hashed password produced by the $id algorithm. The length of the hash depends on the algorithm used.

This is just an example, and you can come up with your own scheme. However, extending the existing one with new algorithms like Argon2 or PBKDF2 and assigning them a new $id will make things clearer. It will also be much easier to understand for anyone familiar with this storage format.
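
For illustration, a small Python helper that splits such a string back into its fields (it assumes the exact layout shown above):

def parse_crypt_string(value: str) -> dict:
    # "$6$rounds=6000$gFakm...$E5FDu..." -> individual fields.
    _, alg_id, *rest = value.split("$")
    rounds = None
    if rest and rest[0].startswith("rounds="):
        rounds = int(rest.pop(0).split("=", 1)[1])
    salt, digest = rest[0], rest[1]
    return {"id": alg_id, "rounds": rounds, "salt": salt, "hash": digest}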

Hashes are often stored in a database. It makes sense to store other metadata about the account too:

  • Last password change time and date
  • Last login time and date
  • Active / inactive / locked flag

Future proofing

As time passes you will end up in a situation where the existing password storage solution becomes cryptographically outdated and must be updated, either by changing the work factor or the algorithm used. You should build in the ability to upgrade in place, without adversely affecting existing user accounts, right from the start. The best approach is to upgrade upon successful authentication. When a user attempts to log in, the application can hash the password using both the old (weak) and the new (secure) algorithm. If the old hash matches the existing database record, then the new, stronger hash is stored in the database, replacing the old, weak one. With the additional metadata, you can measure the uptake and decide what to do with users who haven't logged in since the change was made. An email asking users to log back in is a good idea, and it makes sense to lock an account if it hasn't been used for some time, e.g. one year.
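
A self-contained Python sketch of the upgrade-on-login idea, using unsalted SHA-1 as a stand-in for the legacy scheme and PBKDF2 as the stronger one (your actual algorithms and record layout will differ):

import hashlib
import hmac
import os

def modern_hash(password: str) -> str:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return f"pbkdf2_sha256${salt.hex()}${digest.hex()}"

def upgrade_on_login(record: dict, password: str) -> bool:
    # record = {"scheme": "sha1", "hash": "..."} for legacy accounts.
    if record["scheme"] == "sha1":
        if hashlib.sha1(password.encode()).hexdigest() != record["hash"]:
            return False
        # The cleartext password is only available during login, so this is
        # the moment to replace the weak hash with the stronger one.
        record["scheme"], record["hash"] = "pbkdf2_sha256", modern_hash(password)
        return True
    salt_hex, digest_hex = record["hash"].split("$")[1:3]
    check = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                bytes.fromhex(salt_hex), 600_000)
    return hmac.compare_digest(check.hex(), digest_hex)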

Password reuse

Two in three people reuse the same password for multiple accounts. If a password is compromised elsewhere, it can be correlated by username or email address to other services using the same password, thus propagating the threat. It makes sense for any modern application to check user passwords against existing data breaches. Pwned Passwords is one such database, available for both offline and online use. The public Have I Been Pwned API uses a k-anonymity model, where the password is hashed client-side with the SHA-1 algorithm and only the first 5 characters of the hash are shared. Alternatively, you can download the entire Pwned Passwords list, load it into a database and create a local service. Unlike the public API, the local service will require periodic updates to ensure it contains the latest breached hashes.
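
A minimal Python sketch of the k-anonymity lookup against the public range API (standard library only; in production you would add error handling, a timeout and a descriptive User-Agent):

import hashlib
import urllib.request

def password_is_pwned(password: str) -> bool:
    # Only the first 5 hex characters of the SHA-1 hash ever leave the client.
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # The response lists matching suffixes with breach counts, one per line.
    return any(line.split(":")[0] == suffix for line in body.splitlines())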

Go passwordless

Adopting a risk avoidance strategy for password storage may be an option for modern applications. This is only possible if you eliminate the risk by withdrawing from, or not becoming involved in, the most high-risk activity of user authentication. One such option is to implement social login. From the user's perspective, it provides a frictionless way to log in; it also eliminates the need for the application to store and process passwords. There are pros and cons to using social login. I'm not going to discuss them in depth here, but to mention a few: the loss of control to a third party, privacy concerns and long-lived access tokens may make it unsuitable for some applications.

Another approach is to provide a one-time password (OTP) using an out-of-band channel (e.g. push notification, email, SMS) upon login. This method is only as good as the security of the out-of-band channel. Also, the OTP can sometimes be delayed by hours depending on the delivery mechanism used, which may not be acceptable in some scenarios.
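
For illustration, generating and verifying such a code server-side is straightforward with the Python standard library (the six-digit format and five-minute lifetime are arbitrary choices):

import secrets
import time

OTP_TTL_SECONDS = 300  # the code expires after five minutes

def issue_otp():
    # Six-digit numeric code drawn from a CSPRNG, plus its expiry time.
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time() + OTP_TTL_SECONDS

def verify_otp(expected: str, expires_at: float, submitted: str) -> bool:
    return time.time() <= expires_at and secrets.compare_digest(expected, submitted)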

Finally, you can trade password storage for cryptographic material storage. WebAuthn (Web Authentication) is an emerging standard for authenticating users to web-based applications and services using public-key cryptography. A public key and a randomly generated credential ID must still be stored; however, even if these are exposed the risk is minimal, as public keys are by design meant to be openly shared. Additionally, due to a much larger keyspace compared to average password complexity, public keys (2048 bits and larger) are considered infeasible to brute force at this time. All major browsers are adopting WebAuthn on both mobile and desktop. Once it becomes part of the core iOS and Android platforms, devices like phones will be able to provide biometric verification, and WebAuthn will slowly replace traditional passwords. Watch this space closely.

Detecting SSL and Early TLS

Secure Sockets Layer (SSL) has been unquestionably the most widely-used encryption protocol for over two decades. SSL v3.0 was replaced in 1999 by TLS v1.0 which has since been replaced by TLS v1.1 and v1.2.

In April 2015, SSL and early TLS were removed as examples of strong cryptography in PCI DSS v3.1. For this reason, PCI DSS v3.2 established a deadline for migrating off SSL and early TLS, set at June 30, 2018.

Unfortunately, TLS v1.0 remains in widespread use today despite multiple critical security vulnerabilities exposed in the protocol in 2014.

It's not always a straightforward task to establish where in your organisation TLSv1.0 may be in use. It's common to look only at the external footprint, overlooking internal and outbound communication.

No matter how complex your environment is, you can always divide it into smaller segments and look at a few major places:

Inbound

Most vulnerability scanners have signatures to detect legacy SSL and early TLS versions. You can scan all of your external hosts to enumerate services that need fixing. In Qualys, for example, you can run a simple search across all your managed assets:

vulnerabilities.vulnerability.qid: 38628

Alternatively, you can use nmap or other tools like testssl.sh.
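
If you prefer scripting the check yourself, here is a minimal Python sketch that forces a TLSv1.0-only handshake against a host. Note that a modern local OpenSSL build may itself refuse to offer TLSv1.0, which would cause false negatives, so treat it as an assumption-laden probe rather than a definitive test:

import socket
import ssl

def accepts_tls10(host: str, port: int = 443) -> bool:
    # Certificate checks are disabled because we only probe the protocol version.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ctx.maximum_version = ssl.TLSVersion.TLSv1
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

accepts_tls10("www.example.com") returns True if the legacy handshake completes.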

Web and Mobile Clients

You can inspect your web analytics tool to get an understanding of the clients that still rely on TLSv1.0. Create a report showing stats for the following clients:

  • Android 4.3 and earlier versions
  • Firefox version 5.0 and earlier versions
  • Internet Explorer 8-10 on Windows 7 and earlier versions
  • Internet Explorer 10 on Win Phone 8.0
  • Safari 6.0.4/OS X10.8.4 and earlier versions

From my experience, the numbers will be low, often below 1%. Most users have modern browsers and phones that support TLSv1.2. There will be some older Android devices and general noise from bot traffic often using spoofed headers.

API Clients

If your company provides external APIs that customers use to integrate with your services, you may need to do more work before deprecating early TLS. There are usually no analytics available for such integrations, and the client-side systems owned by customers may not be regularly updated. Finally, popular software frameworks like .NET default to TLSv1.0 even when a higher version is supported.

Here are a few popular software development languages and frameworks that need upgrading, recompiling or a custom configuration change to support TLSv1.2:

  • .NET 4.5 and earlier
  • Java 7 and earlier versions
  • OpenSSL 1.0.0 and earlier (Ruby, Python, other frameworks that use OpenSSL)

The best way to gain visibility into the clients still negotiating early TLS with your systems is to enable extra logging on the edge web servers or load balancers. If you use Nginx, simply add $ssl_protocol to your log_format; for Apache, use %{SSL_PROTOCOL}x.

Here’s a sample log entry with SSL protocol version and cipher specs logging enabled:

127.0.0.1 - - [11/Jan/2016:12:34:56 +0200] TLSv1.2/ECDHE-RSA-AES128-GCM-SHA256 "GET / HTTP/1.1" 200 1234 "-" "curl/7.37.0"

If your infrastructure sits behind a content delivery network like Akamai or Cloudflare, you need to enable the extra logging there, which is not always a simple task. For example, on Akamai you need to select a log format that includes a “Custom Field” in your Log Delivery Service (LDS), and then, in the Property Manager configuration for your site, enable the Custom Log Field and specify the variables that should be captured.

Here’s a sample log entry from Akamai with SSL scheme, protocol and cipher:

2016-07-24      13:49:41        127.0.0.1  GET     /www.example.com/1531720867000.js     200     5109    1 "https://www.example.com/example.aspx"      "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"  "-"     "https|tls1.2|ECDHE-RSA-AES256-SHA384"  10.0.0.1   1       "trTc_19843||"

With the TLS details now being logged, you can use your favourite log searching tool to identify clients that need to be upgraded before you withdraw support for early-TLS.
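
If the logs end up as flat files, a small script can do the aggregation. A minimal Python sketch, assuming the nginx-style log line shown above (run it with the access log piped to standard input and adjust the regex to your own format):

import collections
import re
import sys

# Matches the "TLSv1.2/ECDHE-RSA-AES128-GCM-SHA256" field in the sample log line.
PROTO_RE = re.compile(r"\b(SSLv3|TLSv1(?:\.[123])?)/")

counts = collections.Counter()
for line in sys.stdin:
    match = PROTO_RE.search(line)
    if match:
        client_ip = line.split()[0]
        counts[(match.group(1), client_ip)] += 1

for (proto, ip), n in counts.most_common():
    print(f"{n:6d} {proto:8s} {ip}")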

Outbound

It is not always straightforward to detect outbound connections that still use TLSv1.0. Inspecting the client-side configs doesn't always reveal the problem. For example, clients that use .NET 4.5 and SChannel default to TLSv1.0 even though the underlying operating system fully supports TLSv1.2.

I found that the most effective way of detecting such clients is to run a packet capture at the egress points of the network.

I use a BPF filter to capture only the SSL/TLS handshakes, which include the negotiated protocol version. With this approach, you can run the dump for longer, with less risk of causing performance or disk space issues. Running the capture for a day or a week can uncover clients that connect only sporadically, e.g. batch jobs.

I suggest running the dump for a short period of time first, e.g. 1 minute, to get a feel for the volume of traffic you will capture. If you are happy with the size, then let it run for longer, e.g. 24 hours, to catch a full day's traffic.

Here’s a sample command to capture TLSv1.0 and SSLv3.0 ClientHello/ServerHello packets.

tcpdump -s0 -i any "tcp and (tcp[((tcp[12] & 0xf0) >> 2):2] = 0x1603) and ((tcp[((tcp[12] & 0xf0) >> 2)+9:2] = 0x0300) or (tcp[((tcp[12] & 0xf0) >> 2)+9:2] = 0x0301))" -w /tmp/TLShandshake.pcap

where:

      record type (1 byte)
      /
     /    version (1 byte major, 1 byte minor)
    /    /
   /    /         length (2 bytes)
  /    /         /
 +----+----+----+----+----+
 |    |    |    |    |    |
 |    |    |    |    |    | TLS Record header
 +----+----+----+----+----+


 Record Type Values       dec      hex
 -------------------------------------
 CHANGE_CIPHER_SPEC        20     0x14
 ALERT                     21     0x15
 HANDSHAKE                 22     0x16
 APPLICATION_DATA          23     0x17


 Version Values            dec     hex
 -------------------------------------
 SSL 3.0                   3,0  0x0300
 TLS 1.0                   3,1  0x0301
 TLS 1.1                   3,2  0x0302
 TLS 1.2                   3,3  0x0303

You can analyse the captures with tshark and command line tools:

tshark -r TLShandshake.pcap -q -z conv,ip

Here's an example that extracts the certificate's DNS names together with the internal client IP and sorts by the number of connections:

tshark -nr TLShandshake.pcap -Y "ssl.handshake.certificate" -V | egrep  '(GeneralNames|Internet)' -A3  | egrep '(^Internet|dNSName:)'| sed s/"Internet Protocol Version 4, "//g | perl -p -e 's/([0-9])\n$/$1/' | awk '{print $4,$6}' | sort | uniq -c | sort -rn

 300 10.248.1.109 *.newrelic.com
 266 10.248.1.109 dynamodb.eu-west-1.amazonaws.com
  99 10.248.138.239 *.newrelic.com
  95 10.248.159.229 *.newrelic.com
  63 10.248.142.144 dynamodb.eu-west-1.amazonaws.com

Internal Services

It's very likely that some internal services in your environment will use older TLS to communicate. However, this scenario may be the easiest to fix, provided you have a test environment that closely mirrors production. You can run a vulnerability scan to determine the endpoints that need fixing and apply the changes to your test environment first. You can then identify and reconfigure or upgrade any clients that can no longer connect.

You may still need to run tcpdump in strategic places in your production environment to validate that early TLS has been successfully eradicated. From my experience, services that use proprietary protocols or are capable of upgrading the connection to TLS (like STARTTLS) may not always show up on vulnerability scans. In this scenario, a manual inspection of the configuration and of TLS handshake dumps goes a long way.

Building an Enterprise Security Program From Scratch

In this post, I’m going to touch on all the aspects of building an enterprise security program. This is a vast topic, and I could have spent pages and pages explaining each element of a successful program. However, the goal of this post is to merely define the necessary steps and provide a roadmap to get you started.

Here is a basic outline for an enterprise security program:

  • Risk Assessment
  • Plan of Action
  • Tactical and Strategic Goals
  • Security Budget
  • Security Policies and Procedures
  • Vulnerability Management
  • Training and Awareness
  • Quarterly Security Reviews
  • Annual Program Review

Building a security program from scratch is a daunting task, and you need a comprehensive background in IT and security to be successful. Knowledge and expertise in IT is 90% of this job. Most of your time will be spent evaluating technologies, advising business teams, deciding what is or is not a risk, and finally directing security priorities and implementations.

Risk Assessment

The goal of the risk assessment is to identify where your risks are. First, you need to know where the sensitive data that you need to protect resides. Effective asset management helps identify where the critical assets are; focus on the sensitive data first. If you process credit cards, then you need to start with PCI DSS, and a PCI DSS gap analysis is usually the first step to understanding your compliance status. If you store and process personal data (PD), then a risk assessment against the CIS Critical Security Controls (CSC), or a more detailed one using the NIST SP 800 series, is recommended, together with a GDPR gap analysis.

Plan of Action

The outcome of the above risk analysis feeds into the plan of action, which is mainly focused on high- and mid-level risks. The plan of action includes all the major risks, the mitigation strategy, budget requirements and timelines. In many cases, this is also referred to as a gap assessment.

Ref | Risk | Priority | Mitigation | Budget | Milestones
────────────────────────────────────────────────────────

Tactical and Strategic Goals

A typical security strategy is a combination of short-term tactical plans and a long-term strategy. You are faced with a continually changing landscape, so tactical planning should be limited to 6 months and strategic planning to a maximum of 12-24 months.

The strategic plan looks beyond the tactical focus. Some problems and risks will take a long time to mitigate.

The output of the risk assessment feeds into the plan of action based on the risk levels identified. You then prioritise the plan of action to create the tactical and strategic security plans. The prioritisation is based on the sensitivity of the data you process.

Security Budget

The security budget should closely map to the tactical and strategic security plans, and it is something that takes a lot of consideration. You need to negotiate every product you buy; security products are always overpriced, and this is a fact. Look at open source products first to understand what functionality is already available for free. When you select a product, the features should come first and cohesion second. Let's face it: you cannot overlook the support side of the equation. Even the most sophisticated and functional open source product will fail if you don't have the right people to support it.

Security Policy

There are two significant policies that every organisation should have. You should focus on getting these two right before moving on to the rest of the security policies for your organisation.

Data Classification

You need to classify all data within the organisation appropriately. Sensitive data may be defined as PCI, PD or health information. You need to know where the sensitive data resides. It makes sense to divide the data into tiers based on sensitivity classification. For example:

  • Tier 0 - data that could be used to prejudice or discriminate against specific individuals. This includes ethnic origin, political membership, health information, sexual orientation, employment details and criminal history, as well as payment card data, genetic data, biometric data and authentication data like passwords.
  • Tier 1 - data that on its own can be used to identify specific individuals, e.g. names, addresses, email addresses, phone numbers, copies of passports or drivers' licences.
  • Tier 2 - data that, when aggregated with other Tier 2 or Tier 1 data, may allow specific individuals to be identified, e.g. IP addresses, transaction metadata and geolocation data.
  • Tier 3 - data that may, when aggregated with Tier 2 or Tier 1 data but not with other Tier 3 data, allow specific individuals to be identified, e.g. device IDs, advertising IDs, hashes, cookies and search string data.

The data classification policy should make a clear distinction between these data types and describe the proper handling of each.

Data Protection

You want to document a data protection standard. The document explains how the data is protected, and who should have access to it.

The data protection policy ensures that data is protected from unauthorised use or disclosure, and that its handling complies with the data classification policy, best practices and standards.

Vulnerability Management

Vulnerability management is an ongoing approach to the collection and analysis of information regarding vulnerabilities, exploits and possible inappropriate data flows. A comprehensive vulnerability management program provides you with the knowledge, awareness and risk context necessary to understand threats to the organisation's environment and react accordingly.

A successful vulnerability management program consists of 5 distinctive steps:

  • Determine the hardware and software assets in your environment
  • Determine the criticality of these assets
  • Identify the security vulnerabilities impacting the assets
  • Determine a quantifiable risk score for each vulnerability
  • Mitigate the highest risk vulnerabilities from the most valuable assets

Most of the steps in vulnerability management can be automated to some extent with the exception of Penetration Testing. This type of manual testing is the perfect complement to automated vulnerability scanning. Start with a smaller scope and target higher-risk assets. Learn from it and expand the practice.

Security Training and Awareness

Security training is important and should be embedded into several core areas in your organisation:

  • New hire training
  • Quarterly or yearly awareness training that covers common threats, spear phishing, whaling and social engineering
  • Email newsletters, alerts, breaking news that impact staff
  • Security demos, presentations, seminars, security engineering days
  • Security wiki, initiatives and changes everyone should know about

I'm afraid organisational culture and human behaviour have not evolved nearly as rapidly as technology. If you look closely at recent data breaches, you'll notice that phishing or another social engineering technique was used at some point during the attack. To fight such scams, employees need to be aware of the threats.

Quarterly Security Reviews

The quarterly security reviews are critical to ongoing security operations. The larger your team, the more frequently you should be performing them.

These are periodic checkups to review vulnerability status, progress with risk mitigation, and policies and procedures.

Here are some of the things you should review.

  • Vulnerability scan results and remediation
  • Review of penetration testing
  • Access controls
  • Policy and procedures review
  • Progress toward the tactical plan
  • Review of the impact of security changes made during the quarter
  • Update for executive management and senior leadership

Annual Program Review

This is a great opportunity to step back and see the bigger picture to ensure that the security program is heading in the right direction. Several major tasks must be completed for the annual refresh:

  • Annual Risk Assessment (CIS CSC 20, PCI, GDPR)
  • Update the plan of action
  • Tactical security plan for the coming year
  • Budget planning

Conclusion

One of the most important things to understand is that there are never enough security resources available to cover all the work.

Prioritisation is necessary, and you want to be highly efficient in where you apply your energy and resources. You need to be proficient at getting maximum value out of your efforts to improve security in your organisation. Security professionals need to negotiate hard and often to get things done.

The person responsible for security in any given organisation must have a complete vision of where they want to take the security program.

Security's Low-hanging Fruit

Code Red, Nimda and SQL Slammer are three of the most well-known worms that had a massive impact on the Internet. The industry has improved considerably since then, and in the past few years it has become much harder to target operating systems. Automated patching, vulnerability scanning, sandboxing, and compiler and memory management techniques have improved, adding layers of security and making exploit writing harder. Straightforward, easy-to-exploit vulnerabilities in the infrastructure are almost non-existent, and even when one is found, the level of complexity required to exploit it reliably across multiple systems has grown exponentially.

Threat actors do not stand still

Threat actors are evolving as well. The traditional hacks for fun and profit, a.k.a. "see how far we can spread this thing", are few and far between. Today's attackers are more sophisticated, their goal is financial gain, and the Web is the perfect place for it. All sorts of sensitive personal and payment information are processed, transmitted and stored by Web applications; I am sure that your personal data is on some Web application right now.

Old problems in a new world

The widespread adoption of web apps means new propagation channels for the next generation of malware and worms. A growing number of popular websites with good reputations are compromised without the site owners' knowledge. The site is then used to launch drive-by attacks, where malicious code on the website covertly attempts to install malware on the computers of visitors to the site.

The low hanging fruit

Software teams are turning to agile development methods to improve velocity and deliver results quickly. Agile methods should, in principle, promote disciplined project management, but the reality is often quite different. Web applications are put together in a rush with little attention given to security and defensive programming. A whole new ecosystem of web frameworks has sprung into existence that prioritises quick results and ease of use over security. Many applications are so exposed that an attacker needs only a very simple file inclusion exploit, which is why some people exploit them rather than targeting the underlying infrastructure. It takes only minutes to understand a typical web application's coding errors. By nature, web applications must be indexed by Google and other search engines to be successful, and that is a double-edged sword: a simple search for vulnerable installations may reveal more candidates with a similar vulnerability. In just a few minutes, an average attacker with little talent and even less time can compromise a typical web application.

No silver bullet for AppSec

There are no silver bullets for ensuring web application security. No amount of network hardening, platform auditing, or vulnerability scanning can compensate for bad programming. Understanding the limitations of automated application security tools is also essential. Tools like SAST, DAST and IAST are not technically capable of finding the types of vulnerabilities found by penetration testers or your QA team. Automated tools are not capable of identifying access control flaws or business logic issues. Robust application security is essential to the long-term survival of any organisation. Application security begins with secure coding and design, continues with security activities embedded in the software development lifecycle and is maintained over the life of the software.

It takes skill and manpower to design, review and test web applications. I'm afraid there are no shortcuts; it's a twisty and hard-to-follow route to success.

CapEx vs. OpEx: Budgeting for Application Security

In a highly agile environment, security is fast becoming embedded into the continuous delivery of software as part of the DevOps process. As security activities shift ever earlier in the software development lifecycle, the budgeting of software security should become more closely aligned with development costs.

Basic definitions

Software development costs fall into two categories: capital expenses (CapEx) and operational expenses (OpEx). Let’s first recap the basic definitions of CapEx and OpEx:

  • CapEx is a business expense incurred to create an asset that will have a future benefit and will span beyond the current tax year. These assets are presumed to have a useful life. Expenses for these assets are recognised as they are depreciated over time.

  • OpEx is a business expense to keep the business running. It is recognised in the period incurred (i.e., in the current tax year). Most expenses for the day-to-day operations that are not directly contributing towards the creation of assets with benefits spanning beyond the current tax year would end up being categorised as operational expenditure.

CapEx vs. OpEx

Why is the CapEx vs. OpEx distinction so important when trying to get a software security initiative approved?

Because the right mixture of CapEx and OpEx in your project can mean the difference between getting it approved and having it rejected for budgetary reasons.

Most software development activities resulting in the creation of a software asset would generally be accounted for as CapEx. A modern Agile software development process allows you to capitalise more of your costs; I will touch on this later in the post.

Any SDLC activities of a security team that directly contribute to the creation of a specific software like Security Architecture, Threat Modelling, Code Reviews or Testing would fall under CapEx.

On the other hand, any day-to-day operations that are not directly contributing to the creation of an asset like quarterly security testing of the running application portfolio, security training, vulnerability management would fall under OpEx.

Advantage of Secure Agile Development

You can capitalise the cost of development activities, provided you can demonstrate how the software will yield future economic benefits. However, any day-to-day operations, as well as research costs associated with software project planning, must be considered OpEx and cannot be capitalised.

Phase       | Expense | Activities
──────────────────────────────────────────────────────────────────
Research    | OpEx    | Planning, researching, estimating
Development | CapEx   | Secure programming, code reviews, security testing
Production  | OpEx    | Installation hardening, bug fixing, training

The classic waterfall development process employs rigidly structured, sequential steps to produce an entire, finished software application in one iteration or, possibly, in several linear phases. An Agile process, by contrast, expends less effort during the research phase of a project, which means you need less OpEx to get your project started. Additionally, continuous delivery blends the development and production phases, as new code is steadily added to the product in an automated fashion, reducing the cost required to install and configure it, which again means less OpEx.

Investing in building security early in the SDLC for specific software products (CapEx) should reduce the amount of OpEx required after the fact for security bug fixes as part of maintenance. OpEx should decrease because software that was built securely from the beginning will require fewer security fixes down the line.

Alignment is key

A software security initiative should align with a specific software project to balance CapEx and OpEx and derive the greatest benefit. Strengthening that alignment is key, as resources must directly contribute to the creation of an asset to be capitalised. The DevOps process helps here, as it creates self-sufficient, self-contained teams responsible for the development and maintenance of a specific product.

Initiative                                             | Expense | Alignment
──────────────────────────────────────────────────────────────────────────────
Purchase of static code analysis tool                  | CapEx   | Developing a specific software product
Triaging results from static analysis built into SDLC  | CapEx   | Developing a specific software product
Static/Dynamic scanning of the existing application    | OpEx    | As part of the standard bug fixes and software maintenance
Security Architects/Specialists                        | CapEx   | Building security in a specific software product
Security Architects/Specialists                        | OpEx    | Not aligned with a specific software project; providing general assistance across the organisation

Organisations may sometimes choose to be more conservative and count some of that spend as OpEx, depending on current financial needs. By capitalising as much as possible, an organisation can amortise costs over several years and spread out the impact on earnings. By contrast, organisations that prefer to expense quickly take an immediate hit, allowing a greater one-time effect on earnings in the current year.

Book a meeting with your CFO or a member of the accounting department and find out how your organisation handles CapEx/OpEx. You may find that the ability to capitalise a larger percentage of software security costs can get your projects done sooner.

Hello World

#include <stdio.h> 
int main() { 
  printf("Hello World!"); 
}