>_ Unvalidated input


Detecting SSL and Early TLS

Secure Sockets Layer (SSL) has unquestionably been the most widely used encryption protocol for over two decades. SSL v3.0 was replaced in 1999 by TLS v1.0, which has since been superseded by TLS v1.1 and v1.2.

In April 2015, SSL and early TLS were removed as examples of strong cryptography in PCI DSS v3.1. For this reason, PCI DSS v3.2 established a deadline of June 30, 2018 for the migration away from SSL and early TLS.

Unfortunately, TLS v1.0 remains in widespread use today despite multiple critical security vulnerabilities exposed in the protocol in 2014.

It’s not always a straightforward task to establish where in your organisation TLSv1.0 may be used. It’s common to look at the external footprint only, overlooking internal and outbound communication.

No matter how complex your environment is, you can always divide it into smaller segments and look at three major places:

  • Externally facing services
  • Web, mobile and API clients
  • Outbound and internal connections

Most vulnerability scanners have signatures to detect legacy SSL and early TLS versions. You can scan all of your external hosts to enumerate services that need fixing. Here’s a sample finding from Qualys:

You can run a simple search across all your assets managed by Qualys:

vulnerabilities.vulnerability.qid: 38628

Alternatively, you can use nmap or other tools like testssl.sh.

Web and Mobile Clients

You can inspect your web analytics tool to get some understanding of the clients that still rely on TLSv1.0. Create a report showing stats for the clients below:

  • Android 4.3 and earlier versions
  • Firefox version 5.0 and earlier versions
  • Internet Explorer 8-10 on Windows 7 and earlier versions
  • Internet Explorer 10 on Win Phone 8.0
  • Safari 6.0.4/OS X10.8.4 and earlier versions

From my experience, the numbers will be low, often below 1%. Most users have modern browsers and phones that support TLSv1.2. There will be some older Android devices and general noise from bot traffic often using spoofed headers.

API Clients

If your company provides external APIs used by customers to integrate with your services, you may need to do more work before deprecating early TLS. There is usually no analytics available for such integrations, and the client-side systems owned by customers may not be regularly updated. Finally, popular software frameworks like .NET default to TLSv1.0 even though a higher version is supported.

Here are a few popular software development languages and frameworks that need upgrading, recompiling or a custom configuration change to support TLSv1.2:

  • .NET 4.5 and earlier
  • Java 7 and earlier versions
  • OpenSSL 1.0.0 and earlier (Ruby, Python, other frameworks that use OpenSSL)

The best way to gain visibility into the clients still negotiating early TLS on your systems is to enable extra logging on the edge web servers or load balancers. If you use Nginx, simply add $ssl_protocol to your log_format; for Apache, use %{SSL_PROTOCOL}x in your LogFormat.
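For Nginx, a minimal sketch of such a log_format might look like this; the ssl_combined name is my own illustration, while $ssl_protocol and $ssl_cipher are stock Nginx variables:

```nginx
# Hypothetical log_format that prepends the negotiated TLS protocol
# and cipher to the request line; adapt the layout to your own config.
log_format ssl_combined '$remote_addr - $remote_user [$time_local] '
                        '$ssl_protocol/$ssl_cipher '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log ssl_combined;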

Here’s a sample log entry with SSL protocol version and cipher specs logging enabled:

1 - - [11/Jan/2016:12:34:56 +0200] TLSv1.2/ECDHE-RSA-AES128-GCM-SHA256 "GET / HTTP/1.1" 200 1234 "-" "curl/7.37.0"

If your infrastructure sits behind a content delivery network like Akamai or Cloudflare, you need to enable the extra logging there. This is not always a simple task. For example, on Akamai you need to select a log format that includes “Custom Field” in your Log Delivery Service (LDS):

In Akamai’s Property Manager for your site, you need to enable the Custom Log Field and specify a couple of special variables that should be captured:

Here’s a sample log entry from Akamai with SSL scheme, protocol and cipher:

2016-07-24      13:49:41  GET     /www.example.com/1531720867000.js     200     5109    1 "https://www.example.com/example.aspx"      "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko"  "-"     "https|tls1.2|ECDHE-RSA-AES256-SHA384"   1       "trTc_19843||"

With the TLS details now being logged, you can use your favourite log searching tool to identify clients that need to be upgraded before you withdraw support for early-TLS.
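As a minimal sketch, assuming a log layout like the Nginx sample above where the protocol/cipher pair is the sixth whitespace-separated field (adjust the field number for your own format), you can count connections per protocol with awk:

```shell
# Write a few sample log lines in the format shown earlier.
printf '%s\n' \
  '1 - - [11/Jan/2016:12:34:56 +0200] TLSv1.2/ECDHE-RSA-AES128-GCM-SHA256 "GET / HTTP/1.1" 200 1234' \
  '2 - - [11/Jan/2016:12:35:01 +0200] TLSv1.0/AES128-SHA "GET /legacy HTTP/1.1" 200 987' \
  '3 - - [11/Jan/2016:12:35:07 +0200] TLSv1.2/ECDHE-RSA-AES128-GCM-SHA256 "POST /api HTTP/1.1" 200 51' \
  > /tmp/access_sample.log

# Count connections per negotiated protocol version.
awk '{split($6, a, "/"); count[a[1]]++}
     END {for (p in count) print count[p], p}' /tmp/access_sample.log | sort -rn
# prints:
# 2 TLSv1.2
# 1 TLSv1.0
```

Running the same one-liner against a full day of logs quickly shows how much TLSv1.0 traffic you still serve.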


It is not always straightforward to detect outbound connections that still use TLSv1.0. Inspecting the client-side configs doesn’t always reveal the problem. For example, clients that use .NET 4.5 and SChannel default to TLSv1.0 even though the underlying operating system fully supports TLSv1.2.

I found that the most effective way of detecting such clients is to run a packet capture at the egress points of the network.

I use a BPF filter to capture only the SSL handshakes, which include the negotiated protocol version. With this approach, you can run the dump for longer, with less risk of causing performance or disk space issues. Running the capture for a day or a week can uncover clients that connect only sporadically, e.g. batch jobs.

I suggest running the dump for a short period first, e.g. one minute, to get a feel for the volume of traffic you will capture. If you are happy with the size, let it run for longer, e.g. 24 hours to catch a full day’s traffic.

Here’s a sample command to capture TLSv1.0 and SSLv3.0 ClientHello/ServerHello packets.

tcpdump -s0 -i any "tcp and (tcp[((tcp[12] & 0xf0) >> 2):2] = 0x1603) and ((tcp[((tcp[12] & 0xf0) >> 2)+9:2] = 0x0300) or (tcp[((tcp[12] & 0xf0) >> 2)+9:2] = 0x0301))" -w /tmp/TLShandshake.pcap


      record type (1 byte)
     /    version (1 byte major, 1 byte minor)
    /    /
   /    /         length (2 bytes)
  /    /         /
 |    |    |    |    |    |
 |    |    |    |    |    | TLS Record header

 Record Type Values       dec      hex
 CHANGE_CIPHER_SPEC        20     0x14
 ALERT                     21     0x15
 HANDSHAKE                 22     0x16
 APPLICATION_DATA          23     0x17

 Version Values            dec     hex
 SSL 3.0                   3,0  0x0300
 TLS 1.0                   3,1  0x0301
 TLS 1.1                   3,2  0x0302
 TLS 1.2                   3,3  0x0303
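The offset arithmetic in the filter can be sanity-checked with plain shell arithmetic. tcp[12] carries the TCP data offset (the header length in 32-bit words) in its high nibble, so masking with 0xf0 and shifting right by 2 yields the header length in bytes, i.e. where the TLS record begins:

```shell
# (byte & 0xf0) >> 2 is equivalent to ((byte >> 4) * 4):
# high nibble = offset in 32-bit words, times 4 = offset in bytes.
byte=$(( 0x50 ))                  # data offset 5 words (no TCP options)
echo $(( (byte & 0xf0) >> 2 ))    # prints 20
byte=$(( 0x80 ))                  # data offset 8 words (12 bytes of options)
echo $(( (byte & 0xf0) >> 2 ))    # prints 32
```

The filter then looks 9 bytes past that offset, which is where the ServerHello/ClientHello handshake version sits inside the TLS record.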

You can analyse the captures with tshark and command-line tools:

tshark -r TLShandshake.pcap -q -z conv,ip

Here’s an example that extracts the certificate’s common name together with the internal client IP and sorts by the number of connections:

tshark -nr TLShandshake.pcap -Y "ssl.handshake.certificate" -V | egrep  '(GeneralNames|Internet)' -A3  | egrep '(^Internet|dNSName:)'| sed s/"Internet Protocol Version 4, "//g | perl -p -e 's/([0-9])\n$/$1/' | awk '{print $4,$6}' | sort | uniq -c | sort -rn

 300 *.newrelic.com
 266 dynamodb.eu-west-1.amazonaws.com
  99 *.newrelic.com
  95 *.newrelic.com
  63 dynamodb.eu-west-1.amazonaws.com

Internal Services

It’s very likely that some internal services in your environment will use older TLS to communicate. However, this scenario may be the easiest to fix provided you have a test environment that closely mirrors production. You can run a vulnerability scan to determine endpoints that need fixing and apply the changes to your test environment first. You can then reconfigure or upgrade clients that no longer connect.

You may still need to run tcpdump in strategic places in your production environment to validate that early TLS has been successfully eradicated. From my experience, services that use proprietary protocols, or that are capable of upgrading the connection to TLS (like STARTTLS), may not always show up on vulnerability scans. In this scenario, a manual inspection of the configuration and TLS handshake dumps goes a long way.

Building an Enterprise Security Program From Scratch

In this post, I’m going to touch on all the aspects of building an enterprise security program. This is a vast topic, and I could have spent pages and pages explaining each element of a successful program. However, the goal of this post is to merely define the necessary steps and provide a roadmap to get you started.

Here is a basic outline for an enterprise security program:

  • Risk Assessment
  • Plan of Action
  • Tactical and Strategic Goals
  • Security Budget
  • Security Policies and Procedures
  • Vulnerability Management
  • Training and Awareness
  • Quarterly Security Reviews
  • Annual Program Review

Building a security program from scratch is a daunting task. You need a comprehensive background in IT and security to be successful; knowledge and expertise of IT is 90% of this job. Most of your time will be spent evaluating technologies, advising business teams, deciding what is or is not a risk, and directing security priorities and implementations.

Risk Assessment

The goal of the risk assessment is to identify where your risks are. First, you need to know where the sensitive data you need to protect resides. Effective asset management helps identify where the critical assets are. You need to focus on the sensitive data first. If you perform credit card processing, then you need to start with PCI DSS; a PCI DSS gap analysis is usually the first step to understanding your compliance status. If you store and process personal data (PD), then a risk assessment against the CIS Critical Security Controls (CSC), or a more detailed one using the NIST SP 800 series, is recommended together with a GDPR gap analysis.

Plan of Action

The outcome of the above risk analysis will feed into the plan of action. This is mainly focused on high- and mid-level risks. The plan of action should include all the major risks, the mitigation strategy, budget requirements and timelines. In many cases, this is also referred to as a gap assessment.

Ref | Risk | Priority | Mitigation | Budget | Milestones

Tactical and Strategic Goals

A typical security strategy is a combination of short-term tactical and long-term strategic plans. You are faced with a continually changing landscape, so tactical planning should be limited to 6 months and strategic planning to a maximum of 12-24 months.

The strategic plan looks beyond the tactical focus. Some problems and risks will take a long time to mitigate.

The output of the risk assessment feeds into the plan of action based on the risk levels identified. You then prioritise the plan of action to create the tactical and strategic security plans. The prioritisation is based on the sensitivity of the data you process.

Security Budget

The security budget should closely map to the tactical and strategic security plans. It is something that takes a lot of consideration. You need to negotiate every product you buy; security products are always overpriced, and this is a fact. Look at open-source products first to understand what functionality is already available out there for free. When you select a product, the features should come first and cohesion second. Let’s face it: you cannot overlook the support side of the equation. Even the most sophisticated and functional open-source products will fail if you don’t have the right people to support them.

Security Policy

There are two significant policies that every organisation should have. You should focus on getting these two right before moving on to the rest of the security policies for your organisation.

Data Classification

You need to classify all data within the organisation appropriately. Sensitive data may be defined as PCI, PD or health information. You need to know where the sensitive data resides. It makes sense to divide the data into tiers based on sensitivity classification. For example:

  • Tier 0 - data that could be used to prejudice or discriminate against specific individuals. This includes ethnic origin, political membership, health information, sexual orientation, employment details and criminal history, as well as payment card data, genetic data, biometric data and authentication data like passwords.
  • Tier 1 - data that on its own can be used to identify specific individuals, e.g. names, addresses, email addresses, phone numbers, copies of passports or drivers' licences.
  • Tier 2 - data that, when aggregated with other Tier 2 or Tier 1 data, may allow specific individuals to be identified, e.g. IP addresses, transaction metadata and geolocation data.
  • Tier 3 - data that may, when aggregated with Tier 2 or Tier 1 data but not with other Tier 3 data, allow specific individuals to be identified, e.g. device IDs, advertising IDs, hashes, cookies and search string data.

The data classification policy should make a clear distinction between data types to help your organisation. It should describe the proper handling of each type.

Data Protection

You want to document a data protection standard. The document explains how data is protected and who should have access to it.

The data protection policy ensures that data is protected from unauthorised use or disclosure, in line with the data classification policy, best practices and standards.

Vulnerability Management

Vulnerability management is an ongoing approach to the collection and analysis of information regarding vulnerabilities, exploits and possible inappropriate data flows. A comprehensive vulnerability management program provides you with the knowledge, awareness and risk context necessary to understand threats to the organisation’s environment and react accordingly.

A successful vulnerability management program consists of 5 distinctive steps:

  • Determine the hardware and software assets in your environment
  • Determine the criticality of these assets
  • Identify the security vulnerabilities impacting the assets
  • Determine a quantifiable risk score for each vulnerability
  • Mitigate the highest-risk vulnerabilities on the most valuable assets

Most of the steps in vulnerability management can be automated to some extent, with the exception of penetration testing. This type of manual testing is the perfect complement to automated vulnerability scanning. Start with a smaller scope and target higher-risk assets. Learn from it and expand the practice.

Security Training and Awareness

Security training is important and should be embedded into several core areas in your organisation:

  • New hire training
  • Quarterly or yearly awareness training that covers common threats: spear phishing, whaling, social engineering
  • Email newsletters, alerts and breaking news that impact staff
  • Security demos, presentations, seminars, security engineering days
  • Security wiki, initiatives and changes everyone should know about

I’m afraid organisational culture and human behaviour have not evolved nearly as rapidly as technology. If you look closely at recent data breaches, you’ll notice that phishing or some other social engineering technique was used at some point during the attack. To fight such scams, employees need to be aware of the threats.

Quarterly Security Reviews

The quarterly security reviews are critical to ongoing security operations. The larger the team you have, the more frequently you should be performing these.

These are periodic checkups to address vulnerability status, progress with risk mitigation, and reviews of policies and procedures.

Here are some of the things you should review:

  • Vulnerability scan results and remediation
  • Penetration testing results
  • Access controls
  • Policy and procedures review
  • Progress toward the tactical plan
  • The impact of security changes made during the quarter
  • Updates for executive management and senior leadership

Annual Program Review

This is a great opportunity to step back and see the bigger picture to ensure that the security program is heading in the right direction. Several major tasks must be completed for the annual refresh:

  • Annual Risk Assessment (CIS CSC 20, PCI, GDPR)
  • Update the plan of action
  • Tactical security plan for the coming year
  • Budget planning


One of the most important things to understand is that there are never enough security resources available to cover all the work.

Prioritisation is necessary, and you want to be highly efficient in where you apply your energy and resources. You need to be proficient at getting maximum value out of your efforts to improve security in your organisation. Security professionals need to negotiate hard and often to get things done.

The person responsible for security in any given organisation must have a complete vision of where they want to take the security program.

Security's Low-hanging Fruit

Code Red, Nimda and SQL Slammer are three of the most well-known worms that had a massive impact on the Internet. The industry has improved considerably since then, and in the past few years it has become much harder to target operating systems. Automated patching, vulnerability scanning, sandboxing, and compiler and memory-management techniques have added layers of security and made exploit writing harder. Straightforward, easy-to-exploit vulnerabilities in the infrastructure are almost non-existent, and even when one is found, the level of complexity required to exploit it reliably across multiple systems has grown exponentially.

Threat actors do not stand still

The threat actors are evolving as well. The traditional hacks for fun and profit, a.k.a. “see how far we can spread this thing”, are few and far between. There are more sophisticated attackers nowadays whose goal is financial gain, and the Web is the perfect place for this. All sorts of sensitive personal and payment information is processed, transmitted and stored by Web applications. I am sure that your personal data is on some Web application right now.

Old problems in a new world

The widespread adoption of web apps means new propagation channels for the next generation of malware and worms. A growing number of popular websites with a good reputation are compromised without the site owners’ knowledge. The site is then used to launch drive-by attacks, where malicious code on the website attempts to covertly install malware on the computers of visitors to the site.

The low hanging fruit

Software teams are turning to agile software development methods to improve velocity and deliver results quickly. Agile methods should in principle promote disciplined project management, but the reality is quite different. Web applications are put together in a rush, with little attention given to security and defensive programming. A whole new ecosystem of web frameworks has sprung into existence that prioritises quick results and ease of use over security. Many applications are so exposed that an attacker needs only very simple file inclusion exploits. That’s why some people exploit them rather than targeting the underlying infrastructure. It takes only minutes to understand a typical web application’s coding errors. By nature, web applications must be indexed by Google and other search engines to be successful, and this is a double-edged sword: a simple search may reveal more installations with a similar vulnerability. In just a few minutes, an average attacker with little talent and even less time can compromise a typical web application.

No silver bullet for AppSec

There are no silver bullets for ensuring web application security. No amount of network hardening, platform auditing or vulnerability scanning can compensate for bad programming. Understanding the limitations of any automated application security tools is also essential. Tools like SAST, DAST and IAST are technically not capable of finding the types of vulnerabilities found by penetration testers or your QA team; automated tools cannot identify access control flaws or business logic issues. Robust application security is essential to the long-term survival of any organisation. It begins with secure coding and design, continues with security activities embedded in the software development lifecycle, and is maintained over the life of the software.

It takes skill and manpower to design, review and test web applications. I’m afraid there are no shortcuts; it’s a twisty and hard-to-follow route to success.

CapEx vs. OpEx: Budgeting for Application Security

In a highly agile environment, security is fast becoming embedded into the continuous delivery of software as part of the DevOps process. As we shift security increasingly earlier in the software development lifecycle, the budgeting of software security should become more closely aligned with development costs.

Basic definitions

Software development costs fall into two categories: capital expenses (CapEx) and operational expenses (OpEx). Let’s first recap the basic definitions of CapEx and OpEx:

  • CapEx is a business expense incurred to create an asset that will have a future benefit and will span beyond the current tax year. These assets are presumed to have a useful life. Expenses for these assets are recognised as they are depreciated over time.

  • OpEx is a business expense to keep the business running. It is recognised in the period incurred (i.e., in the current tax year). Most expenses for the day-to-day operations that are not directly contributing towards the creation of assets with benefits spanning beyond the current tax year would end up being categorised as operational expenditure.

CapEx vs. OpEx

Why is the CapEx vs. OpEx distinction so important when trying to get a software security initiative approved?

Because the right mixture of CapEx and OpEx in your project can mean the difference between getting it approved and having it rejected for budgetary reasons.

Most software development activities resulting in the creation of a software asset would generally be accounted for as CapEx. A modern Agile software development process allows you to capitalise more of your costs; I will touch on this later in the post.

Any SDLC activities of a security team that directly contribute to the creation of a specific software product, such as Security Architecture, Threat Modelling, Code Reviews or Testing, would fall under CapEx.

On the other hand, any day-to-day operations that are not directly contributing to the creation of an asset, such as quarterly security testing of the running application portfolio, security training or vulnerability management, would fall under OpEx.

Advantage of Secure Agile Development

You can capitalise the cost of development activities, provided you can demonstrate how the software will yield future economic benefits. However, any day-to-day operations, as well as research costs associated with software project planning, must be considered OpEx and cannot be capitalised.

Phase | Expense | Activities
Research | OpEx | Planning, researching, estimating
Development | CapEx | Secure programming, code reviews, security testing
Production | OpEx | Installation hardening, bug fixing, training

The classic waterfall development process employs rigidly structured, sequential steps to produce an entire, finished software application in one iteration or, possibly, in several linear phases. An Agile process, by contrast, expends less effort during the research phase of a project, which means you need less OpEx to get your project started. Additionally, continuous delivery blends the development and production phases: new code is steadily added to the product in an automated fashion, reducing the cost of installation and configuration, which again means less OpEx.

Making an investment in building security early in the SDLC for specific software products (CapEx) should reduce the amount of OpEx required afterwards for security bug fixes as part of maintenance. OpEx should decrease because software that was built securely from the beginning costs less to maintain and requires fewer security fixes down the line.

Alignment is key

A software security initiative should align with a specific software project to balance CapEx and OpEx and derive the greatest benefit. Strengthening that alignment is key, as resources must directly contribute to the creation of an asset in order to be capitalised. The DevOps process helps with this, as it creates self-sufficient teams that are self-contained and responsible for both the development and the maintenance of a specific product.

Initiative | Expense | Alignment
Purchase of a static code analysis tool | CapEx | Developing a specific software product
Triaging results from static analysis built into the SDLC | CapEx | Developing a specific software product
Static/dynamic scanning of an existing application | OpEx | Part of standard bug fixes and software maintenance
Security Architects/Specialists | CapEx | Building security into a specific software product
Security Architects/Specialists | OpEx | Not aligned with a specific software project; providing general assistance across the organisation

Organisations may sometimes choose to be more conservative and count some of that spend as OpEx, depending on current financial needs. By capitalising as much as possible, an organisation can amortise costs over several years and spread out the impact on earnings. Conversely, organisations that prefer to expense quickly will take an immediate hit in the current year, with a greater one-time effect on earnings.
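As a purely illustrative sketch of that trade-off (the figures are hypothetical, assuming straight-line depreciation):

```shell
# A $300k capitalised security project depreciated straight-line over
# 3 years hits earnings at $100k per year, versus a single $300k hit
# in year one if the same spend is expensed as OpEx.
capex=300000
years=3
echo $(( capex / years ))   # prints 100000
```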

Book a meeting with your CFO or a member of the accounting department and find out how your organisation handles CapEx/OpEx. You may find that the ability to capitalise a larger percentage of software security costs can get your projects done sooner.

Hello World

#include <stdio.h>

int main(void) {
  printf("Hello World!\n");
  return 0;
}