
Upcoming Drupal 8 Release and Beyond

Drupal 8 is coming! No really, it is this time. As of today there are four critical tasks, 21 critical bugs, and five plans remaining in 8.x. Once all of those numbers reach zero, assuming no more criticals are filed or reclassified, a Drupal 8 release candidate will become available. At that point, security issues with Drupal 8 will no longer be public in the issue queues, and site builders and developers alike can be confident that Drupal 8 is ready to be implemented in production.

So, what does this mean for the Drupal project you’re working on now (or getting ready to work on soon)? If your release date is 3 – 6 months out and you can get by without a bunch of contrib modules, now’s a good time to start looking at whether the site or web application could be implemented using Drupal 8. If you need to release before then, I’d recommend holding off on Drupal 8.

Drupal 8 brings a number of new advantages with it. If you haven’t heard yet, Drupal is now Object Oriented, built on top of the Symfony Framework. This means a number of procedural hooks have been moved to object oriented classes. Also, CMI (Config Management Initiative) brings the ability to define module default configurations through simplified YAML syntax.
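As a sketch of what CMI looks like in practice, a module can ship default configuration as YAML under its config/install directory. The example below is purely illustrative; the module name and settings keys are made up:

```yaml
# modules/example/config/install/example.settings.yml
# Hypothetical defaults imported when the "example" module is installed.
items_per_page: 10
display_mode: teaser
notify_admin: true
```

On install, Drupal imports these values into active configuration, where they can be overridden through the admin UI or exported for deployment.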

One of my favorite new features, which elevates Drupal from your average content management system (CMS) to a fully mobile-ready service handler, is the Web Services and Context Core Initiative (WSCCI). Built on top of Symfony responses, Drupal 8 can now natively handle non-HTML responses, such as JSON. Everything returned to the user in Drupal 8 is now a response. In addition to the ones provided by Symfony and the Zend Framework, Drupal adds some really cool response types you can use in your code, such as AjaxResponse and viewAjaxResponse (used for Ajax responses specific to Views).

Speaking of Views, it’s now part of core! That means many simple Drupal sites whose only contrib module was Views can now run out of the box. Historically, most Drupal 7 adoption trailed behind Views becoming available for it, which will not be an issue with Drupal 8.

Finally, Entities with full CRUD (Create, Read, Update and Delete) are not only available in core, but also are used as the basic building block of every piece of content in Drupal. Everything from a Node to a taxonomy term is now an Entity, and you have the tools with Drupal 8 to build your own custom Entities.

If you’re more interested in creating themes for Drupal, there’s much to rejoice about in Drupal 8 as well. Drupal 8 templates now use Twig, which means themers can add simple logic to templates without having to write templates and hooks in PHP.
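For instance, a themer can put a simple conditional directly in a template. A purely illustrative Twig fragment (the field name and template are hypothetical):

```twig
{# Hypothetical node template, e.g. node--article.html.twig #}
{% if content.field_image is not empty %}
  <div class="article-image">{{ content.field_image }}</div>
{% else %}
  <p>{{ 'No image provided'|t }}</p>
{% endif %}
```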

This just scratches the surface of all the new things being added to Drupal 8. The release was years in the making and involved a lot of people in the community who volunteered their time and effort. My thanks goes out to all those who helped, whether it was writing documentation, reviewing patches, or contributing as part of one of the major Drupal 8 initiatives. If you’re interested in helping to make Drupal 8 better, we’re always looking for new people to join us. The community helps new folks get started with contribution on Wednesdays at 16:00 UTC. You probably won’t want to jump in on fixing one of those criticals I mentioned above, but working on major and minor bugs is valuable to the community as well.

Finally, I want to wrap up with what future releases for Drupal are going to look like. The Drupal Community is looking at switching to a 6-month minor release cycle with only releases that break backwards compatibility being classified as major. Prior to a major release, a Long Term Support (LTS) release will be made available to give developers plenty of time to update their code to support the new features being added. Even major releases are planned to take less than a year from code freeze. This is very exciting and means that things that don’t break compatibility can be added to improve Drupal 8 much faster than they have in the past.

If you haven’t looked at Drupal 8 yet, it’s getting close enough to release that you might want to consider doing so. Drupal 8 is a huge advancement from Drupal 7, and in my opinion it’s way ahead of where competing CMSs and platforms are right now. Having the power of Symfony at its core with all the Entities we’ve come to love with Drupal 7 makes Drupal 8 the best choice for building your next website or application.

Detecting Bad Apple Instances and Other Performance Issues in AWS

Amazon Web Services (AWS) is an awesome way to get near-instant access to large amounts of computational power. Between the API and the web interface, you can quickly scale your application with demand. As with any technology service, though, it’s hardware-based, and hardware fails. I have personally seen AWS resources become almost completely unreliable. These bad apple instances exist across the entire AWS infrastructure.

In looking at performance issues within AWS, most appear to originate from I/O. You can start a batch of identical instances and receive widely differing performance levels. This can be caused by a failing disk in the infrastructure or, more likely, a less-than-optimal routing path to storage. The graph below shows a recent example of severely degraded read speeds on an instance, which caused the system's mid-term (5-minute) load average to float around 2.5. After stopping and restarting the instance, AWS automatically rerouted the storage path and the load dropped to around 0.15. That is a huge performance improvement just from stopping and restarting the instance, which is why it is critical to monitor for these issues.

[Graph: Graphite view of system load dropping after the instance restart]

We were able to catch this performance issue with the aid of a couple of tools. Icinga, along with our custom check_sshperf script, gave us agentless threshold checks for system load, network performance, memory usage, and much more. The tools collectd and Graphite can be used to collect and graph information about the system across a wide range of values. Icinga initially alerted us to a high load on the system; then, using Graphite, we were able to see the impact of the load over time and show that performance had been severely degraded.
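A threshold check of the sort that first alerted us can be sketched in a few lines of Python. This is a minimal Nagios-style illustration, not our actual check_sshperf script, and the thresholds are arbitrary:

```python
# Nagios-style exit codes: 0 = OK, 1 = WARNING, 2 = CRITICAL.
def check_load(load1, load5, load15, warn=1.0, crit=2.0):
    """Classify a host's load averages against warning/critical thresholds.

    The 5-minute ("mid-term") average is usually the most telling for
    sustained I/O degradation like the incident described above.
    """
    worst = max(load1, load5, load15)
    message = "load average: %.2f, %.2f, %.2f" % (load1, load5, load15)
    if worst >= crit:
        return 2, "CRITICAL - " + message
    if worst >= warn:
        return 1, "WARNING - " + message
    return 0, "OK - " + message
```

A monitoring plugin would gather the three numbers (for example over SSH from /proc/loadavg) and exit with the returned code so Icinga can alert on it.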

These tools can help not just with monitoring hardware issues, but also with software misconfigurations that could be impacting memory, storage, or compute resources. Below are just a few of the things we can monitor with Graphite:

[Screenshot: a sample of system metrics graphed in Graphite]

Icinga can also be more deeply configured to check everything that makes your application run. So take the time to set up some in-depth monitoring across your application. It will allow you to detect performance problems before the end-user notices.


MFA for Cloud Hosting and Web Applications

If you are using an infrastructure as a service (IaaS) provider and do not have multi-factor authentication (MFA) enabled, stop reading now and come back when you are done. It’s OK… we’ll wait.

The terms data breach, unauthorized access, and hacked are no strangers to headlines these days, and the terms that commonly follow them are credentials, user account, and weak passwords. These reports illustrate the need to ensure that strong password policies are in place, but we are also seeing that passwords alone are no longer sufficient. This is especially true for cloud providers and hosted services: because their purpose is to be well-known and easily located, there is no obscurity here, making these sites perfect targets. So, if you didn’t stop reading before and you are still using a cloud provider without MFA enabled, now would be a good time for a heart-to-heart.


You have implemented an exceptional password policy, you say?

Passwords have been the primary authentication method since the 1960s, and for decades experts have warned that they are one of the weakest security points. Two out of three network intrusions involve weak or stolen credentials, with the highest percentage of those attacks targeting web applications. The number of intrusions involving compromised credentials is almost three times that of software vulnerabilities or SQL injection, yet organizations often have a more robust patch management strategy than access management strategy, and rarely do we see authentication methods beyond username and password in use.

In today’s environment it is common to see good password complexity requirements covering factors such as minimum length, special characters, non-dictionary words, and password history. However, even with strong passwords, phishing attacks and malware can be used to obtain credentials and compromise accounts. Approximately half of all hacking breaches involve passwords stolen through malware, phishing attacks, or theft of password lists. We already know that organizations such as Target, eBay, and Code Spaces (to name a few) had password policies in place, but complexity requirements did not help once credentials were compromised and malicious users were able to access the environment.


But why would someone want to compromise my site?

With more than a million dollars in e-commerce sales every 30 seconds, and one-third of transactions coming from the United States alone, it is clear that an incredible amount of business is conducted over the Internet today. However, if customers and users cannot easily find your site or service, you are not going to see much of that business. So you work hard and devote resources to marketing, SEO, and any other method of getting people to your site. But remember: as you advertise to customers, you also advertise to hackers and malicious users. Also keep in mind that attackers are not always seeking gain; sometimes a site is attacked simply “because.” If you want to do some research yourself on which sites are attacked, why, and how, go to Google, pick an organization, and search for its name followed by the phrase data breach. You will find no shortage of results, showing that large enterprises as well as smaller businesses are susceptible to data security problems and that all are targets of intrusion attempts.


Say hello to my little friend… MFA!

MFA offers additional protection against malware, attacks against web applications, point-of-sale attacks, insider abuse, and loss of devices. Approximately four out of five data breaches would have been stopped, or would have forced the intruder to change tactics, if an alternative to password authentication such as MFA had been in use. Many organizations are concerned about the cost of implementing an MFA solution and the change to workflow, but we must remember that these costs directly decrease risk. We must also keep in mind the substantial and often incalculable cost of a data breach in the forms of lost records, customer trust, and negative public image. For instance, the breach at Code Spaces destroyed such a large portion of the company’s data, including backups, that it decided restoring all services was not feasible and subsequently closed its doors. Target experienced not only a financial loss from its data breach, but also a big hit to customer loyalty, satisfaction, and trust, which surely becomes a financial cost in the long term.


Now what?

Let’s take a look at a few steps we can take to increase security using MFA within Amazon Web Services (AWS), the current leader in IaaS.

  • First and foremost, it is always a good idea to review compliance requirements to verify whether PCI, HIPAA, or other standards you are subject to require MFA, and to what extent.
  • Create a strong password and enable MFA for root accounts. Use this root account to create an Identity and Access Management (IAM) account, which will be used for administrative tasks going forward. Do not use the root account for everyday administration.
  • Enable MFA for all other accounts as well, especially any account with administrative rights.
  • Explore linking MFA to API access to services. This allows you to protect certain tasks, such as TerminateInstances, by requiring MFA before that task can be initiated.
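As a sketch of that last idea, an IAM policy condition can require MFA for sensitive actions. The following hypothetical policy denies ec2:TerminateInstances for any request that was not authenticated with MFA (verify the condition key and operators against current IAM documentation before relying on it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyTerminateWithoutMFA",
      "Effect": "Deny",
      "Action": "ec2:TerminateInstances",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
      }
    }
  ]
}
```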

If you would like to find out more on using MFA, contact your cloud service provider or contact AppliedTrust. We can help you secure your site and your business.

IAM Now Using Managed Policies for Everything

If you work with AWS IAM policies on a regular basis, chances are you have lots of groups defined with inline policies attached. If instead you have a lot of users with inline policies attached directly, go back and do what I said in the last sentence. Just as in Active Directory, applying IAM policies (read "file/machine permissions") directly to user accounts is guaranteed to fill your authentication structure with snowflakes fit for a ski hill. Creating consistent access structures is paramount to maintaining the ongoing security of your AWS presence.

Anyway, back to those inline policies. The workflow so far has been to create a group, create a policy for that group, and then add users to the group so they get to assume the permissions of the policy.  Need to change what a group can do? Go edit the inline policy for the group as needed. Need to find out why a user can’t (or can) access an S3 bucket? Open the user page, scroll down to inline group policies, see what the user has inherited, and move your way up the chain. Want to see a comprehensive list of the IAM entities that have access to create new buckets? That’s where things get tricky. Finding this information would require someone to sit down and open every user, group, and role created to examine their inline policies and look for the s3:CreateBucket action.  This can make auditing access and maintaining adequate security a chore, which likely means it won’t happen.
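The audit problem boils down to searching every policy document for an action. A simplified Python sketch of that search, operating on policy documents you have already fetched (the document shape mirrors IAM's JSON policy language; the API calls to retrieve them are omitted):

```python
def entities_with_action(policies, wanted="s3:CreateBucket"):
    """Return names of entities whose policy documents allow `wanted`.

    `policies` maps an entity name (user/group/role) to a policy
    document: a dict with a "Statement" list, as in IAM's JSON
    policy language.
    """
    matches = []
    for entity, doc in policies.items():
        statements = doc.get("Statement", [])
        if isinstance(statements, dict):  # IAM allows a bare statement object
            statements = [statements]
        for stmt in statements:
            actions = stmt.get("Action", [])
            if isinstance(actions, str):  # ...and a bare action string
                actions = [actions]
            if stmt.get("Effect") == "Allow" and wanted in actions:
                matches.append(entity)
                break
    return matches
```

Note this naive sketch ignores wildcards like "s3:*", which a real audit tool would have to expand.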

Enter managed policies. This feature was quietly rolled out in February and, like most AWS feature upgrades, was conveniently placed in front of your face for you to ignore while you go about your normal business. Here’s the thing: don’t. Aside from having 106 canned policies available for public use at the time of this writing, managed policies solve a lot of issues we have been working around inconveniently for years. I wanted to share a few key thoughts about the new features and how they can be useful.


  1. Staying Current

Just like managed policies themselves, new minor AWS features seem to roll out all the time. Say you want to give a user full access to RDS; that requires a lot of specific permissions. Amazon has done a good job of providing examples to use for inline policies, but what happens when suddenly you can store backups in Glacier? You have to go back and copy Amazon's new example. Managed policies provide a way to attach Amazon-managed policies to your own entities, so that as Amazon rolls out new features that require additional access, your users automatically get the rights they need.

[Screenshot: policy details for an Amazon-managed policy]

I will mention, though, that you should be VERY careful about using access policies managed by any third party in a production environment or any environment containing sensitive data. Even though Amazon is a trustworthy bunch, you should always maintain tight control over your data. These managed policies make it easy for your developers to try things out, but I will always recommend making a copy as your own managed policy and manually syncing changes after conducting a security review.


  2. Policy Versioning

We’ve all done it. You say to yourself on a Friday afternoon, “I’ll just make this one permissions tweak, grab a beer, and these backups will be able to run over the weekend.” Unfortunately, you ignored your organization’s change management policies and even forgot to copy and paste the old policy into your favorite text editor for posterity while saving the live change in the console. In the process you overwrote an obscure ARN that took you two weeks to figure out in the first place, and now you’ve broken something. Managed policies automatically save up to five versions of the policy, and rolling back is as simple as clicking “Set as Default.” In addition, there’s a handy timestamp you can use to search your CloudTrail logs and figure out who made previous changes.


Alternatively, you could always just look back at what you missed and save a new working version as well. There are options.


  3. Access Auditing

As I said before, poring through a large number of IAM entities looking for a specific action or resource is a pain. Now all you have to do is look at the policy and see what it’s attached to. Keep in mind, however, that this only works if you have truly committed to using only managed policies, because inline policies will still work side by side.

[Screenshot: the entities attached to a managed policy]


Here’s a hint: Create an administrator policy that only allows the creation of managed policies to avoid people making the mistake of adding inline policies in a pinch. A simple ‘deny’ in your admin policy takes care of it.
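A sketch of such a statement, assuming the goal is to block the inline-policy APIs while leaving managed-policy administration alone (verify the action names against current IAM documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "NoInlinePolicies",
      "Effect": "Deny",
      "Action": [
        "iam:PutUserPolicy",
        "iam:PutGroupPolicy",
        "iam:PutRolePolicy"
      ],
      "Resource": "*"
    }
  ]
}
```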


Still Using FTP? You Should Be Ashamed!

If you call yourself a sysadmin or IT person and are still using FTP, you should be ashamed of yourself. The original specification for the cleartext File Transfer Protocol, or FTP, was written by Abhay Bhushan and published as RFC 114 on April 16, 1971. Although completely groundbreaking at the time, not to mention a trailblazer for future file transfer protocols, FTP has outlived its usefulness in a time of billion-dollar digital bank heists and hijackings of millions of credit card numbers and protected health information records.

The reason FTP is so insecure is that it's a completely cleartext protocol, which means that all data, including usernames and passwords, is transmitted across the network in cleartext. It's the equivalent of a postcard sent via snail mail: anyone and everyone who touches it as it travels from origin to destination can plainly read what's written on it. It's almost as silly as printing everything you need to make a credit card transaction on the physical credit card itself... but that's a topic for another rant.

So why in an age of cyber heists, boundless information gathering, and profiling are we still using such an outdated and insecure technology? Simple: Because our trusted application vendors, software creators, and service providers are often lazy. As consumers, we tend to assume that when we sign up for a new product or service and provide any accompanying data–be it personal, financial, or medical–that the company behind that product or service does everything in its power to protect our information. In reality, this is often not the case.

As an IT auditor, I'm surprised at how many companies, hospitals, insurance companies, financial institutions, and city and federal government agencies are using antiquated and insecure methods for storing and transferring our data. I'm shocked when I find that their supposedly cutting-edge, high-tech vendors, which create products for regulated sectors such as the healthcare and financial industries, typically recommend and implement outdated protocols such as FTP.

Medical vendors, in my opinion, can be some of the worst offenders. These vendors–whose clients are always required to comply with HIPAA regulations in order to act as custodians of our electronic protected health information (ePHI)–regularly configure their products in a manner that is not compliant. It’s not that their products are incapable of being HIPAA compliant; it’s that they support outdated, insecure protocols such as FTP. To make it worse, these same protocols are typically the ones that get enabled because it’s easy, and it’s not the vendor’s responsibility to be HIPAA compliant: It’s the customer’s.

Often when confronted about the reasons such non-compliant protocols are available and supported within their products, vendors say that they don’t get paid to make it secure; they get paid to make it work. Unfortunately, most customers using these vendors will default to trusting the expertise and professionalism of said vendors, which results in non-compliance and, in many cases, security incidents. As an auditor and system administrator, I call out any vendor that supports and implements protocols that are clearly prohibited for the industry that vendor is supposedly advancing.

So as a system administrator or even a consumer, the next time you hear someone at your company, or a software vendor, or even a trusted connected partner mention FTP, be bold and tell the person that although you respect how FTP was a modern marvel for its time, and undoubtedly paved the way for file transfers of the future, you would really prefer to stop living in the 70s and implement a more modern and secure encrypted alternative. Maybe try SCP or SFTP, which are based on the SSH protocol, to bring you into the 90s!
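To make the postcard analogy concrete, here is a toy Python simulation of what an eavesdropper sees on an FTP control channel. The credentials are fake and no real FTP server is involved; a local socket pair stands in for the network:

```python
import socket

# A toy simulation of the FTP control channel (fake credentials).
# FTP sends login commands as plain ASCII, so anything sitting between
# the client and the server sees them exactly as typed.
client, wire = socket.socketpair()
client.sendall(b"USER alice\r\nPASS s3cret\r\n")

sniffed = wire.recv(1024)  # what a network eavesdropper would capture
client.close()
wire.close()

# The password is sitting right there in the captured bytes:
assert b"PASS s3cret" in sniffed
```

With SFTP or SCP, the equivalent exchange happens inside an encrypted SSH channel, so a capture at any hop yields only ciphertext.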

Monitoring EC2

My dirty little secret is that I’m a bit of a monitoring junkie. I like to monitor all the things. I’ve been known to monitor my stereo receiver’s network connection so that I can make sure AirPlay is always available. That’s why I was surprised one afternoon a few years ago when several of my Amazon Web Services (AWS) EC2 instances started dropping like flies, with no warning. Typically I’d have seen some indication that the server was having problems, right? Wrong.

That was my first foray into AWS servers, and it turns out that AWS retires old hardware from time to time, which requires you to stop and restart affected instances every once in a while. If you don't, AWS will do it for you, on its schedule. I should have known about this, you say? Maybe. I could have logged into the console and reviewed each host to see whether there were any notices attached to it. But why would I, if they weren't showing problems in my monitoring system of choice (Nagios/Icinga)? There had to be a better way.


Enter Amazon’s EC2 API. Using this super handy interface (thanks, Amazon!), we set up a check through our monitoring server to see whether our EC2 instances had any status events we needed to be concerned about. You can be sure we ran around and added this check to ALL of our instances as soon as it was ready. Using our access and secret keys along with the AWS EC2 API tools, we were able to query our AWS environments for systems with alerts attached to them: no more manually logging in and clicking through each instance to locate potential problems. This is just one of many things that can be monitored using the API. We also have checks that run and report on backups, something else we had historically handled through the console. All of our AWS Nagios checks (including check_ec2_status) are stored and available in our GitHub repository.
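At its core, a check like this just classifies the scheduled events attached to an instance. Here is a simplified Python sketch, not our actual check_ec2_status plugin; it assumes a record shaped like one entry of the EC2 DescribeInstanceStatus response, where completed events are marked with "[Completed]" in their description:

```python
def check_instance_events(status):
    """Return (exit_code, message) for one instance's status record.

    `status` is shaped like one entry of DescribeInstanceStatus's
    InstanceStatuses list. Events whose description contains
    "[Completed]" have already happened and shouldn't alert.
    """
    events = status.get("Events", [])
    pending = [e for e in events
               if "[Completed]" not in e.get("Description", "")]
    if pending:
        summary = "; ".join("%s: %s" % (e["Code"], e["Description"])
                            for e in pending)
        return 2, "CRITICAL - scheduled events: " + summary
    return 0, "OK - no scheduled events"
```

A full plugin would fetch the records via the EC2 API, feed each one through a function like this, and exit with the Nagios code.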

Because our monitoring server is tightly integrated with our ticketing system, tickets get opened automatically whenever the status check fails. This ensures that we have a ticket open for events such as scheduled retirement so that we can reboot or move EC2 instances on our own schedule. This helps us to ensure that we’re meeting SLAs, notifying appropriate stakeholders, and ensuring that we’re following proper change control procedures. No more unexpected terminations and/or reboots!


Monitoring AWS notification events is a great way to help ensure the best possible availability for your EC2 environment. Using Amazon’s API and your organization’s default monitoring or ticketing system will make sure your admins are tuned in and taking action to fix issues within your Amazon environment.

So, now that you know how we monitor our EC2 instances, we want to know how you’re doing it! Are you? If you want help implementing monitoring for your EC2 instances (or any other servers), please reach out and let us help find the solution that’s right for you.

A New Dawn: Decompression

In the Burning Man community lexicon, the term 'decompression' has varied meanings depending on the individual. An underlying theme is reflecting on the series of experiences you were immersed in while easing yourself back into the "default world" of daily living. On my first trip out to the Playa, I couldn't even begin to fathom the experiences that awaited me, and boy did the event blow any ideas or expectations I had out of the water. Last month I was fortunate to attend the 31st annual Chaos Computer Congress (31c3), held by the Chaos Computer Club (CCC) in Hamburg, Germany, and in the little more than two weeks since returning, I've realized that I'm undergoing a similar type of decompression.


Over the past five-plus years, the final days of December have been a bit blurry thanks to early-morning live streams of the talks held at the Congress, with fascinating topics ranging from technical discoveries to policy and operations. Trading the couch-based live stream at home for a real seat in a Saal, discussing topics with attendees of all different nationalities and technical backgrounds, was an experience I could not have imagined. When speakers are talking about software libraries and packages used on a near-daily basis, and those authors are in the room listening to the same lecture, there is an electricity that is hard to replicate. No matter how much planning and scheduling one might do, it's almost impossible to attend every session; thus the online repository of talks offered by CCC and the ability to download them for offline viewing. I'm pretty sure there were a number of raised eyebrows on the plane ride back when discussions of nuclear weapon treaties, mobile phone network compromises, and Advanced Persistent Threat (APT) research sessions flashed across my laptop screen.


Over the past couple of weeks since my return, I’ve slowly consumed the remaining sessions that I was unable to attend in person, reviewed pictures of the truly awesome hacker space Assembly areas, and recounted tales to others. That’s when it hit me: This is decompression, like returning from days in the Playa. I was still digesting the experience and trying to understand it. For me, the range of technical and non-technical topics hit on common themes that we in the information security space discuss, practice, and preach.


One theme that resonated is the multitude of new attack vectors and approaches threatening both personal and corporate environments; it is the active exploitation of the trust chain, and the failure to validate trust, that provides the opening for them. Sophisticated attack tools and approaches once associated with criminal or nation-state actors are available in the wild, as they always have been. Yet the attention that high-profile compromises have received over the years seems to be driving many organizations toward a point-product or compliance-checklist approach to securing an environment.


Security operations have made great advancements over the years, thanks to the information sharing and work done by security researchers and incident responders. The wealth of technical and operational information that can be gleaned from these resources illustrates that the role of security cannot remain a static, monolithic segment of IT operations; it may never be a revenue-generating portion of a company’s budget, but impacts to the bottom line are felt when operations falter.


Looking into 2015, a more nimble and integrated approach to security through methodologies such as Secure DevOps offers opportunities to tear down silo walls and operate in a more holistic manner. Keeping pace with the developing threat landscape may seem daunting, and nearly impossible by some measures. But through diligence, flexibility, and active engagement, security operations can continue to make advancements and help provide protection measures as a community.

Cloud-based DDoS protection: a cure, or a band-aid?

No love lost for DDoS

It’s no secret that distributed denial of service (DDoS) attacks are on the rise. Almost any company that has hosted a website for more than a few years can attest that DDoS attacks are more prevalent, swift, and effective than ever before. In order to prevent these attacks, many companies are turning to cloud-based traffic filtering providers such as CloudFlare, Incapsula, or Prolexic for help. These services mediate between the outside world and your web servers, inspecting traffic, monitoring and logging activity, and serving content for your site. This is a particularly attractive option if your site is already hosted in the cloud, because hardware protection and BGP rerouting services aren’t an option in most cloud hosting environments.

Neustar 2014 Annual DDoS Attacks and Impact Report


Directing traffic through the filtering infrastructure

Cloud-based DDoS protection providers rely on DNS changes made in your domain to reroute traffic through their network. After these changes are applied, any web user/bot/attacker that attempts to access your site using the DNS name will be redirected through the provider’s infrastructure, as shown in the pretty drawing below.

An Overview of CloudFlare DDoS Protection



However, isn’t an assumption being made here? The assumption is that an attacker will try to attack using the DNS name, get stopped by the DDoS protection, sigh, and give up. In reality, a human attacker motivated to hack a website or bring it to a halt won’t throw in the towel there. And if the origin is not secured correctly, the attacker will soon stumble onto a gaping hole into which he can pour malicious traffic until the cows come home. That hole is the unprotected origin IP address.

Discovering the origin IP may allow an attacker to completely circumvent cloud-based DDoS protection. Most DDoS protection providers have a step-by-step setup process that brings you up to the point of the DNS cutover but does not address properly securing the origin after traffic is being routed through the traffic filtering infrastructure. If additional security measures are not in place to protect the host behind that IP, the attacker can pound the IP address with traffic and the protection service will be none the wiser.

Discovering the origin

Finding the origin IP address is simple with services such as DNS History, which let you enter a domain name and see all previous IP addresses associated with it. If this search is fruitless, there are often multiple services running on the same server, so a quick search for common subdomains (for services such as FTP or mail) may yield an IP address. These subdomains can also be discovered using web-based tools that attempt to return all DNS records and associated IP addresses in a domain. Candidate IP addresses can then be tested by creating an entry for the domain in your hosts file pointing at each address you've discovered.

After you’ve found and tested the IP address, congratulations, you’ve circumvented DDoS protection! It probably took less than 5 minutes. You are now free to aim your botnet and unleash DDoS, cross-site scripting, and SQL injection attacks to your heart’s content, and the administrators will be scratching their heads at why their filtered traffic graphs look completely normal.

Protecting the origin

Now that we know the path of attack, we can take steps to secure it. Securing the origin IP address itself is difficult, given that discovery is simple using the steps above, but it can be done. One option is to switch the IP address of the origin host right before hiding it behind the DDoS protection provider, preventing historical DNS services from recording the address before it is hidden. Separating auxiliary services such as FTP and mail onto dedicated hosts can also help prevent the production web IP from being disclosed by other DNS records.

Setting firewall rules correctly is the best way to protect the origin host. DDoS protection providers should publish a list of IP addresses or ranges used by the protection network. Restricting access to these and other trusted IP ranges ensures that traffic from attackers is filtered before it reaches the origin. The ranges may change occasionally, so if the protection service suddenly has trouble reaching your site, check for updates to the provider's IP ranges.
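A sketch of such rules, assuming iptables and two hypothetical provider ranges (substitute the list your provider publishes). The script builds the commands as text so you can review them before piping the output to a root shell:

```shell
# Hypothetical ranges; substitute the list published by your provider.
PROVIDER_RANGES="198.51.100.0/24 203.0.113.0/24"

# Build allow rules for web traffic from the protection network,
# followed by a default drop for everyone else.
RULES=""
for range in $PROVIDER_RANGES; do
  RULES="${RULES}iptables -A INPUT -p tcp -s ${range} --dport 443 -j ACCEPT
"
done
RULES="${RULES}iptables -A INPUT -p tcp --dport 443 -j DROP"

# Review the generated rules, then pipe them to sh (as root) to apply.
echo "$RULES"
```

Repeat the same pattern for port 80 and any other ports the protection service fronts, and remember to allow your own management addresses before the final drop.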

If your origin host is running on physical hardware, you also have the option of using a hardware-based solution or BGP scrubbing. There are many great options for DDoS protection regardless of your setup, budget, and service-level requirements, so do your research carefully and be sure to consider user reviews when shopping.

As with all areas of IT, security in web hosting must be approached holistically and with continuous vigilance. Security can’t be obtained by slapping a band-aid over your infrastructure and forgetting about it (as important and effective as that band-aid may be). If underlying layers of security haven’t been considered, it may only take one attacker to get under that band-aid and wreak havoc. Following best practices at every level is the only effective way to consistently minimize the threat of attack!

Setting Up An Android Mobile Penetration Testing Environment

Mobile applications are not new to the realm of information security. The public has repeatedly questioned why certain popular applications require privileged access on a user's phone when their functionality is so basic, and, more recently, certain applications have been shown to send more data back to their creators than users initially realized or agreed to. Organizations such as the Open Web Application Security Project (OWASP) now provide detailed information about mobile application vulnerabilities, such as its Top 10 Mobile Risks. Resources such as these help IT security professionals analyze mobile applications for vulnerabilities such as information disclosure (in both legitimate and malicious applications).


Today, I’m sharing two methods for setting up an Android test environment.


Current methods of setting up a mobile penetration testing environment:

Using the Android Emulator available in the Android SDK:

  1. Download and install the Eclipse IDE from the Eclipse website.
  2. In Eclipse, use the Eclipse Marketplace to install Android Development Tools (ADT).
    1. Help -> Eclipse Marketplace.
    2. Search for Android Development Tools and install “Android Development Tools for Eclipse.”
  3. In Eclipse, install any needed Android APIs and plugins.
    1. Window -> Android SDK Manager.
    2. Install the latest Android SDK Tools.
    3. Install the latest version of Android necessary for the mobile application being tested.
    4. Many mobile applications will require Google Play Services, so be sure to install Extras -> Google Play Services.
  4. Use the Android Virtual Device (AVD) Manager to create a new device for testing.
    1. Window -> AVD Manager.
    2. Create an AVD using the Android version and CPU necessary to run the mobile application being tested.
    3. Enable snapshots so that the AVD state is saved.
  5. Start the AVD via command line and enable proxy.
    1. In your terminal shell, browse to your Android SDK directory: android-sdks/tools/.
    2. Run emulator -avd <your AVD name> -http-proxy http://<proxy IP>:8080
    3. This assumes your proxy tool is listening on <proxy IP> at port 8080.
  6. Install the mobile application on the running AVD via the Android Debug Bridge (ADB):
    1. Make sure you can see your AVD: adb devices.
    2. Install the mobile application: adb install <path_to_apk>.
    3. ADB can also be used to forward ports, run shell commands, pull files from devices, or push files.
  7. Add the ZAP, Fiddler, etc. SSL certificate to the list of Trusted Certificates on your device (ADB is useful here).
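For that last step on a rooted device or writable emulator image, the proxy's CA certificate must land in Android's system trust store as a PEM file named after its subject hash. This sketch (the filenames and the <hash> placeholder are illustrative) echoes the commands for review rather than running them:

```shell
# Placeholder filename for the exported ZAP/Fiddler root certificate.
CERT="proxy-root-ca.cer"

# Convert DER to PEM, compute the subject hash Android expects,
# then push the renamed file into the system certificate store.
echo "openssl x509 -inform DER -in ${CERT} -out proxy-ca.pem"
echo "openssl x509 -inform PEM -subject_hash_old -noout -in proxy-ca.pem"
echo "cp proxy-ca.pem <hash>.0"
echo "adb root && adb remount"
echo "adb push <hash>.0 /system/etc/security/cacerts/"
```

On devices where you can use the Settings app instead, simply pushing the certificate to /sdcard/ and installing it as a user certificate is often enough.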


Using a physical mobile device:

  1. Acquire a mobile device that can be wiped and used for testing.
  2. Unlock and root the device using an open-source tool found online.
  3. Install the ProxyDroid application:
    1. Enable Auto Connect.
    2. Set the Host to your proxy IP.
    3. Set the Port to your proxy port.
    4. Use the HTTP proxy type.
  4. Install the mobile application that will be tested.
    1. The Dropbox application is useful for getting necessary files onto the physical device.
    2. The Terminal Emulator application is useful for making command line changes on the physical device.
  5. Add the ZAP, Fiddler, etc. SSL certificate to the list of Trusted Certificates on your device.
  6. Enable ProxyDroid.
  7. Check proxy tool for data.


Through personal experience, I’ve come to prefer testing with my own physical mobile device. I don’t necessarily recommend using your main phone, but maybe that old one lying around since you upgraded to a new Galaxy S5. Testing with the Android Emulator proved frustrating because of buggy snapshot functionality and extreme slowness. I wouldn’t give up on the Emulator just yet, though, as many factors could contribute to those issues. Try setting up both and see what you prefer!


Below are some links to help you get started in mobile application penetration testing.


OWASP guides for mobile testing:


Tool for mobile application testing:

7 Steps to Consider While Building Out Your Graylog2 Environment

Graylog2 is an awesome tool that can provide insight into your environment to help you catch small problems before they become raging infernos. It’s also great at just making things more visible. Perhaps you’ll catch something in your logs that could be changed to make things run more efficiently. Stop looking through those boring log files! Get some Graylog2 in your life, but first let’s go over some recommended prerequisites.

1. Design an environment that fits your needs.

It’s easy to get in over your head when setting up your own Graylog2 log analysis environment. Without proper preparation, you could end up with an environment that does not fit your needs. Most likely, you’ll end up with a Graylog2 server that cannot process the volume of logs being forwarded to it, which can lead to dropped logs and unreliable data. There are a few steps you can take to avoid these problems:
Determine how many devices will be forwarding logs to Graylog2.
Estimate the number of logs being generated per device per second.
Build your servers with enough resources to handle the estimated messages per second, with room for growth.
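For example, a back-of-the-envelope estimate with hypothetical numbers:

```shell
# Hypothetical fleet: 50 devices averaging 10 log messages/sec each.
DEVICES=50
LOGS_PER_DEVICE_PER_SEC=10

TOTAL_PER_SEC=$((DEVICES * LOGS_PER_DEVICE_PER_SEC))
PER_DAY=$((TOTAL_PER_SEC * 86400))   # 86400 seconds in a day

echo "~${TOTAL_PER_SEC} messages/sec, ~${PER_DAY} messages/day"
```

Size for peak rates rather than averages, and leave headroom for growth.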

2. Build your environment with scalability in mind.

Individual components that make up a Graylog2 environment were made to be scalable. Elasticsearch allows a new node to be added to a cluster with minimal configuration. A Graylog2 server can also be added to the environment with minimal changes to the Graylog2 Web Interface configuration file. Graylog2 also integrates easily with load balancers via its REST API, which supports HTTP ALIVE or DEAD checks on Graylog2 servers so that a node having issues can be removed from the cluster.

3. Optimize data retention to get the most out of your environment.

By default, Graylog2 is configured to retain a total of 20 indices, with each index holding close to 20,000,000 events.

Graylog2.conf code snippet:
elasticsearch_max_docs_per_index = 20000000

# How many indices do you want to keep?
# elasticsearch_max_number_of_indices*elasticsearch_max_docs_per_index=total number of messages in your setup
elasticsearch_max_number_of_indices = 20

Unfortunately, Graylog2 does not yet allow for indexing and rotation based on date, but it is likely to be added in a future release. It is possible to tweak the elasticsearch_max_docs_per_index setting in the Graylog2 configuration file, so that one index holds close to a single day’s or week’s worth of logs. This may make it easier to manage data retention by date, but it is not exact. It is important to note that you should not set your data retention so high that disk space runs out on the Elasticsearch nodes.
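To illustrate the retention math from the configuration comment above, suppose (hypothetically) your environment writes about 43,200,000 messages per day and you tune one index to hold roughly one day of logs:

```shell
# Hypothetical volume: ~43,200,000 messages/day, one index per day.
ELASTICSEARCH_MAX_DOCS_PER_INDEX=43200000
ELASTICSEARCH_MAX_NUMBER_OF_INDICES=20

# max_number_of_indices * max_docs_per_index = total messages retained
TOTAL=$((ELASTICSEARCH_MAX_NUMBER_OF_INDICES * ELASTICSEARCH_MAX_DOCS_PER_INDEX))
echo "Retains ~${TOTAL} messages, roughly ${ELASTICSEARCH_MAX_NUMBER_OF_INDICES} days"
```

Before committing to numbers like these, confirm the Elasticsearch nodes have the disk to hold the full retained total.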

4. Install and configure NTP!

It’s critical that you keep your Graylog2 servers synced with a time server, or you may end up wasting time looking at the error below and wondering what the heck is going on.

(Screenshot: Graylog2 NTP time-skew error)

Search the Graylog2 Google Group for this error, and you may find a post by a certain blog writer (ahem) that had to troubleshoot this issue a while back. Time skewing causes your Graylog2 nodes to get out of sync. Don’t let that happen to you!
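A minimal ntp.conf sketch (the public pool hostnames below are the standard NTP pool; point at an internal time source instead if you have one):

```
# /etc/ntp.conf (fragment)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
```

After restarting ntpd on each Graylog2 node, ntpq -p will show whether the servers are reachable and the offset is converging.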

5. Utilize the LDAP authentication functionality of the Graylog2 Web Interface.

The built-in LDAP authentication module in Graylog2 is great. For one, you don’t have to keep track of another account in your ever-growing password safe. Also, it’s just plain smart for security. The fewer user accounts out there being neglected, the less likely that one of them will be compromised!

I recommend creating an Active Directory group and adding those who will be using Graylog2 to that group. Be sure to set the default role as “Reader.” If a user needs administrative rights, just have that user log in one time so that the account is created, and then manually bump the user up from “Reader” to “Administrator.” Easy! Plus, it’s better than making everyone Administrators by default.

6. Monitor key components in your Graylog2 environment.

If you run Nagios or Icinga internally, it would be extremely beneficial to monitor the Graylog2 environment. Monitoring the graylog2-server, graylog2-web-interface, mongod, and elasticsearch services is a great way to ensure that these key services remain running. A loss of any of those services could be detrimental to log processing.

AppliedTrust offers two open-source Graylog2 service checks, available on GitHub. One assists with monitoring the Graylog2 server itself. The other can be used to run queries on the log data stored by Graylog2; for example, have it alert whenever a query returns results because a user ran “sudo su” or some other privilege-escalation command. It’s super simple and very effective!

The Elasticsearch API provides some awesome methods to determine the health of your Elasticsearch cluster, including pulling the cluster status directly. The Elasticsearch documentation recommends a Nagios plugin for integrating your monitoring environment with your Graylog2 environment. Using it, you can have a Nagios or Icinga check that verifies a “green” cluster status and alerts if it ever goes “yellow” or “red.”
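The mapping from cluster status to Nagios exit codes (0 = OK, 1 = WARNING, 2 = CRITICAL) can be sketched as below; in a real check the status string would come from the cluster health endpoint, e.g. curl -s http://localhost:9200/_cluster/health, with host and port adjusted for your cluster:

```shell
# Map an Elasticsearch cluster status string to a Nagios exit code:
# 0 = OK, 1 = WARNING, 2 = CRITICAL (unknown/unreachable is critical too).
status_to_exit() {
  case "$1" in
    green)  echo 0 ;;
    yellow) echo 1 ;;
    *)      echo 2 ;;
  esac
}

echo "green -> $(status_to_exit green), yellow -> $(status_to_exit yellow), red -> $(status_to_exit red)"
```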

It is very important that you monitor disk space usage on servers in the Graylog2 environment. The root partition of all servers should be monitored to prevent any operating systems from crashing. The Elasticsearch data directory (/var/elasticsearch/data/) is also a key partition to monitor, because letting it max out could cause your Graylog2 environment to crash.
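A simple threshold check along these lines (the 85% threshold is an arbitrary example) can back a Nagios or Icinga alert:

```shell
# Warn when a partition's usage crosses a threshold. "/" is checked
# here; on Elasticsearch nodes also check the data directory
# (/var/elasticsearch/data/ in this setup).
THRESHOLD=85
PARTITION="/"

# df -P gives stable POSIX output; field 5 is the use percentage.
USED=$(df -P "$PARTITION" | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "WARNING: ${PARTITION} at ${USED}% (threshold ${THRESHOLD}%)"
else
  echo "OK: ${PARTITION} at ${USED}%"
fi
```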

It is also a best practice to monitor key system components such as CPU utilization, memory usage, and system I/O to watch for any irregular activity on the core Graylog2 servers.

Oh yeah, don’t forget to monitor the ntpd service!

7. Don’t depend on Graylog2 as your only log retention tool.

Don’t rely on any single tool for log retention, even Graylog2! Yes, it has all the nice bells and whistles, but it can be unstable at times, so it is wise to store all of your logs in a secondary location as well. If possible, have all of your devices log to a centralized syslog server. This centralized server can store logs in a flat file format and then forward a copy on to the Graylog2 servers.
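With rsyslog, for example, that split can be sketched as follows (the hostname and port are placeholders; @@ forwards over TCP, while a single @ would use UDP):

```
# /etc/rsyslog.conf (fragment)
# Keep a local flat-file copy of everything...
*.* /var/log/all-messages.log
# ...and forward a second copy to the Graylog2 syslog input over TCP.
*.* @@graylog.example.com:514
```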