Friday, September 28, 2012

How do you think they'll attack your iOS app?

Write an app of any intrinsic value (whether in user data, transactions, or something else), and someone is going to attack it. It's 2012, after all, and I'm sure no one reading this will be surprised to hear that there are computer miscreants out there who will attack apps of value.

The thing is, though, it's been my experience that the very people who write the apps often fail to sufficiently understand and internalize just how the attacks will happen. Sure, we've all read about various hacks, but to many people those are nothing more than abstractions. When the target is your own work, however, it becomes real. So let's consider that a bit.

Which of the following do you think are likely to happen to your app?

  • Do you think your attacker will install and run your app to try to learn how it works? Sure, that's a given, right?
  • Do you think your attacker will work his/her way through all the views and data fields in your app, and perhaps try attack dictionaries for the big bad boys (e.g., SQL Injection, Cross-Site Scripting (XSS)) and so on? Sure, that too is a given, right? Even if those attacks aren't in any way relevant to the technologies in your app, they're going to try them anyway.
  • Do you think your attacker will look through all the files in your app's sandbox (e.g., its ~/Documents folder), looking for potentially damning information like a user ID, password, or session token in a .plist file? Yup. Plenty of tools make that one real easy too. (The sketch after this list shows how such data typically ends up there.)
  • Do you think your attacker will configure a network proxy to intercept all of your app's communications to/from its server(s), looking for login credentials, session tokens, etc.? Oh yeah, still well in the realm of feasibility here.
  • Do you think your attacker will use that same network proxy to try to get your app to connect to a server that he configures -- perhaps with a self-signed SSL certificate, or with a signed certificate where the root CA has been installed as a profile on the attacker's iOS device? Ruh roh! (Now I'm starting to hear "They can do that?!")
  • Do you think your attacker will examine your app's executable file, doing surface analysis of it to look for strings, symbols, and other telltale info in the binary itself? Of course. But executables in the App Store are encrypted, you say? On a jailbroken device, an attacker can use a debugger to dump the unencrypted executable just fine. (It has to execute, after all...)
  • For that matter, do you think an attacker will load your app on a jailbroken device, put it into a debugger, and single step through it, looking for crypto keys and other sensitive data? Ruh roh, indeed!
  • Do you think an attacker will try to tamper with the Objective-C runtime by intercepting messages to/from the various objects in your app?
  • Do you think an attacker will attempt to inject messages into your app in that debugging session, and get your app to misbehave?
That list can continue on and on for quite some time. I wrote it in order of increasing attack complexity and difficulty, but every one of these things is achievable today using tools and techniques available to any attacker. This list isn't science fiction or "Hollywood" in any way.
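To make the sandbox-rummaging point concrete, here's a minimal (and deliberately bad) Objective-C sketch of the kind of code that puts credentials exactly where an attacker expects to find them. The file name, key names, and variables are hypothetical placeholders, but the pattern shows up in real apps all the time:

    // Anti-pattern: persisting credentials to an unprotected .plist in ~/Documents.
    // (The variable and file names below are hypothetical.) Anything written this way
    // sits in plaintext on the device, where jailbreak shells, backups, and forensic
    // tools can read it trivially.
    NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                             NSUserDomainMask, YES) lastObject];
    NSString *path = [docsDir stringByAppendingPathComponent:@"prefs.plist"];

    NSDictionary *prefs = [NSDictionary dictionaryWithObjectsAndKeys:
                              username,     @"username",
                              sessionToken, @"sessionToken", nil];
    [prefs writeToFile:path atomically:YES];

If your app does anything like that, the sandbox-browsing step above is where the attacker wins.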

The question you should be asking is whether an attacker would go to this much trouble to attack your application. Well, that depends on the potential gain and the likelihood of being caught, among other things.

And on the point of getting caught, your attacker has all the advantages and you all the disadvantages. Every one of these attacks can be done in the safety and comfort of the attacker's "laboratory", with pretty much zero chance of being caught.

What you're left with, then, is the question of the potential gain to the attacker, and that's not something I can answer for you.

What can you do about it? I'll address that in Part 2 of this blog entry within the next few days. And, of course, Gunnar (@OneRaindrop) and I (@KRvW) will be talking about issues like this at our Mobile App Security Triathlon in November.

Cheers,

Ken van Wyk

Thursday, September 27, 2012

OAuth 2.0 - Google Learns to Crawl

Good news - Google is shipping OAuth 2.0 tools via Google Play. Wish this had happened years ago, when the Android platform shipped, but it's good it's happening now.

OAuth 2.0 is not perfect from a security perspective, but as Tim Bray says, this is Pretty Good Security meets Pretty Good Usability. Makes sense to me - we have to stop using passwords, and we have to do so in a way that won't have developers rioting in the streets and burning cars. But why be happy about shipping something that has a 70-page threat model in its wake? This dev comment from the blog announcement says it all: "After implementing my own authentication for my app, I really would have appreciated something like this!"

The point is, "Out of the crooked timber of humanity no straight thing was ever made." This is forward progress, because custom access control implementations will almost certainly be worse - and yes, I have seen that many times.

So yes, it's progress. Why did it take so long? Who knows. But here we are.

It's helpful to track evolution through a Crawl - Walk - Run maturity curve.

From where I sit, Crawl has been achieved with this release - a standard way to register your app, get a token, and use it - plus, going forward, many apps that do not rely on passwords. But what about Walking and Running?

Walking should be about not just using a standard protocol as an improvement over ad hoc access control, but also using the protocol safely. It's an access control protocol, after all; its failure modes are ugly and have consequences for users and platforms. A chainsaw is great for cutting timber, and it's also an excellent way to cut off your own limb(s). Use of a safer protocol is desirable, but guidance on safe use is required to get full value. This release is not quite there yet. OAuth tokens, like anything else, have vulnerabilities large and small, but in removing crypto and signature functions the implementation increases its reliance on TLS for security. Fair enough for many apps, but there is no way to discern this from the documentation, SDK, and APIs. The OAuth 2.0 protocol by itself, without TLS, is not good enough.

"The sign above the players' entrance to the field at Notre Dame reads 'Play Like a Champion Today.' I sometimes joke that the sign at Nebraska reads 'Remember Your Helmet.'  Charlie and I are 'Remember Your Helmet' kind of guys. We like to keep it simple."- Warren Buffett

OAuth 2.0 should be shipped with a 'Remember TLS' reminder stapled to each and every release. Otherwise, numerous threats are in play. OAuth 2.0 with TLS meets the Pretty Good Security bar for many apps; without TLS it's playing without a helmet.

Further, both the client- and server-side developers have some work to do to avoid shooting themselves in the foot with the protocol. For example, the client developer may not realize the sensitive nature of the token and how best to protect its storage. The server-side developer deals with a myriad of concerns - session management, linking the token to access control, replay, and others - that in most if not all cases mirror the issues in webapp security generally. Here we face two challenges, though: developers who are not trained up on security protocols, and so miss a lot of the subtleties and nuance in deploying them; and infosec blithely assuming a silver bullet - "this all-singing, all-dancing protocol solves my problem" - which is all too common. I am not saying Google is fomenting either of these, but I see both in the trenches every single day. I would prefer to see Google include a short and sweet Security Checklist to make sure people remember their helmets. They do not have to reinvent the whole Threat Model, but guidelines for safe use would get this a long way towards Walking in my view.

The worst security posture is not being insecure - all systems have vulnerabilities. The worst security posture is to assume you are secure when in fact you are not. Here the current implementation is lacking, and tailored guidance and/or checklists from the client- and server-side developers' perspective - what the protocol is doing and what it is not doing - would be very useful. I know this just shipped, but this gap should be closed soon. As a group, developers across the globe have had zero training in secure coding. When I go in to train a dev team on secure coding, even one with decades of programming experience, I am likely teaching them their first day of secure coding. You cannot expect them, even good developers, to know all the right things to do and to pick up on the subtleties at work in implementing security protocols. I am all for finding the balance between Pretty Good Security and Pretty Good Usability - that's a worthy goal - but the dots need to be connected. There's a world of difference between https://sites.google.com and http://myappisowned.com. Google's Android team should help close these gaps and clearly state what can and should be done to foster safe use of OAuth 2.0.

Implementing security protocols is a new proposition for most developers. They were never trained in it, but back in the day that never mattered much - the container or server did it for them, and the threat was not high. Neither of those is the case any more. This stuff matters. We could easily do a "how to break Android" class and get the security people all fired up to attend, but what would that really solve? We need to start building better stuff, and we need developers in the game to make progress. This is Why We Train. OAuth 2.0 and TLS can improve the security of most mobile apps; implemented wrong, they can also make it worse. There are design and implementation things to consider in moving from Crawling to Walking, but developers need to know what they are to make it happen - we tackle these on Day One of the Mobile AppSec Triathlon.
**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

How do you protect your users' sensitive data? -- iOS

What would you think of someone who spent [an enormous amount of money] and installed industrial/military grade locks throughout his house, and then put a spare key under the doormat in front of the house? And what then would you say about someone who spent [another enormous amount of money] and installed an alarm system, and then put the unlock code on a post-it note on the face of the alarm system control panel? I can imagine most people wouldn't think highly of such a foolish person. There may even be expletives involved...

So then, what would you think if I told you the file system and every file on an iOS solid state disk (NAND flash storage) is encrypted using hardware-based AES-256? Your first impression might be pretty favorable. However, the disk itself (and its HFS journal) is encrypted with one key (the EMF! key), and then the vast majority of files are encrypted with another key (the Dkey) -- and these two keys are stored on the NAND in plain sight in Block 1 (PLOG). D'oh!

No worries, you say -- your device is locked using a passcode. So there! Well, it turns out that an attacker with physical access to your device can put it into DFU mode (Device Firmware Upgrade -- although I admit I thought it meant something else when I first learned about it) and boot via USB cable using a RAMdisk, easily created on a Mac using Xcode. Once booted, the attacker can access and steal all those files that are encrypted using the Dkey.

To be fair, not every file on an iOS device is encrypted using the Dkey. Notable exceptions to this are the user's email, and any file created by an app where "Complete" file protection is used. The device's keychain database also isn't quite as easy to decrypt.
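For reference, opting a file into that "Complete" protection class is a one-line decision at write time. A minimal sketch, assuming the data and path variables already exist in your code:

    // Ask iOS to store this file under Complete protection: it's encrypted with a key
    // derived from the device UID *and* the user's passcode, and is only readable
    // while the device is unlocked.
    NSError *error = nil;
    BOOL ok = [sensitiveData writeToFile:path
                                 options:NSDataWritingFileProtectionComplete
                                   error:&error];

Of course, as the next paragraph explains, that protection is only as strong as the passcode it's derived from.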

Most of those exceptions are encrypted using a key that is derived from the device's Unique IDentifier (UID) and the user's passcode. On the vast majority of consumers' iOS devices, that passcode is a 4-digit PIN. Not to worry: the good folks at Sogeti have provided us with a set of tools that can, among other things, brute-force all 10,000 possible PINs and then decrypt most of the rest of the data.

Sounds pretty grim, doesn't it? For more reading on this, see Jonathan Zdziarski's excellent book "Hacking and Securing iOS Applications: Stealing Data, Hijacking Software, and How to Prevent It".

As a consumer, there are a few things you can do. As an enterprise, you can deploy a Mobile Device Management (MDM) solution and, among other things, enforce strong passcodes and such.

But, as a developer, you're not so lucky. You cannot assume your customers will be smart enough to use a strong passcode. No, developers must assume the lowest common denominator in order to protect their customers' data adequately. And remember, the OWASP Mobile Security Project ranks a lost or stolen device as the number one risk faced by mobile consumers.

That means that for information exceeding simple consumer-grade sensitive data, you must not rely on Apple's built-in file protections to protect your customers' data. There are a few alternatives, of course. We can use Apple's CommonCrypto library (CCCrypt) and drive all that crypto hardware ourselves -- and stay in control of the crypto keys ourselves. (See Apple's sample CryptoExercise code for an example of how to do this -- it's a bit dated, but you'll get the basics.) We can also use third party libraries like SQLcipher to create encrypted databases where, once again, we control the crypto keys ourselves.
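As a rough idea of what the CommonCrypto route looks like, here's a minimal AES-256 (CBC mode) encryption sketch using CCCrypt(). The helper function name is hypothetical, and key/IV management is deliberately left out -- which, as noted below, is the hard part:

    #import <CommonCrypto/CommonCryptor.h>

    // Encrypt plaintext with AES-256 in CBC mode using CommonCrypto's one-shot CCCrypt().
    // key must be kCCKeySizeAES256 (32) bytes; iv must be one AES block (16 bytes).
    NSData *encryptedData(NSData *plaintext, NSData *key, NSData *iv)
    {
        size_t bufferSize = [plaintext length] + kCCBlockSizeAES128;
        NSMutableData *ciphertext = [NSMutableData dataWithLength:bufferSize];
        size_t bytesEncrypted = 0;

        CCCryptorStatus status = CCCrypt(kCCEncrypt,
                                         kCCAlgorithmAES128,      // AES; the key size selects AES-256
                                         kCCOptionPKCS7Padding,
                                         [key bytes], kCCKeySizeAES256,
                                         [iv bytes],
                                         [plaintext bytes], [plaintext length],
                                         [ciphertext mutableBytes], bufferSize,
                                         &bytesEncrypted);
        if (status != kCCSuccess) {
            return nil;
        }
        [ciphertext setLength:bytesEncrypted];
        return ciphertext;
    }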

The common denominator in both of these approaches is control of the crypto keys. It's also the toughest (by far) problem to solve in using cryptography securely.

We'll be discussing these options, of course, at our upcoming Mobile App Sec Triathlon in San Jose, California, on 5-7 November. We hope you'll come join us, and let's discuss different approaches to tackling this enormously important and difficult problem.

Cheers,

Ken van Wyk


Wednesday, September 26, 2012

What's in Your Android Security Toolkit, Part 3

In the last two posts, we explored what goes into building an Android Security Toolkit: tools that developers can apply to minimize the number of vulnerabilities in their Android app and, because no app is perfect, to lessen the impact of those that remain.

So far we have focused on access control, which helps to establish the "rules of the game": authentication and authorization control who is allowed to use the app and what they are allowed to do. If you read the Android security documentation, access control concepts dominate, but this is only part of the security story. Access control enforces the rules for customers, employees, and users who are effectively trying to get work done; however, access control does little to mitigate threats from people deliberately trying to break the system.

It pays dividends to learn and apply access control services, because a vulnerability here will cascade across the system and be available to attackers as well, but it pays to go further than just access control in your mobile security design and development. I usually describe the situation this way: I would bet a lot of money that I can beat both Garry Kasparov and Michael Jordan in a game. The way I would do this, of course, is to play Kasparov at basketball and Jordan at chess.

This is what attackers do: they change the rules of the game, or change the game entirely. So while access control gives us the According to Hoyle security rules that the app would like to play under, the attacker makes no such assumption; the asserted rules are the beginning of the game, not the end.
All security is built on assumptions, and when those fail, so does the access control model. For example, as we discussed in the last post, Android access control policies are enforced in the kernel, so the assumption is that the kernel hasn't been directly or indirectly subverted.

So if an app cannot be secured by access control alone, what's an Android developer to do? The requirements for access control are fairly straightforward on first pass - who is allowed to use the app and what are they allowed to do? Sure, it gets more complex from there, but the start and even endgame are fairly clear.

What's the starting point (much less endgame) in defensive coding? Threat models like STRIDE make an excellent starting point for finding requirements: identify the key threats in the system and the countermeasures that can be used to deal with them. STRIDE recommends, and I concur, that data flow analysis is a practical way to begin modeling your application to discover where threats and vulnerabilities lie.

From there, refining the model with App attack surface - data, communications, and application methods, plus Mobile specific attack surface - GPS, NFC, SMS, MMS - adds more detail to both identify vulnerabilities and locate countermeasures.

The mindset of the defensive coder is fundamentally different from the access control mindset. The defensive coder assumes compromise attempts, and possibly successes, at each layer in the stack. This includes standard techniques such as input validation, output encoding, audit logging, integrity checking, and hardening Service interfaces -- applied to local data storage, query and update interfaces, and interaction with Intents and Broadcasts. Not just publishing these resources for use, but factoring in how they may be misused. How is the app resilient to attempts to crash it, an attacker impersonating a legitimate user, a malicious app with backdoors running on the device, or attempts to steal or update data?

The Threat Model cannot answer all these questions completely but it does lead the development effort in the right direction to finding ways to build margins of safety into the app.


**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Monday, September 24, 2012

APIs behaving badly -- iOS

Did you know there are several system-level information caches where sensitive data can hemorrhage from your iOS apps? That's right, even an otherwise well written app can leak user information out and into areas that attackers can get to if they can get their grubby mitts on the device. (But remember, OWASP's Mobile Security Project considers a lost/stolen device to present the highest risk to consumers -- and rightly so!)

Examples? Here are the biggies to look out for:

  • Screenshots. Every time you (or your app's users) press the home button while your app is running, the default behavior in iOS causes a screenshot to be taken and stored in plain view on the device.
  • Spell checker. In order for that nifty, sometimes annoying, and often funny spell checker to work, the system keeps a running cache of what you type, even if it happens to be fairly sensitive. (Some data fields, like passwords, are generally protected from this.)
  • Cut-n-paste. Anything you put in the cut-n-paste buffer is readily accessible from any app. Face it, the feature wouldn't be so useful if you couldn't move data around.
The bad news is that all of these things can result in leaked information. The good news is that they're under the app developer's control. We'll of course discuss solutions to these and other issues at our Mobile App Sec Triathlon, including a coding lab where you'll implement fixes to these problems.
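To give a flavor of what those fixes look like, here is a minimal sketch of the usual mitigations inside a single view controller. The class and outlet names (SensitiveViewController, pinField, balanceLabel) are hypothetical placeholders:

    #import <UIKit/UIKit.h>

    @interface SensitiveViewController : UIViewController
    @property (nonatomic, retain) IBOutlet UITextField *pinField;
    @property (nonatomic, retain) IBOutlet UILabel *balanceLabel;
    @end

    @implementation SensitiveViewController
    @synthesize pinField, balanceLabel;

    - (void)viewDidLoad
    {
        [super viewDidLoad];

        // Keystroke cache: keep sensitive fields out of the spell checker's cache.
        pinField.autocorrectionType = UITextAutocorrectionTypeNo;
        pinField.secureTextEntry = YES;   // secure fields aren't cached at all

        // Screenshots: blank sensitive views when the app is backgrounded, before
        // iOS snapshots the screen. (Remember to removeObserver: in dealloc.)
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(hideSensitiveUI)
                                                     name:UIApplicationDidEnterBackgroundNotification
                                                   object:nil];
    }

    - (void)hideSensitiveUI
    {
        balanceLabel.hidden = YES;

        // Cut-and-paste: clear anything this app placed on the general pasteboard.
        [UIPasteboard generalPasteboard].items = [NSArray array];
    }
    @end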

But also know that sometimes APIs don't behave entirely as we might expect. If you're a developer, no doubt you're duly shocked to hear this news, right?

Well, I encountered one such API inconsistency recently while working on the OWASP iGoat tool -- which we'll also use extensively at the #MobAppSecTri. Allow me to explain.

A few of us on the iGoat project have been working on a new exercise for iGoat. The exercise is supposed to illustrate the dangers of the keystroke cache used by the spell checker, as I explained above. Only, we've encountered some inconsistent behavior in how iOS treats this data.

In the new (not yet released) exercise, we're declaring a couple of text fields (UITextField) as follows:

     @property (nonatomic, retain) IBOutlet UITextField *subjectField;
     @property (nonatomic, retain) IBOutlet UITextField *messageField;

Next, in our implementation file, we're synthesizing those fields and setting them to not be cached (for spell checking) in our viewDidLoad method as follows:

    [subjectField setAutocorrectionType: UITextAutocorrectionTypeNo];
    [messageField setAutocorrectionType: UITextAutocorrectionTypeNo];

And, when we're finished with them, we're releasing both fields. All of this is as per Apple's API for UITextFields.

Now here's the strange part. When we run the exercise with both of these fields "protected", we find the first one (subjectField) is protected just fine, but the second one (messageField) shows up in the spell checker cache (located in ~/Library/Keyboard/dynamic-text.dat in the iPhone simulator).

Huh, that seemed odd. So, like any scientifically inclined geeks, we tried dozens of experiments to figure out why things were behaving this way. Eventually, we added a third field in exactly the same way as the first two. Sure enough, the first two fields are protected, but now the last (dummy) field goes into the cache.

Our next step, which we haven't yet done, is to test this on a hardware device, but my point here is pretty straightforward.

Sometimes APIs misbehave. And there's a security lesson to be drawn from this. If we'd done a code review of this app, we might well have concluded that all was fine (with regard to this issue). But that wouldn't have been enough. It's also vital to test these security assumptions during the testing phase.

This type of issue is ideally suited for dynamic validation testing. Take your security assumptions and dynamically observe and verify them in a test bed. Surely that would have shown (and in our case did show!) that there's still a problem.

Adding a third (dummy) field resolves only the symptoms of this problem, not the problem itself. The jury is still out on that one, but we won't rest until we've resolved it, one way or the other.

Cheers,

Ken

Friday, September 21, 2012

An annotated bibliography of MobAppSec -- iOS Edition

In the past few months, we've seen the publication of several highly useful texts on different topics related to mobile app security. We thought we'd start a small annotated bibliography here to point to the really useful stuff. It's not intended to be comprehensive, but these are documents that we've found to be exceptionally useful. If you've found some that are not on this list, please feel free to submit them to us; if we agree, we'll add them to the bibliography.

So, here's our list for iOS. We'll be building an Android version shortly, and quite likely a General MobAppSec version as well.

iOS

"iOS Security", May 2012, Apple, Inc. -- Say whatever you want about Apple's security practices. This guide provides a superb description of iOS's security architecture, from its boot process through all of the app-level protections provided by current iOS versions. This is a must read for anyone involved in iOS application development.

"Hacking and Security iOS Applications - Stealing Data, Hacking Software, and How to Prevent It", January 2012, Jonathan Zdziarski. -- Although it is largely focused on forensic analysis of iOS devices, this book is another absolute must read for iOS developers. In it, you'll learn how jailbreaking works, how to copy the contents of an iOS device's hard drive, how iOS encryption works in detail, among many other things. It includes several labs for the reader to work through, along with available source code for each.

"Security Configuration Recommendations for Apple iOS 5 Devices", March 2012, U.S. National Security Agency. -- Although more aimed at IT Security than MobAppSec audiences, this document provides some useful tips on how to configure iOS 5 devices and how to manage them in large enterprise environments.

"iOS Hardening Configuration Guide - For iPod Touch, iPad, and iPhone running iOS 5.1 or higher", March 2012, Australian Department of Defence. -- Conceptually similar to the NSA guide above (but written in Australian English :-), this useful document provides useful security configuration tips for iOS deployments. It also goes into good detail on how the platform's security features work, and is worthwhile reading for everyone involved in iOS application development.

"iOS Developer Cheat Sheet", July 2012, OWASP. -- This doc provides some quick pointers on how to avoid many of the major risks associated with mobile computing. The doc follows the (draft) OWASP Top Ten Mobile Risks, and points to possible solutions to consider for each. It is an open source document from OWASP, and others are encouraged to contribute and participate in expanding and improving it over time. (Full disclosure: I (@KRvW) was the principal author of the first version of this doc, so I'm somewhat biased...)


Mobile App Sec is being left behind

When it comes to application security, mobile app sec ("MobAppSec" as we like to call it) seems to be getting some pretty abysmal scores. What makes this especially risky business is that more and more we're fielding real apps in which real money (or other valuable information) is put in harm's way.

Two studies were released this week which, taken together, are useful for understanding the bigger picture when it comes to MobAppSec. The first is the fourth release of the venerable Building Security In Maturity Model (BSIMM) by Gary McGraw, Brian Chess, and Sammy Migues. Next, there's the fourth annual World Quality Report from consulting firm Capgemini.

The BSIMM study collects and analyzes observations from some 51 software development organizations across 12 industry verticals. In all, some 111 security activities are observed. It paints a rather thorough picture of what software developers around the world are doing with regard to software security. Although it's missing efficacy measurements -- to be fair, it doesn't set out to measure the efficacy of the activities observed -- it is easy to draw the conclusion that software development has come a long way in the last few years, at least in terms of security practices.

Since the launch of the BSIMM in 2008, for example, the software security groups (SSGs) in major software development organizations have flourished, rising from 1 SSG employee per 100 developers to 2 SSG employees per 100 developers. And it appears the limiting factor in staffing SSG organizations is finding qualified employees. This speaks well for the future of software security in large enterprises, to be sure.

In stark contrast to the BSIMM, however, Capgemini's World Quality Report (WQR) would indicate that MobAppSec isn't getting anywhere near the same level of security attention that other software projects get (per the BSIMM). (I should note that the BSIMM doesn't exclude mobile efforts, per se, but it doesn't directly address them either. Further, there is a note of a possible BSIMM Mobile Working Group, so perhaps we'll see some mobile-specific data in the future.) 

The WQR concludes that firms are failing at mobile application security. The MobApp communities seem to be driven by more of a gold rush mentality, focusing on functionality and time-to-market.

While focusing first and foremost on functionality is completely appropriate for a business, doing that at the expense of security can result in unforeseen security consequences. For example, while iOS 6 is brand new in the hands of consumers, there are already reports of things like Siri allowing an attacker to send Facebook postings and tweets, even on a locked device. No doubt the security research community will be taking a far deeper dive into finding all the abuse cases that can be found in the new iOS 6 user interfaces, among other things.

The majority of BSIMM participants know that developing secure software requires attention to details throughout the development process, from inception through production and maintenance. MobApp developers would be well advised to learn from these things sooner than later. There's an old adage that a smart person learns from his mistakes, but a wise person learns from others' mistakes.

We'll help bring these things together at our upcoming Mobile App Sec Triathlon, of course. We'll talk about many of the things observed in the BSIMM study, and we'll help put those concepts into actionable steps that developers can immediately put into practice. We hope to see you there.

Cheers,

Ken van Wyk

Tuesday, September 18, 2012

Building an Android Security Toolkit Part 2


In the last post, we started building out an Android Security Toolkit: the things every Android developer should know about security. Access control is fundamental to application security. In my perfect world, when developers learn a new language they first learn Hello World; the next thing they learn should be how to implement "who are you" and "what can you do" in that language - authentication and authorization. The AndroidManifest.xml file describes the access control policy that forms the application boundary, but where is this boundary enforced, and what services does it provide?

The access control chain consists of:

1. Defining access control policy

2. Enforcing access control policy

3. Managing access control policy

The AndroidManifest.xml defines the permissions that the application requires, such as:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The user is able to confirm or deny installation (but not change permissions) based on the AndroidManifest.xml file; this covers step 1 above. The policy is distributed with the application, so policy management is under the control of the distribution point, such as AppMarket. This leaves step 2, enforcing access control policy.

Android apps run in the Dalvik VM; however, IPC is not managed in the VM. Instead it's managed further down the stack, in the Binder IPC driver, which resides in the Linux kernel. I'm not sure, but I suspect the reason is that there are a number of permissions that require lower-level access.

The Binder maps the permission to either the caller's identity or a Binder reference in order to verify access privileges. From a design standpoint, permission boundaries can be defined and enforced at different layers in the app, including Content Providers, Services, Activities, and Broadcast Receivers.

Access control is the beginning of thinking about security, but it's not the endgame. The next step in building an Android security toolkit is defensive coding: how to deal with cases like code injection that are designed to subvert the access control scheme.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

ANNOUNCING: MobAppSecTri Scholarship Program

For our upcoming three day Mobile App Sec Triathlon in San Jose, California on November 5-7, we are today announcing a student / intern scholarship program.

We will be giving away a small number of student / intern tickets to the event, absolutely free, to deserving students / interns.

Course details can be found here.

Requirements

To be considered for a student / intern free registration, you will need to submit to us by 8 October 2012 a short statement of: A) Your qualifications and experience in mobile app development and/or information security, and B) Why you deserve to be selected. Candidate submissions will be evaluated by the course instructors, Gunnar Peterson (@OneRaindrop) and me (@KRvW). Decisions will be based solely on the quality of the submissions, and all decisions will be final.

Details

All scholarship submissions are due no later than midnight Eastern Daylight Time (UTC -0400) on 8 October 2012. Submissions should be sent via email to us. Winning entrants will be notified no later than 11 October.

Student / intern ticket includes entrance to all three days of the event, along with all course refreshments and catering. Tickets do not include travel or lodging expenses.


Friday, September 14, 2012

PCI has gone mobile -- is your app ready?

The folks over at the Payment Card Industry (PCI) security standards council have just published their "PCI Mobile Payment Acceptance Security Guidelines for Developers" document. If you're doing anything in the mobile payment space, this document is a must read, of course. Even if you're not doing mobile payments, though, it's still a pretty worthwhile read overall. But be prepared: some of their security goals are quite high indeed.

For starters, they lay down three security objectives (or requirements, if you will) as follows:

  1. "Prevent account data from being intercepted when entered into a mobile device."
  2. "Prevent account data from compromise while processed or stored within the mobile device."
  3. "Prevent account data from interception upon transmission out of the mobile device." 
These seem pretty reasonable starting points. They're all motherhood and apple pie sorts of requirements that we shouldn't find too many disagreements with.

Next, they set out a series of guidelines that are "essential to the integrity of the mobile platform and associated application environment." Here's where things start to get pretty tough for a mobile app developer to achieve. For example, "Prevent unauthorized logical device access." Now, there's nothing wrong with wanting to prevent logical device access, but app developers don't have much input on, for example, the use of strong passcodes on iOS devices.

But it's likely the case that the PCI council has taken a broader view here than simply the app itself. That's evident in the very next guideline, which speaks to server side controls.

The rest of the guidelines, too, are worth reading. Some are high targets, like protecting the device from malware. And, to be fair, this isn't a standards document per se -- like, say, the PCI Data Security Standard (PCI-DSS) itself is. This document lays out guidelines, after all.

To be sure, though, if you're writing apps that involve mobile payment systems, you'd better be diving into this document and taking it seriously. We'll be delving into this document and its ramifications for mobile developers at our Mobile App Sec Triathlon in San Jose this November 5-7, so bring your questions with you and let's discuss what mobile developers need to know and do.

Cheers,

Ken van Wyk

Thursday, September 13, 2012

iOS 6 and UDID deprecation

This is somewhat of a follow-up to my posting yesterday re what iOS devs should know about security-relevant changes to iOS 6.

We've all known for some time that Apple would be deprecating the use of Unique Device IDentifiers (UDIDs) in apps. We've also known more recently that attackers have been targeting those UDIDs.

And now, we need to prep our apps because, as of iOS 6, UDIDs are no longer available to apps. (Actually, reports indicate that Apple has been rejecting UDID-using apps for at least a couple of months already.) But in iOS 6, Apple gives app developers an alternative in the form of a so-called "Advertising Identifier".

So, the question you might be asking yourself is this: Since this issue relates mostly to advertising, why do we care from a security perspective, and what's the big deal with UDIDs anyway? Glad you asked.

For starters, UDIDs are persistent identifiers. Many app developers have used UDIDs to identify sessions between mobile apps and servers. After all, they're unique identifiers, right? There are a couple of problems with that approach. First of all, if a consumer sells his iPhone, the UDID remains with the device, even if the iPhone gets wiped with a factory reset. Secondly, there are privacy concerns over associating users and persistent hardware identifiers.

So, in our apps, we really should avoid using persistent hardware identifiers to associate with users, sessions, etc. (Advertisers have also used these identifiers, but that's outside the scope of what I'm discussing here today.)

And besides, even if we mistakenly thought using UDIDs was a good thing, Apple has taken that option off the table.

That leaves us, at the very least, with the new advertising identifier. It isn't associated with the hardware, and it can be cleared with a factory reset, so many of the privacy concerns are reduced.

But let's step back a bit and consider this from a security perspective. If we're looking for a session tracking token, why wouldn't we generate a new one with every session, similar to how JSESSIONID works on Java web apps? If we're identifying a user, why not use a username and/or user number of some sort? Isn't then the advertising identifier simply an issue for the advertisers to deal with (as the name would imply)? I believe so.
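If what you actually need is a session token, a few lines of Objective-C will generate a fresh, random one. This is a minimal sketch; the helper name is hypothetical, and in many designs the server should mint the token instead:

    #import <Security/Security.h>

    // Generate a 256-bit random token and hex-encode it -- no hardware identifier needed.
    NSString *newSessionToken(void)
    {
        uint8_t bytes[32];
        if (SecRandomCopyBytes(kSecRandomDefault, sizeof(bytes), bytes) != 0) {
            return nil;   // CSPRNG failure; handle appropriately
        }
        NSMutableString *token = [NSMutableString stringWithCapacity:sizeof(bytes) * 2];
        for (size_t i = 0; i < sizeof(bytes); i++) {
            [token appendFormat:@"%02x", bytes[i]];
        }
        return token;
    }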

But the fact remains that many apps have used UDIDs for session tokens, user identifiers, etc., for some time. Those apps will need to be re-tooled, if they haven't already been. I consider the use of something like a UDID to simply be sloppy coding, and we need to do better than that.

We'll discuss using the advertising identifier and other approaches at our Mobile App Security Triathlon in San Jose, on November 5-7.

Cheers,

Ken van Wyk


Wednesday, September 12, 2012

iPhone 5 and what every (secure) developer should know

Well, the Apple iPhone 5 big event has come and gone, and what new stuff do we need to know from a security standpoint?

For starters, the new iOS 6 Gold Master, Xcode 4.5 Gold Master, and iTunes 10.7 are available for download, as of this writing. (Mine are downloading as I type.)

While there was a lot of buzz about the "i5" getting Near Field Communications (NFC) capability, for payment systems and other short range RF comms, that didn't pan out. From what we can gather at this point, it appears the new Passbook system in iOS 6 is going to be based on barcode scanning, much like the existing Starbucks app has been doing for well over a year.

But then there is iOS 6 itself, and while the jury is still out on its under the hood security enhancements -- which are inevitable with each new major iOS release -- there aren't a lot of security changes on the surface.

Certainly, to support the bigger i5 screen, app devs are going to have to tweak their UIs, but that's all functional stuff and will no doubt happen in due time.

So, from an app security standpoint, the best thing we can be doing right now is to ensure our apps build properly in Xcode 4.5 and to start diving into what Passbook has to offer us (if you're doing anything like payments, coupons, boarding passes, etc.). And, since we're now forced to support two different screen geometries, this might not be a bad time to build UI XIBs for all of them (including iPad) and build our apps as Universals. While those are compiling, we'll be diving into the iOS 6 docs looking for any minor or major security UI enhancements.

Either way, we'll plan on an iOS 6 changes sidebar at our upcoming Mobile App Sec Triathlon in San Jose on November 5-7. Hope to see you there. Bring your iOS 6 questions with you!

Cheers,

Ken

Tuesday, September 11, 2012

What's in your Android Security Toolkit?

Ken van Wyk asks mobile developers - what's in your bag of tricks? From a security perspective, Ken lists a number of critical things developers need to protect their app, their data, and their users; these include protecting secrets in transit and at rest, server connections, authentication, authorization, input validation, and output encoding.

These are all fundamental to building a secure mobile app. Over the next few posts, I will address the core security issues from an Android standpoint and what security tools should be in every Android developer's toolkit.

First, with regard to security for Android I think there are three key areas:
  • Identity and Access Control - provisioning and policy for how the system is supposed to work for authorized users
  • Defensive Coding - techniques for dealing with malicious users
  • Enablement - getting the app wired up to work in a real world deployment
So, onwards to policy for Identity and Access Control; a good place to start is AndroidManifest.xml.

There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton

AndroidManifest.xml provides the authoritative source for the package name and unique identifier for the application; this effectively bootstraps the app's activities, intents, intent filters, services, broadcast receivers, and content providers. These show the external interfaces available for the application.

The next step is assigning permissions. Android takes a bold stance by publishing the permissions that the app requests before it's installed. This has the positive effect of letting users know what they are permitting, but at the same time the user cannot change or limit the app. If they want to play Angry Birds (and who doesn't?) they choose to install Angry Birds with the permissions set by the developer, or they choose to live an Angry Birds-free existence. So the overall effect is to inform the user but not let the user choose granular permissions (this last has the positive effect of not turning the average user into a system administrator for a tiny Linux box).

The AndroidManifest.xml contains the requests for access to system resources such as Internet, Wi-Fi, SMS, phone, storage, and others:

<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_WIFI_STATE" />
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

The first step for app developers here is to request only the least set of privileges necessary for your app to get the job done. Saltzer and Schroeder first defined the principle of Least Privilege:

Every program and every user of the system should operate using the least set of privileges necessary to complete the job. Primarily, this principle limits the damage that can result from an accident or error. It also reduces the number of potential interactions among privileged programs to the minimum for correct operation, so that unintentional, unwanted, or improper uses of privilege are less likely to occur. Thus, if a question arises related to misuse of a privilege, the number of programs that must be audited is minimized. Put another way, if a mechanism can provide "firewalls," the principle of least privilege provides a rationale for where to install the firewalls. The military security rule of "need-to-know" is an example of this principle.

Notice the two facets of this principle. The first is the conservative assumption that limits the damage of accident and error. This margin-of-safety approach should be near and dear to every engineer's heart. The second part of the principle is simplicity - if it's not needed, turn it off, or in this case do not publish or request access to it.

From a security point of view, the AndroidManifest file helps to reduce your application's attack surface. If you don't need SMS or Internet or Wi-Fi, don't ask for it.

Android has a pretty interesting approach to access control - from user involvement to declarative permissions to capabilities - and we will dig deeper into this in the next post.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Monday, September 10, 2012

Is your mobile app ready for legalized Wi-Fi sniffing?

Sure, we've all known about network sniffing for many years, right? We've also known that sniffing a network we don't own is illegal--or was illegal anyway. But now that a US Federal judge has ruled sniffing an open, public wireless network to be legal, it's a different game.

Let's put this into context a bit first. The Mobile Security Project over at OWASP started working a while back on a Top Ten Mobile Risks effort. They reckon the third biggest risk to mobile users is "insecure transport layer protection". (Number one on the list was insecure local storage, such as in the case of a lost or stolen device, and number two on the list was weak server side controls.)

Insecure transport layer protection is a kind way of saying that mobile developers oftentimes don't adequately protect their apps' secrets while in transit. When we fail to encrypt things that matter -- e.g., authentication credentials, session tokens, device identifiers, user data, geolocation data -- we expose our users to what I like to refer to as a "coffee shop attack". Prior to that court ruling, the coffee shop attack was illegal. Of course, criminals weren't much deterred by that, but at least the victim might have some legal recourse if an attacker's action was itself illegal. No more. The gloves, as they say in ice hockey, are off.

Of course, those of us who understand the technologies involved wouldn't dream of using an open Wi-Fi without first encapsulating all of our network traffic inside a strong VPN tunnel (if at all).

Well, the average consumer can't even spell VPN, folks. Assuming our users will use a VPN is simply not adequate. So what does that mean for mobile app developers? How do we protect our consumers from (now legal) network sniffing?

For starters, we have to design and implement our apps under the assumption that our users will be using the apps in a hostile network environment, like an open Wi-Fi in a coffee shop. If your app can't withstand the scrutiny of running securely on an open Wi-Fi, you have no business using the word "secure" to describe it in any way.

That's all easy to say, of course, but how does it translate into Gunnar's "what do I do?" sort of action?

Here are some things to consider:

  • Make an inventory of all the sensitive data in your app, from the low level (e.g., user authentication) stuff through the high level (e.g., user data).
  • Make it a security requirement that all such sensitive data will be protected while at rest as well as while in transit, whenever possible.
  • Ensure that all network connections over which sensitive data is to pass are strongly encrypted (e.g., SSL, perhaps with certificate pinning or other strong certificate verification -- see the sketch after this list).
  • Verify through code reviews that all sensitive data is encrypted in transit.
  • Validate through dynamic validation testing that all sensitive data is in fact (not just in theory) encrypted in transit.
I know the above list is an oversimplification in many ways, but our consumers are not likely to easily forgive us for an "oops" when it comes to exposing their sensitive data to a coffee shop attacker.
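To make the certificate pinning bullet a little more concrete, here is a rough sketch of an NSURLConnection delegate callback that accepts only a server whose leaf certificate matches one bundled with the app. The bundled file name ("pinned-cert.der") is a hypothetical placeholder, and a production implementation needs more care (error handling, pin rotation, and so on):

    #import <Security/Security.h>

    - (void)connection:(NSURLConnection *)connection
            willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
    {
        if (![challenge.protectionSpace.authenticationMethod
                isEqualToString:NSURLAuthenticationMethodServerTrust]) {
            [challenge.sender performDefaultHandlingForAuthenticationChallenge:challenge];
            return;
        }

        // First, let the system evaluate the server's certificate chain as usual.
        SecTrustRef serverTrust = challenge.protectionSpace.serverTrust;
        SecTrustResultType trustResult = kSecTrustResultInvalid;
        BOOL trusted = (SecTrustEvaluate(serverTrust, &trustResult) == errSecSuccess) &&
                       (trustResult == kSecTrustResultUnspecified ||
                        trustResult == kSecTrustResultProceed);

        // Then pin: the leaf certificate must match the copy shipped in the app bundle.
        NSData *serverCert = [(NSData *)SecCertificateCopyData(
                                  SecTrustGetCertificateAtIndex(serverTrust, 0)) autorelease];
        NSString *pinnedPath = [[NSBundle mainBundle] pathForResource:@"pinned-cert" ofType:@"der"];
        NSData *pinnedCert = [NSData dataWithContentsOfFile:pinnedPath];

        if (trusted && pinnedCert && [serverCert isEqualToData:pinnedCert]) {
            [challenge.sender useCredential:[NSURLCredential credentialForTrust:serverTrust]
                 forAuthenticationChallenge:challenge];
        } else {
            [challenge.sender cancelAuthenticationChallenge:challenge];
        }
    }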

Securing network data is just one of many things we need to do, of course. But it's a biggie. Building security into our apps is a lot like physical fitness in that way. We don't just go for a jog the day after New Year's Eve because we feel guilty about how much we consumed over the holidays. It's a lifestyle change. It's a discipline. We need to think about it all the time and live it.

In our upcoming Mobile App Sec Triathlon, Gunnar and I will cover these topics, of course -- right down to code examples of how to implement the above list. We hope to see plenty of mobile app devs there, and to engage in meaningful dialog about different ways of approaching this and many other issues regarding secure mobile apps.

Cheers,

Ken van Wyk

Friday, September 7, 2012

Why We Train

Ken van Wyk asks what is in your Mobile App Security toolkit? I had planned to write a post responding to that, but saw the tweet below from two of my favorite people in the industry and thought I would expand on this:
[Embedded tweet]
The first part mostly makes sense. Training developers is not an instantaneous fix, to be sure. In my training for developers, we look at concrete ways for developers and security people to improve the overall security of their apps. The ways to do this vary: some are short-term design/dev fixes (improving input validation, for example) and some are longer term (swapping out access control schemes). There is some latency from the time you train developers until the time you realize all the benefits in your production builds. However, unless you roll code at a glacial pace, I do not believe it takes 18 months for training to pay off. It should happen way faster.

The second part of the tweet boils down to the old adage, "What if you train them and they leave?" The counter-argument to this is simple and serious: "What if you don't train them and they stay?" Believe me, I have seen plenty of the latter, and lack of clue does not age well.

So while I agree with the spirit (but not timetable) of the first part of the tweet, I definitely disagree with the second part of the tweet. We need more training, better educated developers and security people, not less.

Specifically, we need hands-on security engineering skills. The basic principles of security are not rocket science; the challenge is all in how you apply them in the real world.

Despite increasing budgets, the security industry has not solved many problems in the last decade, but one thing the industry absolutely excels at is - conferences!
900 - NINE HUNDRED - infosec conferences! This is not a record to be proud of. Granted, there are a handful of very good conferences, but the security industry's conference problem is that the industry as a whole is geared toward talking, not doing. We've all seen the conference hamster wheel: oh, big problems; oh, solutions that seem hard; when is beer? You get on the plane home with the same problems (or more) than you left with. Repeat.

Many years ago, I was working on a project at a large company with thousands of developers, and they wanted to tackle software security. The company put its top architect on the project - a software guy, not a security guy. We met early on in the project. He was very talented, one of the better architects I have worked with, and, as is the case with all such people, he was very curious; he really wanted to learn. He asked me: how do I get up to speed on security matters? I told him to read Michael Howard's books, Gary McGraw's books, and Ross Anderson's books. I came back a month or two later and, to his credit, he had plowed through them; they were piled up behind him. He looked at me seriously and asked: "I see where the problems are, but what do I do about them?"

The "what do I do" question has haunted me ever since. We got down to it and worked on a plan for that company, but the industry as a whole glamorizes the oh-so-awful security problems at conferences and leaps over the "what do I do" part.

This is where training comes in. I am not naive enough to believe training is all we need to do, but I definitely believe that education for security people, architects, and developers has a major role to play in improving our collective situation. We need better tools and technologies; advances in vulnerability assessment tools and identity and access management have all helped a lot over the decade. We need better processes for how to apply them in real-world systems; your SDL matters. But so do your people! Without basic training you won't know what tools to use and where, how to apply them, and what traps to avoid. This is why we train.

Ken and I will be in San Jose, Nov 5-7 doing three days of training on Mobile AppSec. If you or your dev teams are doing work on iOS, Android, or Mobile, there is a lot to talk about. The focus is hands on, what problems are out there in mobile today and what to do about them.

The first time I went to Black Hat, I was intrigued and impressed by the depth of FX's and other presentations, but I was also horrified. There was simply no one in the software world (at that time) talking about this stuff, and it was clear the problems would just keep getting worse - and they did. But enumerating problems a decade-plus later is not good enough; we need time, materials, resources, and people working on what to do about them - how to fix. Out of 900 conferences, there is no equivalent "how to fix" conference that is akin to Black Hat. If you plant ice, you're gonna harvest wind.

By the way, waiting to deal with problems is a proven way to fail, and there is nothing more permanent than a temporary solution. Ken and I started on Mobile because now is the chance - the initial mobile deployments for many enterprises - to get it right, with some forethought on security.


The last thing we need is more hand waving, blah blah, and PowerPoint at a conference on "the problem." We need to get busy engineering better stuff, and that is where training comes in. As the USMC says, the more you sweat in training, the less you bleed in battle. You might ask: with so many problems, can we really engineer our way out? Let me ask then: if we had 900 cons a year on how to build better stuff, would we be better off or worse?


Security always lags technology. In the early days of the Web, security was egregiously bad. But that did not matter so much, because the early websites were brochureware. The security industry had time to catch up (though it is still behind) and learned over time how to deal with SQL Injection et al.

In Mobile it's much worse. The security industry is behind the technology rate of change, as always, and the developers are untrained, but the initial use cases for Mobile are not low-risk brochureware; they are high-risk mobile transactions, banking, and customer-facing functionality. Security's window to act on building better Mobile App Sec for high-risk use cases is not 3 years away; it's now.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.

Creepy featurism

In yesterday's launch of the new Kindle, Amazon CEO Jeff Bezos said some interesting things about today's smart phones and tablets. In particular, his point about customers not wanting "gadgets" but wanting services that improve over time really hit home for me.

I've been an "early adopter" for many years. I had an Apple Newton (MP-120, MP-130, and MP-2000) and loved them. More recently, I've been searching for 10+ years for the right smart phone for my needs. I had an old Linux-based Motorola A-780 and really wanted to believe that someone had finally built the right device for me. Its list of features was right on target (for 2003 or so). But it failed me miserably. I also tried a Blackberry 8800, but it too was just a box of silicon features with crappy software, IMHO. Total #FAIL.

Finally, I felt I found what I was looking for when I got my first iPhone. And, by and large, I did. I'm now a few iPhones down that path (on a 4S now, but that'll change in a week or so), and I'm a pretty happy customer.

Of course, there are many lessons to be learned in all of this. How about security, and how does this all relate to mobile app developers? Excellent question.

It's 2012, and few people would disagree that smart phones have become hugely important to a vast number of consumers. We're doing things on our devices today that we would have laughed at the day before the iPhone (or Android!) was released. The mobile phone world has been flipped onto its head, thanks to these pioneers.

But it's not about a competition of feature lists. To succeed in today's market, the device has to just work, and has to just work for non tech-savvy consumers. It has to pass the Uncle Bill and Aunt Betty test.

Apple long ago learned to de-emphasize the technical specifications race, and focus on the "user experience". When they release a new product, the focus of their announcements is showing us how things work, not the CPU speed of the new multi-core processor. Although those things are important, they're not what matters to our consumers.

Because, guess what -- today's consumers don't understand the technology (by and large), and they surely don't understand security. Security, like the functionality in our devices, has to just work. And those two words, "just work", have to be something that we all live and breathe.

Force a user to install a root CA certificate into the /var/blah/blah/blah folder and you've already lost. But make it "just work" and do it securely, and you've won.

Security, too, cannot be an afterthought. We have to consider security at every possible stage of our work. It has to simply be a quality of our efforts.

Mr. Bezos is right in that regard. It can't be about building a product with all the latest buzzwords included in the ingredient list. It has to be about making our users happy. One of the things that will keep our users happy is to enable them to securely do all the cool things that today's (and next week's) devices can do. Security must simply be an intrinsic quality of our software.

Are you prepared? In our Mobile App Sec Triathlon, Gunnar (@OneRaindrop) and I (@KRvW) will give you plenty of food for thought, and discussion. Come join us in San Jose this 5-7 November and let's talk about what needs to be done.

Cheers,

Ken van Wyk




Wednesday, September 5, 2012

Mobile devs -- what tools are in your bag of tricks?

Software developers are great at keeping their own set of tools around -- that class library, for example, that does the heavy lifting for various functions we repeatedly do. These are usually simple functional utilities that we collect over the years to save us time when we (inevitably) need to do [that same thing again].

Well, security should be no different. To build secure mobile apps, there are certain things that we're just going to have to do, and darn near every single time we write code.

So, what security goodies are in your bag of tricks? Here's some food for thought on some things you might find useful, in no particular order:


  • Protecting secrets at rest --
    Inevitably, we need to protect some data locally on the mobile device. Of course, the principles of sound design should guide us to minimize the data we store locally on the device. Some argue that we shouldn't really store anything of value locally, but our users don't always share that view. So we need to protect data locally: usernames (à la the "remember me" button), passwords (best avoided, but acceptable for some consumer-grade apps), session tokens, customer names, and on and on.

    We need a reliable set of tools that help us protect things locally. Both iOS and Android give us some ability to do that, but for times when we cannot rely on the OS, we need more. SQLCipher is one such example: it's an open source extension of SQLite that does AES-256 encryption using the venerable OpenSSL library, and it works on Android and iOS (see the SQLCipher sketch after this list).
  • Protecting secrets in transit --
    Of course, any modern OS and app can do SSL encryption, but things aren't always that simple. Sometimes we want to more strongly verify the SSL certificates on both ends of the connection -- for example, by pinning the server's certificate or public key (see the pinning sketch after this list). Sometimes we want to encrypt data that doesn't play nicely with TCP connections, like Voice over IP data that is best suited for UDP.
  • Server connections --
    We often need to connect to different types of back-end services, and of course those connections need to be established securely. At a network layer, we can use SSL, as I've described above, but at a data layer, we also need to ensure our connection is strong. For example, if we're connecting to a SQL database of some sort, we need to ensure our SQL statements are immutable -- parameterized, so they're not subject to SQL injection and such (see the parameterized query sketch after this list).
  • Authentication --
    We need strong mutual authentication among all of our application components, of course, but we also need to authenticate users and any other entities our app interacts with. We can use X.509 certificates in some cases. Other times we need to use simple username/password combinations. Either way, though, the authentication needs to be mutual and worthy of trust. We have to avoid mistakes like hard-coding credentials into our code.
  • Authorization --
    Once a user or entity is identified and authenticated, we then need to ensure that it is able to get to all the data and resources it needs, of course. But we also need to ensure that it is not able to get to data and resources that it doesn't need -- that it's not authorized to access. That means we need to weave access control throughout our system, and it needs to be consistently applied across our architecture.
  • Input validation --
    All data entering our application, through whatever input source possible, needs to be validated. For example, if we're expecting a credit card number, then our code should validate that a credit card number has indeed been input, and nothing other than a credit card number (see the card number sketch after this list). That's called input validation, and it's vital that we get it right. Input validation problems lead to cross-site scripting and a myriad of other security problems, after all. But there are many decisions to make, like where do we perform input validation? Our design-time choices can have vast impacts on our application's ability to perform its given tasks securely.
  • Output escaping --
    Any and every time we need to output untrusted data, we need to "output escape" it in a contextually appropriate way. For example, if we're outputting data into an HTML context, we need to ensure that the data itself doesn't contain any live HTML instructions (e.g., JavaScript). When the data does contain HTML instructions, we need to use an encoding scheme that is relevant for that data context -- in this case, HTML encoding (see the escaping sketch after this list).
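
To make the secrets-at-rest item concrete, here is a minimal sketch (not a drop-in implementation) of opening an encrypted store with SQLCipher for Android. The database file name and the way the passphrase arrives are assumptions for illustration; in a real app the passphrase should come from the user or a key-management scheme, never from a hard-coded string.

    import java.io.File;

    import android.content.Context;

    import net.sqlcipher.database.SQLiteDatabase;

    public class SecretStore {

        // Opens (or creates) an AES-256 encrypted SQLite database via SQLCipher.
        // The name "secrets.db" and the passphrase parameter are illustrative only.
        public static SQLiteDatabase open(Context context, String passphrase) {
            SQLiteDatabase.loadLibs(context);  // load SQLCipher's native libraries
            File dbFile = context.getDatabasePath("secrets.db");
            dbFile.getParentFile().mkdirs();
            return SQLiteDatabase.openOrCreateDatabase(dbFile, passphrase, null);
        }
    }

From that point on, the returned handle is used like an ordinary SQLite database, with the encryption handled transparently underneath.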
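For the secrets-in-transit item, one way to more strongly verify the server is to pin its public key. The sketch below layers a pin check on top of the platform's normal certificate chain validation; the pinned hash is a placeholder you would replace with the hash of your own server's key, and production code would want tighter error handling.

    import java.io.InputStream;
    import java.net.URL;
    import java.security.MessageDigest;
    import java.security.cert.Certificate;

    import javax.net.ssl.HttpsURLConnection;

    public class PinnedConnection {

        // SHA-256 of the server's public key, captured out-of-band ahead of time.
        // This value is a placeholder, not a real pin.
        private static final String PINNED_PUBKEY_SHA256 =
                "replace-with-your-servers-public-key-hash";

        public static InputStream openPinned(String urlString) throws Exception {
            HttpsURLConnection conn =
                    (HttpsURLConnection) new URL(urlString).openConnection();
            conn.connect();  // normal chain validation still happens here

            // The leaf certificate is the first entry in the server's chain.
            Certificate leaf = conn.getServerCertificates()[0];
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(leaf.getPublicKey().getEncoded());

            if (!toHex(digest).equalsIgnoreCase(PINNED_PUBKEY_SHA256)) {
                conn.disconnect();
                throw new SecurityException("Server key does not match pinned key");
            }
            return conn.getInputStream();
        }

        private static String toHex(byte[] bytes) {
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }
    }

Pinning the public key rather than the whole certificate is one common design choice, since it survives routine certificate renewals as long as the key pair is reused.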
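For the server-connection item, the classic way to keep SQL statements immutable is to parameterize them. This sketch uses plain JDBC; the table and column names are made up for illustration, and the same idea applies to Android's SQLiteDatabase query methods with selection arguments.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class AccountLookup {

        // Untrusted input is bound as a parameter, never concatenated into the
        // SQL text, so the statement's structure can't be rewritten by an attacker.
        public static ResultSet findByEmail(Connection conn, String email)
                throws SQLException {
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT id, display_name FROM accounts WHERE email = ?");
            stmt.setString(1, email);  // attacker-supplied value stays data
            return stmt.executeQuery();
        }
    }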
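For the input-validation item, here is one possible shape of a credit card number check: a strict format test followed by the Luhn checksum. It is a sketch of the "accept only what you expect" approach, not a complete payments-grade validator (it doesn't check card-brand prefixes, for instance).

    public class CardNumberValidator {

        // Accepts only 13-19 digit strings (spaces/dashes stripped) that pass
        // the Luhn checksum; everything else is rejected before it goes further.
        public static boolean isValidCardNumber(String input) {
            if (input == null) {
                return false;
            }
            String digits = input.replaceAll("[ -]", "");
            if (!digits.matches("\\d{13,19}")) {
                return false;
            }
            int sum = 0;
            boolean doubleIt = false;
            for (int i = digits.length() - 1; i >= 0; i--) {
                int d = digits.charAt(i) - '0';
                if (doubleIt) {
                    d *= 2;
                    if (d > 9) {
                        d -= 9;
                    }
                }
                sum += d;
                doubleIt = !doubleIt;
            }
            return sum % 10 == 0;
        }
    }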
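And for the output-escaping item, a tiny HTML-encoding helper along these lines keeps untrusted data from being interpreted as markup when it lands in an HTML context. Real apps may well prefer a vetted encoding library, but the idea is the same.

    public class HtmlEscaper {

        // Replaces the characters that can change meaning in an HTML context
        // with their character-entity equivalents.
        public static String escape(String untrusted) {
            if (untrusted == null) {
                return "";
            }
            StringBuilder out = new StringBuilder(untrusted.length());
            for (int i = 0; i < untrusted.length(); i++) {
                char c = untrusted.charAt(i);
                switch (c) {
                    case '&':  out.append("&amp;");  break;
                    case '<':  out.append("&lt;");   break;
                    case '>':  out.append("&gt;");   break;
                    case '"':  out.append("&quot;"); break;
                    case '\'': out.append("&#x27;"); break;
                    default:   out.append(c);
                }
            }
            return out.toString();
        }
    }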
These are all examples of security functions that pretty much every modern piece of software, including mobile apps, needs to do over and over. It makes sense to have an arsenal of tools available to help with these tasks, and that arsenal should be rigorously tested and reviewed.

If every app developer does this, the world will undoubtedly have more secure software. If every dev organization does it, all the better -- and there's a hidden benefit to doing this organizationally. When we do, code reviews become somewhat easier. Why? Because we're reviewing code for compliance to a set of code patterns. We're making sure our developers are using trustworthy code for these common security functions.

In our Mobile App Security Triathlon, Gunnar (@OneRaindrop) and I (@KRvW) will discuss all of these things, and provide code examples of how to implement them on Android and iOS. Consider how you'd approach each function, and join us in San Jose on 5-7 November, and let's discuss different ways of tackling these issues.

Cheers,

Ken van Wyk

Saturday, September 1, 2012

Mobile Attack Surface

Jim Bird and Jim Manico are working on a new addition to the OWASP Cheat Sheets family: they have a draft cheat sheet on Attack Surface in process. The Attack Surface helps you see where your system can be attacked. From the Cheat Sheet:
"Attack Surface Analysis helps you to:
  1. identify what you need to review/test for security vulnerabilities
  2. identify high risk areas of code that require defense-in-depth protection
  3. identify when you’ve changed the attack surface and need to do some kind of threat assessment"
I would add a 4th - it helps you see where you can defend. I use the Attack Surface Model in combination with a Threat Model to identify and locate countermeasures. The Threat Model helps to identify and the Attack Surface model helps to locate. This point is important because while you can't do much to control the attacker, you can control your defensive posture.

Eoin Keary wondered if there were some special considerations for attack surface analysis on Mobile, and I think there are plenty. Mobile attack surface is one of the main areas that changes the nature of the threat and the field of choice for defenders.

Tyler Shields at Veracode blogged about the Mobile Security stack and showed a number of the key points.
[Image: the full mobile security stack, from Tyler Shields' post]
I have no quibble with this high-level model; in particular, there are logical extensions for security people to use at the OS and app layers. But at the same time, the Hardware and Infrastructure layers need some elaboration to see why mobile is different. The hardware is in motion, the infrastructure layers include many byzantine protocols and formats such as GPS, NFC, and SMS, and hardware implementations vary greatly.
Standard use of the STRIDE Threat Model + Attack Surface shows how each threat is dealt with, so for apps that are using SSL you can see where it's mitigating threats across the attack surface:
Threat                    Countermeasure                        Data    Method    Channel
Spoofing                  Authentication                        -       -         -
Tampering                 Integrity                             Hash    -         -
Repudiation               Audit Logging                         -       -         -
Information Disclosure    Encryption                            -       -         SSL
Denial of Service         Availability                          -       -         -
Elevation of Privilege    Authorization, Hardened Interfaces    -       -         -

Note that the above should not be viewed as Checkbox Olympics, where six STRIDE threats times three attack surface parts always yield 18 countermeasures; that is basically never the case. But what it does do is show where and how countermeasures play in the stack, and it gives you ideas on the most cost-effective places to defend.

So our foundational Threat Model + Attack Surface needs some extension to deal with mobile, which could include new protocols like GPS, SMS, MMS, and NFC (which will vary by hardware type), new application distribution models through App Stores/Markets and updates, and finally some different assumptions around physical access and the like through the Lost/Stolen scenarios. So a basic extension to the Threat Model + Attack Surface view could yield something like the below:

Threat                    Countermeasure                App Distro   GPS   SMS   MMS   NFC   Lost/Stolen
Spoofing                  AuthN                         -            -     -     -     -     -
Tampering                 Integrity                     -            -     -     -     -     -
Repudiation               Audit Logging                 -            -     -     -     -     -
Information Disclosure    Encryption                    -            -     -     -     -     -
DoS                       Availability                  -            -     -     -     -     -
Elevation of Privilege    AuthZ, Hardened Interfaces    -            -     -     -     -     -


Again, as above, it's not a case of Checkbox Olympics, and there are limitations in what can be done in any protocol, but using this combination helps to show where we can reasonably expect to place countermeasures. In addition, I think a big takeaway for most people is that when you start with the view that Tyler Shields' post showed, you assume that the variability is mostly in the top layer, but in mobile you need to assume there is room for as much or more variability lower in the stack too.

**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands-on iOS and Android security - in San Jose, California, on November 5-7, 2012.

Mobile payment systems approaching boiling point

In my recent Computerworld column, I talked about the state of mobile payment systems today, as well as some near-term developments that are coming down the road. There's so much at stake for everyone--merchants, customers, and the payment card industry--that an "oops" could cost us all dearly.

Merchants and the payment card industry have a lot at stake in monetary terms, but we consumers also have much at stake. A blunder at this point could cost us precious time.

I believe all of us stakeholders appreciate the vision of mobile payments and how they could simplify our lives. Wouldn't it be wonderful to finally be able to ditch our wallets and have all of our important credentials on our digital devices? OK, that could be some time off still, but just consider it for a moment... What single thing in your wallet couldn't be represented digitally? Someday. I, for one, would welcome it.

And of course, that utopian vision of a better tomorrow places an enormous burden on our ability to secure things. The transaction systems must be secure. Our mobile platforms must be secure. At the very least, we simply must do better than we've ever done in the past. And there's much that can go wrong.

In the US, the payment card industry is plagued by an arcane design based on magnetic stripe readers in the point of sale ("POS" -- and they were named this without an apparent hint of irony) terminals. The biggest problem with the existing system is that the merchant gets full access to the customer's account information, leaving it wide open to fraud. I've personally been burned by that multiple times, forcing me to change my card numbers with dozens of merchants each time. POS indeed.

Elsewhere in the world, the prevalent system is called "EMV" (after the three primary backers of the system: Europay, Mastercard, and Visa) or "chip and PIN". The big advantage with EMV is that the merchant does not have access to the customer's account information. But even that system isn't without its perils, as demonstrated by some graduate researchers at Cambridge University a couple of years ago.

No, if we're ever going to succeed at deploying a digital wallet system that we can truly have faith in, we're going to need to do better than both of these.

And that brings us to mobile platforms. Both Android and iOS contain various pitfalls that the digital wallets are going to have to carefully avoid. System caches that store keystrokes, screenshots, etc., are just the beginning.

We hope that the folks in the back rooms implementing the various aspects of digital wallets right now are paying close attention to the mistakes of the past so that we can avoid them in the future. If they're going to make mistakes, let's at least ensure they're new mistakes and not the ones we've seen in the past.

That's a tall order, of course. Gunnar Peterson (@OneRaindrop) and I (@KRvW) will be discussing many of the pitfalls to avoid and how to avoid them in our upcoming Mobile App Security Triathlon, taking place on 5-7 November 2012 at the Fairmont Hotel in San Jose, California. We hope to see many mobile wallet developers there!

Cheers,

Ken van Wyk