Wednesday, August 29, 2012

Identity is Center Stage in Mobile Security Venn


In looking at the overall pieces in play for Enterprise security architecture in Mobile app deployments there are three high level categories of security concern.
  • Mobile Security - this is net new for the enterprise. Mobile apps need to deal with proprietary, byzantine systems and their access control models. Unlike traditional enterprise desktops, where security teams can configure systems the way they would like, the smartphones and tablets of today are akin to buying a car with the hood welded shut. On top of that, security teams must deal with new use cases around lost and stolen devices, remote wipe, and an overall collision course between security and privacy. Finally, the continued lengthening of the access control chain to meet the latest extension of distributed systems means more federation, more namespaces, more token types, and more protocols.
  • API Security - where Mobile security is a revolution, API Security is more of an evolution: much of the core of API security consists of iterative improvements on Web services security. The gateway vendors and other Web services security tools and technologies all have important roles to play here.
  • Enterprise Security - the changes here are evolutionary in nature as well. The enterprise stack must develop and deploy APIs to communicate with mobile, and factor in new data security requirements and security protocols, but the main challenges in this space are ones of integration.
So the above three areas lead us to the following Venn of Mobile security:

Understanding the main relationships is fundamental to building out an Enterprise Mobile Security architecture. From an Enterprise point of view, tools like MDM (Mobile Device Management) give the enterprise a way to provision devices and handle mobile-specific use cases like lost/stolen devices and remote wipe. From this view, provisioning an iPhone is similar to provisioning a laptop, something any enterprise security team has extensive experience with.

However, as Ping Identity's Paul Madsen asks: if my CEO and I both have an iPhone, is the device really the right level of granularity for security policy? To close this gap, Mobile Application Management (MAM) has stepped in to address some of the key bits around application access control. These may be packaged up and deployed together, or separately with an API Gateway to broker security protocols, perform inside/outside token exchanges, and provide other services.
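The broker role is easiest to see in miniature: validate the outside token, then mint an inside one. Here is a minimal Python sketch of that inside/outside exchange -- the token format, key, and function names are illustrative assumptions, not any particular gateway's API (a real deployment would validate OAuth or SAML tokens against an IdP and mint signed assertions via a Security Token Service):

```python
import base64
import hashlib
import hmac
import json
import time

INTERNAL_KEY = b"demo-shared-secret"  # illustrative only; never hard-code keys

def validate_external_token(token, issued_tokens):
    # Stand-in for real validation of an OAuth/SAML token against an IdP;
    # returns the subject the token was issued to, or None.
    return issued_tokens.get(token)

def mint_internal_token(subject, ttl=300):
    # A minimal HMAC-signed internal token; a real gateway would mint a
    # SAML assertion or signed JWT instead.
    payload = json.dumps({"sub": subject, "exp": int(time.time()) + ttl})
    sig = hmac.new(INTERNAL_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(payload.encode()).decode() + "." + sig

def gateway_exchange(external_token, issued_tokens):
    # The inside/outside exchange: outside token in, inside token out.
    subject = validate_external_token(external_token, issued_tokens)
    if subject is None:
        raise PermissionError("unknown or invalid external token")
    return mint_internal_token(subject)

# Example: the outside token "abc123" was issued to user alice
issued = {"abc123": "alice"}
internal = gateway_exchange("abc123", issued)
```

The point is the shape of the exchange, not the token format: the mobile-facing token never crosses into the enterprise, and the enterprise-facing token never leaves it.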

MDM, MAM and API Gateways all address pieces of the problem, but the enterprise still lacks a cohesive view. Should it support SAML, OAuth, OpenID Connect or others? 499 of the Fortune 500 use Active Directory for their users; what is the equivalent in Mobile? Where should the PEPs (policy enforcement points) integrate? Where and how should token exchanges be supported? How do the three different security protocols - proprietary Mobile client, Web services/app communications, and back-end enterprise - interact? How do I test it all?

These are some of the core questions that enterprises deal with today. Ken van Wyk and I will explore them in detail, from both a security architecture view and a hands-on developer view, at the Mobile AppSec Triathlon in San Jose this November (come join us!). We are in the opening part of the game, and some possible variations of the Mobile end game are starting to emerge. Until then, one thing is for sure - provisioning identity, enforcing access control decisions in each layer from Mobile to API to Enterprise, and making the layers work together cohesively is critical. Meeting this challenge means Identity is at the center of each stage.

Tuesday, August 21, 2012

"Astounding amount of iOS apps have been hacked" Really?

Ripped straight from the headlines: "Report Says Astounding Amount Of iOS Apps Have Been Hacked". Those are some mighty strong words.

Now, the report itself does also clearly say that, "The research was compiled by looking for hacked versions of the apps that were available from third-party sites outside of the App Store." But even still, the headline is pretty explosive. Let's dive in a bit and find what's relevant here to consumers and to developers.

From a consumer perspective, what we can take away from this message is that jailbreaking your iOS device and using third-party app sites is fraught with danger. You're removing pretty much all of the inherent security mechanisms that Apple designed into the iOS infrastructure when you jailbreak your device. Sure, there are plenty of jailbreaking tools, and the allure is certainly there -- many apps are available through these third-party sites that simply aren't available in Apple's App Store.

Nonetheless, and for the vast majority of consumers, it's best to avoid jailbreaking, at least from a security perspective.

But, how about iOS app developers? What do they need to know and do? For registered, licensed app developers who submit their apps through Apple's App Store, nothing has changed in that app ecosystem. It's still built around a massive digital signature hierarchy.

Here are a couple of key questions that we should be asking:
  1. Do we have to worry about people pirating our wares, adding malicious features to them, and releasing them through third-party underground app stores? Of course we do.
  2. Do we have to worry about our properly signed apps being run on jailbroken devices that are themselves affected by these other hacked apps? Of course we do.
Both of these scenarios are very realistic, and we should at least be aware of what's going on.

With regard to question 2 above, Apple used to provide an API for checking whether the device your app is running on is jailbroken, but that API has been deprecated. Beyond that, Jonathan Zdziarski has some useful tips on detecting jailbroken devices in his book, "Hacking and Securing iOS Applications: Stealing Data, Hijacking Software, and How to Prevent It".
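Detection tips of that sort largely amount to looking for artifacts that only exist once the sandbox has been broken. Here's the general idea, sketched in Python for readability -- an actual iOS app would do the equivalent checks in Objective-C with NSFileManager or stat(). The paths are common examples, not an exhaustive list, and a determined attacker can defeat any such check:

```python
import os

# Filesystem artifacts commonly left behind by jailbreaks
# (illustrative list, not exhaustive).
JAILBREAK_ARTIFACTS = [
    "/Applications/Cydia.app",
    "/usr/sbin/sshd",
    "/etc/apt",
    "/private/var/lib/apt",
]

def looks_jailbroken():
    # Heuristic only: presence of any artifact suggests the sandbox is
    # broken; absence proves nothing, since checks can be hooked or hidden.
    return any(os.path.exists(path) for path in JAILBREAK_ARTIFACTS)
```

Treat the result as one risk signal among several, not as a hard guarantee either way.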

Now, protecting our apps from being pirated, "enhanced", and placed on rogue app stores is a different beast entirely, and a tough problem to solve. At some level, since the original app executable is completely in the hands of end users on their iOS devices, some exposure is unavoidable. At another level, we should consider our application architectures carefully -- for example, keeping proprietary algorithms and such back on our server processing. And lastly, we can take steps to obfuscate our code (or portions of it). It's a discussion that we'll certainly go into during our upcoming Mobile App Sec Triathlon.

Cheers,

Ken

Friday, August 17, 2012

iOS SMS spoofing -- what a developer should know

Today's big news in the world of iOS security is that someone ("pod2g") has found a way to spoof an SMS sent to an iPhone (see the original posting here). If the information is correct, we hope Apple addresses the issue, of course. But until and unless they do, what does an iOS developer need to know and do about the problem?

For starters, let's see what "pod2g" has to say:
In the text payload, a section called UDH (User Data Header) is optional but defines lot of advanced features not all mobiles are compatible with. One of these options enables the user to change the reply address of the text. If the destination mobile is compatible with it, and if the receiver tries to answer to the text, he will not respond to the original number, but to the specified one.
Most carriers don't check this part of the message, which means one can write whatever he wants in this section : a special number like 911, or the number of somebody else.

In a good implementation of this feature, the receiver would see the original phone number and the reply-to one. On iPhone, when you see the message, it seems to come from the reply-to number, and you loose track of the origin.
So the problem appears to be at an operating system level, in how iOS parses an incoming SMS packet. It's worth noting that this is the actual carrier-based Short Message Service protocol, not Apple's own iMessage protocol -- which, we assume, is not affected by this bug.
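For the curious, the UDH itself is just a list of information elements: one byte of identifier, one byte of length, then the data. A toy Python parser makes it clear where the reply-to address hides. The 0x22 identifier for the Reply Address Element comes from 3GPP TS 23.040; the sample bytes below are fabricated for illustration:

```python
REPLY_ADDRESS_IEI = 0x22  # Reply Address Element identifier, per 3GPP TS 23.040

def parse_udh(ie_bytes):
    # Walk the information elements that follow the UDH length octet:
    # each is [identifier, length, data...].
    elements = []
    i = 0
    while i + 1 < len(ie_bytes):
        iei, ie_len = ie_bytes[i], ie_bytes[i + 1]
        elements.append((iei, ie_bytes[i + 2:i + 2 + ie_len]))
        i += 2 + ie_len
    return elements

def has_reply_address_override(ie_bytes):
    # True if the sender supplied a reply-to address distinct from the
    # originating number -- the field the spoof leans on.
    return any(iei == REPLY_ADDRESS_IEI for iei, _ in parse_udh(ie_bytes))

# Fabricated sample: one reply-address element carrying four address octets
sample = bytes([REPLY_ADDRESS_IEI, 0x04, 0x91, 0x21, 0x43, 0x65])
```

The spec-compliant fix pod2g describes is exactly what this makes visible: show both the originating number and the reply-to address, rather than silently substituting one for the other.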

Since the problem is at an operating system level, it's not likely that there are any publicly accessible APIs where we need to worry about this in our own apps. But does that mean it's not an issue for our apps? Not entirely.

Many apps (e.g., Google, Facebook) these days use SMS as an out-of-band authentication mechanism to verify that our users are who they say they are. If your app uses this, you should be aware that an attacker could conceivably send a spoofed validation code to your users. An attacker could also spam your users by making the messages appear to come from your own app's "caller ID" or phone number.
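The mitigation on our side is to treat SMS as a delivery channel only: the server generates the code, and verification happens over the app's authenticated API connection, never by trusting anything the incoming SMS claims about its sender. A minimal server-side sketch, where the names and in-memory storage are illustrative assumptions:

```python
import hmac
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid

_pending = {}  # user -> (code, expiry); a real service would use a shared store

def issue_code(user):
    # The server generates the code; the SMS is only the delivery channel.
    code = "%06d" % secrets.randbelow(1000000)
    _pending[user] = (code, time.time() + CODE_TTL)
    return code  # hand off to the SMS provider here

def verify_code(user, submitted):
    # Single use: the code is consumed whether or not it matches.
    entry = _pending.pop(user, None)
    if entry is None:
        return False
    code, expiry = entry
    if time.time() > expiry:
        return False
    # Constant-time compare avoids leaking digits via timing.
    return hmac.compare_digest(code, submitted)
```

Because only the server-generated code is ever accepted, a spoofed SMS can annoy or confuse a user, but it can't complete the verification on its own.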

For that matter, if you're using SMS for other things -- perhaps contacting your users to let them know about updates or some such -- it's worth knowing that an attacker could spoof your messages. (And you really should be using the built-in notification service in any case.)

Does that matter to your app? More to the point: is there anything you should be doing about this problem right now? In the grand scheme of things, we don't think so. We'll go ahead and file that under "good to know, but there's not much we need to do about it today -- and hope that Apple fixes this input validation problem in their next iOS release."

Cheers,

Ken van Wyk

Wednesday, August 15, 2012

Launching the blog

In our upcoming Mobile App Sec Triathlon, Gunnar and I are going to be presenting a deep dive into the app security worlds for both Google's Android and Apple's iOS platforms. But which is better (in security terms)? Or does that even matter to our consumers?

Well, for one thing, the Triathlon event isn't about one versus the other, but comparisons are inevitable nonetheless. The truth is that both platforms offer consumers--and developers--considerable security features and, at the same time, pitfalls to avoid.

In some ways, the two smartphone / tablet environments share several similarities. They're both built on top of venerable UNIX / Linux kernels, and their respective feature lists are quite formidable, with huge areas of overlap. From a security perspective, both environments have implemented sandboxes for their apps, so that a security defect in one app should not impact the rest of the system. Or so the theory goes.

But peel back that onion just a little bit, and the differences start to surface very quickly. For one thing, Android's security foundation differs substantially from iOS's. It is more of a traditional UNIX-like model that relies on file access controls via unique UIDs and GIDs for each app installed. Apple, on the other hand, accomplishes their app sandboxing via a massive hierarchical digital signature chain, coupled with rigorously reviewed and enforced app policies surrounding their app store.

Neither approach is perfect, and both have significant strengths and weaknesses, as you might well expect.

Time (and consumers) will be the ultimate judge of which approach is more effective. As of this writing, however, any objective measure will show that Android leads the pack in active malware samples in its ecosystem. The argument could be made that Apple's policy-heavy approach has thus far served it well, while Android's more open approach has shown some signs of problems.

But one thing is for certain: at no time have consumers had more or better choices in mobile computing devices. As a result, there's been a veritable gold rush of apps hitting both ecosystems.

As pragmatists, Gunnar and I like to focus on how to make the best use of whatever tools we're given. The truth is that a determined app developer can write significantly secure--note I didn't say perfect--software on either platform. But, when you have to cross a minefield, it's always best to know where the mines are.

Cheers,

Ken van Wyk