Wednesday, February 20, 2013

Android adds a Secure Default for Content Providers

Security requires thought in design and lots of developer attention in secure coding, but there are gaps the platform can close by setting secure defaults, making life easier for designers and developers alike. Out of the box, Android offers a number of ways for companies to unwittingly open up vulnerabilities. Jelly Bean brings a number of security improvements; one of the more interesting is a new and important secure default that protects Content Providers, aka your data. The setting protects against data being inadvertently leaked to other apps. Android's permission model is quite expressive and lets you set fine-grained access control policy. Unfortunately, that means there are many options, and many enterprises that ship with default settings expose their data to any other app running on the Android device.

Most developers assume that when they create a database for their Android application, it can only be used by their app. Unfortunately, this assumption is not valid. The security policy defined in the AndroidManifest.xml is the place to check that access is set properly. A developer who sees the following may assume their data is protected:


<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example" />


But on Android 4.1 or earlier, the Manifest has an insecure default for Content Providers: if read and write permissions are not set, your Content Provider is assumed to be readable and writable by other apps. (Note: it is unlikely, but I can imagine why some apps might want their data readable by other apps; why there is a default that lets other apps write is something I have never understood.) In any case, if you have deployed Android apps, it is pretty likely that you have the defaults in place unless someone specifically turned off read and write access, so you should check the Android security policy and test the app.

How to check
For your apps, the best place to start is to review your AndroidManifest.xml and check that the permissions are set to disallow access you do not want, such as other apps reading from and writing to your app's databases. On 4.1 or earlier this must be set explicitly; otherwise the permission is granted.
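
As a sketch of what explicit settings look like (the permission names here are illustrative, not standard ones), the provider from the earlier example could require callers to hold permissions that you define, and opt out of external access entirely via android:exported, which Android honors for providers starting in 4.2:

<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example"
    android:exported="false"
    android:readPermission="com.example.permission.READ_DATA"
    android:writePermission="com.example.permission.WRITE_DATA" />

With a declaration along these lines, even on 4.1 and earlier the read and write permissions gate access rather than the insecure default.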

How to test
There are a variety of ways to test for this; the Mercury test suite for Android gives you a way to see what each app exposes:

               ..                    ..:.
              ..I..                  .I..
               ..I.    . . .... .  ..I=
                 .I...I?IIIIIIIII~..II
                 .?I?IIIIIIIIIIIIIIII..
              .,IIIIIIIIIIIIIIIIIIIIIII+.
           ...IIIIIIIIIIIIIIIIIIIIIIIIIII:.
           .IIIIIIIIIIIIIIIIIIIIIIIIIIIIIII..
         ..IIIIII,..,IIIIIIIIIIIII,..,IIIIII.
         .?IIIIIII..IIIIIIIIIIIIIII..IIIIIIII.
         ,IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII.
        .IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII.
        .IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII:
        .IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII

        The heavy metal that poisoned the droid
     
mercury> connect 127.0.0.1

*mercury> provider

*mercury#provider> info -p null

Package name: com.example.myapp
Authority: com.example.myapp.mydataprovider
Required Permission - Read: null
Required Permission - Write: null
Grant Uri Permissions: false
Multiprocess allowed: false

Package name: com.android.alarmclock
Authority: com.android.alarmclock
Required Permission - Read: null
Required Permission - Write: null
Grant Uri Permissions: false
Multiprocess allowed: false

Package name: com.android.mms
Authority: com.android.mms.SuggestionsProvider
Required Permission - Read: android.permission.READ_SMS
Required Permission - Write: null
Path Permission - Read: /search_suggest_query needs android.permission.GLOBAL_SEARCH
Path Permission - Read: /search_suggest_shortcut needs android.permission.GLOBAL_SEARCH
Grant Uri Permissions: false
Multiprocess allowed: false

(truncated)
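
If you want a second opinion without installing a test suite, on Jelly Bean and later you can poke at a provider straight from adb, roughly as another unprivileged app would (the authority here is the hypothetical one from the output above, and a real provider will usually want a table path appended to the URI):

adb shell content query --uri content://com.example.myapp.mydataprovider

If rows come back from a provider whose read permission is null, any app on the device can issue the same query.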

Most Android apps probably have null permissions set, and their developers do not realize it or understand the impact of the omission: other apps can read and write their data. In the example above, the app allows other applications to read and write its data. This happens all the time with Android apps that contain sensitive data, and the companies do not realize the exposure. This is just a snapshot, but Android permission sets are very much like a Purdey shotgun: great for skilled hunters, but also great for committing suicide.

**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1

Sunday, February 17, 2013

To understand the iOS passcode bug, consider the use case

If you've followed any of the iOS-related news sites in the last few days, you'd have to be aware of a security bug that has surfaced in Apple's mobile operating system. After all, a failure in a screen lock / authentication mechanism is a pretty big issue for consumers.

Indeed, there's a lot of uproar in the twitterverse and such over this security failure. And to be fair, it is an important issue, and the failure here mustn't be downplayed. But it doesn't seem to me to be a failure of their file protection architecture; it seems to be a presentation layer issue that can be exploited by a truly bizarre set of circumstances. The end result is still a data exposure, but let's consider things a bit deeper to see where the real problem is.

Apple prides itself on putting the user first. Among their mantras is the notion of delivering products that delight their customers. Great. Let's start there.

In iOS, there are a few ways of protecting data at rest. There's a File Protection API with four different classes of protection. There's also a Keychain Protection API with four different classes of protection. These are used respectively to protect files and keychain data stored on a device.

The reason for the four different protection classes is to accommodate different use cases, and therein lies the key (no pun intended) to understanding this latest iOS security bug.

Consider the following use case: Your iPhone is locked, even immediately following a reboot (yes, that matters in the various protection classes). You have yet to unlock the device during this boot session. The phone is in your pocket and a call comes in.

To handle that call, the phone app by necessity must look into your Contacts / Address Book and compare the incoming Caller ID with your list of people you know. If the caller is in your address book, a photo (optional) is displayed along with the caller's name. If not, just the incoming phone number is displayed.

In order to accomplish that use case, the Address Book can only be protected using the NSFileProtectionNone class. That's the same protection class that is used for the vast majority of files on an iOS solid state disk (NAND). Despite the name, it actually is encrypted: first by the file system itself, which is encrypted with a key called the "EMF!" key, and secondly at a file level by a key called the "DKEY" key. AES-256 encrypted, in fact, using a hardware chip for the encryption. The problem in their implementation, however, is that the EMF! and DKEY keys are stored in plaintext on the disk's Block 1, leaving them open to an attacker.
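
By contrast, an app whose data does not need to be readable while the device is locked can opt into a stronger class. Here is a minimal sketch in Swift, with a hypothetical file name, of choosing NSFileProtectionComplete, under which iOS evicts the file's key from memory whenever the device locks (file protection only takes effect on a device with a passcode set):

import Foundation

// Hypothetical file; in a real app this would live in the app's sandbox
let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("notes.db")
let secret = Data("sensitive".utf8)

do {
    // .completeFileProtection maps to NSFileProtectionComplete:
    // the file is unreadable whenever the device is locked
    try secret.write(to: url, options: [.atomic, .completeFileProtection])

    // An existing file's protection class can also be upgraded in place
    try FileManager.default.setAttributes(
        [.protectionKey: FileProtectionType.complete],
        ofItemAtPath: url.path)
} catch {
    print("failed to protect file: \(error)")
}

The keychain analogue is setting kSecAttrAccessible to kSecAttrAccessibleWhenUnlocked (or stricter) when you call SecItemAdd.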

But, back to the use case for the address book data. In iOS 6.1, the AddressBook data is stored in /var/mobile/Library/AddressBook in standard SQLite format. The good news is that the data is outside of your installed apps' sandboxes, so other apps aren't supposed to be able to get there. The bad news is that the Contacts app itself can get there just fine.

In the case of a locked phone, there's an interface between the screen lock, the phone app, and the contacts app by necessity.

That leads me to conclude the bug isn't a fundamental one in Apple's NSFileProtection API. Rather, it is a serious bug in the implementation of one or more of the above app components. To be sure, neither the phone, contacts, nor lock app should ever grant unauthenticated access to that data. But the decision lies in those apps, not at a lower level in the file protection architecture.

Still confused? Come to our next Mobile App Sec Triathlon and we'll discuss in detail how to use both the file protection and keychain protection classes properly. Hope to see you in New York this April!

Cheers,

Ken



Wednesday, February 13, 2013

The front lines of software security wars

There are wars being fought out there, and not just the ones we hear about in the media. I'm talking about "software security wars", and nowhere are they more apparent than in the iOS jailbreaking scene. What's going on there is fascinating to watch as an outsider (or, I'll bet, as an insider!), and could well be paving the future of secure software.

Just over a week ago, the "evad3rs" team released their "evasi0n" jailbreak tool for iOS. It works on most current iOS devices, including the iPhone 5, which had thwarted jailbreaking attempts for a few months. Notably absent from the evasi0n supported devices list is the third generation Apple TV, which was released in March of 2012 and has yet to see a successful jailbreak published.

So what's the big deal? After all, they broke almost all current devices, right? Well, yes they did. But a) the process took months, not weeks or days as we'd seen in prior device and iOS releases, and b) the ATV3 remains unbroken.

Let's take this a bit further. The evasi0n tool had to combine a "cocktail" of five different vulnerability exploits in order to successfully break a device. No single vulnerability unearthed by the evad3rs team was sufficient to accomplish everything needed to do the jailbreak.

Apple has come a long way in hardening its system, indeed. There are a couple of "soft" targets in the system, however, that the jailbreakers are constantly seeking to exploit.

When you put an iOS device into Device Firmware Update (DFU) mode, you can boot from a USB-provided kernel. Clearly, Apple doesn't want you to be able to boot just any old kernel, so they rigorously protect the DFU process to try to ensure that only signed kernels can be loaded. Any flaw in the USBmux communications could allow a non-signed kernel to be booted.

In the case of the evasi0n tool, one of the exploits it used involved altering a file inside the sandbox with a symbolic link to a file outside the sandbox -- clearly a significant flaw in Apple's sandboxing implementation!

So then, back to the "war". This battle is raging between two sets of software techies: one builds a strong defense, and the other searches for weaknesses and exploits them. Of course, there are many such battles being fought on other fronts of the software security wars, but this one is pretty tightly focused, which enables us to shine a spotlight on it and really study what both sides are doing.

With each release of iOS, Apple has been upping the ante by adding new security features to make it more difficult to break the system. These include features like address space layout randomization (ASLR), which pretty much eviscerated old-school stack and heap overflow attacks. The war rages on.

Who will win the war? I believe Apple will eventually protect the system to the point that jailbreaking is no longer cost- or time-effective for the attackers -- at least not for attack teams like the evad3rs. The fact that the current jailbreak took months makes this a fairly safe bet, IMHO. Time will tell.

So, what does all this mean to software developers? Ah, that's really the underlying question here. Once we have an iOS device with adequate authentication (and no, 4-digit PINs are NOT adequate), and that system is on a platform that can't be exploited in a reasonable amount of time, we'll have a platform that is truly trustworthy. For now, we have to continue to apply app-level protections to safeguard our most sensitive app data.

Join Gunnar (@OneRaindrop) and me (@KRvW) at our next Mobile App Security Triathlon event for a deep dive into these issues. New York in April/May!


Wednesday, February 6, 2013

Buyer Education for Avoiding Mobile Dim Sum Surprise Projects

Recently I did a talk at OWASP Twin Cities on building a mobile app security toolchain. The talk went pretty well, with lots of good questions. One takeaway: there are many people in many different kinds of companies struggling with how to do mobile AppSec. The room was sold out, and it looks like the OWASP chapter is organizing a repeat talk sometime this month, so if you missed it and want to come, stay tuned.

The basics of the talk: what does an end-to-end process look like for mobile AppSec, what tools are involved, and what dragons are lurking along the way? In the three-day training that Ken and I do, the second and third days are focused on hands-on iOS and Android security issues. The first day is focused on issues like how to fix your back end for mobile, what identity protocols might be used, what new use cases and risks mobile presents, and threat modeling for mobile.

One thing I have seen is that many mobile projects are outsourced, both the development and the vulnerability assessment work. Of course, companies outsource lots of things these days, but I would say it's more pronounced with mobile. In part this may be due to the small pool of mobile talent, and maybe also to companies still figuring out whether mobile is a fad that will go away or whether they really need to build out a team. To me, the answers for most companies are: mobile is not going away, so build your team, seed it with the right mix of folks, and train them.

There's another variable at play here. Outsourcing is fine as far as it goes, but it's only as good as your ability to select and direct the right consulting firms, teams, and work. For mobile vulnerability assessment in particular it can be a real hodgepodge: some tools and services left over from the webapp security days (do you still need them? yes, but you need others too), many things that apply on one platform but not on another, and a brand new set of use cases for mobile. In all, it's a bit like going to dim sum: things whizz by and you point at something you sort of recognize, and only after eating do you know whether the choice was any good (OK, but who doesn't like pork belly buns?).

The full three-day class is for hands-on developers and security people. We talked about making it only for them, but decided to keep the one-day option because there are many design, architecture, and other issues that extend to other parts of the organization. Whether directing an internal team or bringing in a consulting team, education is important for making more informed decisions. One thing we work to build into day one of the training is making sure people are educated buyers. The mobile app security process and its results should not be a surprise. Don't just point at a menu of services; instead, learn to identify which tools and services are most vital to your project, and focus on those.

**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1