Tuesday, April 2, 2013

Session management vulnerabilities are tricky because they are highly context-dependent. Identifying session fixation, session replay, and the like means looking at the end-to-end session lifecycle, from creation to use to termination.
On normal webapps this is a mostly straightforward affair: examine the session cookie, ensure proper cookie hygiene, make sure transport is protected, and check that timeouts are set correctly. On normal webapps the server sets the timeout for the session cookie (say 20 minutes), sends it to the browser, and validates the session on the return trip. The session lives as a relationship between the browser client and the web server. But what about mobile sessions? They are pretty different; let's count the ways.
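A minimal sketch of that cookie hygiene (the cookie name and value are illustrative only, not from any particular server):

Set-Cookie: SESSIONID=2a8f41c9; Path=/; Secure; HttpOnly; Max-Age=1200

Here Max-Age=1200 matches the 20-minute timeout in the example, Secure keeps the cookie off unencrypted transport, and HttpOnly keeps it out of reach of script.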
First off, the user likely authenticates locally to the mobile app itself; let's call this session #1. Then any time the app needs to do something on the network (like synchronize data or replicate), it authenticates from the mobile app to the server; let's call this session #2. Next, the server is very likely an API Gateway with no data or business logic (those live on the backend app servers), so the Mobile API Gateway has to authenticate to the backend servers; let's call this session #3.
Now just logging into each of these sessions is a decent bit of work in and of itself. Add to that the fact that these are very likely three fundamentally different protocols: maybe username/password for #1, OAuth for #2, and SAML for #3. Logging in is where it begins, but that's not where it ends.
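To keep the moving parts straight, here is a rough sketch of the chain, using the example protocols above:

user --(session #1: local auth, e.g. username/password)--> mobile app
mobile app --(session #2: e.g. OAuth)--> Mobile API Gateway
Mobile API Gateway --(session #3: e.g. SAML)--> backend app servers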
How do you ensure consistent policy across these different protocols? When do you time out each session? What happens if session #1 times out but sessions #2 and #3 are still alive? How do you reinstantiate? What happens when your user logs out?
Today these are mainly exercises left to the implementers to figure out; the tools market is pretty nascent. And the above scenario is a simple view compared to some mobile apps. Enterprises still struggle with session management for webapps, where ensuring session data isn't easily spoofed or stolen requires careful review, and it's vastly more complicated for many mobile apps. Until ready-made tools are available, time an enterprise spends on end-to-end design and on testing that the sessions mesh appropriately is time well spent.
Update: Paul Madsen added on Twitter, "and the original SAML session from enterprise IdP." For sure there are many combinations and permutations to consider. What I am seeing, though, is that a base-case mobile app has at least 3x more complexity for session management than a base-case web app. Considering many webapps still struggle, this is food for thought.
**
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Friday, March 22, 2013
Security Implications from One Year on Mobile Only
Benjamin Robbins (@PaladorBenjamin) just completed 52 solid weeks working solely on mobile. Of course there were some issues, but he did it and the lessons learned are instructive.
A key takeaway:
From a practical perspective I’ve learned that there are certain needs of human ergonomics that you just can’t engineer your way around no matter how cool the technology. I can say with confidence that a monitor and keyboard are not going anywhere anytime soon.
Your mobile device is an extension of other things; it's not a full replacement. So as someone designing security and identity services for mobile, you have to be able to mesh that identity with the server, the other machines, and the directory management systems.
It's tempting to think of machines and mobile devices as islands that we need to protect (enterprise archipelago security architect?), but this is to miss the point. Mobile devices need data input from other places (likely from people using keyboards ;-P), need access to documents, and need server-side communications. Users also want something resembling a consistent set of access rights no matter what platform they are using: laptop, webapp, mobile, workstation, or tablet. These are unsolved problems in the security and identity industry today.
Still, Benjamin Robbins' piece is a great testament, practical issues aside, to how far things have come in a short while for mobile. I continue to expect that we will see more mobile apps, not fewer, and that the devices will snowball on top of the servers, browsers, services, and desktop/laptop machines you already have to cope with. Design your security services accordingly.
**
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Tuesday, March 19, 2013
US FTC fires a warning shot in the mobile software security wars
If you weren't looking carefully, you probably weren't even aware of it. (Indeed, I hadn't seen it until I read John Edwards's piece over at The Mobility Hub.) But, make no mistake about it, this is a big deal for the software industry. The ramifications could be far reaching and could end up touching every company that develops software (at least for US consumers).
What's the big deal? HTC America recently settled a complaint filed against them by the Federal Trade Commission. The terms of the settlement force HTC to develop patches to fix numerous software vulnerabilities in its mobile products, including Android, Windows Mobile, and Windows Phone products.
Blah blah blah, yawn. Right? WRONG!
What makes this case interesting to software developers in the mobile and not-mobile (stationary?) worlds is the litany of issues claimed by the FTC. Among other things, FTC claims that HTC:
- "engaged in a number of practices that, taken
together, failed to employ reasonable and appropriate security in the design and
customization of the software on its mobile devices";
- "failed to implement an adequate program to assess the security of products it shipped
to consumers;"
- "failed to implement adequate privacy and security guidance or training
for its engineering staff;"
- "failed to conduct assessments, audits, reviews, or tests to
identify potential security vulnerabilities in its mobile devices;"
- "failed to follow well-known and commonly-accepted secure programming practices, including secure practices
that were expressly described in the operating system’s guides for manufacturers and
developers, which would have ensured that applications only had access to users’
information with their consent;"
- "failed to implement a process for receiving and addressing security vulnerability reports from third-party researchers, academics or other members of the public, thereby delaying its opportunity to correct discovered vulnerabilities or respond to reported incidents."
Oh, is that all? No, it's not. The FTC complaint provides specific examples and their impacts. The examples include misuse of permissions, insecure communications, insecure app installation, and inclusion of "debug code". It goes on to claim that consumers were placed at risk by HTC's practices.
Now, I'm certainly no lawyer, but reading through this complaint and its settlement tells me that the US Federal Government is hugely interested in mobile product security -- and presumably other software as well. I don't know the specifics of just what HTC really did or didn't do, but this sure looks to me like a real precedent nonetheless. It should also send a firm warning message to all software developers. There but for the grace of God go I, right?
Reading the complaint, there are certainly some direct actions that the entire industry would be wise to heed, starting with implementing a security regimen that assesses the security of all software products shipped to consumers. Another key action is to implement privacy and security guidance or training for engineering staff. That list should go on to include assessments, audits, reviews, and testing products to identify (and remediate) security vulnerabilities.
There are many good sources of guidance available today regarding this sort of thing. Clearly, we believe mobile app developers could do a lot worse than attending one of our Mobile App Security Triathlon events like the one we're holding in New York during April. But that's just one of many good things to do. Be sure to also look at the Build Security In portal run by the US Department of Homeland Security. OWASP's Mobile Security Project can also be useful in looking for tips and guidance.
Come join us in New York and we'll help you build your mobile app security knowledge, as well as provide many pointers to other useful resources you can turn to so that your organization isn't so likely to find itself in the FTC's crosshairs.
Cheers,
Ken van Wyk
Schneier Says User Awareness: Tired, Dev Training: Wired
Bruce Schneier tackles security training in Dark Reading. He basically says that classic "security awareness" training for users is a waste of money. Certainly there is a lot of evidence to back up that claim; users routinely click through certificate warnings, for example.
What I found most interesting is what Bruce Schneier recommended to do instead of security awareness training for users:
we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.

Of course I wholeheartedly agree with this. Let's say doing a great job on security awareness training for users, best case, maybe takes the rate of users clicking through cert warnings from 90% to 80%.
If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.
On the other hand, developers, security people, and architects are actually building and running the system. If they know how to avoid mistakes, they are in a position to protect all of the app's users from a broad range of threats.
This is the essence of what Ken and I focus on in Mobile App Sec Triathlon training. I wrote about it in Why We Train. We want to help developers, security people and architects recognize security problems in design, development and operations; and, crucially, have some concrete ideas on what they can do about them.
Companies are scrambling to get "something" up and running for Mobile, either enterprise side or customer/external facing or both. It really reminds me of the early days of the web. A lot of this is very fragmented inside of companies. A lot is outsourced, too. Ken and I put a lot of thought into the three-day class so that it's focused on what companies want and need.
Choose Your Own Adventure
Day one is about mobile threats that apply to all platforms, architecture, and design considerations. We look at threat modeling for Mobile. We drill down on the identity issues for mobile, the server side, and what makes a Mobile DMZ. The class is set up so that architects and dev managers may choose to attend just day one.
Days two and three are hands-on iOS and Android, depending on what your company is building and/or outsourcing. You come out of these days knowing how to avoid security pitfalls in coding for mobile. Whether you are doing the dev in house or working with a provider, developers and security people will gain a deeper understanding of the core security design and development options for building more secure code.
We recently announced a scholarship program for students and interns. Based on past trainings, this has proven to be a great way to get fresh perspective on mobile trends. Finally, since many companies are launching new mobile projects, we often see whole teams that need to get up to speed on issues rather quickly (before deployment), so to serve this need we offer a group discount: send three people and the fourth comes free.
Overall our approach is geared towards adapting to the things that are most useful to companies trying to build more secure mobile apps. Training developers on secure coding is not yet a sine qua non, but for those that invest in building up skills and expertise, it pays dividends in protecting your users, data, and organization.
**
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Monday, March 18, 2013
ANNOUNCING: MobAppSecTri Scholarship Program
For our upcoming three-day Mobile App Sec Triathlon in New York City on April 29 - May 1, we are once again running a student / intern scholarship program.
We will be giving away a few tickets to the event, absolutely free, to a small number of deserving students and interns.
Course details can be found here.
Requirements
To be considered for a student / intern free registration, you will need to submit to us by 31 March 2013 a short statement of: A) Your qualifications and experience in mobile app development and/or information security, and B) Why you deserve to be selected. Candidate submissions will be evaluated by the course instructors, Gunnar Peterson (@OneRaindrop) and me (@KRvW). Decisions will be based solely on the quality of the submissions, and all decisions will be final.
Details
All scholarship submissions are due no later than midnight Eastern Daylight Time (UTC -0400) on 31 March 2013. Submissions should be sent via email to us. Winning entrants will be notified no later than 15 April 2013.
Student / intern ticket includes entrance to all three days of the event, along with all course refreshments and catering. Note that these free tickets do not include travel or lodging expenses.
Wednesday, March 13, 2013
What can/should the mobile OS vendors do to help?
Mobile device producers are missing important areas where they can and should be doing more.
What makes me say this? Well, I was talking with a journalist about mobile device/app security recently when he asked me what the device/OS vendors can do to help with security for end consumers. Good question, and I certainly had a few suggestions to toss in. But it got me thinking about what they can be doing to make things better for consumers. And that got me thinking about what they can be doing to help app developers.
On the consumer side, the sorts of things that would be on my wish list include:
- Strong passcode authentication. On iOS, the default passcode is a 4-digit PIN, and many people disable passcodes entirely. Since the built-in file protection encryption key is derived from a combination of the hardware identifier and the user's passcode, this just fails and fails. Even a "protected" file can be broken in just a few minutes using readily available software that brute-force guesses all 10,000 (count 'em) possible passcodes; see the quick math after this list. Well, a stronger passcode mechanism that is still acceptable to end consumers would be a good start. There are rumors of future iOS devices using fingerprint scanners, for example. While biometric sensors aren't without their own problems, they should prove to be a whole lot better than 4-digit PINs.
- Trusted module. Still picking on iOS here... Storing the encryption keys in plaintext on the SSD (NAND) violates just about every rule of safe crypto. Those keys should be stored in hardware in a place that's impossible to get to programmatically, and would require a huge cost to extract forensically.
- Certificates. Whether they are aware of it or not, iOS users use certificates for various trust services in iCloud and apps like Apple's Messages. Since Apple is already generating user certificates, why not also give all iOS users certificates for S/MIME and other security services? That would also open up to app developers the possibility of stronger authentication using client-side certificates.
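Some quick math on the passcode point above: a 4-digit PIN yields 10^4 = 10,000 possible codes, while even a modest 6-character passcode of lowercase letters and digits yields 36^6, roughly 2.2 billion, on the order of 200,000 times more keyspace for an attacker to search.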
Here are a few of the things I think would be useful to mobile app developers, in no particular order:
- Authenticator client for various protocols. There are various ways to build an authenticator into a mobile app. In their various SDKs, it would be useful for device vendors to provide example authenticators for popular authentication protocols and services such as Facebook Connect and Google Authenticator.
- Payment services. Similarly, example code for connecting to PayPal and other payment services back-ends would be useful. We're seeing some of those coming from the payment providers themselves, which is great, but it's been a long time coming.
So, I have no inside knowledge at Apple, or Google for that matter, but it's always nice to dream. A few relatively small enhancements to the underlying technology could open up all sorts of possibilities for users and developers alike. As it stands, a developer writing a business app on iOS has to build so many things from scratch, as the intrinsic options for safe data storage, transmission, etc., are just not acceptable for today's business needs.
How about you? What would you add or change on these lists? What are your pet peeves or wish list items? We'd love to hear them.
Come join Gunnar (@OneRaindrop) and me (@KRvW) for three days of discussing these and many other issues in New York at our next Mobile App Sec Triathlon, #MobAppSecTri.
Cheers,
Ken
What Comprises a Mobile DMZ?
I have a new post on the Intel blog on Mobile DMZs. The post looks at which parts of Identity and Access Management, Defensive Services, and Enablement are the same for Mobile, and which parts must adapt.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Wednesday, February 20, 2013
Android adds a Secure Default for Content Providers
Security requires some thought in design and lots of developer attention in secure coding, but there are gaps the platform can close to make designers' and developers' lives easier: setting secure defaults. Android's defaults introduce a number of ways that companies can unwittingly open up vulnerabilities. Jelly Bean offers a number of security improvements; one of the more interesting is a new and important secure default that protects Content Providers, a.k.a. your data, against being inadvertently leaked to other apps. Android's permission model is pretty expressive and lets you set fine-grained access control policy. Unfortunately, that means there are many options, so enterprises that ship with default settings can expose their data to any other app running on the Android device.
Most developers assume that when they create a database for their Android application, it can only be used by their app. Unfortunately, this assumption is not valid. The security policy defined in the Android Manifest is the place to check to make sure this is set properly. A developer who sees the following may assume their data is protected:

<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example" />
But for Android 4.1 or prior, the Manifest has an insecure default for Content Providers: if read and write permissions are not explicitly set, your Content Provider is assumed to be readable and writeable by other apps. (Note: it's unlikely, but I can imagine why some app might want its data readable by other apps; why there is a default allowing other apps to write is something I have never understood.) In any case, if you have deployed Android apps, it's pretty likely that you have the defaults in place unless someone specifically turned off read and write access, so you should check the Android security policy and test the app.
How to check
For your apps, the best place to start is to review your AndroidManifest.xml and check that the permissions are set to disallow access that you do not want, such as other apps reading and writing to your app's databases. On 4.1 or prior this has to be set explicitly; otherwise the access is granted.
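A sketch of what an explicit policy might look like, extending the snippet above (the permission names here are hypothetical and would need to be declared with <permission> elements elsewhere in the manifest):

<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example"
    android:readPermission="com.example.permission.READ_DATA"
    android:writePermission="com.example.permission.WRITE_DATA" />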
How to test
There are a variety of ways to test for this; the Mercury test suite for Android gives you a way to see what is running:
The heavy metal that poisoned the droid
mercury> connect 127.0.0.1
*mercury> provider
*mercury#provider> info -p null
Package name: com.example.myapp
Authority: com.example.myapp.mydataprovider
Required Permission - Read: null
Required Permission - Write: null
Grant Uri Permissions: false
Multiprocess allowed: false
Package name: com.android.alarmclock
Authority: com.android.alarmclock
Required Permission - Read: null
Required Permission - Write: null
Grant Uri Permissions: false
Multiprocess allowed: false
Package name: com.android.mms
Authority: com.android.mms.SuggestionsProvider
Required Permission - Read: android.permission.READ_SMS
Required Permission - Write: null
Path Permission - Read: /search_suggest_query needs android.permission.GLOBAL_SEARCH
Path Permission - Read: /search_suggest_shortcut needs android.permission.GLOBAL_SEARCH
Grant Uri Permissions: false
Multiprocess allowed: false
(truncated)
Most Android apps probably have null permissions set, and their developers do not realize it, or the impact of that omission (other apps can read and write their data). In the case above, the example app is set to allow other applications to read and write its data. This happens many times with Android apps that contain sensitive data, where the companies do not realize the exposure. This is just a snapshot, but the Android permission sets are very much like a Purdey shotgun: great for skilled hunters, but also great for committing suicide.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Sunday, February 17, 2013
To understand the iOS passcode bug, consider the use case
If you've followed any of the iOS-related news sites in the last few days, you'd have to be aware of a security bug that has surfaced in Apple's mobile operating system. After all, a failure in a screen lock / authentication mechanism is a pretty big issue for consumers.
Indeed, there's a lot of uproar in the twitterverse and such over this security failure. And to be fair, it is an important issue, and the failure here mustn't be downplayed. But it doesn't seem to me to be a failure of Apple's file protection architecture. It seems to me to be a presentation layer issue that can be exploited under a truly bizarre set of circumstances. The end result is still a data exposure, but let's consider things a bit deeper to see where the real problem is.
Apple prides itself on putting the user first. Among their mantras is the notion of delivering products that delight their customers. Great. Let's start there.
In iOS, there are a few ways of protecting data at rest. There's a File Protection API with four different classes of protection. There's also a Keychain Protection API with four different classes of protection. These are used respectively to protect files and keychain data stored on a device.
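As a minimal sketch (not from the post itself; the file path and error handling are placeholders), assigning one of those file protection classes looks roughly like this in Objective-C:

// Ask iOS to keep this file encrypted whenever the device is locked
NSDictionary *attrs = @{NSFileProtectionKey: NSFileProtectionComplete};
NSError *error = nil;
BOOL ok = [[NSFileManager defaultManager] setAttributes:attrs
                                           ofItemAtPath:filePath
                                                  error:&error];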
The reason for the four different protection classes is to accommodate different use cases, and therein lies the key (no pun intended) to understanding this latest iOS security bug.
Consider the following use case: Your iPhone is locked, even immediately following a reboot (yes, that matters in the various protection classes). You have yet to unlock the device during this boot session. The phone is in your pocket and a call comes in.
To handle that call, the phone app by necessity must look into your Contacts / Address Book and compare the incoming Caller ID with your list of people you know. If the caller is in your address book, a photo (optional) is displayed along with the caller's name. If not, just the incoming phone number is displayed.
In order to accomplish that use case, the Address Book can only be protected using the NSFileProtectionNone class. That's the same protection class that is used for the vast majority of files on an iOS solid state disk (NAND). Despite the name, it actually is encrypted: first by the file system itself, which is encrypted with a key called the "EMF!" key, and secondly at a file level by a key called the "DKEY" key. AES-256 encrypted, in fact, using a hardware chip for the encryption. The problem in their implementation, however, is that the EMF! and DKEY keys are stored in plaintext on the disk's Block 1, leaving them open to an attacker.
But, back to the use case for the address book data. In iOS 6.1, the AddressBook data is stored in /var/mobile/Library/AddressBook in standard SQLite data format. The good news is that data is outside of your installed apps' sandboxes, so other apps aren't supposed to be able to get there. The bad news is the Contacts app itself can get there just fine.
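To make the exposure concrete, here is a hypothetical sketch of what someone with file system access (say, via a jailbreak or a forensic image) could run, assuming the standard ABPerson table layout:

sqlite3 /var/mobile/Library/AddressBook/AddressBook.sqlitedb "SELECT First, Last FROM ABPerson;"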
In the case of a locked phone, there's an interface between the screen lock, the phone app, and the contacts app by necessity.
That leads me to conclude the bug isn't a fundamental one in Apple's NSFileProtection API. Rather, it is a serious bug in the implementation of one or more of the above app components. To be sure, neither the phone, contacts, nor lock app should ever grant unauthenticated access to that data. But the decision lies in those apps, not at a lower level in the file protection architecture.
Still confused? Come to our next Mobile App Sec Triathlon and we'll discuss in detail how to use both the file protection and keychain protection classes properly. Hope to see you in New York this April!
Cheers,
Ken
Wednesday, February 13, 2013
The front lines of software security wars
There are wars being fought out there, and not just the ones we hear about in the media. I'm talking about "software security wars", and nowhere are they more apparent than in the iOS jailbreaking scene. What's going on there is fascinating to watch as an outsider (or, I'll bet, as an insider!), and could well be paving the future of secure software.
Just over a week ago, the "evad3rs" team released their "evasi0n" jailbreak tool for iOS. It works on most current iOS devices, including the iPhone 5, which had thwarted jailbreaking attempts for a few months. Notably absent from the evasi0n supported devices list is the third generation Apple TV, which was released in March of 2012 and has yet to see a successful jailbreak published.
So what's the big deal? After all, they broke almost all current devices, right? Well, yes they did. But a) the process took months, not weeks or days as we'd seen in prior device and iOS releases, and b) the ATV3 remains unbroken.
Let's take this a bit further. The evasi0n tool had to combine a "cocktail" of five different vulnerability exploits in order to successfully break a device. No single vulnerability unearthed by the evad3rs team was sufficient to accomplish everything needed to do the jailbreak.
Apple has come a long way in hardening its system, indeed. There are a couple of "soft" targets in the system, however, that the jailbreakers are constantly seeking to exploit.
When you put an iOS device into Device Firmware Update (DFU) mode, you can boot from a USB-provided kernel. Clearly, Apple doesn't want you to be able to boot just any old kernel, so they rigorously protect the DFU process to try to ensure that only signed kernels can be loaded. Any flaw in the USBmux communications, and a non-signed kernel could potentially be booted.
In the case of the evasi0n tool, one of the exploits it used involved altering a file inside the sandbox with a symbolic link to a file outside the sandbox -- clearly a significant flaw in Apple's sandboxing implementation!
So then, back to the "war". This battle is raging between two sets of software techies. One builds a strong defense, and then the other searches for weaknesses and exploits them. Of course, there are many such battles being fought on other front lines of the software security wars, but this one is pretty tightly focused, which enables us to shine a spotlight on it and really study what both sides are doing.
With each release of iOS, Apple has been upping the ante by adding new security features to make it more difficult to break the system. These include features like address space layout randomization (ASLR) that pretty much eviscerated old-school style stack and heap overflow attacks. The war wages on and on.
Who will win the war? I believe Apple will eventually protect the system to the point that jailbreaking is no longer cost or time effective to the attackers -- at least not to attack teams like the evad3rs. The fact that the current jailbreak took months makes this a fairly safe bet, IMHO. Time will tell.
So, what does all this mean to software developers? Ah, that's really the underlying question here. Once we have an iOS device with adequate authentication (and no, 4-digit PINs are NOT adequate), and that system is on a platform that can't be exploited in a reasonable amount of time, we'll have a platform that is truly trustworthy. For now, we have to continue to apply app-level protections to safeguard our most sensitive app data.
Join Gunnar (@OneRaindrop) and me (@KRvW) at our next Mobile App Security Triathlon event for a deep dive into these issues. New York in April/May!
Wednesday, February 6, 2013
Buyer Education for Avoiding Mobile Dim Sum Surprise Projects
Recently I did a talk at OWASP Twin Cities on building a mobile app security toolchain. The talk went pretty well, with lots of good questions. One takeaway: there are many people in many different kinds of companies struggling with how to do Mobile App Sec. The room was sold out, and it looks like the OWASP Chapter is organizing a repeat talk some time this month, so if you missed it and want to come, stay tuned.
The basics of the talk: what does an end-to-end process look like for Mobile AppSec, what tools are involved, and what dragons are lurking along the way? In the three-day training that Ken and I do, the second and third days are focused on hands-on iOS and Android security issues. The first day is focused on a number of issues like how to fix your back end for mobile, what identity protocols might be used, what new use cases and risks mobile presents, and threat modeling for mobile.
One thing that I have seen is that many mobile projects are outsourced, both development and vulnerability assessment work. Of course, companies outsource lots of things these days, but I would say it's more pronounced with Mobile. In part this may be due to a small pool of mobile talent. And maybe also companies are figuring out if mobile is a fad that will go away or if they really need to build out a team. To me, the answers for most companies are: mobile is not going away, so build your team, seed it with the right mix of folks, and train them.
There's another variable at play here. Outsourcing is fine as far as it goes, but it's only as good as your ability to select and target the right consulting firms, teams, and work direction. For mobile vulnerability assessment in particular it can be a real hodgepodge: some tools and services left over from the webapp security days (do you still need them? yes, but you need others too), many things that apply on one platform but not on another, and a brand new set of use cases for mobile. In all, it's a bit like going to dim sum: things whizz by and you point at something you sort of recognize, and only after eating do you know if the choice was any good (ok, but who doesn't like pork belly buns?).
The full three-day class is for hands-on developers and security people. We talked about making it only for them, but decided to keep the one-day option because there are many design, architecture, and other issues that extend to other parts of the organization. Whether directing an internal team or bringing in a consulting team, education is important to making more informed decisions. One thing we work to build into day one of the training is making sure people are educated buyers. The mobile app security process and results should not be a surprise. Don't just point at a menu of services; instead, learn to identify what tools and services are most vital to your project, and focus on those.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Thursday, January 31, 2013
The Next Mobile Wave- NYEAABTODADWI
Security departments are getting spun up over BYOD and its younger brother COPE (Company Owned, Personally Enabled). I suggest a new approach that is neither BYOD nor COPE. I even have a catchy slogan that is sure to catch on: NYEAABTODADWI (Noticing Your Employees Are Already Bringing Their Own Devices And Dealing With It).
WSJ summarizes the issues in How BYOD Became the Law of the Land:
The most challenging adjustment—and one that still has the longest way to go—is the need for better systems to authenticate network users, essentially all of whom now access corporate systems with mobile devices. This is an area of strength for RIM, known for the resilience of its security network. The IT infrastructure to support BYOD "has grown up quickly, with the exception of identity management," Mr. Dulaney said.
CIOs also have shifted the onus of responsibility for the devices and the data they process to the employees themselves. CIOs created new policies spelling out how companies and employees would treat mobile devices and data, and by addressing related questions of liability and insurance. In some cases, companies insist on the right to wipe a device clean of all information, including personal files and data.

The initial response from IT security to mobile was MDM. This is fine, but nowhere near sufficient. The device level of granularity is not enough to deploy and enforce security policy, in the same way that "Laptop user" is not good enough. We need user identity, app identity, and data encryption. And we cannot always assume that the server will be in play. Further, MDM is only applicable for the enterprise and does not help with the myriad of customer-facing, external mobile apps that are being deployed every day.
Then there is the server side: Travis Spencer did a roundup of some of the core identity issues at play here. From there, decisions need to be made on key management, hardening Mobile web services, and implementing Gateways. So there is a lot to do and not much time to lose, because if you look at the risk of your mobile apps - what they are transacting - it is pretty high. Another little wrinkle is that many initial mobile app projects are outsourced, so there tends to be a black box: well, Company X is responsible. But the security team should really be engaged more actively and proactively, to make sure there is a Mobile-specific security policy backed by guidance, architecture, patterns, and testing, so that the end product gets the job done. But before we get to all of that, we must NYEAABTODADWI.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Tuesday, January 22, 2013
How's your 2013 mobile app security fitness coming along?
In my Computerworld column this month, I described how being secure is in some ways similar to being fit. There's good reason why Gunnar (@oneraindrop) and I (@krvw) chose the name "Mobile App Sec Triathlon" for the training events we do.
So, how are your 2013 security-related resolutions coming along? We're about 2/3 of the way through the first month of the year, after all. Not so good, eh? Well, let's consider a few things to help out a bit.
- Be realistic. It's really easy to make a massive list of everything you should be doing, and then simply become overwhelmed by it all. Prioritize what matters most to you, your organization, and your users. The good folks over at OWASP recently did a threat model of mobile devices, from which they derived (yet another) Top 10 list, this time of the risks surrounding mobile devices.
In that project, the two biggest risks that directly impact the client side of things are: 1) Lost or stolen device and 2) Insecure communications.
So, prioritize what you need to do around these things, for starters. Consider how your apps store data on the mobile device. Make an inventory of every file they create or touch, and take a candid assessment of what's there and how that information might be used by an attacker who has access to it.
Consider too how your app communicates with the server (or other comms). How are you securing those connections and protecting the privacy of the information? What data are you sending and receiving, and how might that be used by an attacker who has access to it?
These are great starting points to get your mobile app security efforts launched in the right direction.
- Assign responsibilities and/or set clear goals and milestones. It's one thing to come up with a great list of stuff that needs to be done, but who is going to do the work? When is it going to be done? What measurable milestones exist between now and completion?
Sure, these are basic project management 101 sorts of topics, but they're still important. After all, you can't manage what you can't measure.
- How are others addressing the issues? Whatever topics you're looking to address, it's worth spending some time to find out how other people have tackled them. While you won't always find a solution, it's quite possible someone has published a book, paper, talk, blog entry, etc., on your topic or something very similar. If you have interns, launch them at this sort of domain analysis. Also consider seeking out community forums where you can chat with your peers from other organizations. I've found OWASP Chapter meetings to be hugely useful for that sort of thing; an active Chapter that meets once a month or so can be a fabulous place to talk with others in the field.
- Don't give up. While tackling app security may seem a Sisyphean task at times, failure is worse.
- Three pillars. Keep in mind the three focus areas necessary for a software security program: risk management, security activities, and knowledge. On risk, you have got to be able to rationalize the business risks associated with your apps and make design decisions that are commensurate. For activities, look at what others are doing; the BSIMM is a great starting point for that. And for knowledge, encourage and incentivize your developers to soak up all the app security info they can find. Training, of course, is helpful, but that's only one of many sources in a balanced knowledge "diet".
The bottom line, as I pointed out in the column, is that becoming secure takes effort. It requires someone to push that rock up the hill day after day, and there are bound to be setbacks.
Still overwhelmed? Here's a concrete thing you can do to get 2013 off to a good start. Register yourself or your developers for our next Mobile App Security Triathlon. Three days of iOS and Android hands-on training, starting on April 29th.
Hope to see you there -- and to have some meaningful discussions about other things you can be doing to bolster your mobile app security efforts.
Cheers,
Ken
Friday, January 11, 2013
What's the Worst Security Posture for Mobile?
To say it's early days in Mobile is an understatement. To say it's early days in Mobile security is (and I know it's only January) an early candidate for understatement of the year. Making sweeping statements about Mobile anything is hard. But there are a number of promising green shoots springing up out of the ground in Mobile security. Will these sprouts grow into mighty oaks or get crushed like so many Orange Books before them? Remains to be seen.
One thing most people agree on, for the moment, is that iOS offers better protection than Android. While Android offers a chance at a more secure environment due to its open platform, this is not always realized in end products. Still there is another dimension to the Android fragmentation problem as it relates to security, which I will get to in a second.
Most mobile projects I have worked on start with excellent developers. The company taps its top devs to tackle and deliver on this new iOS or Android future. However, these developers are usually web gurus. Along the way, they realize things are not quite the same in Mobile. Yes, there is HTTP, but the client and server implementations work differently. There's additional API rework necessary to build out a Mobile middle tier. And oh, did I mention testing?
Let's return to the fragmentation issue I mentioned above in the context of a recent year end review post by Dave Aitel:
You know what didn't pan out? "Mobile attacks" in commercial attack frameworks. The reasons are a bit non-obvious, but deep down, writing Android exploits is fairly hard. Not because the exploit itself is hard, but because testing your exploit on every phone is a nightmare. There's literally thousands of them, and they're all slightly different. So even if you know your exploit is solid as a rock, it's hard to say that you tested it on whatever strange phone your customer happens to have around.
And of course, iOS is its own hard nut to crack. It's a moving monolithic target, and Apple is highly incentivized by pirates to keep it secure. So if you have something that works in a commercial package, Apple will patch it the next day, and all your hard work is mostly wasted.
Just like developers learned, the fragmentation issue is a real one for attackers too. Of course, Android's rising popularity means this is no long (or even medium) term advantage to the defender, but it's an interesting marker along the journey. It does suggest an answer to the general question: what is the worst position to be in? Perhaps a popular Android device with poorly provisioned security. At least for now.
Of course, that is not the worst security posture. The most dangerous posture, as we know from Brian Snow, is to assume you are secure and act accordingly, when in fact you are not secure.