Here is a recent interview I (Gunnar) did with IBM Security strategist Diana Kelley on mobile wallet security. Diana covers what security issues wallet developers need to be aware of and the risk profile for mobile apps.
MobAppSecTriathlon
Blog home for Gunnar Peterson (@OneRaindrop) and Ken van Wyk (@KRvW) for topics related to our joint Mobile App Security Triathlon events. For more info, see our website: www.MobileAppSecTriathlon.com Contact us to schedule a MobAppSecTriathlon at your organization.
Wednesday, May 28, 2014
Mobile Security: Defending the New Corporate Perimeter
Here is a keynote talk I gave at the Cloud Identity Summit. I gave the talk a while back, but these topics keep coming up, so I thought it would be good to share.
A good operating principle for any new technology is "eat what you kill." It's been said that when it came out, Apple's iPhone did not just destroy competitors, it paved over whole segments: portable music players, GPS units, PDAs, digital cameras, and more were all thriving multi-billion-dollar industry segments in their own right until they got subsumed into the iPhone.
If we take the position that identity is the new corporate perimeter, then what is it replacing and how should we proceed?
The perimeter was the DMZ. The good stuff lived inside the perimeter on the server; outside? Well, all bets were off. Unfortunately, over time the quality of protection the DMZ provided degraded.
I'd like to begin not by burying the DMZ but by praising it, and ask: what did it, in fact, get right? There is a good list of things the DMZ did well that we can learn from as our security architecture evolves:
- Attack Surface reduction
- Isolation/Separation
- Simplicity
- Centralize policy enforcement and management
- Separate zone in terms of level of scrutiny
- Normal rules do not apply
However, while the DMZ gave us a structural perimeter, the boundary that really matters is around data access. The data and users have long since diverged from the DMZ structure. That means in turn that our perimeter goes from a structural perimeter to a more behavioral one. Authentication, session management, data protection, and other security services have to adhere more closely to the user, the transactions, and the data. /home is where the data is.
As applications move to mobile, how do we design a security architecture for a model in which we deliver code, not just data as in the web model? How do we account for co-resident malicious applications and the lack of control lower in the stack, at the platform level? The DMZ was never designed for these scenarios. I suggest a better model for today's clients is to play by Moscow Rules.
Moscow Rules addresses the limitations of the DMZ model. The idea, drawn from Cold War spy novels, is a set of guidelines for agents operating in enemy territory. Since you are behind enemy lines, you do not assume isolation the way the DMZ does.
The Moscow Rules operating principles are quite instructive for mobile application security, starting with "Play it by the book." If you are working with remote agents in hostile territory, you want to minimize room for error - procedures, guidelines, and standards matter.
"Assume nothing" - is the client device rooted? Is it lost or stolen? What else is running locally? Did the developers implement correctly? There is a very long list of assumptions that people make about the efficacy that they cannot and will not be able to make about client operating ecosystems that mobile apps execute in. Turning towards software security architecture, the question to address then becomes as Brian Snow said - if we cannot trust, how can we safely use?
"Murphy's Law" - design for failure
"Everyone is potentially under opposition control" - its not just about delivering data to a client and protecting the server side. More to the point, systems have to be resilient in the face of intelligently patched and programmed client applications that can be turned back against the system. A common design pattern for services is to build the client using the emissary pattern - what software security designers need to cope with is hostile emissaries. Code, method, keys, channels, that the emissary has access to can and will be bent and turned back on the server, the system and its users.
"You are never completely alone" - any time you work on a mobile development project you focus on your app, its natural. But your app lives alongside Angry Birds and many dozens of other app of unknown provenance. What happens to the identity tokens you gave to the mobile device?
"Never carry objects unless they be immediately discarded/destroyed" - web developers mostly needed to care about passwords, data, and session tokens. Mobile apps have a lot more than that. Plus they have a full execution lifecycle. For example, you are doing a sensitive operation on your app and a phone call comes in and suspends? What does your app do with the session and the data when it comes back up? How does your app manage and protect its cache, storage and memory?
So how should software security architects proceed using Moscow Rules instead of just the DMZ? I think the starting point is integration:
"Normally, everything is split up and problems are solved separately. That makes individual problems easy to solve, but the connections between the problems become very complicated, and something simple ends up in a real mess. If you integrate it in the first place, that turns out to be the most simple solution. You have to think ahead and you must always expect the unexpected.”
- Jan Benthem, Schipol airport chief architect
Protocols should be designed and implemented looking at the whole access control chain and the chain of responsibilities. Mobile apps cannot deliver a full solution on their own, so the API and the server side matter a great deal. Applications may in fact be servers, since mobile middle tiers send push notifications, texts, and other updates down to the "clients." In short, we need to go back and fully refresh the playbook of security protocols, not just extend and pretend that mobile is merely another client, a web browser on cooler hardware. It's a new way of interacting with users, servers, and data.

At the same time, we have to make things simple for users and developers.
Our protocols live inside front-end and back-end containers. This means that integration is a first-class citizen in security protocol design. We have both a First Mile Integration problem on the client side and a Last Mile Integration problem on the server side.
Many of the DMZ guidelines do still apply: attack surface should be reduced, and simple, centralized policy enforcement points add value. And at the perimeter, normal rules still should not apply.
Where the biggest distinction lies is that the DMZ assumes a separation that no longer exists. That is where security integration patterns need to be worked out and fielded to defend mobile apps, assuming not isolation and safety but use in actively hostile environments.
Wednesday, February 5, 2014
Open Letter to Satya Nadella, Re: Mobile Identity
Dear Satya Nadella,
Congratulations on your new role. I am excited that the board picked not only a tech CEO, but a middleware guy. There's great, latent power in Microsoft technologies, and if middleware people know one thing, it's connecting stuff together to create value.
I was further heartened by the "mobile first, cloud first" mantra you laid out in your first speech. I know you are busy, but here is one opportunity to consider, and I am pretty confident that customers will appreciate some focus on this issue.
The Mobile app everyone is banging the drum for is Office on mobile devices like iOS. However, I think there's another one that unlocks some more interesting use cases. There is no Active Directory for Mobile, and that is creating problems across basically every enterprise.
So far the bog-standard enterprise response to mobile has been MDM, a useful but limited management technology. Despite the fact that MDM sells like hotcakes, it provides little value to app developers and does not address identity integration. Enterprises that want to solve identity end to end are left to cobble something together themselves from pieces and parts. It would be better to think more like Boeing assembling purpose-built components; instead, Mobile Identity is more Sanford and Son.
The industry has collectively been waiting these last five years for an Active Directory for mobile to fill that gap. What if the Active Directory for Mobile was Active Directory? I don't think there are big technical blocking factors to the device management side, and the value on the server/cloud side is a massive integration opportunity waiting to be unlocked.
So what are some of the use cases that customers need help with?
- Mobile identity for users on devices, not just devices
- Local authentication, disconnected mode
- Portability - consistent identity and policy on the device and on the cloud
- Granular access control - not just all or nothing access
Enterprises trying to solve these problems today are using duct tape on important security and identity problems; they would benefit from identity systems engineered from an end-to-end perspective. It's a giant open problem, and the incumbents have no incentives, beyond selling more hardware and ads, to solve it. It would be a great help to enterprises if someone solved it. Why not Microsoft?
Sincerely,
Gunnar Peterson
Tuesday, April 2, 2013
Mobile Session Management - Which Session?
Session management vulnerabilities are tricky. They are highly dependent on context. Identifying session fixation, session replay and the like means looking at the end to end session lifecycle from creation to use to termination.
On normal webapps this is mostly a straightforward affair: examine the session cookie, ensure proper cookie hygiene, make sure transport is protected, and check that timeouts are set correctly. On normal webapps the server sets the timeout for the session cookie (say 20 minutes), sends it to the browser, and validates the session on the return trip. The session lives as a relationship between the browser client and the web server. But what about mobile sessions? They are pretty different; let's count the ways.
First off, the user likely authenticates locally to the mobile app itself; let's call this session #1. Then, any time the app needs to do something on the network (like synchronize or replicate data), it authenticates from the mobile app to the server; let's call this session #2. Next, the server is very likely an API Gateway with no data or business logic (those live on the backend app servers), so the Mobile API Gateway has to authenticate to the backend servers; let's call this session #3.
Now just logging into each of these sessions is a decent bit of work in and of itself. Add to that the fact that these are very likely three fundamentally different protocols: maybe username/password for #1, OAuth for #2, and SAML for #3. Logging in is where it begins, but that's not where it ends.
How do you ensure consistent policy across these different protocols? When do you time out the session? What happens if session #1 times out but sessions #2 and #3 are still alive? How do you reinstantiate? What happens when your user logs out?
Today these are mainly exercises left to the implementers to figure out; the tools market is pretty nascent. The above scenario is a pretty simple view compared to some mobile apps. Enterprises still struggle with session management for webapps, where ensuring session data isn't easily spoofed or stolen requires careful review, and it's vastly more complicated for many mobile apps. Until ready-made tools are available, an enterprise's time spent on end-to-end design and on testing that the sessions mesh appropriately is time well spent.
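To illustrate the kind of glue implementers end up writing today, here is a minimal sketch (all names hypothetical, not any particular vendor's API) of one policy choice: when the local app session (#1) times out, the app also revokes its token for the gateway session (#2), and the gateway would do the same for its backend session (#3):

// Hypothetical coordinator tying the local app session (#1) to the
// OAuth session against the API Gateway (#2).
public class SessionCoordinator {

    private final long localTimeoutMillis;
    private long lastActivityMillis = System.currentTimeMillis();
    private String accessToken; // session #2 credential

    public SessionCoordinator(long localTimeoutMillis, String accessToken) {
        this.localTimeoutMillis = localTimeoutMillis;
        this.accessToken = accessToken;
    }

    // Call on every user interaction to keep session #1 alive
    public void touch() {
        lastActivityMillis = System.currentTimeMillis();
    }

    public boolean localSessionAlive() {
        return System.currentTimeMillis() - lastActivityMillis < localTimeoutMillis;
    }

    // Call on a timer and on logout: one consistent policy across layers
    public void enforce() {
        if (!localSessionAlive() && accessToken != null) {
            revokeOnGateway(accessToken); // tear down session #2 as well
            accessToken = null;
        }
    }

    private void revokeOnGateway(String token) {
        // e.g. POST the token to the gateway's revocation endpoint
        // (RFC 7009-style); HTTP plumbing omitted in this sketch
    }
}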
Update: Paul Madsen added on Twitter, "and the original SAML session from enterprise IdP". For sure, there are many combinations and permutations to consider. What I am seeing, though, is that a base-case mobile app has at least 3x more complexity for session management than a base-case web app. Considering many webapps still struggle, this is food for thought.
**
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Friday, March 22, 2013
Security Implications from One Year on Mobile Only
Benjamin Robbins (@PaladorBenjamin) just completed 52 solid weeks working solely on mobile. Of course there were some issues, but he did it and the lessons learned are instructive.
A key takeaway:
From a practical perspective I’ve learned that there are certain needs of human ergonomics that you just can’t engineer your way around no matter how cool the technology. I can say with confidence that a monitor and keyboard are not going anywhere anytime soon.
Your mobile device is an extension of other things; it's not a full replacement. So as someone designing security and identity services for mobile, you have to be able to mesh that identity with the server, the other machines, and the directory management systems.
It's tempting to think of machines and mobile devices as islands that we need to protect (enterprise archipelago security architect?), but this is to miss the point. Mobile devices need data input from other places (likely by people using keyboards ;-P), need access to documents, and need server-side communications. Users also want something resembling a consistent set of access rights no matter what platform they are using: laptop, webapp, mobile, workstation, or tablet. These are unsolved problems in the security and identity industry today.
Still, Benjamin Robbins' piece is a great testament to, practical issues aside, how far things have come in a short while for mobile. I continue to expect that we will see more mobile apps, not fewer, and that the devices will snowball on top of the servers, browsers, services, and desktop/laptop machines you already have to cope with. Design your security services accordingly.
**
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Tuesday, March 19, 2013
US FTC fires a warning shot in the mobile software security wars
If you weren't looking carefully, you probably weren't even aware of it. (Indeed, I hadn't seen it until I read John Edwards's piece over at The Mobility Hub.) But, make no mistake about it, this is a big deal for the software industry. The ramifications could be far reaching and could end up touching every company that develops software (at least for US consumers).
What's the big deal? HTC America recently settled a complaint filed against them by the Federal Trade Commission. The terms of the settlement force HTC to develop patches to fix numerous software vulnerabilities in its mobile products, including Android, Windows Mobile, and Windows Phone products.
Blah blah blah, yawn. Right? WRONG!
What makes this case interesting to software developers in the mobile and not-mobile (stationary?) worlds is the litany of issues claimed by the FTC. Among other things, FTC claims that HTC:
- "engaged in a number of practices that, taken
together, failed to employ reasonable and appropriate security in the design and
customization of the software on its mobile devices";
- "failed to implement an adequate program to assess the security of products it shipped
to consumers;"
- "failed to implement adequate privacy and security guidance or training
for its engineering staff;"
- "failed to conduct assessments, audits, reviews, or tests to
identify potential security vulnerabilities in its mobile devices;"
- "failed to follow well-known and commonly-accepted secure programming practices, including secure practices
that were expressly described in the operating system’s guides for manufacturers and
developers, which would have ensured that applications only had access to users’
information with their consent;"
- "failed to implement a process for receiving and addressing security vulnerability reports from third-party researchers, academics or other members of the public, thereby delaying its opportunity to correct discovered vulnerabilities or respond to reported incidents."
Oh, is that all? No, it's not. The FTC complaint provides specific examples and their impacts. The examples include misuse of permissions, insecure communications, insecure app installation, and inclusion of "debug code". It goes on to claim that consumers were placed at risk by HTC's practices.
Now, I'm certainly no lawyer, but reading through this complaint and its settlement tells me that the US Federal Government is hugely interested in mobile product security -- and presumably other software as well. I don't know the specifics of just what HTC really did or didn't do, but this sure looks to me like a real precedent nonetheless. It should also send a firm warning message to all software developers. There but for the grace of God go I, right?
Reading the complaint, there are certainly some direct actions that the entire industry would be wise to heed, starting with implementing a security regimen that assesses the security of all software products shipped to consumers. Another key action is to implement privacy and security guidance or training for engineering staff. That list should go on to include assessments, audits, reviews, and testing products to identify (and remediate) security vulnerabilities.
There are many good sources of guidance available today regarding this sort of thing. Clearly, we believe mobile app developers could do a lot worse than attending one of our Mobile App Security Triathlon events like the one we're holding in New York during April. But that's just one of many good things to do. Be sure to also look at the Build Security In portal run by the US Department of Homeland Security. OWASP's Mobile Security Project can also be useful in looking for tips and guidance.
Come join us in New York and we'll help you build your mobile app security knowledge, as well as provide many pointers to other useful resources you can turn to so that your organization isn't so likely to find itself in the FTC's crosshairs.
Cheers,
Ken van Wyk
Schneier Says User Awareness: Tired, Dev Training: Wired
Bruce Schneier tackles security training in Dark Reading. He basically says that classic "security awareness" training for users is a waste of money. Certainly there is a lot of evidence to back up that claim; users routinely click through certificate warnings, for example.
What I found most interesting is what Bruce Schneier recommended to do instead of security awareness training for users:
"we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system."

Of course I wholeheartedly agree with this. Let's say that doing a great job on security awareness training for users, best case, takes the rate of users clicking through cert warnings from 90% to 80%. Schneier continues:

"If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense."
On the other hand, developers, security people, and architects are actually building and running the system. If they know how to avoid mistakes, they are in a position to protect all of the app's users from a broad range of threats.
This is the essence of what Ken and I focus on in Mobile App Sec Triathlon training. I wrote about it in Why We Train. We want to help developers, security people and architects recognize security problems in design, development and operations; and, crucially, have some concrete ideas on what they can do about them.
Companies are scrambling to get "something" up and running for mobile, either enterprise-side or customer/external-facing or both. It really reminds me of the early days of the web. A lot of this is very fragmented inside of companies. A lot is outsourced, too. Ken and I put a lot of thought into the three-day class so that it's focused on what companies want and need.
Choose Your Own Adventure
Day one is about mobile threats that apply to all platforms, architecture, and design considerations. We look at threat modeling for mobile. We drill down on the identity issues for mobile, the server side, and what makes a Mobile DMZ. The class is set up so that architects and dev managers may choose to attend just day one.
Days two and three are hands-on iOS and Android, depending on what your company is building and/or outsourcing. You come out of these days knowing how to avoid security pitfalls in coding for mobile. Whether you are doing the dev in-house or working with a provider, developers and security people will gain a deeper understanding of the core security design and development options for building more secure code.
We recently announced a scholarship program for students and interns. Based on past trainings, this has proven to be a great way to get a fresh perspective on mobile trends. Finally, since many companies are launching new mobile projects, we often see whole teams that need to get up to speed on the issues rather quickly (before deployment), so to serve this need we offer a group discount: send three people and the fourth comes free.
Overall, our approach is geared toward adapting to the things that are most useful to companies trying to build more secure mobile apps. Training developers on secure coding is not yet a sine qua non, but for those that invest in building up skills and expertise, it pays dividends in protecting your users, data, and organization.
**
Three days of iOS and Android AppSec training with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Monday, March 18, 2013
ANNOUNCING: MobAppSecTri Scholarship Program
For our upcoming three-day Mobile App Sec Triathlon in New York City on April 29 - May 1, we are once again running a student / intern scholarship program.
We will be giving away a few tickets to the event, absolutely free, to a small number of deserving students and interns.
Course details can be found here.
Requirements
To be considered for a student / intern free registration, you will need to submit to us by 31 March 2013 a short statement of: A) Your qualifications and experience in mobile app development and/or information security, and B) Why you deserve to be selected. Candidate submissions will be evaluated by the course instructors, Gunnar Peterson (@OneRaindrop) and me (@KRvW). Decisions will be based solely on the quality of the submissions, and all decisions will be final.
Details
All scholarship submissions are due no later than midnight Eastern Daylight Time (UTC -0400) on 31 March 2013. Submissions should be sent via email to us. Winning entrants will be notified no later than 15 April 2013.
Student / intern ticket includes entrance to all three days of the event, along with all course refreshments and catering. Note that these free tickets do not include travel or lodging expenses.
Wednesday, March 13, 2013
What can/should the mobile OS vendors do to help?
There are important areas where mobile device producers can, and should, be doing more to help.
What makes me say this? Well, I was talking with a journalist about mobile device/app security recently when he asked me what the device/OS vendors can do to help with security for end consumers. Good question, and I certainly had a few suggestions to toss in. But it got me thinking about what they can be doing to make things better for consumers. And that got me thinking about what they can be doing to help app developers.
On the consumer side, the sorts of things that would be on my wish list include:
- Strong passcode authentication. On iOS, the default passcode is a 4-digit PIN, and many people disable passcodes entirely. Since the built-in file protection encryption key is derived from a combination of the hardware identifier and the user's passcode, this just fails and fails. Even a "protected" file can be broken in just a few minutes using readily available software that brute-force guesses all 10,000 (count 'em) possible passcodes; see the back-of-the-envelope sketch after this list. A stronger passcode mechanism that is still acceptable to end consumers would be a good start. There are rumors of future iOS devices using fingerprint scanners, for example. While biometric sensors aren't without their own problems, they should prove to be a whole lot better than 4-digit PINs.
- Trusted module. Still picking on iOS here... Storing the encryption keys in plaintext on the SSD (NAND) violates just about every rule of safe crypto. Those keys should be stored in hardware in a place that's impossible to get to programmatically, and would require a huge cost to extract forensically.
- Certificates. Whether they are aware of it or not, iOS users use certificates for various trust services on iCloud and in apps like Apple's Messages. Since Apple is already generating user certificates, why not also give all iOS users certificates for S/MIME and other security services? That would also open up to app developers the possibility of stronger authentication using client-side certificates.
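The PIN arithmetic in the first item above is easy to sanity-check. A back-of-the-envelope sketch; the guesses-per-second rate is an assumption for illustration only, as real rates vary by device and attack:

public class PasscodeMath {
    public static void main(String[] args) {
        double guessesPerSecond = 10.0;      // assumed brute-force rate, illustrative only
        long fourDigitPin = 10_000L;         // 10^4 possible PINs
        long sixCharAlnum = 2_176_782_336L;  // 36^6: six lowercase letters/digits
        System.out.printf("4-digit PIN, worst case: %.0f minutes%n",
                fourDigitPin / guessesPerSecond / 60);
        System.out.printf("6-char alphanumeric, worst case: %.1f years%n",
                sixCharAlnum / guessesPerSecond / 3600 / 24 / 365);
    }
}

At that assumed rate, the whole 4-digit keyspace falls in well under 20 minutes, while even a modest 6-character alphanumeric passcode pushes the worst case out to years.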
Here are a few of the things I think would be useful to mobile app developers, in no particular order:
- Authenticator clients for various protocols. There are several ways to build an authenticator into a mobile app. In their SDKs, it would be useful for device vendors to provide authenticator examples for popular protocols and services such as Facebook Connect and Google Authenticator.
- Payment services. Similarly, example code for connecting to PayPal and other payment services back-ends would be useful. We're seeing some of those coming from the payment providers themselves, which is great, but it's been a long time coming.
So, I have no inside knowledge at Apple, or Google for that matter, but it's always nice to dream. A few relatively small enhancements to the underlying technology could open up all sorts of possibilities for users and developers alike. As it stands, an app developer writing a business app on iOS has to build so many things from scratch, as the intrinsic options for safe data storage, transmission, etc., are just not acceptable for today's business needs.
How about you? What would you add or change on these lists? What are your pet peeves or wish list items? We'd love to hear them.
Come join Gunnar (@OneRaindrop) and me (@KRvW) for three days of discussing these and many other issues in New York at our next Mobile App Sec Triathlon, #MobAppSecTri.
Cheers,
Ken
What Comprises a Mobile DMZ?
I have a new post on the Intel blog on Mobile DMZs. The post looks at which parts of Identity and Access Management, Defensive Services, and Enablement stay the same for mobile, and which parts have to adapt.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Wednesday, February 20, 2013
Android adds a Secure Default for Content Providers
Security requires some thought in design and lots of developer attention in secure coding, but there are gaps the platform itself can close to make the designer's and developer's lives easier: setting secure defaults. Out of the box, Android introduces a number of ways that companies can unwittingly open up vulnerabilities. Jelly Bean offers a number of security improvements; one of the more interesting is a new and important secure default that protects Content Providers, a.k.a. your data. The setting protects against data being inadvertently leaked to other apps. Android's permission model is pretty expressive and lets you set fine-grained access control policy. Unfortunately, this means there are many options, and many enterprises that ship with default settings can expose their data to any other app running on the Android device.
Most developers assume that when they create a database for their Android application, it can only be used by their app. Unfortunately, this assumption is not valid. The security policy defined in the Android Manifest is the place to check to make sure this is set properly. A developer who sees the following may assume their data is protected:
<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example" />
But for Android 4.1 or prior, the Manifest has an insecure default for Content Providers: if read and write permissions are not explicitly set (turned off), your Content Provider is assumed to be readable and writable by other apps. (Note: it's unlikely but imaginable that some app might want its data readable by other apps; why there is a default allowing other apps to write is something I have never understood.) In any case, if you have deployed Android apps, it's pretty likely that you have the defaults in place unless someone specifically turned off read and write access, so you should check the Android security policy and test the app.
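To state the intent explicitly, the declaration above can be locked down in the Manifest. A sketch; the custom permission names here are illustrative, not from the original example:

<!-- Not shared at all: say so explicitly -->
<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example"
    android:exported="false" />

<!-- Or, if sharing is intended, gate reads and writes behind permissions -->
<provider android:name="com.example.ReadOnlyDataContentProvider"
    android:authorities="com.example"
    android:readPermission="com.example.permission.READ_DATA"
    android:writePermission="com.example.permission.WRITE_DATA" />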
How to check
For your apps, the best place to start is to review your AndroidManifest.xml and check that the permissions are set to disallow access that you do not want, such as other apps reading from and writing to your app's databases. On 4.1 or prior, this has to be set explicitly; otherwise the permission is granted.
How to test
There are a variety of ways to test for this; the Mercury test suite for Android gives you a way to see what is running:
mercury> connect 127.0.0.1
*mercury> provider
*mercury#provider> info -p null
Package name: com.example.myapp
Authority: com.example.myapp.mydataprovider
Required Permission - Read: null
Required Permission - Write: null
Grant Uri Permissions: false
Multiprocess allowed: false
Package name: com.android.alarmclock
Authority: com.android.alarmclock
Required Permission - Read: null
Required Permission - Write: null
Grant Uri Permissions: false
Multiprocess allowed: false
Package name: com.android.mms
Authority: com.android.mms.SuggestionsProvider
Required Permission - Read: android.permission.READ_SMS
Required Permission - Write: null
Path Permission - Read: /search_suggest_query needs android.permission.GLOBAL_SEARCH
Path Permission - Read: /search_suggest_shortcut needs android.permission.GLOBAL_SEARCH
Grant Uri Permissions: false
Multiprocess allowed: false
(truncated)
Probably most Android apps have null permissions set, and their developers do not realize that this is the case or the impact of the omission (that other apps can read and write their data). In the case above, the example app is set to allow other applications to read and write its data. This happens many times with Android apps that contain sensitive data, and the companies do not realize the exposure. This is just a snapshot, but the Android permission sets are very much like a Purdey shotgun: great for skilled hunters, but also great for committing suicide.
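If you want a quick self-audit without standing up Mercury, much of the same information it prints is available from Android's own PackageManager. A minimal sketch of that check (class name hypothetical):

import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.ProviderInfo;
import android.util.Log;

public final class ProviderAudit {

    // Logs every content provider on the device whose read or write
    // permission is null, i.e. the gap Mercury's output shows above
    public static void logOpenProviders(Context context) {
        PackageManager pm = context.getPackageManager();
        for (PackageInfo pkg : pm.getInstalledPackages(PackageManager.GET_PROVIDERS)) {
            if (pkg.providers == null) continue;
            for (ProviderInfo provider : pkg.providers) {
                if (provider.readPermission == null || provider.writePermission == null) {
                    Log.w("ProviderAudit", pkg.packageName + " exposes " + provider.authority
                            + " (read=" + provider.readPermission
                            + ", write=" + provider.writePermission + ")");
                }
            }
        }
    }
}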
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Sunday, February 17, 2013
To understand the iOS passcode bug, consider the use case
If you've followed any of the iOS-related news sites over the last few days, you'd have to be aware of a security bug that has surfaced in Apple's mobile operating system. After all, a failure in a screen lock / authentication mechanism is a pretty big issue for consumers.
Indeed, there's a lot of uproar in the twitterverse and such over this security failure. And to be fair, it is an important issue and the failure here mustn't be downplayed. But the failure doesn't seem to me to be a failure of their file protection architecture. It seems to me to be a presentation layer issue that can be exploited by a truly bizarre set of circumstances. The end result is still a data exposure, but let's consider things a bit deeper to see where the real problem is.
Apple prides itself on putting the user first. Among their mantras is the notion of delivering products that delight their customers. Great. Let's start there.
In iOS, there are a few ways of protecting data at rest. There's a File Protection API with four different classes of protection. There's also a Keychain Protection API with four different classes of protection. These are used respectively to protect files and keychain data stored on a device.
The reason for the four different protection classes is to accommodate different use cases, and therein lies the key (no pun intended) to understanding this latest iOS security bug.
Consider the following use case: Your iPhone is locked, even immediately following a reboot (yes, that matters in the various protection classes). You have yet to unlock the device during this boot session. The phone is in your pocket and a call comes in.
To handle that call, the phone app by necessity must look into your Contacts / Address Book and compare the incoming Caller ID with your list of people you know. If the caller is in your address book, a photo (optional) is displayed along with the caller's name. If not, just the incoming phone number is displayed.
In order to accomplish that use case, the Address Book can only be protected using the NSFileProtectionNone class. That's the same protection class that is used for the vast majority of files on an iOS solid state disk (NAND). Despite the name, it actually is encrypted: first by the file system itself, which is encrypted with a key called the "EMF!" key, and secondly at a file level by a key called the "DKEY" key. AES-256 encrypted, in fact, using a hardware chip for the encryption. The problem in their implementation, however, is that the EMF! and DKEY keys are stored in plaintext on the disk's Block 1, leaving them open to an attacker.
But, back to the use case for the address book data. In iOS 6.1, the AddressBook data is stored in /var/mobile/Library/AddressBook in standard SQLite data format. The good news is that data is outside of your installed apps' sandboxes, so other apps aren't supposed to be able to get there. The bad news is the Contacts app itself can get there just fine.
In the case of a locked phone, there's an interface between the screen lock, the phone app, and the contacts app by necessity.
That leads me to conclude the bug isn't a fundamental one in Apple's NSFileProtection API. Rather, it is a serious bug in the implementation of one or more of the above app components. To be sure, neither the phone, contacts, nor lock app should ever grant unauthenticated access to that data. But the decision lies in those apps, not at a lower level in the file protection architecture.
Still confused? Come to our next Mobile App Sec Triathlon and we'll discuss in detail how to use both the file protection and keychain protection classes properly. Hope to see you in New York this April!
Cheers,
Ken
Wednesday, February 13, 2013
The front lines of software security wars
There are wars being fought out there, and not just the ones we hear about in the media. I'm talking about "software security wars", and nowhere are they more apparent than in the iOS jailbreaking scene. What's going on there is fascinating to watch as an outsider (or, I'll bet, as an insider!), and could well be paving the future of secure software.
Just over a week ago, the "evad3rs" team released their "evasi0n" jailbreak tool for iOS. It works on most current iOS devices, including the iPhone 5, which had thwarted jailbreaking attempts for a few months. Notably absent from the evasi0n supported devices list is the third generation Apple TV, which was released in March of 2012 and has yet to see a successful jailbreak published.
So what's the big deal? After all, they broke almost all current devices, right? Well, yes they did. But a) the process took months, not weeks or days as we'd seen in prior device and iOS releases, and b) the ATV3 remains unbroken.
Let's take this a bit further. The evasi0n tool had to combine a "cocktail" of five different vulnerability exploits in order to successfully break a device. No single vulnerability unearthed by the evad3rs team was sufficient to accomplish everything needed to do the jailbreak.
Apple has come a long way in hardening its system, indeed. There are a couple of "soft" targets in the system, however, that the jailbreakers are constantly seeking to exploit.
When you put an iOS device into Device Firmware Update (DFU) mode, you can boot from a USB-provided kernel. Clearly, Apple doesn't want you to be able to boot just any old kernel, so they rigorously protect the DFU process to try to ensure that only signed kernels can be loaded. Any flaw in the USBmux communications, and a non-signed kernel could potentially be booted.
In the case of the evasi0n tool, one of the exploits it used involved altering a file inside the sandbox with a symbolic link to a file outside the sandbox -- clearly a significant flaw in Apple's sandboxing implementation!
So then, back to the "war". This battle is raging between two sets of software techies. One builds a strong defense, and the other searches for weaknesses and exploits them. Of course, there are many such battles being fought on other fronts of the software security wars, but this one is pretty tightly focused, which enables us to shine a spotlight on it and really study what both sides are doing.
With each release of iOS, Apple has been upping the ante by adding new security features to make it more difficult to break the system. These include features like address space layout randomization (ASLR) that pretty much eviscerated old-school style stack and heap overflow attacks. The war wages on and on.
Who will win the war? I believe Apple will eventually protect the system to the point that jailbreaking is no longer cost or time effective to the attackers -- at least not to attack teams like the evad3rs. The fact that the current jailbreak took months makes this a fairly safe bet, IMHO. Time will tell.
So, what does all this mean to software developers? Ah, that's really the underlying question here. Once we have an iOS device with adequate authentication (and no, 4-digit PINs are NOT adequate), and that system is on a platform that can't be exploited in a reasonable amount of time, we'll have a platform that is truly trustworthy. For now, we have to continue to apply app-level protections to safeguard our most sensitive app data.
Join Gunnar (@OneRaindrop) and me (@KRvW) at our next Mobile App Security Triathlon event for a deep dive into these issues. New York in April/May!
Wednesday, February 6, 2013
Buyer Education for Avoiding Mobile Dim Sum Surprise Projects
Recently I did a talk at OWASP Twin Cities on building a mobile app security toolchain. The talk went pretty well, with lots of good questions. One takeaway: there are many people in many different kinds of companies struggling with how to do Mobile AppSec. The room was sold out, and so it looks like the OWASP Chapter is organizing a repeat talk some time this month; if you missed it and want to come, stay tuned.
The basics of the talk: what does an end-to-end process for Mobile AppSec look like, what tools are involved, and what dragons lurk along the way? For the three-day training that Ken and I do, the second and third days focus on hands-on iOS and Android security issues. The first day covers a number of issues such as how to fix your back end for mobile, what identity protocols might be used, what new use cases and risks mobile presents, and threat modeling for mobile.
One thing that I have seen is that many mobile projects are outsourced, both the development and the vulnerability assessment work. Of course, companies outsource lots of things these days, but I would say it's more pronounced with Mobile. In part this may be due to the small pool of mobile talent. And maybe also to companies figuring out whether mobile is a fad that will go away or whether they really need to build out a team. To me, the answers for most companies are: mobile is not going away; build your team, seed it with the right mix of folks, and train them.
There's another variable at play here. Outsourcing is fine as far as it goes, but it's only as good as your ability to select and direct the right consulting firms, teams, and work. For mobile vulnerability assessment in particular it can be a real hodgepodge: some tools and services left over from the webapp security days (do you still need them? yes, but you need others too), many things that apply on one platform but not on another, and a brand new set of use cases for mobile. In all, it's a bit like going to dim sum: things whizz by and you point at something you sort of recognize, and only after eating do you know if the choice was any good (though who doesn't like pork belly buns?).
The full three-day class is for hands-on developers and security people. We talked about making it only for them, but decided to keep the one-day option because there are many design, architecture, and other issues that extend to other parts of the organization. Whether directing an internal team or bringing in a consulting team, education is important for making more informed decisions. One thing we work to build into day one of the training is making sure people are educated buyers. The mobile app security process and results should not be a surprise. Don't just point at a menu of services; instead, learn to identify what tools and services are most vital to your project, and focus on those.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Thursday, January 31, 2013
The Next Mobile Wave - NYEAABTODADWI
Security departments are getting spun up over BYOD and its younger brother COPE (Company Owned, Personally Enabled). I suggest a new approach that is neither BYOD nor COPE, and I even have a catchy slogan that is sure to catch on: NYEAABTODADWI (Noticing Your Employees Are Already Bringing Their Own Devices And Dealing With It).
WSJ summarizes the issues in How BYOD Became the Law of the Land:
The most challenging adjustment—and one that still has the longest way to go—is the need for better systems to authenticate network users, essentially all of whom now access corporate systems with mobile devices. This is an area of strength for RIM, known for the resilience of its security network. The IT infrastructure to support BYOD "has grown up quickly, with the exception of identity management," Mr. Dulaney said.
CIOs also have shifted the onus of responsibility for the devices and the data they process to the employees themselves. CIOs created new policies spelling out how companies and employees would treat mobile devices and data, and by addressing related questions of liability and insurance. In some cases, companies insist on the right to wipe a device clean of all information, including personal files and data.
The initial response from IT security to mobile was MDM; this is fine but nowhere near sufficient. The device level of granularity is not enough to deploy and enforce security policy, in the same way that "Laptop user" is not good enough. We need user identity, app identity, and data encryption. And we cannot always assume that the server will be in play. Further, MDM is only applicable in the enterprise and does not help with the myriad of customer-facing, external mobile apps being deployed every day.
Then there is the server side; Travis Spencer did a round-up of some of the core identity issues at play here. From there, decisions need to be made on key management, hardening Mobile web services, and implementing Gateways. So there is a lot to do and not much time to lose, because if you look, the risk of your mobile apps - what they are transacting - is pretty high. Another little wrinkle is that many initial mobile app projects are outsourced, so there tends to be this black box: well, Company X is responsible. But the security team should really be engaged more actively and proactively, to make sure there is a Mobile-specific security policy backed by guidance, architecture, patterns, and testing, and that the end product gets the job done. But before we get to all of that, we must NYEAABTODADWI.
**
Three days of iOS and Android AppSec geekery with Gunnar Peterson and Ken van Wyk - Training dates NYC April 29-May 1
Tuesday, January 22, 2013
How's your 2013 mobile app security fitness coming along?
In my Computerworld column this month, I described how being secure is in some ways similar to being fit. There's good reason why Gunnar (@oneraindrop) and I (@krvw) chose the name "Mobile App Sec Triathlon" for the training events we do.
So, how are your 2013 security-related resolutions coming along? We're about 2/3 of the way through the first month of the year, after all. Not so good, eh? Well, let's consider a few things to help out a bit.
- Be realistic. It's really easy to make a massive list of everything you should be doing, and then simply become overwhelmed by it all. Prioritize what matters most to you, your organization, and your users. The good folks over at OWASP recently did a threat model of mobile devices, from which they derived (yet another) Top 10 list, this time of the risks surrounding mobile devices.
In that project, the two biggest risks that directly impact the client side of things are: 1) Lost or stolen device and 2) Insecure communications.
So, prioritize what you need to do around these things, for starters. Consider how your apps store data on the mobile device. Make an inventory of every file they create or touch, and take a candid assessment of what's there and how that information might be used by an attacker who has access to it (one way to start that inventory is sketched after this list).
Consider too how your app communicates with the server (or other comms). How are you securing those connections and protecting the privacy of the information? What data are you sending and receiving, and how might that be used by an attacker who has access to it?
These are great starting points to get your mobile app security efforts launched in the right direction.
- Assign responsibilities and/or set clear goals and milestones. It's one thing to come up with a great list of stuff that needs to be done, but who is going to do the work? When is it going to be done? What measurable milestones exist between now and completion? Sure, these are basic project management 101 sorts of topics, but they're still important. After all, you can't manage what you can't measure.
- How are others addressing the issues? Whatever topics you're looking to address, it's worth spending some time to find out how other people have tackled them. While you won't always find a solution, it's quite possible someone has published a book, paper, talk, blog entry, etc., on your topic, or something very similar. If you have interns, launch them at this sort of domain analysis. Also consider seeking community forums where you can go and chat with your peers from other organizations. I've found OWASP Chapter meetings to be hugely useful for that sort of thing. An active OWASP Chapter that meets once a month or so can be a fabulous place to talk with others in the field.
- Don't give up. While tackling app security may seem a Sisyphean task at times, failure is worse.
- Three pillars. Keep in mind the three focus areas necessary for a software security program: risk management, security activities, and knowledge. On risk, you have got to be able to rationalize the business risks associated with your apps, and make design decisions that are commensurate. For activities, look at what activities others are doing. The BSIMM is a great starting point for that. And for knowledge, encourage and incentivize your developers to sponge up all the app security info they can find. Training, of course, is helpful, but that's only one of many sources of knowledge in a balanced knowledge "diet".
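As promised above, here's one way to start that file inventory: a minimal Swift sketch, assuming an iOS app auditing its own Documents directory, that reports each file's data protection class. It's a starting point, not a complete audit; apps also write under Library, Caches, and tmp.

import Foundation

// Inventory sketch: walk the app's Documents directory and report each
// file's data-protection class, flagging anything left unprotected.
func auditFileProtection() {
    let fm = FileManager.default
    guard let docs = fm.urls(for: .documentDirectory, in: .userDomainMask).first,
          let files = fm.enumerator(at: docs, includingPropertiesForKeys: nil)
    else { return }
    for case let file as URL in files {
        let attrs = (try? fm.attributesOfItem(atPath: file.path)) ?? [:]
        if let protection = attrs[.protectionKey] as? FileProtectionType,
           protection != FileProtectionType.none {
            print("\(protection.rawValue): \(file.path)")
        } else {
            // No protection class set (or unreadable): a candidate for review.
            print("UNPROTECTED: \(file.path)")
        }
    }
}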
The bottom line, as I pointed out in the column, is that becoming secure takes effort. It requires someone to push that rock up the hill day after day, and there are bound to be setbacks.
Still overwhelmed? Here's a concrete thing you can do to get 2013 off to a good start. Register yourself or your developers for our next Mobile App Security Triathlon. Three days of iOS and Android hands-on training, starting on April 29th.
Hope to see you there -- and to have some meaningful discussions about other things you can be doing to bolster your mobile app security efforts.
Cheers,
Ken
Friday, January 11, 2013
What's the Worst Security Posture for Mobile?
To say it's early days in Mobile is an understatement. To say it's early days in Mobile security is (and I know it's only January) an early candidate for understatement of the year. Making sweeping statements about Mobile anything is hard. But there are a number of promising green shoots springing up out of the ground in Mobile security. Will these sprouts grow into mighty oaks or get crushed like so many Orange Books before them? That remains to be seen.
One thing most people agree on, for the moment, is that iOS offers better protection than Android. While Android offers a chance at a more secure environment due to its open platform, this is not always realized in end products. Still there is another dimension to the Android fragmentation problem as it relates to security, which I will get to in a second.
Most mobile projects I have worked on start with excellent developers. The company taps its top devs to tackle and deliver on this new iOS or Android future. However, these developers are usually web gurus. Along the way, they realize things are not quite the same in Mobile. Yes, there is HTTP, but the client and server implementations work differently. There's additional API rework necessary to build out a Mobile middle tier. And oh, did I mention testing?
Let's return to the fragmentation issue I mentioned above in the context of a recent year end review post by Dave Aitel:
You know what didn't pan out? "Mobile attacks" in commercial attack frameworks. The reasons are a bit non-obvious, but deep down, writing Android exploits is fairly hard. Not because the exploit itself is hard, but because testing your exploit on every phone is a nightmare. There's literally thousands of them, and they're all slightly different. So even if you know your exploit is solid as a rock, it's hard to say that you tested it on whatever strange phone your customer happens to have around.
And of course, iOS is its own hard nut to crack. It's a moving monolithic target, and Apple is highly incentivized by pirates to keep it secure. So if you have something that works in a commercial package, Apple will patch it the next day, and all your hard work is mostly wasted.
Just like developers learned, the fragmentation issue is a real one for attackers too. Of course, mobile's rising popularity means this is no long (or even medium) term advantage for the defender, but it's an interesting marker along the journey. It also suggests an answer to the general question of what the worst position to be in is: perhaps a popular Android device with poorly provisioned security. At least for now.
Of course, that is not the worst security posture. The most dangerous posture, as we know from Brian Snow, is to assume you are secure, and act accordingly, when in fact you are not secure.
Thursday, November 1, 2012
Android Hacked in Ethiopia
Now this is a lede:
"What happens if you give a thousand Motorola Zoom tablet PCs to Ethiopian kids who have never even seen a printed word? Within five months, they'll start teaching themselves English while circumventing the security on your OS to customize settings and activate disabled hardware."
Michael Howard said something years back that stuck with me: programming is human against compiler, which is much easier than security, which is human against human.
Of course, in this case it's not a classic security fail of a malicious threat against an asset; in fact, the overall story is quite a triumph of human ingenuity:
"We left the boxes in the village. Closed. Taped shut. No instruction, no human being. I thought, the kids will play with the boxes! Within four minutes, one kid not only opened the box, but found the on/off switch. He'd never seen an on/off switch. He powered it up. Within five days, they were using 47 apps per child per day. Within two weeks, they were singing ABC songs [in English] in the village. And within five months, they had hacked Android. Some idiot in our organization or in the Media Lab had disabled the camera! And they figured out it had a camera, and they hacked Android."
What it does show from a security perspective, though, is the limit of what we can reasonably expect from any access control. Humans with time and determination will find their way around it. Whatever you're basing your access control scheme on (TLS, Kerberos, SAML, ...), you have to assume it will eventually fail (and not in the happy way it did in this story) and factor in how the system as a whole survives.
Tuesday, October 16, 2012
You're not counting on your app store, are you?
Today's mobile app stores, like Apple's App Store (via iTunes), review the software in their stores before the public can download it. That curation process, however, is not without its limitations -- and as software developers, we absolutely must never rely on the curation process to spot security defects in our apps.
Much has been said about Apple's own App Store, both good and bad. Whatever your preference, their App Store has undoubtedly the most rigorous app review process in the mobile app store business, such as it is. Developers are required to conform with their guidelines in order for their apps to get approved and become available for consumers to purchase.
But even that rigorous review is not in any way intended to be a security review of your apps. Make no mistake about it, Apple is not in the business of ensuring your app is secure.
So then, what do they do? Let's explore a bit -- with the understanding that I have no inside knowledge at Apple, and I'm basing this on my observations and readings.
- Stability. They verify the app loads and runs as described.
- Functionality. Does the app perform the advertised functionality?
- Play by the rules. Does the app conform to Apple's published API standards? More to the point, is your app using any unpublished APIs? That is perhaps the biggest no-no in the app store.
- Policies. Does the app conform to Apple's policies (good, bad, or otherwise)?
Now, I admit that the above is probably a gross over-simplification of what they actually do. I'd expect they load the app in a controlled test environment. I'd expect they run the app using some profilers and such to look for memory leaks and that sort of implementation faux pas.
But, by and large, if your app conforms to their published APIs and their policies, it's good to go.
OK then, so what sort of things would that process miss? From a security standpoint, pretty much everything. Some of the biggest shortcomings that I would never expect Apple (or others) to find in their review process include:
- Local storage of sensitive data. As long as your app uses published APIs for file input/output, you can store whatever you want to, however you want to. Want to put your users' credentials into a plaintext SQLite database? No problem. (A keychain-based alternative is sketched after this list.)
- Secure communications. Again, use published APIs (e.g., NSURL) and the app will fly right on through the review process, irrespective of whether you use SSL to encrypt the network data. Want to send your users' username/password credentials in a JSON bundle to your server's RESTful interface, without any encryption at all? No problem.
- Authentication to back-end services. Published APIs, blah blah blah... Want to authenticate your users against a locally stored username and hashed password? No problem.
- Session management on back-end services. Want to use easily guessable, sequential numbers for your users' sessions? No problem.
- Data input validation. Want to allow untrusted data in and out of your app without ensuring it's safe from SQL injection, cross-site scripting, etc.? No problem.
- Data output encoding/escaping. Want to pull some data from a database and send it straight to a UIWebView without encoding it for the output context? No problem.
That list can go on for a long time. Apart from these shortcomings, the app review process is just fine. :-)
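To make the first item on that list concrete, here's a minimal sketch of the alternative: put the credential in the keychain rather than a plaintext database. The service and account strings are illustrative, and error handling is pared down.

import Foundation
import Security

// Sketch: store a secret in the keychain instead of a plaintext SQLite file.
// kSecAttrAccessibleWhenUnlocked keeps it encrypted while the device is locked.
// The service and account strings below are illustrative only.
func storeToken(_ token: String) -> Bool {
    var item: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp",
        kSecAttrAccount as String: "api-token"
    ]
    SecItemDelete(item as CFDictionary)          // replace any existing copy
    item[kSecValueData as String] = token.data(using: .utf8)!
    item[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlocked
    return SecItemAdd(item as CFDictionary, nil) == errSecSuccess
}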
In Apple's defense, reviewing an app for the sorts of things I've listed here takes a high level of knowledge of the app itself, the business function it provides, the sorts of data it handles, etc. These are things that cannot be performed by a team with no knowledge of the app, such as an app store review team.
No, to review an app for common shortcomings like these must be done by someone with deep knowledge of the app. That should happen within the app development team, perhaps with support from an external team to perform some rigorous security testing.
No matter how it's done, the reviewers simply must understand the app and its business. Without that knowledge, no review can be adequate.
We'll discuss ideas for how to do reviews like these -- and prevent the security flaws in the first place -- at our Mobile App Sec Triathlon in San Jose, California on 5-7 November. Join us and let's discuss.
Cheers,
Ken van Wyk
Tuesday, October 9, 2012
Mobile Brings a New Dimension to the Enterprise Risk Equation
In yesterday's blog we looked at Technical Debt, and at how it is infosec's habit to lag technology innovation. In the big picture, this approach worked pretty well on the Web: early web security was pretty poor, but early websites were mainly proofs of concept and brochureware. As the value of websites increased, infosec was mostly able to get just enough of the job done, and it played catch-up for the whole decade.
But this catch-up approach does not work in Mobile. The first apps are not brochureware; they are financial transactions, medical decision-making tools, and real dollars flowing through the apps on day zero! That's 180 degrees different from how the Web evolved: with the Web we waded in the shallow end for years; with Mobile we are diving off the high dive with 1.0.
This risk profile should embolden infosec teams to get active far earlier in the process and to be more prescriptive. But it does not stop there; the nature of the engagement has changed as well. Case in point:
The personal data of about 760,000 people was temporarily leaked onto the Internet through an address book application service for smartphones, information security company NetAgent Co. reported.
The Tokyo Metropolitan Police Department is set to launch an investigation after being informed of the case Saturday by Tokyo-based NetAgent. The application developer said the data leaked online has been deleted.
The latest version of the application, Zenkoku Denwacho (Nationwide Address Book), has been distributed for Google Inc.'s Android operating system for free since mid-September. It enables users to search information listed in a major address book developed by Nippon Telegraph and Telephone Corp., according to NetAgent.
But the application is also designed to send personal data stored in smartphone users' address books, including names and phone numbers, to a rental server.
Such information temporarily became available through the Internet mainly to users of the application, which at least 3,300 people are estimated to have downloaded.
Here we see another dimension to the risk equation for Mobile that enterprises have little experience facing: they are not just providing a browser front end, they are shipping code (apps) to users. The enterprise security team now needs to care about more than the site working on Firefox, IE, and Chrome. They need to care about a whole array of platform- and device-specific security considerations: ensuring the application does not introduce vulnerabilities or inadvertently steal or leak data, location, addresses, and more. And it's all specific to each Mobile OS.
Because Mobile is a Balkanized environment, platform-specific security architecture and guidance is required to get the job done. This means more up-front work, but it's essential to avoid mistakes like apps that can leak data or provide entry points for attackers to the Mobile app and data (bad) or the enterprise gateway and backend (worse).
It's time for Infosec to step up
Patch and pray is not good enough. Enterprise security teams must roll up their sleeves and do the work required to support security services for iOS and Android apps, data, and identity. Nothing is perfect, but there are absolutely better and worse ways to implement here, and Infosec *should* play a leading role, as the grown-up, in practically navigating these choices.
Take a hard look at the use cases your company is going Mobile with. This isn't beta brochureware; this is real data, real transactions, real identity, real risk, and real new technology. Now is the time for Infosec to get smart on iOS and Android, and build security in.
**
Come join two leading experts, Gunnar Peterson and Ken van Wyk, for a Mobile App Security Training - hands on iOS and Android security, in San Jose, California, on November 5-7, 2012.