Thursday, November 1, 2012

My favorite Windows tools and utilities

Some of these sites have been around for over a decade, and I've been using some of the tools hosted there for nearly that long, if not longer. I've used these tools standalone, but many of them have also been indispensable in scripts I've written over the years.

You might notice that all of these tool collections are made for Windows. Unix and Linux, the hacker operating systems of choice from the beginning, never lacked handy tools for scripting and troubleshooting. The more commercial, business-oriented Windows, however, was severely handicapped in this regard, and didn't officially get a proper shell environment out-of-the-box until 2006. That gap led developers, like the ones I've listed below, to create some amazing, useful and largely free tools, beginning in the mid-to-late 90's.

Yes, there are many powerful scripting alternatives readily available for Windows nowadays, like Ruby, Python and PowerShell. I cut my teeth on Windows shell (command) scripting though, and when I need a quick-and-dirty script to automate something, it ends up being either a bash script on my Mac or a Windows shell script. Both work in their native environments without any additional downloads, installs, or even changes to paths or environment variables.

The Tools


The first, and most impressive collection of tools is that of Sysinternals. Tools like psexec, tcpmon, Process Explorer and Process Monitor are so good that they should be part of Windows. It was no surprise when Microsoft bought Sysinternals (actually, Winternals Software) and brought the brilliant Mark Russinovich and Bryce Cogswell on board. Mark is widely recognized as a tech rockstar these days, and is now a successful fiction author with two novels available!


Nirsoft, like Sysinternals, seemingly has a tool for everything. In fact, one of the available tools, nircmd, seems to do nearly everything you could imagine needing from a desktop automation standpoint. The latest Nirsoft tool I've been making use of is SiteShoter - a tool that takes screenshots of a website from the command line, using the native Internet Explorer API. Again, like Sysinternals, the sheer number of tools Nir Sofer, the author behind the site, has written is staggering. For anyone writing scripts to automate tasks, both sites are a godsend.


This site is in the same vein as Nirsoft and Sysinternals - a huge collection of Windows tools that make power users' and administrators' jobs easier. These tools lean heavily toward automating tasks related to Microsoft's popular enterprise products, like Active Directory and Exchange.


Part developer, part musician and part philosopher, AnalogX is a bit different from the previous three sites. This is a home for all of this individual's creative ventures, whatever they might be. I've been using some of his tools for 12 years now, and am grateful his site is still around and available. Also, like me, he never throws anything away. One of my old favorites is TextScan, which has often helped me out when I've had a need to do some quick and dirty binary analysis (as long as what I'm looking for is in ASCII!).
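TextScan-style extraction is easy to approximate in a few lines. Here's a minimal, hypothetical Python sketch of the same idea - pulling printable ASCII runs out of a binary blob, much like the classic Unix `strings` (the sample data is made up for the example):

```python
import re

def ascii_strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII at least min_len bytes long."""
    # Printable ASCII range 0x20-0x7e, the classic `strings` default
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    for match in pattern.finditer(data):
        yield match.group().decode("ascii")

blob = b"\x00\x01MZ\x90\x00Hello, world!\x00\xff\x10config.ini\x00"
print(list(ascii_strings(blob)))  # ['Hello, world!', 'config.ini']
```

Shorter runs like the "MZ" header get skipped by the minimum-length filter, which is exactly the knob you tune when the output is too noisy.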

Standalone Mentions


Even though the latest versions of Windows Task Scheduler include an integrated email/SMTP utility, Blat is still the best tool out there for using the internal open relay to impersonate your coworkers. Not that I'd ever do that...

If you need your Windows script to email you, look no further.
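For the sake of illustration, here's roughly what a Blat-style "script emails you when it's done" notification looks like using nothing but Python's standard library. The addresses and relay host below are placeholders, not real infrastructure:

```python
import smtplib
from email.message import EmailMessage

def build_report_mail(sender, recipient, subject, body):
    # The stdlib equivalent of `blat report.txt -to ... -subject ...`
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_report_mail("", "",
                        "Nightly backup report", "Backup completed: 0 errors.")

# Relay host and port are placeholders -- point these at your own server:
# with smtplib.SMTP("smtp.example.com", 25) as s:
#     s.send_message(msg)
print(msg["Subject"])  # Nightly backup report
```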


There are probably prettier visualization utilities out there now, but I've yet to find anything as easy to learn and use as Ploticus. It will parse out any file with structured data, and can output the results in a large variety of graph formats.
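To illustrate the general idea - structured rows in, graph out - here's a toy Python sketch that does a crude ASCII version of what Ploticus does properly. The data and formatting are made up for the example:

```python
def ascii_bars(rows, width=20):
    """Render (label, value) pairs as a crude horizontal bar chart."""
    peak = max(value for _, value in rows)
    lines = []
    for label, value in rows:
        # Scale each bar against the largest value in the data set
        bar = "#" * max(1, round(width * value / peak))
        lines.append(f"{label:<8}{bar} {value}")
    return "\n".join(lines)

data = [("mon", 12), ("tue", 30), ("wed", 7)]
print(ascii_bars(data))
```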


More than just a group of handy tools, Cygwin is an entire POSIX-friendly environment you can install on Windows. Setup could have been a nightmare, but instead, it is full of streamlined awesomeness.

Similar to Cygwin, but different: there is no environment or emulation layer here. These are Unix utilities ported as native Win32 binaries. No additional requirements.

If you see anything I've missed that belongs on this list, let me know in the comments!

Thursday, May 17, 2012

PCI and Mobile Payment Application Security

So far, the world of mobile payments has been a "Wild West" before the sheriff came to town. Vendors have been making their own rules, though at least a few have been smart and prepared for what they guessed would happen. The solution can be expressed in one word.

Encryption. As early in the payment process as possible, all the way to the bank (acquirer).

The PCI Council has issued a press release on mobile payment security, along with an "At a Glance" publication. These usually precede the release of new standards/best practices documents by a few months as fair warning. This post is my attempt to analyze where the Council sits on the matter, and a bit of reading between the lines to try to predict what's coming.

End to end encryption, or point-to-point encryption (P2PE), as the PCI Council calls it, is easily the best solution to securing the explosion of mobile payment applications now on the market. It is ideal because, in most cases, when implemented, it is invisible to the user, the merchant and the application. Apps don't have to be rewritten, the user experience doesn't suffer, and the merchant still has the same level of convenience. Most importantly, when done correctly, it is easily the most secure approach available.

There is a price though, and it is on the merchant. All solutions I've seen offered raise the transaction rate. Such is the price for the convenience of mobile payment acceptance in this case.

Blah blah encryption blah P2PE, what are we really talking about here, Adrian? 

We're talking about encrypting the cardholder data in the same hardware that reads your card. The Android/iOS/Psion/QNX/Whatever mobile operating system never handles unencrypted payment data. Furthermore, in a P2PE environment, the key to decrypt this data should not be present. In most cases, this encrypted data will be sent directly to a payment gateway, and will not be stored. At this point, risk and attack vectors are minimized, and you've added little to no disruption in the sales process.
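To make the flow concrete, here's a deliberately simplified Python sketch of the P2PE idea: a per-transaction key derived from a base key that only the reader hardware and the gateway hold, so the mobile OS and POS app only ever handle ciphertext. The cipher here is a toy XOR keystream for illustration only - real P2PE solutions use hardware-backed TDES/AES schemes such as DUKPT - and every name and value below is invented for the example:

```python
import hashlib, hmac, os

def txn_key(base_key: bytes, ksn: int) -> bytes:
    # Derive the per-transaction key from the base key and a key serial
    # number / counter that travels with the transaction in the clear.
    return hmac.new(base_key, ksn.to_bytes(8, "big"), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # TOY keystream cipher for illustration only -- do not use for real data
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

base_key = os.urandom(32)          # injected into the reader at manufacture
ksn = 1042                         # increments with every swipe
track = b";4111111111111111=25121010000012345678?"

ciphertext = xor_stream(txn_key(base_key, ksn), track)      # leaves the reader
# The POS app and mobile OS only ever see `ciphertext` and `ksn`.
recovered = xor_stream(txn_key(base_key, ksn), ciphertext)  # at the gateway
assert recovered == track
```

The point of the sketch is the trust boundary, not the cipher: the decryption capability lives with the acquirer/gateway, so compromising the phone or tablet yields nothing usable.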

It takes a lot of work and expense to switch POS solutions, however. For environments already planning to switch, or entering mobile payments for the first time, it makes sense to get it right from the start, and the Council will soon be publishing P2PE-certified POS solutions, making it easier to choose a secure, vetted product. Currently, a lot of vendors are offering half-baked solutions that only reduce some of the risk, and it is difficult to separate the pretenders from the real deal. Beware.

If this is such a perfect solution, why isn't everyone already doing it?

  1. Vendor lock-in. Many merchants' POS solution and processing come from the same vendor, and that vendor may not have a P2PE or tokenization solution ready yet.
  2. Cost of new hardware/POS solution.
  3. Increased per-transaction cost - You pay more for using payment gateways, and you'll pay more for a P2PE solution where the processor decrypts your transactions. How much more? Some level 1/level 2 merchants could potentially be going from paying $0.01 to $0.36 or more per transaction! Those kinds of increases really add up for merchants processing 1 million+ transactions annually.
  4. Too early. Most vendors are at 1st Gen or earlier with P2PE products. We're just getting started here, and most established POS vendors don't operate at startup speeds. This is an interesting market to watch however, because there are some very interesting startups popping up in this space!

I think you've been hitting the Council Kool-Aid pretty hard.

A valid perspective, but this isn't just idle speculation from the stands. I've had the opportunity to assess a startup employing a P2PE approach first hand. I got down into the weeds with them, dug into their solution, and issued their ROC. I've used all my security, hacking and pentesting experience to consider all the attack angles. Could I have missed something? Absolutely, and there is always room for improvement.

Throw your concerns, questions and doubts my way, and I'll be happy to address them all. Challenge me, and I'll meet it. We're still in the early stages here, remember. Our money will be going through these solutions, and they need to be challenged (read: hacked) to ensure they are as strong as they should be.

Friday, April 20, 2012

Defining Trust

The other day I joined a Twitter discussion between Rafal Los, Wim Remes and several others over "trust". It struck us that we needed a clear definition of Trust, and that it would take more than 140 characters.

Rafal quickly put together a post, Trust - Making an intelligent, defensible trust valuation, and the debate continued. As I felt Rafal and I were on the same page, and that some of the commenters weren't quite getting it, I was inspired to contribute a post of my own. I'm a believer in gaining understanding through examples, so I've put together a few scenarios in this post to try to drive the point home. I'd love to hear what you think. Comment here, on Rafal's post, or hit us up on Twitter.

The Question

Is trust binary? Is it a yes/no decision? All or nothing? Are there levels of trust? Go get a bourbon, beer or chamomile, and we'll explore this question a bit. I'd urge you to think about this before I muddy the waters. We're not just talking about Trust as it relates to users, information security or IT vendors. There is no reason the answer to this question can't apply to social relationships and other situations.

Trust Fall, by SkinnyAndy

How do we define Trust?

There is an opportunity for trust to come into play any time we lack control over a product, a person's actions, an environment, or a situation. I believe trust to be heuristic, requiring many rules that result in various levels. We see evidence of these levels in the simplest of examples: you may trust code you wrote more than your vendor's software; you probably trust your own network more than a partner's. I think some good examples and/or scenarios are necessary to effectively define what it means to have different levels of trust.

What should these "trust levels" be? I believe they can be formal or informal, but ultimately, they are the result of rules you use to determine "how much" you choose to trust someone or something. The ones I've come up with are completely arbitrary, and off the top of my head. One could define only two levels, or go up to ten or more. I think four is sufficient for the scenarios I present here. Yes, I realize there are actually five levels listed in the scale below. Note the zero level is not a level of trust, but the absence of it.

Sawaba's Amazing Non-Binary Trust Scale
4 - Full Trust
3 - High Trust
2 - Moderate Trust
1 - Low (initial trust; trust out of necessity or desperation)
0 - Distrust, i.e. no trust

We also need to understand how levels of trust are affected. This list is not all-inclusive, and is geared toward measuring IT products and services, to support the scenarios and examples I'll use later.

  • Meets promises and expectations
  • Caught lying
  • Time without incident or detractors
  • Missed deadlines or promises
  • Mishandled or ignored vulnerabilities
  • Slow response to addressing issues
  • Quick to address issues
  • Inaccurate quotes
  • Ability to test and/or validate product
  • Breaches or other security incidents
  • Surprise costs
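One way to picture how these factors interact with the levels is a simple running score. The weights below are completely arbitrary - just like the scale itself - and the event names are invented purely to show the mechanics:

```python
# Arbitrary weights for a few of the factors above -- purely illustrative
ADJUSTMENTS = {
    "meets_promises": +1,
    "caught_lying": -3,
    "year_without_incident": +1,
    "missed_deadline": -1,
    "mishandled_vulnerability": -2,
    "quick_to_address_issues": +1,
    "security_incident": -2,
}

def trust_level(events, start=1):
    """Start at Low (1) and clamp the running score to the 0-4 scale."""
    level = start
    for event in events:
        level = min(4, max(0, level + ADJUSTMENTS[event]))
    return level

history = ["meets_promises", "year_without_incident", "missed_deadline"]
print(trust_level(history))  # 2
```

Note the asymmetry: one "caught lying" outweighs several met promises, which matches how most of us actually behave.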

Scenario 1

Purchasing a software product from a vendor. Let us assume this is a licensed, closed source software product that will install and run on servers/workstations on the local network. Though the customer in this example does not have access to the source code, they can test behavior, performance, capture network traffic, examine logs/output, etcetera.

Trust Level 0 - Haven't dealt with vendor yet. Unaware of reputation.
Trust Level 1 - Initial conversations and demo went well. "Gut check" says things are good so far.
Trust Level 2 - Checked vendor's reputation and tested product. Due diligence processes/procedures have been carried out and yielded positive results. Most people/companies are ready to do business at this "moderate" level of trust, though they may refrain from initially signing long-term contracts. Many consider this a "trial period".
Trust Level 3 - After a year or more, the vendor has "earned" a higher level of trust by consistently meeting expectations over a significant period of time. Most vendor/product relationships need not go past this level, at least by my arbitrary scale. I prefer to reserve the highest level of trust for more extreme situations where human safety and life and death are concerns. Recall, in this scenario, we don't have full control. We can't see source code, so there is always a chance a disgruntled programmer could insert a back door, for example. Perhaps over a very long period of time (10 years or more?) the level of trust could rise even higher.

Scenario 2

Using a piece of open source software.
With the services of an experienced, knowledgeable programmer trained to spot serious security vulnerabilities, stability issues, and performance concerns, a high level of trust can easily be achieved. Spend enough time reviewing and testing (especially when patches or upgrades are released!), and it is reasonable to consider that full trust in the product could be attained.

I believe you can make the argument that, with 100% control and ability to verify/validate, we have zero need for trust in this case.

Scenario 3

A cloud service - say, a SaaS sales product, for example.

You can build trust based on:
  • interactions with the company
  • reputation
  • a limited ability to test
  • time without incident
However, in this scenario, it is reasonable to believe that the level of trust may not pass the moderate level, due to the lack of transparency and control inherent in the model. Consider:
  • We can't see or review the source code
  • We can't see or review most of the operating environment
  • We may not know if incidents occur
  • We don't know for sure who has access to our data
  • They may say they encrypt our data, but we have no way of validating whether they do it correctly
  • Even if they are audited and found compliant with regulations designed to provide assurance, we cannot put full trust in the auditors, especially given the varying quality and efficacy of audit practices and of the auditors themselves
  • We have to take the vendor's word on the majority of items that present a risk to our data 
As a result, we might take measures to compensate for the lack of trust. For a real-world example: if we decide to use Dropbox, perhaps we independently encrypt all files before allowing Dropbox to sync them. This practice emerged after reports came out that many Dropbox employees had access to customer files - something not previously made clear to customers - and the level of trust dropped accordingly. These reports became a detractor.


There is an opportunity to trust an individual, company or product whenever either party lacks control to some extent. When levels of control vary, so do levels of trust. It is, therefore, not an "all or nothing" model, though both extremes (0% control and 100% control) can reasonably occur.

Monday, April 16, 2012

Uncrackable Quantum Encryption, Unicorns and Perpetual Motion

What do these three things have in common?

None of them exist.

Unicorn by James Bowe
I'm only going to address uncrackable quantum encryption though. I'm not touching unicorns or perpetual motion.

This article over at ZDNet was responsible for sending me down this rabbit hole, though I've been rolling my eyes at "Uncrackable Quantum Encryption" articles for at least a decade.

First off, most of the "uncrackable quantum encryption" claims refer to encrypting data for transmission across networks or between endpoints. The idea is that the nature of quantum mechanics gives you a tamper-evident system: if an attacker attempts to manipulate or observe data in a quantum system, the data is altered. Once altered, we're aware of the attacker and can apply countermeasures.
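A quick simulation makes the tamper-evidence property concrete. The sketch below models a BB84-style intercept-resend attack: with no eavesdropper, the bits Alice and Bob compare agree perfectly, while an eavesdropper measuring in random bases disturbs roughly a quarter of them. The structure is heavily simplified for illustration:

```python
import random

def bb84_error_rate(n_bits=20000, eavesdrop=False, seed=1):
    """Fraction of mismatches among bits measured in matching bases."""
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n_bits):
        bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # A wrong-basis measurement randomizes the bit Eve re-sends
            bit_sent = bit if e_basis == a_basis else rng.randint(0, 1)
            basis_sent = e_basis
        else:
            bit_sent, basis_sent = bit, a_basis
        b_basis = rng.randint(0, 1)
        # Bob's wrong-basis measurements are likewise random
        measured = bit_sent if b_basis == basis_sent else rng.randint(0, 1)
        if b_basis == a_basis:          # only matching-basis bits are compared
            matched += 1
            errors += measured != bit
    return errors / matched

print(f"quiet channel:  {bb84_error_rate():.3f}")                # ~0.000
print(f"with intercept: {bb84_error_rate(eavesdrop=True):.3f}")  # ~0.250
```

That ~25% error rate is the alarm bell: the eavesdropper cannot observe the channel without leaving statistical fingerprints.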

It is more likely that companies and researchers trying to sell the idea of quantum encryption are depending on its sci-fi "WOW" factor to sell it as the next big thing in cryptography. In reality, there are many issues with quantum cryptography.

1. It is new, and largely untested

When someone claims something is uncrackable, and there are very few people with the knowledge and skills to test that theory, beware. In the last decade, quantum cryptography has been touted as "uncrackable" many times, and has been cracked just as many times. Somewhat unfortunately, one of the researchers credited with cracking commercial quantum cryptography for the first time is now making this latest "uncrackable" claim!

2. We already have uncrackable encryption...

...Or near enough that the difference doesn't matter in the real world. AES has faithfully served us for over a decade now, and no practical method to crack AES-encrypted data at rest, much less in transit (when used as a stream cipher), has been presented. For any and all practical purposes, AES has fit the bill, so what do we need quantum encryption for?

3. The real problem in most encryption failures is poor implementation

Say someone does come up with a truly uncrackable quantum encryption. Historically, the human factor has been the limiting factor more than the quality of the cryptography. Someone will set it up, configure it or code it incorrectly. Why go through the wall when you can go around it?

4. Aside from researchers, no one is attacking cryptography

Users are the weak point. The person behind the desk and their phone/laptop/desktop is the goal of most attackers, because it is the weakest link, and it works. Even at the server/enterprise level, the low-hanging fruit is code thrown together at the last minute by an overworked developer, not some $200k quantum cryptography endpoint.

Show me some uncrackable quantum encryption that keeps your data safe, and I'll show you the treadmill I use to power my house. He never gets tired.

UPDATE: I noticed the commenters on the ZDNet article that inspired this post state almost all of the same points I make here, which tells me two things: 1) you guys already know better and 2) nobody's buying into quantum BS.

Thursday, April 5, 2012

MintChip: Canada Test Drives a New Payment System

A few years ago, at the DefCon 18 PCI panel, I chuckled as James Arlen sardonically explained to the crowd that the only worthwhile solution to the current credit card security issue was to scrap the current system and start fresh. It wasn't that I didn't agree with James, I think most in information security can agree that the current system is flawed enough to warrant such an extreme approach. I simply thought that there was such a slim chance of the payment brands ever considering such an approach that it was pointless to discuss. Perhaps I was wrong.

The modern payment system, born in the '50s and '60s, predated e-commerce by decades. It wasn't until the advent of high-speed Internet access that breaches became commonplace. By the early 2000's, it was obvious that this system was quite vulnerable.

Today, I stumbled upon the Royal Canadian Mint's new MintChip system. In Canada, where debit cards are already free of the five big payment brands' logos, something like MintChip has a chance. From what little information is available, it seems there are hardware and software components to this solution. In fact, it seems the only reason any information is available at all is that the Mint is running a contest to spur MintChip application development.

Indulge me while I fantasize a bit.

If MintChip is successful, there is a chance it could replace credit cards as a dominant form of payment. It has every chance of success, too: it takes advantage of the latest technology, it seems well designed and thought out, and it has government backing. This is no startup. This is a revolution against an insecure payment system that costs Canadian citizens time and money with every breach. What about visitors and tourists? In addition to changing out your currency for Canadian dollars, you could potentially purchase pre-filled MintChips, like buying a pre-paid phone or gift card. Just look at the slick website, the convincing video, and they even have rainbows and unicorns on the 404 page.

Whew, I had to get that out.

It's all very pretty and hopeful, but in reality, there are a few issues here. First, I'm not Canadian, and realistically, I can only get so excited about a new payment system that has very little chance of popping up in the states any time in my pre-geriatric lifetime. Second, though they've made resources available for developers to come up with apps, it is clear from reading over the site and through the forum posts that there is precious little detail about how this system works. Without some transparency on how this system works from end-to-end, we really won't know if it is better than the credit card payment system in place today.

If you have any other information or opinions on MintChip, I'd be interested to hear about it.

Wednesday, April 4, 2012

Over half a million Macs infected?

Update 5: The Legacy

I wasn't expecting to update this post again, but this Mac botnet is not going away, suggesting that click-happy Mac users that get infected with trojans are less click-happy when it comes to installing Apple's updates.

As of two days ago, the Flashback botnet is just as large as when I first posted this story on April 4th! I suspect there will be a "learning phase" as Mac users get used to having to patch and remove malware. Part of the problem is likely that users don't realize they are infected. I'm not sure Apple's current approach is going to cut it in the long run. Personally, I think Apple should round up the brain trust like Microsoft did in the early 2000s, and come up with a sustainable solution. A future where most Mac users feel like they need to run antivirus would be sad.

Update 4: The Aftermath

  • We received independent confirmation of the numbers reported by Dr.Web.
  • The numbers I've heard report 2-3% of all Macs are infected, or were infected at the peak.
  • Dr.Web has a tool you can use to see if you are infected. Though it is using HTTP, I'm fairly sure the hardware UUID of your Mac isn't intended to be kept secret.
  • A downloadable App is also available to check for infection.
  • An apparent issue with the original java patch for Lion resulted in a second patch being released by Apple three days after the first.
  • Sites everywhere are reporting (some almost celebrating) that Apple's reputation as malware-resistant is dead.
  • Common suggestions to ditch Java are unhelpful and unlikely for the average user. It is far too ubiquitous.
  • Unless there is a huge resurgence in infections caused by a variant of Flashback that uses a new vuln/exploit/vector, this will be my last update to this article.

Update 3

Where are we now?
  • Dr.Web claims the number of infected Macs has risen to 600,000, and that a significant number of them (273!) are reporting in from Cupertino.
  • F-Secure has posted instructions for manual removal of the trojan. If you've never done it, manually removing malware is a fun and empowering exercise. Not that I'd recommend getting infected just for an excuse to remove it. Well, maybe on your friend's computer.
  • Mikko Hypponen, F-Secure's Chief Research Officer, has spoken with Dr.Web about their methods, and seems inclined to believe the numbers.
  • I have received messages from people that are infected with the Flashback trojan.
  • I was very careful when opening those messages.
  • Dr.Web and F-Secure detail that the Flashback trojan is sending the Mac's Universally Unique Identifier (UUID) in the payload to the C&C server. This would definitely make it easy to get an accurate count of the number of infected hosts.
  • Mikko also tweeted that the number of infected Macs is now roughly equivalent, in relative terms, to the number of PCs infected at the height of Conficker's reign.
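Counting a botnet this way reduces to counting distinct UUIDs seen at the sinkhole. A trivial Python sketch, with entirely made-up check-in data:

```python
# Hypothetical check-in log: (timestamp, uuid) pairs captured by the sinkhole
checkins = [
    ("2012-04-04T10:00:01", "8D3A...1F"),
    ("2012-04-04T10:00:09", "55C0...7B"),
    ("2012-04-04T10:05:12", "8D3A...1F"),   # same host phoning home again
    ("2012-04-04T10:07:44", "A911...03"),
]

# Deduplicate on the UUID so repeat check-ins don't inflate the count
unique_hosts = {uuid for _, uuid in checkins}
print(len(unique_hosts))  # 3
```

Without a stable identifier in the payload, researchers would have to fall back on IP addresses, which NAT and DHCP make far less reliable.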

Update 2

Many people seem to think that Dr.Web's statistics came from the current install-base of their anti-virus software, which isn't the case. Dr.Web allegedly used botnet C&C sinkhole tactics, which have been effectively used in the past for the same purpose, and are detailed in this Trend Micro paper.

Update 1

Regardless of whether Dr.Web's results are real or not, I think our main takeaway from this should be that many Mac users have been lured into a false sense of security, and will be, or may already be, in for a rude awakening. Apple's marketing efforts are at least partially responsible for this.

Original Post

Say it isn't so!

Despite what Apple's marketing department would have you believe, Macs are not invulnerable to attacks, and malware targeting OS X does exist. Though Macs are popular with security practitioners and hackers, most are well aware that the BSD-based operating system isn't a panacea when it comes to security - only less targeted.

Until now, apparently.

If what the Russian security software company, Dr.Web, reports is accurate, a trojan has succeeded in infecting over 550,000 Macs, the majority of which are located in the United States. The trojan, named "Flashback", takes advantage of a vulnerability in Java that was only yesterday addressed in a patch released by Apple.

So far, I haven't seen any other reports numbering the victims of Flashback, but if accurate, such a large infection rate on Macs may change common perception of OS X as "virus-proof" and could result in a spike in Mac anti-virus software sales. However, given that the company reporting these numbers is in the business of selling anti-virus software, I think we need to see their claims corroborated before we get too excited.

It didn't look like an English version of the article was available, so I've included a translation below:

Doctor Web discovers a botnet of more than 550,000 Macs

 April 4, 2012

Experts at Doctor Web, the Russian IT security developer, conducted a study evaluating the spread of the BackDoor.Flashback trojan, which infects computers running Mac OS X. The BackDoor.Flashback botnet now comprises more than 550,000 infected workstations, most of which are located in the United States and Canada. This once again refutes claims by some experts that there is no threat to Mac users.

Systems are infected with BackDoor.Flashback.39 via compromised websites and intermediate TDS (Traffic Direction Systems) that redirect Mac OS X users to a malicious site. Doctor Web's specialists found quite a number of such pages, all containing JavaScript that loads a Java applet carrying the exploit into the user's browser. Among the newly detected malicious sites were, in particular:

According to some sources, more than 4 million infected web pages appeared in Google search results at the end of March. In addition, posts on Apple user forums reported cases of infection by BackDoor.Flashback.39 after visiting compromised sites.

Beginning in February 2012, attackers spread the malware using the CVE-2011-3544 and CVE-2008-5353 vulnerabilities, and after March 16 they switched to another exploit (CVE-2012-0507). Apple released a fix for that vulnerability only on April 3, 2012.

The exploit saves an executable file to the infected Mac's hard drive, which downloads a payload from a remote control server and then launches it. Doctor Web's specialists found two versions of the trojan: around April 1, the attackers switched to a modified version of BackDoor.Flashback.39. As in previous versions, after launch the malware checks the hard drive for the following components:
  • /Library/Little Snitch
  • /Developer/Applications//Contents/MacOS/Xcode
  • /Applications/VirusBarrier
  • /Applications/iAntiVirus/
  • /Applications/avast!.app
  • /Applications/
  • /Applications/
  • /Applications/Packet

If none of the listed files is found, the trojan uses a special algorithm to generate a list of control servers, sends a message reporting successful installation to a statistics server set up by the attackers, and then polls the command centers in sequence.

It should be noted that the malware uses a very interesting mechanism to generate the addresses of its control servers, which allows the attackers, if necessary, to dynamically balance load between them by switching from one command center to another. After receiving a response from a control server, BackDoor.Flashback.39 verifies the RSA signature of the reply and, if the check succeeds, downloads and runs the payload on the infected machine, which can be any executable file specified in the trojan's instructions.

Each bot includes a unique identifier for the infected computer in the query string it sends to the control server. Using a sinkhole, Doctor Web's specialists were able to redirect the botnet's traffic to their own servers and thereby count the infected hosts.

As of April 4, the botnet comprises more than 550,000 infected computers running Mac OS X, and that is only the portion of the botnet using this particular modification of BackDoor.Flashback. Most infections are in the United States (56.6%, or 303,449 infected hosts), followed by Canada (19.8%, or 106,379 infected computers), the United Kingdom (12.8%, or 68,577 cases of infection) and Australia (6.1%, or 32,527 infected hosts).

To protect their computers against infection by BackDoor.Flashback.39, Doctor Web's specialists recommend that Mac OS X users download and install the security update offered by Apple:

Monday, April 2, 2012

Global Payments Breach


Welcome InfoSec Daily Podcast listeners! I'm going to address a few items related to this story that were discussed on last night's show.
  • To the best of my knowledge, participation in VISA's Service Provider Registry is required for all service providers potentially storing VISA cardholder data. Based on my experience, this is primarily a way to track service providers and a marketing tool. Even though Global has been booted off the list, they can still continue to do business, process VISA cards, and sign up new merchants. If anyone has more direct experience or corrections, please comment below.
  • PCI is applied differently to service providers like processors, but in the opposite direction from what you were thinking. Service providers actually have more requirements to comply with than a merchant would: they do the full PCI DSS plus a few additional requirements that apply only to service providers. They also hit level 1 compliance (full Report on Compliance annually, third-party annual audit required) at far fewer annual transactions than a merchant would. I think this misunderstanding comes from the fact that, traditionally, issuers haven't needed to be PCI compliant. That's changed in recent years.
  • YES, the requirement not to store track data applies equally to processors as it does to merchants. Issuers (financial institutions that actually brand and send out credit cards) are the only ones with a good chance of getting an exception for storing track data, as they are the original source for producing/creating that data.

Here's the original post:

It isn’t so much the size of this breach that is significant, but the fact that one of the largest global payment processors got popped. Visa has allowed them to continue processing credit cards, but dropped them off their service provider registry (which is a BIG deal). The breach only affects North American merchants and cardholders. To give you an idea of how bad a breach at a large credit card processor can be, if a month’s worth of the transactions they handle were exposed, it is entirely possible that over 90% of all cardholders in the US would need new credit/debit cards.

This doesn’t happen often. I only know of two other cases where a processor was hit by a breach. CardSystems Solutions, as a business, was literally destroyed by their breach. VISA and AMEX revoked processing rights, forcing CardSystems to shut down operations and sell off assets almost overnight. Heartland Payment Systems is the most recent case, and the second largest breach ever at 130 million accounts. They were also stripped from the registry, but managed to recover, regain PCI compliance, and get back onto the registry within a year.

Global Payments had a public conference call at 8AM this morning that I didn’t have time to listen to, but has resulted in an explosion of news stories on the breach.

The worst thing I've been able to determine from the details so far is that it seems Global Payments was storing track data. The PCI DSS explicitly forbids storing track data (requirement 3.2.1), and PCI considers the storage of sensitive authentication data to be one of the most serious PCI violations. CardSystems was effectively shut down for a lesser violation, though their breach was much larger.
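As an aside, one reason track data keeps turning up in breaches is that it leaks into logs and debug dumps without anyone noticing. Here is a minimal, hypothetical sketch of what a scan for stored track data might look like; the patterns are loose approximations of the ISO/IEC 7813 Track 1 and Track 2 formats, and the function name and sample lines are my own invention (the card number below is the standard "4111..." test number, not a real card):

```python
import re

# Rough patterns for magnetic-stripe track data (ISO/IEC 7813):
# Track 1 starts with '%B', then a 13-19 digit PAN, '^', cardholder name, '^', expiry.
# Track 2 starts with ';', then the PAN, '=', then expiry date and service code.
TRACK1_RE = re.compile(r"%B\d{13,19}\^[^^]{2,26}\^\d{4}")
TRACK2_RE = re.compile(r";\d{13,19}=\d{4}")

def contains_track_data(text: str) -> bool:
    """Return True if the text appears to contain raw track data."""
    return bool(TRACK1_RE.search(text) or TRACK2_RE.search(text))

# Fabricated example log lines:
clean = "2012-03-30 auth approved txn=4421 amount=19.99"
dirty = "debug dump: ;4111111111111111=15121010000000000000?"

print(contains_track_data(clean))  # False
print(contains_track_data(dirty))  # True
```

A real compliance scan would need to handle far more formats and false positives, but even a crude sweep like this over logs and database dumps can catch the most obvious violations of requirement 3.2.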

It will be interesting to see if any of the details of the breach are released. These details are essential for the rest of the industry to learn from Global's mistakes. I'd like to see:
  • The attack vectors used, and the level of sophistication necessary to breach Global.
  • How long the attackers had access to systems
  • If track data really was stored, and what Global's excuse for such a violation is
  • Why the breach was limited to only 1.5 million accounts in North America. A large processor like Global might process 1.5 million transactions in just a few days. Why weren't more accounts stolen? Why only North America? Perhaps some effective segmentation was in place? That would be good news that the PCI Council would be happy to point out.
  • And of course, we'll hopefully eventually find out who the perps were, and their level of hacking expertise

Time will tell.

Wednesday, March 21, 2012

Will Geode be Safe to Use?

Thanks to Twitter, I stumbled upon an innovative new solution to reducing or replacing the wallet. You can read the details for yourself, but Geode basically copies your credit card information, and regurgitates it into its own reprogrammable card as you need it.
Without more details about their security procedures, I'd assume this is a big liability in your pocket at this point. I'd urge iCache to use some of that Kickstarter surplus to get some 3rd party validation on their security.
It's not that we can't be nice guys, but in the security world, we deem a product insecure until a third party has had the opportunity to validate the robustness and validity of its security claims.
There is a lot of room for abuse from where I sit. The software piece of Geode (an iPhone app) appears to be storing track data (which should never be stored, according to the payment brands) and the CVV2/CVC2 codes, which are never supposed to exist except on the physical card. That's the whole point of the security codes - they are supposed to prove you have physical possession of the card. I understand the product aims to "replace" your cards, but the payment brands (VISA, MC, DISC, AMEX, JCB) have final say where that is concerned.
The FAQs on the website put a lot of emphasis on the safety of your data from the perspective of an attack that seeks to access the app directly. There is no mention of what an attacker could do with direct access to the phone data, or a forensic image of iPhone data. It also seems that the encryption key is the user's fingerprint.
At a minimum, this needs cryptographer, mobile device security expert and payment brand blessings before I'd be comfortable recommending it to friends or using it myself.