Securing Passwords, One Way Hashes, PBKDF2, PHP and You

Plain text passwords and simple one way hashes are not enough to protect your users. You need salt, pepper, and peanut butter. Am I crazy, you ask? Maybe, but read on.

It happens to huge companies (LinkedIn, eHarmony), to the little guys, and to everyone in between. Databases get breached and passwords get cracked. It always surprises me to hear how many thousands of users had the password “password”, or that the target’s password hashes were cracked in a matter of hours or days, or worse, that their passwords were stored in plain text. At this point it is easy to make passwords reasonably secure with just basic knowledge of cryptography and hashing. In fact, as a competent developer, you don’t need to know much at all about the hows and whys of crypto to secure your users’ data.

First, do not think you are safe because you run your passwords through MD5 or SHA-256. MD5 has long been broken, and while SHA-256 is not, both are so fast to compute that using them alone is barely better than storing passwords in plain text. Cryptographic hash functions are NOT password hash functions!

One Way Hashing

A one way hash performs a series of mathematical operations that transform input into a (mostly) unique output, called a digest. Because these operations are one way, you cannot ‘decrypt’ the output; you can’t turn a digest back into the original input. A good cryptographic hash function should not generate the same digest for different inputs, and when the input is changed even slightly, the resulting digest should be very different.

A typical use case would be when a user signs up for a website and creates a password. The conscientious developer takes the plain text password, runs it through a hashing function (let’s say, MD5) and stores the result in the database. When the user goes to log in the next time, they enter their password, and the authentication mechanism runs it through MD5 and compares the result against what is stored in the database.

That sounds pretty safe, right? Wrong. It’s akin to locking the door and leaving the window open. If the database were stolen it might be harder to infer anything about the passwords just by looking at the data, but it isn’t really any harder to guess or “crack” them.

Password Hash Functions

… are not the same as cryptographic hash functions

Just running a plain text password through a cryptographic hash function doesn’t defend it very well. There are a number of major problems and threats that are not being addressed. The two biggest are speed and recognizability of hashes.

Hashing Speed

Cryptographic hash functions are used for lots of things, most of which involve fingerprinting and verifying data. They are designed to be very fast so that they don’t slow down the processes built on top of them. This presents a big problem for password hashing: speed. The faster a function creates a digest, the more frequently an attacker can guess a password and compare the output. MD5, for instance, is so fast that on modest hardware an attacker can make billions of guesses per second. Think about it: do you need that speed to let your users log in? Between typing a username and password and waiting for the page to load, will they notice whether the authentication mechanism takes .000001 seconds or 1 second? No, not to a degree that degrades their experience. For password hashing, slower is good.
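To put some rough numbers behind that, here is a quick timing sketch (in Python, since the post’s PHP snippets aren’t reproduced here) comparing one fast cryptographic hash against a deliberately slow, iterated derivation:

```python
import hashlib
import time

password = b"password"
salt = b"0123456789abcdef"

# One fast cryptographic hash: exactly what an attacker loves.
start = time.perf_counter()
hashlib.md5(password).hexdigest()
fast = time.perf_counter() - start

# A deliberately slow derivation: 200,000 iterations of SHA-256.
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
slow = time.perf_counter() - start

print(f"single MD5: {fast:.6f}s, 200k-iteration PBKDF2: {slow:.6f}s")
```

The absolute numbers depend on your hardware, but the gap of several orders of magnitude is the point: a legitimate user never notices the slow version, while a brute-force attacker feels every millisecond of it.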

Recognizability of Hashes

What happens when 10,000 people all use “password” as their password? Their hashes are all the same! Crack one account and you’ve automatically cracked everyone else with the same hash. If an attacker has a huge precomputed list of hashes (called a rainbow table), they can scan your database for matches and walk away with a huge percentage of your system’s passwords without ever making a single guess.
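You can see the problem in two lines. A hypothetical pair of users who pick the same password get identical digests, and that digest sits in every public rainbow table:

```python
import hashlib

# Two different users pick the same password...
alice = hashlib.md5(b"password").hexdigest()
bob = hashlib.md5(b"password").hexdigest()

print(alice)          # 5f4dcc3b5aa765d61d8327deb882cf99 -- in every rainbow table
print(alice == bob)   # identical digests: crack one, crack them all
```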

Fortunately though, there are a few relatively easy things you can do to make their life harder. You don’t need to do anything heroic and the code isn’t even that tricky. Heck, most of it already exists and is free to use.


Salt

Talk about low hanging fruit. All you have to do is add some random characters to each password (and keep track of them). A salt is a random sequence of data that is added to the hash function input or to the password string itself. Say you generated the salt “12345” for the password “password”: you could put them together as “password12345” and run that through your hash function to produce a digest that isn’t so easily given up. Every password should have its own salt, and the salt should be at least 32 characters long to make the digest harder to guess.

This is a basic salt generation algorithm. Do NOT use this function for generating salts where you are trying to protect details like credit card numbers, or even email addresses for that matter. It’s a pretty poor implementation, really.

When we create a user password we’ll generate a salt, add it to the password string, hash the password to get a digest, then store the salt and digest in the database. To log the user in subsequently we could use functions like the following:
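The original PHP functions aren’t reproduced in this copy of the post; as a sketch of the same flow in Python (function names here are mine, not the original’s):

```python
import hashlib
import secrets

def generate_salt(length: int = 32) -> str:
    # One random salt per password; secrets is a cryptographically secure source.
    return secrets.token_hex(length // 2)

def hash_password(password: str, salt: str) -> str:
    # Combine salt and password, hash, and store BOTH the salt and the digest.
    return hashlib.sha256((password + salt).encode()).hexdigest()

def verify_password(password: str, salt: str, stored_digest: str) -> bool:
    # Re-hash the login attempt with the stored salt and compare digests.
    return secrets.compare_digest(hash_password(password, salt), stored_digest)

salt = generate_salt()
digest = hash_password("password", salt)
print(verify_password("password", salt, digest))  # True
print(verify_password("hunter2", salt, digest))   # False
```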

Password Stretching

Stretching is creating a digest of a digest (of a digest of a digest … of a digest … you get it). If you create a digest of a password, then create a digest of that digest X number of times, an attacker can no longer simply create one digest (from a rainbow table or otherwise) and compare it directly to what is stored in the database. To compare passwords you have to run the exact same number of iterations. This is useful on multiple fronts: it slows things down, and (in conjunction with salting) your hashes no longer look like everyone else’s. It stands to reason that if hashing a password once takes X amount of time, hashing it twice will take approximately 2X, so you’ve just cut in half the number of guesses per second an attacker can make. Congratulations! A good system makes a single digest take long enough that brute-forcing a password would take more than a lifetime.

Let’s modify our password hashing function:
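The modified PHP didn’t survive in this copy of the post; a rough Python sketch of the idea, with the salt mixed back in on every round, might look like this:

```python
import hashlib

def stretch_password(password: str, salt: str, iterations: int = 50_000) -> str:
    # Hash the digest over and over, re-adding the salt each round so the
    # intermediate digests stay unique to this user.
    digest = password
    for _ in range(iterations):
        digest = hashlib.sha256((digest + salt).encode()).hexdigest()
    return digest

# The SAME salt and iteration count must be used when verifying a login attempt.
stored = stretch_password("password", "a1b2c3")
print(stretch_password("password", "a1b2c3") == stored)  # True
```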

Notice that I have re-salted every hash to add extra randomization to the digest… just another wrinkle to throw at an attacker.


Additionally, you can have an application-wide salt, called a pepper. Think of it as a salt for the salt, except this one is unique to the application, server, environment, or database rather than to a single password. You could use it like this: hash('sha256', $pepper . $password . $salt);
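In Python terms the same idea looks like this (the pepper value and names are illustrative; the pepper lives in application config or code, never in the database):

```python
import hashlib

PEPPER = "application-wide-secret"  # kept in config/code, NOT in the database

def hash_with_pepper(password: str, salt: str) -> str:
    # Mirrors hash('sha256', $pepper . $password . $salt) from the PHP example.
    return hashlib.sha256((PEPPER + password + salt).encode()).hexdigest()

print(hash_with_pepper("password", "12345"))
```

A stolen database alone is now not enough: without the pepper from the application server, even a correct password guess won’t reproduce the stored digest.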

Adaptive Key Derivation

Adaptive key derivation functions generate digests from passwords while applying salting and stretching. They implement many more wrinkles and, most importantly, they are tested against attack vectors you might never think of. Rolling your own cryptographic functions introduces a lot of unnecessary exposure and takes more time than using generally accepted libraries, implementations, and functions. I’m going to focus on the one I know best, PBKDF2. There are others, such as bcrypt and scrypt.

Peanut Butter Keeps Dogs Friendly Too

PBKDF2 (Password-Based Key Derivation Function 2) is probably the most widely used derivation function. It is a container for a hash function, e.g. SHA-1 or RIPEMD. For each input it applies a salt and iterates the hash many times in such a way that not much entropy (length and randomness) is lost and, crucially, in such a way that generating a single digest is SLOW. NIST recommends it for deriving strong encryption keys from passwords.
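The post’s examples are in PHP; as an illustration of the same primitive, Python ships PBKDF2 in the standard library as hashlib.pbkdf2_hmac:

```python
import hashlib
import secrets

password = b"password"
salt = secrets.token_bytes(16)   # unique per password, stored alongside the digest
iterations = 100_000             # tune so one digest is slow on YOUR hardware

# PBKDF2-HMAC-SHA256: salted, stretched, and standardized.
digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(digest.hex())

# Verifying a login repeats the derivation with the stored salt and iteration count.
again = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(secrets.compare_digest(digest, again))  # True
```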

Adaptive key derivation is a great first step, but remember, it is one tiny piece of securing user data.

Below is a very basic class I created that can be used for generating salts and digests in a variety of ways. You can download it here. The file will be updated regularly, so stay in touch!
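The downloadable PHP class itself isn’t embedded in this copy of the post; to give the flavor of it, here is a minimal Python equivalent (the structure and names are mine, not the original’s):

```python
import hashlib
import secrets

class PasswordHasher:
    """Minimal salt + PBKDF2 helper; packs salt, iterations, and digest together."""

    def __init__(self, iterations: int = 100_000):
        self.iterations = iterations

    def create(self, password: str) -> str:
        salt = secrets.token_hex(16)
        digest = self._derive(password, salt, self.iterations)
        # One storable string: salt$iterations$digest
        return f"{salt}${self.iterations}${digest}"

    def verify(self, password: str, stored: str) -> bool:
        salt, iterations, digest = stored.split("$")
        attempt = self._derive(password, salt, int(iterations))
        return secrets.compare_digest(attempt, digest)

    @staticmethod
    def _derive(password: str, salt: str, iterations: int) -> str:
        return hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt.encode(), iterations
        ).hex()

hasher = PasswordHasher()
record = hasher.create("password")
print(hasher.verify("password", record))  # True
print(hasher.verify("hunter2", record))   # False
```

Storing the iteration count inside the record means you can raise it later for new passwords without breaking verification of old ones.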

Measuring Download Speed from Linux Command Line

I recently needed to test the network speed of my ISP from my Ubuntu 10.04 LTS server, and I wanted a better way than going to a Linux distro’s website and downloading an ISO. I stumbled across a post on Stack Overflow with a URL to a test file, and my speedtest scripts were born. I created two scripts: one using wget and one using curl. A lot of machines don’t ship with curl by default, but it gives much richer progress output than wget while downloading.


What Do They Do?

The scripts use wget or curl to download a 500 MB test file, and you can view the speed results in real time. This is an entirely unscientific method of testing your speed, but it’s much better than, say, downloading the latest Ubuntu ISO via wget. The output is directed to /dev/null, so everything downloaded is simply thrown away (no cleanup needed).
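The scripts themselves are wget/curl one-liners; the same idea as a Python sketch, for comparison (the URL below is a placeholder — substitute any large test file):

```python
import time
import urllib.request

def mb_per_s(nbytes: int, seconds: float) -> float:
    # Average throughput in megabytes per second.
    return nbytes / seconds / 1_000_000

def measure(url: str, chunk_size: int = 1 << 16) -> float:
    """Download url, discard the bytes, and return the average speed in MB/s."""
    start = time.perf_counter()
    total = 0
    with urllib.request.urlopen(url) as response:
        while chunk := response.read(chunk_size):
            total += len(chunk)  # count it, then throw it away (our /dev/null)
    return mb_per_s(total, time.perf_counter() - start)

if __name__ == "__main__":
    # Hypothetical URL -- point this at a large test file of your choosing.
    print(f"{measure('http://example.com/500MB.zip'):.2f} MB/s")
```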


The Code

Download both scripts here

Changing Created By or Author Property in SharePoint 2007

After a frustrating experience, I’d like to share the ‘secret’ of updating the Created By and/or Author SPListItem system properties. I was trying to set them but found that the changes didn’t take: my object’s values were reset after calling SPListItem.Update();

The Wrong Way

If you’re like me, you probably assumed something similar to this would work just fine:

Not so fast, my friend! If you trace or quickwatch your variables you’ll notice that the Created By property is reset after the update. Why? Because you didn’t set all the necessary properties. Oh, you didn’t know you had to set other properties at the same time to get Created By or Author to stick? Neither did I, until now…

The Right Way

Maybe not the correct way, but the way I got it to work:

That’s right: to save the Created By property you must also set the Modified By and Modified properties AND call SPListItem.UpdateOverwriteVersion() to get your new value to actually stick. Hopefully you find this post sooner than I found my answer.

Creating an SSH Proxy Tunnel with PuTTY

This tutorial is aimed at Windows users and focuses on PuTTY as our SSH client of choice.

Are you stuck behind a firewall or looking to add some privacy to your browsing? Whenever I’m off my own network I fire up an SSH tunnel back to my own servers and send all my browsing traffic through it. Why? Because Big Brother may be watching, and I can bet you someone even worse is trying to. Also, it could be incriminating if people knew how often I check my 9th-place (out of 10) Fantasy Football team’s stats.

What is Tunneling? The Over Simplified Definition

When your browser (or other client) requests a webpage (or anything off the Internet) it sends a request from your computer through a series of routers, switches, firewalls, and servers owned and monitored by other people, companies, and ISPs until it reaches its destination, then follows the same (or similar) path back to your machine with the kitten pictures you asked for.

Tunneling bypasses some of the rules that these companies or ISPs may be enforcing on you by creating a direct, encrypted, connection to your tunnel server that can’t be easily peered into by prying eyes. This means that web pages that are blocked can be seen and passwords that are sent can’t be looked at.

For a much better definition, please see Wikipedia

Install PuTTY

There are other SSH clients and tools designed specifically for SSH tunneling and SOCKS proxying, but I prefer this approach because PuTTY also gives you a general-purpose SSH client, which you should no doubt possess anyway.

  1. Download PuTTY here (choose the archive version)
  2. Make a new directory at C:\bin
  3. Extract the contents of the putty archive into C:\bin
  4. An extra step that’s not really necessary- Add C:\bin to your Windows system path (if you don’t know how, skip this or google it)

Configuring PuTTY

  1. Fire up the client and enter the hostname and port
  2. Type in a title under Saved Sessions and press Save
  3. On the left side, go to Connection->SSH->Tunnels
  4. In Source Port enter 8080 (this can be configured to be whatever you want, just remember it)
  5. Choose the Dynamic radio button under Destination
  6. Press Add, you should then see D8080 in the box above
  7. Go back to Session on the left side and then press Save to save the changes


Configuring FoxyProxy

To utilize the tunnel to its full benefit, you need to set up a SOCKS proxy in your browser. I’ll describe how to use the FoxyProxy proxy-switching plugin. It works for both Firefox and Chrome on Windows, which are really the only browsers you should be using.

  1. Download FoxyProxy for your browser here.
  2. Once installed, go to the FoxyProxy options
  3. Click Add New
  4. Click the General tab and enter a name in the Proxy Name box
  5. Make sure Perform remote DNS lookups on hostnames loading through this proxy is checked – we’ll discuss this a little later
  6. Select the Proxy Details tab
  7. Enter localhost in the Host box
  8. Enter 8080 in the Port box
  9. Check SOCKS Proxy? and make sure the SOCKS v5 radio is checked
  10. Press Ok to save
  11. At the Select Mode drop down, choose your freshly created SOCKS Proxy


As long as your PuTTY SSH connection stays up, your proxy tunnel will be open and you will browse the internet just as before, minus many firewall restrictions and with greater security.

Final Note: Secure DNS Resolution

As far as I understand it, Chrome will automatically use your SOCKS proxy for DNS resolution, but Firefox doesn’t by default. This means firewalls or DNS servers could still block requests to certain websites by refusing to tell your browser how to look up the remote server. FoxyProxy should fix this thanks to the remote-DNS setting we checked earlier, but that doesn’t guarantee your IM client, other browsers, or other internet clients will securely resolve DNS requests when using the SOCKS proxy. For more information on exactly what DNS is, browse over to Wikipedia.

I recommend a third-party DNS service like OpenDNS to further enhance the safety, speed, and security of your DNS lookups. It can protect you from malware and other bad things, and it can also provide a ‘less restricted’ internet.

NSLog Conditionally in Debug Mode and NSLog Macros

Using Objective-C Macros to Conditionally Log

During the course of developing and debugging my first iOS apps, I realized there has to be a decent way to use log statements for debugging and error messages without a lot of code overhead and manual changes when switching between Release and Debug builds.

Using macros and compiler settings, you too can quickly separate the statements out and streamline your debugging/logging code.

Creating Your Macros

Find the -Prefix.pch header file for your project and open it for editing. If your project’s name is MyProject you will look for MyProject-Prefix.pch.

Add the following lines to the end of your Prefix header file:

What We Did

I have prefixed each macro with Ash so that they are easily recognized as my own and won’t collide with anything else. As you can see, we have created a few different ways to log: a standard wrapper around NSLog that only fires when we’ve built with a debug flag, plus two methods for creating detailed log messages on the fly that include the message along with the function and line number it originates from. The nice thing about these macros is that you can easily change the string format to log any way you want.

Xcode Settings

  1. Select your project in the Xcode explorer/left pane (Xcode 4.2 shown)
  2. Select Build Settings in the Xcode center window
  3. Search for preprocessor in the Build Settings section and add DEBUGGING as a Debug Preprocessor Macro
Becoming an iOS Developer

I found a nice, straightforward post by Josh Smith about becoming an iOS developer with a background in other technology stacks. He did a great job laying out the basics; I’m not going to re-hash them here, just add my feelings on the matter.

As someone with a background in .NET and Java (among others) I can definitely feel where he’s coming from when he says:

I warn that it will take a considerable amount of time, effort, and patience to get over the learning curve. If you think that going from WinForms to WPF requires a major mental adjustment, you ain’t seen nothin’ yet.

He’s not lying! Over my years of application development I’ve become completely comfortable with the MVC pattern and many abstract concepts of ‘good’ application design. I’ve found that all of it (or at least a lot) still applies in the iOS world, but the execution is so different from what you’ve done on (probably) any other platform that it’d almost be better to start from scratch. Almost… maybe.

I bounce between Visual Studio, Komodo IDE, and Eclipse every day for various languages and products. I use vim, Notepad++, and TextWrangler just about every day (I dual boot Win7 and OS X and bounce between both; all my servers are Linux). Getting used to yet another IDE seemed like a pain, but Xcode 3 was pretty okay. Nothing, though, threw me for a loop like the upgrade from Xcode 3 to Xcode 4, with the introduction of segues, Interface Builder moving into the IDE, ARC, and a number of other things. I’m still finding new things every day. It’s a much better IDE than Xcode 3, yet it still can’t go toe to toe with Visual Studio.

Apple expects developers to be smarter than Microsoft does. Microsoft works hard to ensure that programming technologies are usable by as broad a range of people as possible. Their tooling and documentation assume you aren’t quite sure what you’re doing. Apple, on the other hand, is not nearly as helpful and pandering.

Whether Apple wants to weed out some of those VB6 types or just assumes you’ll figure it out, they certainly spend a lot less effort catering to the lowest common denominator.

Final Thought

I love developing for iOS. It is a pretty homogeneous environment, Objective-C lets me flex some of those C muscles I haven’t flexed in years, I can rapidly bring ideas and wireframes to life enough to experiment with, and it’s another feather in my cap.

If you’re just starting out, turn off ARC, get used to managing memory on your own, and prepare for a lot of initial frustration if you’re used to picking up languages as easily as I have.

Reduce Three20 Compile Time for iOS (Making Static Libraries)

In using Three20 with Xcode 4, I have found life to be a little more difficult than in Xcode 3.

Project changes have required me to fiddle with project/target compile settings to get builds working, I’ve often seen the dreaded “Three20/Three20.h” file not found error, and at times I haven’t been able to build for archive at all; these have been recurring issues while developing some pretty simple iOS apps with Three20.

Finally, even when the project builds correctly, I spend almost all my compile time rebuilding the Three20 libraries. In search of a better way, I stumbled upon a Stack Overflow question, and later a blog post, that together have essentially eliminated these hassles for me.

Step 1: How to build any library for all platforms

  • Open an existing static library project.
  • Select the project file on the top, and then select the target you want to build.
  • Go into the Build Phases section
  • Add a new “Run Script” build phase
  • In this phase, paste the following script:

The following code, which was copied directly from the blog post linked above, tells Xcode to compile all the targets for both Device and Simulator. The resulting output will be a .a library file for use in other projects.

Applying this to Three20

  • Go into {Three20 Root Directory}/src/Three20 and open Three20.xcodeproj
  • Select the Three20 target and add the above script into the appropriate build phase
  • Nothing needs to be done to the “Three20UnitTests” target; we won’t be using it for anything.
  • Expand the Dependencies folder for your Three20 project, in the files list on the left side of Xcode
  • Add the same script phase you did before in all of these dependent projects.
  • Do this for every dependency project. If you don’t, Xcode will still produce all the .a libraries, but only the main Three20 library will be built for every device and simulator platform, and all of them need to be.
  • Once you have added the script build phase to all projects and dependencies, build the project (and be patient).

Giving Due Credit

Had I not stumbled onto Christos Sotiriou’s blog post, I probably never could have figured this out end to end on my own. He even offers a direct download of compiled libraries and headers if you want to get up and running quickly.

Three20 and Xcode 4 – How to solve Three20.h not found

Update: 18-Feb-2012

Even after the upgrade I have found that certain builds and project settings were flaky. To improve your build speed and reduce all the configuration hassles of using Three20 and Xcode 4 together, check out my latest blog post on Building Static Libraries

Update: 05-Feb-2012

After upgrading to Three20 and following the basic install instructions here, I had no problem building without changing any project settings. Your mileage may vary, however.

I’d still consider myself a novice in the world of iOS development, and the transition from Xcode 3 to Xcode 4 has thrown me for a loop in a few areas. I’ve grown to like storyboards and find Xcode as an IDE to be getting better, but some things drive me nuts. For instance, spending the better part of a day figuring out how to get Three20 to work in a project that was working JUST fine under Xcode 3!


Follow the instructions provided by Three20 to get yourself oriented with what is going on.

Getting the Build Option to Work:

  • In the Project Navigator view, do the following for each of the Three20 projects (e.g. Three20, Three20Core, etc):
  • Click on the project
  • Go to Build Settings
  • Go to the Deployment section and make sure Skip Install is set to YES for all the configurations (Debug, Internal, Release)
  • Click on the project’s target (under the Targets section) and double check that Skip Install is set to YES for each configuration as well
  • Make sure to repeat these steps in each Three20 project in your project tree

Configuring Build Settings

  • In the Project Navigator view, select your project and then Build Settings
  • Add the two following entries to the Header Search paths and make them the first entries in the list:
    • $(BUILT_PRODUCTS_DIR)/../three20
    • $(BUILT_PRODUCTS_DIR)/../../three20
  • Make sure to set it both in Release and Debug configurations, and that the same build settings appear in your project’s target

Setting the Build Location

Go to Xcode4 Preferences [Cmd + ,] > Locations > Derived Data > Advanced and select Place build products in derived data location.

Clean your project, then build it. Voila.

Quickly Clean Up .svn Directories Recursively

Did you forget to export your subversion project before copying it? Inheriting a subversion mess? Here’s how you clean it up real quick on Mac and Linux:
Change to the root of your target project/directory
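The classic shell incantation for this is find . -type d -name .svn -exec rm -rf {} + — and if you’d rather have something cross-platform and scriptable, here is the same cleanup as a Python sketch:

```python
import os
import shutil

def remove_svn_dirs(root: str) -> int:
    """Recursively delete every .svn directory under root; returns the count removed."""
    removed = 0
    # topdown=True lets us prune each .svn from the walk after deleting it.
    for dirpath, dirnames, _ in os.walk(root, topdown=True):
        if ".svn" in dirnames:
            shutil.rmtree(os.path.join(dirpath, ".svn"))
            dirnames.remove(".svn")
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"Removed {remove_svn_dirs('.')} .svn directories")
```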

Authentication Using WordPress and Zend Framework

I recently had the need to implement a Zend Framework web app that could authenticate against WordPress without necessarily using WordPress as the front end. I was very relieved to find out it was quite easy to do!

In your application’s index.php (typically found at public/index.php) you need to include the WordPress header file to make sure you have access to the WordPress functions later in your application:

I have the following code in my LoginController.php (application/modules/public/controllers/LoginController.php) file.

doWordpressAuth function