Moose Droppings

Review: Ignite Woo Wishlist & Gift Registry Pro

Not being entirely familiar with all the trappings of the WooCommerce ecosystem, I set about finding a good registry plugin for a friend’s (soon to launch) eCommerce website. Some initial searching led me to the Wishlist & Gift Registry Pro plugin developed by IgniteWoo, a purveyor of many WordPress/WooCommerce plugins.

Limited Functionality

I really wish I had done my due diligence before paying them any money: the plugin provides almost no functionality that isn’t offered by the completely free YITH Wishlist plugin. On top of that, there is no registry functionality offered with this plugin either. It is all about wishlists, with no real customizable functionality.

No Support

After paying Ignite Woo money I have been completely unable to reach anyone in support to answer a myriad of technical questions. I also had an issue with the license key they provided me and have still had no one reach out to me. I’ve contacted them via email, online chat and Twitter and have had absolutely nothing resolved.

Verdict: AVOID

I typically hate to trash other companies, as there is no guarantee that the experience I have had with them is indicative of how they treat others. However, I have found that the WordPress forums are full of people who have gotten nothing but radio silence from this company. I would highly suggest you avoid their Wishlist & Gift Registry Pro plugin, as it literally provides no additional functionality over the free alternatives. I would also recommend staying away from IgniteWoo in general: their product code is not very clean, their documentation is terrible and their support is non-existent.

I openly invite one of their representatives to get ahold of me. I’d love to work more closely with them and recant my position. Unfortunately, we are two months in, they still have our money and they have provided no worthwhile service to us.



DevOps for the Agency

Managing the Process and Tools of Multivariant Software Development

The Problem

DevOps at an agency or consultancy presents challenges not normally encountered at a software product company. Typical product and platform companies work within a narrow band of operating systems and technologies, whereas agencies are often developing for every conceivable platform, language, and device. This can make the usual explanations of DevOps seem too server- or ‘IT’-focused, with little guidance on how to bring order to the chaos. The goal of this document is to explain some of the best practices available to your team(s), show how to use them in real life, and point out some tools that can help.

A normal week for an agency’s development team can include deployments in multiple languages to Windows Server and Linux (of various flavors) running on hosted virtual machines, clustered in the cloud (typically Amazon Web Services, Azure or Rackspace), or even on bare metal. Without any process this is utter chaos. With the bare minimum of process you can mostly avoid egg-on-the-face public failures, but are your servers secure? Can you deploy your projects to production reliably if your rock star engineer hits it big with the next Angry Birds and decides to retire to his own personal island? Probably not, yet.

Applying a good process to your current chaos doesn’t have to be hard or painful. Material such as Visible Ops and the Information Technology Infrastructure Library’s Change Management guidance applies broadly to change management at an organizational level, including (but not limited to) configuration and hardware changes for servers and other broad topics. It doesn’t cover the intricacies involved in having different project types with vastly different technology stacks, and clients and projects with different management methodologies. Process that starts with the software development and quality assurance teams can radiate outwards and eventually encompass every team involved in software development projects at your company, even if it is only a heightened awareness of the process.

Crawl, Walk, Run

A word of note: bite off meaningful chunks of change and work on instituting those before moving on to the next. These changes can be pushed from the software development team outward; depending on the organizational structure, this will grow to include technical/business analysts, QA and project management. Start with the processes your team can affect and grow the process change to include more as buy-in grows. Don’t try to go from nothing to everything in a month. Your team will likely wind up creating process documentation and sample artifacts (like deployment checklists), and finally your team members and other teams will need to be trained and informed.

Version Control

If your team is not using a version control system (like Subversion or Git), then stop reading this right now and begin the process of integrating a version control system immediately. Without a version control system, no other steps in regulating code and configuration change are possible or meaningful.

The Environmental Landscape

For the purposes of this conversation we’ll assume some basic good practices are being followed today, like having development, QA and production environments. Because of the kind of projects I’ve been working on for the last half decade, we’ve typically had one more environment: staging. This is because production is often (at the very least a slight majority of the time) not under the control of our agency but of the client’s IT or development team. Staging becomes our in-house production and client acceptance environment.

Separation of Environments

It is exceedingly important that each environment (Development, QA, etc.) has its own resources. Environments should not share database tables, but they could reasonably share database servers. For instance, Development and QA could use the same server but different databases or schemas, so that the data and content are separated. The same goes for file systems, load balancers, application servers and the like.

The Development Environment as a Test Bed

The development environment should be the first environment deployed to. A continuous integration tool like Jenkins or Travis is fantastic for automatically building the project, running unit tests, and even deploying to the development environment at regular intervals or on every version control check-in. Development servers are for testing integration and server configuration and for shaking out the deployment process. In practice, you should assume that development servers are always running the absolute latest code and are full of bugs in progress.
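
As a concrete (if simplified) sketch, the CI job can be little more than a shell script run on every check-in; the project name, server address and paths below are placeholders for whatever your stack actually uses:

#!/bin/bash
# build-and-deploy-dev.sh - run by the CI server on every check-in
set -e

# Compile and run the unit tests; any failure stops the deployment
mvn clean package

# Push the fresh build to the development server and bounce the app server
scp target/myapp.war deploy@dev.example.com:/usr/share/tomcat7/webapps/myapp.war
ssh deploy@dev.example.com 'sudo service tomcat7 restart'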

Every single change – code, configuration or otherwise – should be tested in this environment before ever moving up the chain to another environment.

Quality Assurance and Testing Environments

QA should be where completed fixes get tested by the quality assurance team and either passed or sent back for further development. Deployments to QA should happen at a much slower pace than deployments to development, but could still be as frequent as multiple times a day during periods of rapid development and testing (such as the night before a project is due).

Deployments to QA should come with a build number, version control revision number and a deployment log of all the bug tickets that are being addressed as well as notes on any other material changes.

Material changes will always include server configuration changes, database schema modifications and even changes to the deployment process itself. Technical leads, project managers and the QA team should all be made aware and sign off before deployments to this environment. In the absence of a staging environment, the QA environment is also where client review and approval should happen.

Upon final approvals all around, this is the time when your team will want to tag the revision in version control, assign a version number to the build artifact and save it in ‘escrow’ somewhere. Any additional documentation, such as staging deployment checklists or other process documentation, should be filled out prior to deploying to staging.

Staging as a Mirror (and a Safety Net)

In the agency world, production is often not entirely under your team’s control, which can make it awfully difficult to troubleshoot. When production is a load-balanced, highly scalable multi-tier environment and the development environment is a simple, single virtual machine hosting the application and database, troubleshooting development does not necessarily fix production. Even if production isn’t locked behind the client’s IT and your team has unfettered access, a bad deployment in production is as bad as it gets. The whole world (or at the very least, the client) will know that you’ve botched the job at the last minute. It’s a terrible feeling if you haven’t had the privilege yet.

Staging should be as close an approximation of production as is feasible. Instead of a 10-node cluster like production, maybe just two VMs will suffice, for example. If production will be in Amazon Web Services, staging should be up there as well. If the persistence layer will be on a separate device, separate it in staging as well. Certain things like authentication and third-party services may require staging to be publicly visible, which is a decision your team will have to make. We’re looking for as close as feasible, not necessarily a direct copy. There may need to be some configuration that would have staging use mock services instead of the real thing; these are all things that need to be planned with the team while defining the feature that consumes the services.

Deploying to staging is the final practice before public failure and should be treated just like production, including restricted access and planned deployments and upgrades. It should require approval from the technical lead, QA and the project manager to verify that all the bugs that were to be addressed in the current build have been addressed and no regression was detected in the QA environment. This process should be tightly gated and auditable. Who deployed it, what was deployed and when it was deployed should all be recorded for posterity.

This will be the final pre-production deployment and is where client approval and sign off will happen if this environment exists in your process. Upon final sign off in this environment your team will generate the production deployment checklist and any other documentation that needs to be created in addition to the staging documentation that was created for this particular deployment.

Production – The Final Frontier

This is where client relationships can be destroyed in an otherwise healthy project. Nothing will shake your client’s confidence more than a lack of communication or bad production deploys. Bad production deploys and poor overall project delivery will always boil down to one simple thing: poor change management. On some projects the poor change management and communication are the fault of the client’s own IT department; other times it will be the fault of your team. It will always be your team’s problem, however. As much as 80%[1] of the time spent fixing production delivery problems is spent finding the root cause. That means that if proper change management procedures are being followed, your team can have production back up and running up to five times faster than they otherwise would have been able to.

When it comes time for production delivery, the team should have a well-laid-out battle plan that includes scheduled maintenance times, documentation and notes for every server configuration change, database scripts and code changes being deployed, as well as the issue ticket references (the QA release notes will often suffice). If the environment is complicated, with load balancers and multiple nodes, an order of operations should be established, documented and followed to the letter.

Process Flow

An Auditable Series of Steps

The appropriate amount of undocumented change in any environment is zero. As dials and knobs get twisted in development environments it is important to record the changes being made and why. Application server and database configuration changes can have fantastic or devastating effects and are just as important as the new code and content being pushed. The most important part of each deployment is that exactly what is being deployed is documented and that someone is signing off on the deployment.

Deploying to QA

Going into QA should be a gated step that requires an okay from a technical lead and a QA lead (they may still be testing the last build!) along with some generic release notes. If your team uses a tool like Jira, Trac or other ticket tracking systems, the release notes for this step may simply be a matter of moving all the fixed tickets over to an “In QA” status or something similar.

Deploying to Staging

After QA has passed all the tickets and verified that no regression has taken place, the technical lead and project manager should schedule a staging deployment. This should run just like a production deployment: put things in maintenance mode, restart services, recycle application pools, the whole nine yards. This is your team’s final dress rehearsal and the last chance to validate the deployment process privately. If things don’t go well, document every issue and get a root cause. This will often result in going back to the development environment to test proposed changes and then proceeding through QA and back into staging. With an automated build and deployment system this is an almost painless experience, and the difference between destination environments is nearly immaterial.

After the deployment to staging, QA will want to do a final regression pass in that environment, and any client sign-off will need to happen. Prior to client review your team will be looking for obvious issues like content and data differences between environments (development and QA may be full of lorem ipsum and fake content, after all) as well as testing that the proposed changes work in staging as they did in QA. This is where each technology and team has its own flavor of doing things. Given the level of QA effort already provided, it would seem redundant to spend the same amount of human time in staging; otherwise, why have both staging AND QA environments? This is where things like remote unit and integration tests (like Arquillian for Java) can give a quick ‘green light’ to functionality in each new environment.

After that, version numbers need to be generated, the source needs to be tagged or branched in version control and, if possible, the actual binary deployment should be compressed and held as an artifact somewhere. Build/continuous integration systems have functionality that generates build numbers, tags source and saves artifacts, all from the user interface. The process doesn’t need to be elaborate, just well documented and strictly enforced.
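
If you are doing this by hand with Subversion, it can be as simple as the following (the repository URL, revision and version number are placeholders):

# Tag the approved revision so the exact release can always be rebuilt
svn copy -r 1234 http://svn.example.com/myproject/trunk \
    http://svn.example.com/myproject/tags/1.2.0 \
    -m "Tagging 1.2.0 for the staging/production release"

# Compress the build output and stash it somewhere safe ('escrow')
tar czf myproject-1.2.0.tar.gz target/myproject.war
cp myproject-1.2.0.tar.gz /mnt/releases/myproject/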

Deploying to Production

Deploying to production environments should follow the same (documented) steps as deploying to staging environments. If there are differences in configuration or environments they should be explicitly listed in a production deployment checklist that has been prepared with all the changes from the last production release.

Production Deployment Checklist

The production (and staging) deployment checklist should be a combination of two things: a standard deployment checklist that reflects all the steps that should be necessary for every production deployment (like where and how to copy the deployment artifacts), and any additional steps that are needed for this particular deployment based on the documentation from the QA and Staging environment deployments and release notes.

For example, database schema changes won’t necessarily occur every time, but when they do they should be completely scripted and ready to run; the deployment checklist should say when to run the scripts, which scripts to run, and detail the expected results.

The creation of this document will force technical leads to stop and think about what differences may exist between the staging deployment and this one, as well as which other teams (like the client’s IT team) will need to be wrapped into the plan. It allows for timing and an order of operations and minimizes the chance of exceeding the scheduled maintenance window.

Useful Tools

Tools will vary based upon the core technologies of your company, but if your team is producing a number of projects in various languages there are many tools that can be strung together to build and deploy everything from .NET and Java projects to Python and PHP. In a follow-up post I will discuss how I’ve been using these tools in real life to automatically spin up local environments for developers as well as development, QA, staging and production server environments; one-click build and deploy; artifact management and source tagging; and a whole host of other automated tasks that can keep the noise to a minimum.

You will want to check back regularly for edits, updates and links to follow ups!

Jenkins

Jenkins is an application that monitors executions of repeated jobs. It can continuously build and test software projects as well as store build artifacts, generate build numbers and store test results. It also stores project health details and can notify team members when the build breaks.

Apache Ant

Ant is a Java library and command-line tool used for driving build processes that are defined in build files and targets. It is a very versatile tool that can compile software projects, run unit tests, copy (and move) files and do just about anything you would manually do when building a project.

Node.js

Node has a variety of cross platform plugins (like SCP and SSH) that can be very useful for scripting automatic remote deployments, increasing repeatability and decreasing maintenance time.

Vagrant

Vagrant aids in creating and configuring lightweight, reproducible, and portable development environments. Get new developers spun up into a project quickly.

Chef and Puppet

Chef and Puppet are automation platforms for server/virtual environments. They allow your team to define the requirements of an environment and automatically set up and configure new machines with little or no user intervention. They can be used to document the target environments and manage any changes that need to be made. Both of these tools exist in the same space and are outstanding in their own right; your team will very likely choose either Chef or Puppet, not both.

Docker

Docker is a newer (as of 2013) entrant into the DevOps space that treats the environment, application code and configuration as a single artifact (a container) to be deployed. It is being used for everything from automating packaging and deployments to building PaaS environments.

Footnotes

1. Behr, K., Kim, G., and Spafford, G. The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps. Information Technology Process Institute.


Installing Tomcat 7 with Apache and mod_jk

Installing Java, Tomcat 7, Apache 2 and mod_jk in Ubuntu 12.04

Why would you need to install Apache and mod_jk to run a Java web application on your server? Well, you don’t have to, but there are a lot of reasons why you would want to: removing :8080 from all the requests, rewriting URLs, handling static assets, performance (load balancing/clustering/CDN/etc.) and a whole host of other reasons. This guide assumes you have a brand new server up and running with nothing really installed yet.

The following is an extremely vanilla setup that I would consider my normal baseline configuration, the bare minimum to get Tomcat and Apache talking successfully. It mainly allows all your web requests to be funneled through port 80 (instead of the normal Tomcat 8080) and lets you serve static assets that live outside of your application’s WAR file (think uploaded user images and such).

Install Java

First things first, without Java installed on your server, none of this works.

Go to the Oracle Java Download page and choose the proper JDK for your environment. Remember: A JRE is not enough for running Tomcat or other Java application/container servers.

Follow the instructions for unpacking and installing the JDK to your file system. This example assumes you installed the JDK to /usr/local
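
For a tarball download that usually amounts to something like the following; the exact archive and directory names depend on the JDK version you grabbed:

# Unpack the JDK and move it under /usr/local
tar xzf jdk-7u25-linux-x64.tar.gz
sudo mv jdk1.7.0_25 /usr/local/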

Install Tomcat

Download the latest Tomcat version; for this post we’ll be using Tomcat 7.

Unpack Tomcat and move the contents to where you want them. For this post, we’ll be using /usr/share/tomcat7
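
Assuming you grabbed a tar.gz from an Apache mirror (the version number will almost certainly differ by the time you read this):

# Download, unpack and move Tomcat into place
wget http://archive.apache.org/dist/tomcat/tomcat-7/v7.0.42/bin/apache-tomcat-7.0.42.tar.gz
tar xzf apache-tomcat-7.0.42.tar.gz
sudo mv apache-tomcat-7.0.42 /usr/share/tomcat7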

Install Apache and mod_jk

For Ubuntu, we’ll simply use the apt package manager to install these:
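
Something along these lines (package names as of Ubuntu 12.04):

sudo apt-get update
sudo apt-get install apache2 libapache2-mod-jk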

Configure the default site to use the default AJP worker. For this example, we will be configuring all requests to go to Tomcat

Edit /etc/apache2/sites-available/default:
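
A minimal version might look like this; ajp13_worker is the worker name defined in the packaged /etc/libapache2-mod-jk/workers.properties, so double-check yours if it has been customized:

<VirtualHost *:80>
    ServerAdmin webmaster@localhost

    # Hand every request off to Tomcat via the default AJP worker
    JkMount /* ajp13_worker

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Tomcat’s AJP connector listens on port 8009 out of the box, so no changes to server.xml should be needed for this to work. Restart Apache afterwards with sudo service apache2 restart.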

Server Specific Configurations

These steps are not entirely necessary and can vary from OS to OS depending on how servers in your environment are normally set up.

Update alternatives to point to your Java install:
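
Assuming the JDK landed in /usr/local/jdk1.7.0_25 (substitute your actual path):

sudo update-alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_25/bin/java 1
sudo update-alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_25/bin/javac 1
sudo update-alternatives --set java /usr/local/jdk1.7.0_25/bin/java
sudo update-alternatives --set javac /usr/local/jdk1.7.0_25/bin/javac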

Edit /etc/environment and add the following:
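
Again assuming the paths used above:

JAVA_HOME="/usr/local/jdk1.7.0_25"
TOMCAT_HOME="/usr/share/tomcat7"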

Create a tomcat user for running the server and assign ownership of $TOMCAT_HOME (this is necessary for the Tomcat init script in the next step):
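
For example (the user name and ownership model here are just one reasonable choice):

# Create a locked-down system user and give it the Tomcat directory
sudo useradd -r -s /bin/false tomcat
sudo chown -R tomcat:tomcat /usr/share/tomcat7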

Install this tomcat7 init.d startup script that I created for better startup/shutdown of your server to /etc/init.d/tomcat7.


Simulating Slow Connections in OS X or Linux

Simulating Slow and Laggy Connections

 

Do you want to simulate how it feels to load your site from a mobile connection (if it’s AT&T, just turn off your network for an accurate simulation. I kid, don’t sue. But seriously, AT&T, figure it out.) or from a laggy network? On OS X (or FreeBSD; on Linux you’d reach for tc instead), you’ve got everything you need already installed: ipfw (IP firewall and traffic shaper control).

Create a Pipe

Configure a pipe with the appropriate bandwidth (I’ve also added a 200 ms response delay in this example).
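
For example, to approximate a sluggish connection (tweak the bandwidth and delay to taste):

# Create pipe 1: roughly 256 Kbit/s of bandwidth with 200 ms of added latency
sudo ipfw pipe 1 config bw 256Kbit/s delay 200ms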

Attach the Pipe

In this example we’re going to use port 80, but you can also use port 443 or any other port that you may be testing communication on. Additionally, you can attach the pipe to multiple ports.
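
Something like this; the rule numbers (100 and 200) are arbitrary, they just make cleanup easier:

# Push HTTP traffic in both directions through the pipe
sudo ipfw add 100 pipe 1 tcp from any to any dst-port 80
sudo ipfw add 200 pipe 1 tcp from any to any src-port 80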

That’s it!

Wait a second, you say! Now your network connection is completely throttled and everything is running terribly! You need to delete both ipfw entries and the pipe that were previously created.

Here’s how you undo what you’ve just done:
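
Assuming the rule and pipe numbers used above:

# Remove the two rules, then tear down the pipe itself
sudo ipfw delete 100
sudo ipfw delete 200
sudo ipfw pipe delete 1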


Subversion: Merging a Branch to Trunk

Merging a Subversion Branch Back to Trunk

So, you’ve successfully branched code and have a great new feature set you need to get back to trunk, but how? Easy!

Get the first revision number of the branch you want to merge

If you already have the branch checked out, cd to that directory and svn update it.
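
For example (the working copy path is a placeholder); svn log with --stop-on-copy stops at the revision where the branch was created:

cd ~/projects/my-feature-branch
svn update
svn log --stop-on-copy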

Alternatively, if you don’t have (or don’t want to have) the branch checked out locally, you can do it remotely.
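
Just point svn log at the branch URL instead (the URL is a placeholder):

svn log --stop-on-copy http://svn.example.com/myproject/branches/my-feature-branch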

The last entry returned contains the first revision of that branch; it’ll look something like this:
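
The revision number, author and message here are made up, but the shape of the output is what matters:

------------------------------------------------------------------------
r1234 | jsmith | 2013-05-01 09:12:45 -0500 (Wed, 01 May 2013) | 1 line

Creating a branch for the new feature work
------------------------------------------------------------------------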

Check out trunk
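
If you don’t have trunk locally yet (the repository URL is a placeholder):

svn checkout http://svn.example.com/myproject/trunk myproject-trunk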

Alternatively, you can just update trunk if you have it checked out

Change to the trunk working directory and do the following:
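
Continuing with the placeholder paths from above:

cd myproject-trunk
svn update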

You should see output like this:
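
The revision number here is made up; yours will differ:

At revision 5678.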

This will update your copy of trunk to the most recent version and tell you the revision you are at. Make note of that number as well (it should say “At revision YYYY”, where YYYY is the second number you’ll need to remember).

Perform the merge

Now you’ve got the details you need: the revision at which the branch started and the most current revision.

Change directory to the trunk
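
Using the two revision numbers gathered above (XXXX is where the branch started, YYYY is the current trunk revision) and a placeholder branch URL:

cd myproject-trunk
svn merge -r XXXX:YYYY http://svn.example.com/myproject/branches/my-feature-branch .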

Check in the results:
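
For example:

svn commit -m "Merged my-feature-branch (rXXXX:YYYY) back into trunk"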

Final thoughts

Described above is how to merge a branch back to trunk; there are, however, many things I didn’t cover. Before you perform this operation you should review all the changes and potential conflicts, especially if trunk was also receiving active development.

In a team environment, this would be a great time for a peer/mentor code review as well as an approach review for the bugs/features covered in the branch.


Deploying New Relic on AWS Elastic Beanstalk with Tomcat

It’s relatively straightforward, and New Relic’s docs show you how to do it just fine for Tomcat 6. First, go look over New Relic’s documentation; I’m not going to repeat all the details they show there.
The only difference is the following:
In the Container JVM Command Line Options field, instead of the -javaagent string New Relic recommends for Tomcat 6, put in this:
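
Assuming the New Relic agent was unzipped into the Tomcat 7 directory on the instance; adjust the path to wherever newrelic.jar actually lives in your setup:

-javaagent:/usr/share/tomcat7/newrelic/newrelic.jar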

Setting the Logging Environment Name

You can also set the logging environment name on the same line. After the -javaagent string you just entered, put in this:
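
The newrelic.environment system property selects which environment block from newrelic.yml the agent uses; ‘staging’ here is just an example name:

-Dnewrelic.environment=staging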

If you read through the newrelic.yml file, you can see all the different options that switching environments can give to you besides just a different name in the New Relic dashboard.


Encrypting a tar or gz (gzip) File with OpenSSL

When you have sensitive data that you need to transmit and want an easy way to encrypt and decrypt it, some standard tools can get the job done!

I recently had an issue where a client was using OS X laptops running an Admin panel written in PHP on MAMP in an environment that may or may not have an internet connection. The problem was that they needed to be able to dump their database data into an encrypted file so that they could send the data off when they get a connection (via email, upload, who knows). My initial response was to use gpg to encrypt the file and hand out the keys to the people who would eventually be reading the data.

Turns out, this was going to be a nightmare and I needed something ‘easier’. How about encrypting a tar file with OpenSSL? Bingo! This solution uses utilities that are already on the machine, and no installations need to be performed. The reason this was such a big deal is that the laptops running this software will be all over the world, in the hands of people with various levels of technical acumen, and it would be a nightmare to make sure every single laptop had been updated correctly.

Encrypting Your File

tar and gzip the file, then encrypt it using des3 and a secret key.
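
Something like this; the directory name and passphrase are placeholders, and if you drop the -k flag OpenSSL will prompt for the passphrase instead of leaving it in your shell history:

tar czf - sensitive-data/ | openssl des3 -salt -k "MySecretPassphrase" -out sensitive-data.tar.gz.des3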

That simple!

Decrypting Your File

Essentially, just call all the commands in the reverse order.
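
Using the same placeholder names:

openssl des3 -d -k "MySecretPassphrase" -in sensitive-data.tar.gz.des3 | tar xzf -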

Download the Utility Scripts

Download them!


Securing Passwords, One Way Hashes, PBKDF2, PHP and You

Plain text passwords and simple one-way hashes are not enough to protect your users. You need salt, pepper, and peanut butter. Am I crazy, you ask? Maybe, but read on.

It happens to big huge companies (LinkedIn, Last.fm, eHarmony), the little guys, and everything in between. Databases get breached and passwords get hacked. It always surprises me when I hear about how many thousands of users had the password “password”, or that the target’s password hashes were cracked in a matter of hours or days, or worse, that the passwords were stored in plain text. At this point, it is so easy to make passwords pretty secure with just basic knowledge of cryptography and hashing. As a matter of fact, as a competent developer, you don’t need to know much at all about the hows and whys of crypto to secure your users’ data.

First, do not think you are safe because you run your passwords through MD5 or SHA-256. MD5 has long been broken, and SHA-256 used on its own is barely better than storing passwords in plain text because it is so fast to compute. Cryptographic hash functions are NOT password hash functions!

One Way Hashing

A one way hash performs a bunch of mathematical operations that transform input into a (mostly) unique output, called a digest. Because these operations are one way, you cannot ‘decrypt’ the output- you can’t turn a digest into the original input. Good cryptographic hash functions should not generate digests that are the same for different input. Additionally, when the input is changed, just slightly, the resulting digest should be very different.

A typical use case would be when a user signs up for a website and creates a password. The conscientious developer takes the plain text password, runs it through a hashing function (let’s say, MD5) and stores the result in the database. When the user goes to log in the next time they enter their password and the authentication mechanism runs it through MD5 and compares the result against what is stored in the database.

That sounds pretty safe, right? Wrong. It’s akin to locking the door and leaving the window open. If the database was stolen it might make it harder to infer anything about the passwords just by looking at the data, but it doesn’t really make it any harder to guess or “crack” the password.

Password Hash Functions

… are not the same as cryptographic hash functions

Just running a cryptographic hash function over a plain text password doesn’t defend it very well. There are a number of major problems and threats that are not being avoided. The two biggest are speed and recognizability of hashes.

Hashing Speed

Cryptographic hash functions are used for lots of things, most of which have to do with fingerprinting and verifying data. They are designed to be very fast so that the surrounding process isn’t slowed down. This presents a big problem for password hashing: speed. The faster a function creates a digest, the more frequently an attacker can guess a password and compare the output. MD5, for instance, is so fast that on commodity GPU hardware you could guess over 5 billion times per second. Think about it for a second: do you need that speed to let your users log in? When it takes 15 seconds to enter your username and password, a few seconds to log in, and a few seconds of perceived page load time, will they notice the difference between 0.000001 seconds and 1 second for the authentication mechanism? The answer is no, not to enough of a degree to degrade their experience. For password hashing, slower is good.

Recognizability of Hashes

What happens when 10,000 people all use “password” as their password? Their hashes are all the same! Crack one account and you’ve automatically cracked everyone else with the same hash. If an attacker has a huge, precomputed list of hashes (called a rainbow table), they can scan your database looking for any hashes that match and crack those accounts without guessing a single password. They could have a huge percentage of your system’s passwords before ever once making a guess.

Fortunately though, there are a few relatively easy things you can do to make their life harder. You don’t need to do anything heroic and the code isn’t even that tricky. Heck, most of it already exists and is free to use.

Salting

Talk about low-hanging fruit. All you have to do is add some random characters to the password (and keep track of them). A salt is a random sequence of data that is added to the hash function input or to the password string itself. Say you generated a salt “12345” and had a password “password”; you could put them together as “password12345” and run that through your hash function to produce a digest that wouldn’t be so easily given up. Every password should have its own salt, and the salt should be at least 32 characters to make the digest harder to guess.

This is a basic salt generation algorithm. Do NOT use this function for generating salts where you are trying to protect details like credit card numbers, or even email addresses for that matter. It’s a pretty poor implementation, really.
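
A deliberately simple version looks something like this; mt_rand is not cryptographically secure, which is exactly why the warning above applies:

<?php
// Quick-and-dirty salt generator: a random alphanumeric string of $length characters.
// Good enough to demonstrate the concept, NOT for protecting truly sensitive data.
function generate_salt($length = 32) {
    $characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789';
    $salt = '';
    for ($i = 0; $i < $length; $i++) {
        $salt .= $characters[mt_rand(0, strlen($characters) - 1)];
    }
    return $salt;
}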

When we create a user password we’ll generate a salt, add it to the password string, hash the password to get a digest, then store the salt and digest in the database. To log the user in subsequently we could use functions like the following:
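
Roughly like this; the database plumbing is left out, so assume the stored salt and digest are looked up by username elsewhere:

<?php
// Values to store when a user registers (or changes their password)
function create_password_record($password) {
    $salt   = generate_salt();
    $digest = hash('sha256', $password . $salt);
    return array('salt' => $salt, 'digest' => $digest);
}

// Check a login attempt against the stored salt and digest
function verify_password($password, $storedSalt, $storedDigest) {
    return hash('sha256', $password . $storedSalt) === $storedDigest;
}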

Password Stretching

Stretching is creating a digest of a digest (of a digest of a digest … of a digest … you get it). If you create a digest of a password, then create a digest of that X number of times, you can no longer simply create a digest (from a rainbow table or otherwise) and compare it directly to the digest that is stored in the database. To compare passwords you have to run the exact same number of hashing iterations. This is useful on multiple fronts: it slows things down, and (in conjunction with salted passwords) your hashes no longer look the same as everyone else’s. It stands to reason that if hashing a password once takes X amount of time, hashing it twice will take approximately 2X. You’ve just cut in half the number of guesses an attacker can make per second. Congratulations! A good system takes so long to process a single digest that guessing a password using brute force will take more than a lifetime.

Let’s modify our password hashing function:
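
A sketch, with the iteration count as a tunable knob (the function names continue from the examples above):

<?php
// Hash a password with its salt, then stretch the digest $iterations times.
// The salt is mixed back in on every round ("re-salting" each hash).
function stretch_password($password, $salt, $iterations = 10000) {
    $digest = hash('sha256', $password . $salt);
    for ($i = 0; $i < $iterations; $i++) {
        $digest = hash('sha256', $digest . $salt);
    }
    return $digest;
}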

Notice that I have re-salted every hash to add extra randomization to the digest… just another wrinkle to throw at an attacker.

Pepper

Additionally, you can have an application-wide salt, called a pepper. Think of it as a salt for the salt, except this salt is unique to the application, server, environment, or database.
You could use it like this: hash('sha256', $pepper . $password . $salt);

Adaptive Key Derivation

Adaptive key derivation functions generate digests from passwords while applying salts and stretching. They implement many more wrinkles and are tested against attack vectors you may never think of, which is the important part: they are tested against attack vectors. Rolling your own cryptographic functions introduces a lot of unnecessary exposure and takes more time than using generally accepted libraries, implementations and functions. I’m going to focus on the one I know best, PBKDF2. There are others, such as bcrypt and scrypt.

Peanut Butter Keeps Dogs Friendly Too

PBKDF2 (Password-Based Key Derivation Function 2) is probably the most widely used derivation function. It is a wrapper around a hash function, e.g. SHA-1 or RIPEMD. For each input it applies a salt and iterates the hash many times in such a way that not much entropy (length and randomness) is lost. Most importantly, it is done in such a way that it is SLOW to generate a single digest. The US government and NSA use it for generating strong encryption keys.

Adaptive key derivation is a great first step, but remember, this is one tiny piece of securing user data.

Below is a very basic class I created that can be used for generating salts and digests in a variety of ways. You can download it here. This file will be updated regularly, so stay in touch!
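
At its core it boils down to a standard PBKDF2 routine built on hash_hmac, something along these lines (PHP 5.5’s built-in hash_pbkdf2 does the same job):

<?php
// PBKDF2 (RFC 2898) using HMAC; returns a hex digest derived from $key_length bytes.
function pbkdf2($algorithm, $password, $salt, $iterations, $key_length) {
    $hash_length = strlen(hash($algorithm, '', true));
    $block_count = ceil($key_length / $hash_length);
    $output = '';
    for ($i = 1; $i <= $block_count; $i++) {
        // Each block starts from the salt plus the big-endian block index
        $last = $salt . pack('N', $i);
        $last = $xorsum = hash_hmac($algorithm, $last, $password, true);
        // Iterate and XOR the intermediate results together
        for ($j = 1; $j < $iterations; $j++) {
            $xorsum ^= ($last = hash_hmac($algorithm, $last, $password, true));
        }
        $output .= $xorsum;
    }
    return bin2hex(substr($output, 0, $key_length));
}

// Example: a 32-byte digest with 20,000 rounds of SHA-256
// $digest = pbkdf2('sha256', $password, generate_salt(), 20000, 32);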


Measuring Download Speed from Linux Command Line

I recently needed to test the network speed of the ISP from my Ubuntu 10.04 LTS server, and I was trying to think of a better way to do it than going out to a Linux distro’s website and downloading an ISO from them. I stumbled across this post on Stack Overflow that had a URL to a speedtest.net test file, and my speedtest scripts were born. I created two scripts: one utilizing wget and one utilizing curl. A lot of machines don’t come with curl by default, but it has a lot more output than wget does while downloading.

 

What Do They Do?

The scripts utilize wget or curl to download the speedtest.net 500M test file, and you can view the speed results in real time. This is an entirely unscientific method of testing your speed, but much better than, say, going to Ubuntu and downloading their latest ISO via wget. Finally, the output is sent to /dev/null, which means everything downloaded is simply thrown away (no cleanup needed).

 

The Code

Download both scripts here

speedtest-wget.sh
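
Something along these lines; the test-file URL is a placeholder, so substitute any large file hosted on a fast connection:

#!/bin/bash
# Download a big test file and throw it away; wget reports the average speed as it goes
TEST_URL="http://speedtest.example.com/test500.zip"   # placeholder test file
wget -O /dev/null "$TEST_URL"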

speedtest-curl.sh
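
Same idea with curl, which shows a live progress meter with current and average transfer speeds:

#!/bin/bash
# curl variant of the same test
TEST_URL="http://speedtest.example.com/test500.zip"   # placeholder test file
curl -o /dev/null "$TEST_URL"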

