Minify and Compress all your JavaScript files into One, on the Fly

As my applications have grown in complexity, I’ve followed a path probably quite similar to many of you with respect to .js file maintenance. In the beginning I had one js file to include in the site’s/app’s header, containing just a few basic js functions used across the site.

As my JavaScript codebase grew, I added more .js files, trying to specialize the files and even went through the trouble of including specific files on some pages and not on others. Then I adopted a framework (jQuery in my case) and that is just one more script tag.

At some point I became aware of the minification trend for JS and CSS files, and began looking at how much bandwidth I could save per page load. Using an online minifier, I minified each .js file after every modification. This became unmanageable very quickly. I also had to consider the impact of multiple file loads on the browser and how that affects performance.

I decided to find a way to automate this process. I stumbled upon the JSMin PHP class, an implementation of Douglas Crockford’s JSMin. My solution would use JSMin, but with a wrapper class that would read each .js file, minify (and compress if possible), and then output the result into a single file. More helpful ideas were found in this blog article at verens.com.

What I came up with accomplishes the following:

– Given an array of .js filenames, reads and minifies each, writes to a single new file.

– Reads the file modification date of each file; if none is newer than the auto-generated output file, the process is skipped.

This results in an on-the-fly minifier that only runs when JavaScript code has been modified in any one of the original files. This makes code deployment simpler: just sync updated .js files to the appropriate directory.

I’ve encountered a couple of negatives which are easily mitigated. First, in production the process is slow, sometimes 15 seconds. The first user to hit the site after a new .js file has been uploaded is going to think the server is down. Remedy this by uploading at off-peak times and immediately visiting the site yourself, sparing an unwitting user the 15-second wait. Second, I’ve occasionally experienced some kind of file collision which resulted in the minification running on every page load (think 15-second page loads for every page, every time), so when syncing from test to prod I will typically delete the generated file from test first, so prod can then generate its own clean file.

So here’s the script:

/**
 * Wrapper class for JSMin javascript minification script
 *
 * based on http://verens.com/archives/2008/05/20/efficient-js-minification-using-php/
 *
 * @author Chris Renner
 */
 
include('JSMin.php');
 
class App_Minifier {
 
    /**
     * Constructor not implemented
     */
    public function __construct() {}
 
    /**
     * Concatenate and minify multiple js files and return filename/path to the merged file
     * @param string $source_dir
     * @param string $cache_dir
     * @param array $scripts
     * @return string
     */
    public static function fetch($source_dir, $cache_dir, $scripts) {
 
        $cache_file = self::get_filename($scripts);
 
        $result = self::compare_files($source_dir, $cache_dir, $scripts, $cache_file);
 
        if(!$result) {
 
            $contents = '';
 
            foreach($scripts as $file) {
 
                $contents .= file_get_contents($source_dir . '/' . $file . '.js');
 
            }
 
            // gzip feature turned off due to performance issues on production 6-9-10
            $code = "";
 
            $minified = JSMin::minify($contents);
 
            $fp = fopen($cache_dir . '/' . $cache_file, "w");
            if($fp) {
                fwrite($fp, $minified);
                fclose($fp);
            }
 
        }
 
        return $cache_dir . '/' . $cache_file;
 
    }
 
    /**
     * input array of js file names
     * converts array into string and returns hash of the string
     * as the new filename for the minified js file
     * @param array $scripts
     * @return string
     */
    public static function get_filename($scripts) {
 
        $filename = md5(implode('_', $scripts)) . '.js';
 
        return $filename;
 
    }
 
    /**
     * Compares the modification date of each source file against
     * the cached hash file, if it exists. Returns true if the hash
     * file is newer than every source file; returns false if it is
     * older or if the hash file doesn't exist.
     * @param string $source_dir
     * @param string $cache_dir
     * @param array $scripts
     * @param string $cache_file
     * @return boolean
     */
    public static function compare_files($source_dir, $cache_dir, $scripts, $cache_file) {
 
        if(!file_exists($cache_dir . '/' . $cache_file)) {
            return false;
        }
 
        $cache_modified = filemtime($cache_dir . '/' . $cache_file);
 
        foreach($scripts as $source_file) {
 
            $source_modified = filemtime($source_dir . '/' . $source_file . '.js');
 
            if($source_modified > $cache_modified) {
                return false;
            }
 
        }
 
        return true;
 
    }
 
}

And here’s how you would call it in your bootstrapping file, etc.

// create array of .js filenames to be minified
$scripts = array('jquery', 'jquery.colorbox', 'jquery.livequery', 'jquery.tipsy', 'jquery.validate', 'functions', 'menu', 'childtables', 'datepicker');
 
// call the fetch static method, supplying the source dir, target dir and the scripts array
$scriptfile = App_Minifier::fetch('scripts', 'temp', $scripts);
 
// put the result in a script tag in your html header
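For completeness, that last step might look like the sketch below. Note the leading slash and the assumption that the cache directory is reachable from the web root; how `temp` maps to a URL on your server is up to you:

```php
<?php
// $scriptfile would normally come back from App_Minifier::fetch();
// here it's a placeholder value purely for illustration.
$scriptfile = 'temp/abc123.js';

// Emit the single combined script tag for the page header.
echo '<script type="text/javascript" src="/' . $scriptfile . '"></script>';
```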

Yes I realize that a static class perhaps wasn’t the best choice, but it works and it keeps memory usage to a minimum. I’d probably write it differently today, and may yet refactor it to remove the static.

The output $scriptfile will be a .js filename, generated by hashing the concatenation of all the filenames in the scripts array. This permits different combinations of files to produce different output files, if that’s something you need.
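As a quick illustration (the bundle names here are hypothetical), the same hashing scheme used by get_filename() gives two different bundles two different cache filenames:

```php
<?php
// Same scheme as App_Minifier::get_filename(): hash the joined filenames.
$bundle_a = array('jquery', 'functions', 'menu');
$bundle_b = array('jquery', 'functions', 'datepicker');

$file_a = md5(implode('_', $bundle_a)) . '.js';
$file_b = md5(implode('_', $bundle_b)) . '.js';

// Two distinct 32-character hex names, so the bundles never collide.
echo $file_a . "\n" . $file_b . "\n";
```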

Also note my comment in fetch() about the gzip feature not being used. This caused problems in my particular environment so I’m not using it, but it may work for some of you and I’d be eager to hear from you if it does. To enable it, change the JSMin::minify() call in fetch() from

$minified  = JSMin::minify($contents);

to

$minified  = JSMin::minify($code . $contents);

In my specific example I was loading as many as nine different .js files per page load, totaling 250kb. Now all that JavaScript loads as a single file measuring 147kb.

Oh, and don’t forget to download JSMin.php from github.

Sunsetting IE6 on your own

Anyone who does front-end web development or web design hates Internet Explorer 6. Why? First released in 2001, IE6 is dog slow, very buggy, and essentially non-compliant with web standards.

As best I can tell, these issues fall into two primary categories: JavaScript bugs/behaviors, and layout/CSS behaviors or missing features.

So, developers learn to code around these things. In CSS this means a lot of hacks and HTML conditional comments to load custom CSS, JS, and HTC files to fix many CSS-related deficiencies, such as the lack of 24-bit PNG alpha transparency. As to JavaScript, I’ve found that jQuery cures the need for most IE6-specific code, but we continue to see IE6-specific bugs/glitches that are difficult or impossible to isolate, duplicate, or fix.

Unfortunately for enterprise developers, our users tend to be behind the curve on browser adoption. Much of our target audience is still on Windows XP, and depending on IT policies and budget constraints, abandonment of IE6 in the enterprise has lagged behind the consumer segment.

In the enterprise environment I work in, my primary application sees about 200 visits per day across the institution.  About a year ago I began a crusade (no, that is not too strong of a word when the context is IE6) to purge IE6 from the realm of users accessing my web apps.  I had three goals in mind: 1) Reduce userland bugs/glitches and errors caused by IE6, 2) Improve the overall user experience via a superior browser, and 3) Reduce the amount of time I have to spend working around IE6’s craptasticness.

I have to give this caveat before continuing further: As my institution has begun deploying Win7 machines, our IE6 usage has plummeted like a brick.  Admittedly this is the biggest single factor in the reduction of IE6 users against my web apps.  However, there is something to be said for encouraging your users to upgrade and warning them that their experience may be poor with IE6.

Step 1 is to determine the user’s browser when they hit your site.  On my web app’s login page, I use a simple conditional similar to the below example to display a formatted message to the user:

// eregi() is deprecated; preg_match() with the /i modifier does the same case-insensitive match
if(preg_match('/MSIE 6\.0/i', $_SERVER['HTTP_USER_AGENT'])) {
    echo 'You are accessing this site using Internet Explorer 6.0.  We make every reasonable attempt to ensure this site is compatible with IE6, but due to the age and lack of web standards compliance in IE6, you may experience some errors and bugs that are beyond our control.  For best performance, we STRONGLY recommend a more modern browser, such as <a href="http://www.mozilla.org/firefox">Mozilla Firefox</a>, <a href="http://www.apple.com/safari/download">Apple Safari</a>, or <a href="http://www.microsoft.com/windows/Internet-explorer/default.aspx">Microsoft Internet Explorer 8</a>, which are all available for free.  Firefox and Safari are recommended for the best overall experience and performance, though Internet Explorer 7 and 8 are also fully compatible. Less than 7.5% of traffic on this site is from users on IE6.  Therefore, at some point we will make a decision not to support IE6 any longer, as it takes considerable effort to maintain this backward compatibility. PLEASE UPGRADE YOUR BROWSER SOON!';
}

Wrap that message with some bold styling to grab the user’s attention. Then start watching your browser traffic with Google Analytics.

Once IE9 is out of beta, I’ll change the message to include it in the recommended browsers, but for now I want to funnel my users into a browser with some CSS3 support, because I’ve incorporated a lot of it into my app and I think it improves the user experience a great deal.

What I’ve seen in the past few months is a steady reduction in IE6 traffic from the mid 30% range this time last year, to 10% a couple of months ago, down to 7.1% this past week. I believe the biggest reduction has been deployment of Win7 desktops, but the incremental drive from 10% down to current levels seems to be due to the above user warning.

The other thing I’ve been doing is verbally encouraging use of Firefox any time the issue comes up. When I get a call or email about a bug, if I can’t replicate it in Safari then it is almost always a browser-specific issue, and when asked, the user nearly always turns out to be on IE6.

Once IE6 usage hits 5% I am going to change the warning message to say that IE6 is no longer supported. Of course I’ll continue to make accommodations, but I’m not going to go out of my way to make IE6 users comfortable in the application; it just doesn’t make sense any longer.

Smashing Mag: How to Support IE and Still be Cutting Edge

Smashing Magazine is more for the web “design” and photoshop crowd, but any developers working with GUI/front ends at all (which is most of us) will find tons of great information there.  Chris Blatnik says that it is the GUI that makes or breaks an app (after all, the users never see the code, no matter how great the developer thinks it is).

Their latest post is about supporting IE on your websites while still utilizing the latest web technologies, such as CSS3.

The payoff:

Remember that the purpose of this post is not to teach you how to hack IE or deal with its quirks or even how to achieve effects by resorting to JavaScript. Rather, it is to explain how we can design and build websites knowing that differences will arise between browsers.

You won’t see people rioting over the lack of rounded corners on Twitter or WordPress; they aren’t even upset by it, because those differences don’t fundamentally break the websites. People can still use the websites and have a good experience. In most cases, they won’t even notice it!

IE7.js a universal solution to IE6 craptastic-ness?

More thoughts on this later, but for now, just check out Dean Edwards’ IE7.js JavaScript library, which causes IE to behave like a standards-compliant browser. Eric Meyer has some great thoughts about IE7.js as well.  Note that both IE7.js and Eric’s blog post have been around a while, but this is still good stuff for anyone having to suffer with a large IE6 installed base (myself included).

PNG transparency fixes for IE6

If you’re a web developer, chances are you’ve grown to hate Internet Explorer 6 due to its inconsistencies with respect to CSS, JavaScript, and standards in general.  One additional annoyance is IE6’s lack of support for PNG transparency.

At my institution, we still have a very large installed base for IE6 (over 70% of our machines), so I cannot get around coding for IE6.  I’ve been able to avoid the issue in the past by just not using PNG images.  But at some point, GIFs won’t cut it, particularly in the realm of application icons (See Fam Fam Fam Silk icon set).

I have employed two separate approaches to address this problem.  The first is a JavaScript “PNG fix” that the browser will apply to each image on the page.  Essentially the script loops through every PNG, applying Microsoft’s proprietary “progid:DXImageTransform” filter to each so that the alpha channel renders correctly.

There are several “fixes” written to do this, all very similar in function.  The one I like the most is from Daniel McDonald at Project Atomic.  Why do I like it?  It’s simple (only 25 lines of code) and therefore quite fast.  If you have a lot of PNG files on your pages, you can sometimes see each file get transformed as the script slow-walks down the page.  Speed is of the utmost importance in any web application.  Just include the JS file and a “blank.gif” image, add a couple of lines to your page header, and you’re done.
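The “couple of lines” in the page header are typically wrapped in an IE6-only conditional comment, so other browsers never download the script. A sketch, where the filename and path are assumptions; use whatever the fix’s own instructions specify:

```html
<!-- Served only to IE 6 and below; every other browser treats this as a plain comment -->
<!--[if lte IE 6]>
<script type="text/javascript" src="/js/pngfix.js"></script>
<![endif]-->
```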

The second approach I’ve used to tackle PNG transparency issues in IE6 is really an outcome of the first.  The Project Atomic fix works great, unless your PNG is part of a CSS element such as a background image.

The real issue at hand is that IE6 doesn’t support 24-bit transparency.  It does, however, support 8-bit.  So why not change your 24-bit PNGs into 8-bit?  For an entire application, this could become a huge amount of work for very little value added, hence implementation of the Project Atomic fix. However, for those less common instances of PNGs as background images, I have followed Matthew Capewell’s tutorial to transform the 24-bit PNGs into IE6-compatible 8-bit images.

We’re stuck with IE6 for at least a few more years (until Microsoft finally stops providing XP to PC makers for new machines), so good web developers should be able to deal with IE6 rather than just ignore it.

Note: The Project Atomic fix is actually in use on this site, in case you were wondering.