Refactoring As You Go

The “Boy Scout rule” in programming states that we should:

Always check a module in cleaner than when you checked it out.

I follow this rule when making updates to existing codebases, and I use each update as an opportunity to refactor the section of the codebase I’m already modifying. Sometimes these refactors are small, amounting to a single function, or even a few lines within a function. Sometimes, however, I will refactor an entire file. In those cases, it’s helpful to follow a few rules:

  1. A refactor should not change the public API. This means that any functions or class methods that are exposed for use by other code should not change their method signature or return values.
  2. All tests that were previously passing should pass after the refactor without needing to modify the tests themselves (unless there was an error in the test that was exposed by the refactor, which happens more often than you’d think).
  3. If you’re taking the time to do a refactor, ensure that whatever you touch is brought up to the latest coding standards, including adding or modifying docblocks.

One trick I use when refactoring an entire file is to add a line to mark my place, since I tend to move method definitions around to group them by visibility (in classes) and sort them alphabetically (because I’m a pedant). Most IDEs will highlight TODO lines in a different, brighter color, so I use the following to mark my place:
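The marker itself is nothing special; any comment your IDE highlights will do. Something along these lines (the exact wording is unimportant, and shown here in C-style comment syntax):

```javascript
// TODO: <<< refactored up to here — everything below still needs work >>>
```

Everything above the marker has been refactored; once the marker reaches the bottom of the file, the refactor is done.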

It’s also useful to refactor one method at a time, and run tests after each, to catch any breaking changes right away.

Happy refactoring!

Risks of Using NPM for Front-End Packages

A few years ago, there was a big push to use npm instead of Bower as the preferred package manager for JavaScript packages. The thought was, “it’s all JavaScript, why not put it all in the same place?” What followed was much pooh-poohing of Bower, and many front-end devs jumped on the npm bandwagon. At the time, I was wary of such a move, since npm is the Node package manager, and Bower is the web package manager. However, for the most part, devs have been able to use npm to manage JavaScript dependencies for Node, browser, and isomorphic applications without much difficulty.

Until recently.

Node 4 LTS reached end-of-support in April of this year, which opened the floodgates to packages incorporating more ES6 features into their npm packages destined for Node. I bumped up against this recently with camelcase, whose latest update dropped support for Node 4 and moved to ES6 arrow functions. This was problematic in my case, because we were using camelcase on a web project, and Sindre Sorhus doesn’t believe in transpiling code to ES5 before publishing to npm. His rationale for this is sound: he’s publishing a Node module to npm with explicit version requirements listed in the package, and there’s no need to transpile the package to work with the listed Node versions. The problem arises when developers use that package in a web context, for which it was not designed, and in which it will not run in any browser that cannot parse ES6 unless it is transpiled first.

Many (most?) Babel configurations ignore the node_modules directory, because a) most of what has historically been loaded from npm has not required transpilation in order to work in the browser and b) project standards differ, so if you run node_modules through Babel and, for example, a module fails a strict standards check, it can fail your build. Plus, transpiling everything in node_modules is expensive, and slows down build and deployment tasks. It is therefore up to the individual developer to know which modules require transpilation, and to whitelist them in the Babel configuration.
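One common way to do that whitelisting, sketched here under the assumption of a webpack + babel-loader setup (query-string stands in for any other ES6-only dependency):

```javascript
// Sketch of a webpack.config.js rule, assuming webpack + babel-loader.
// The negative lookahead excludes node_modules EXCEPT the named
// ES6-only packages, so those still pass through Babel.
const transpileWhitelist = /node_modules\/(?!(camelcase|query-string)\/)/;

const babelRule = {
  test: /\.js$/,
  exclude: transpileWhitelist,
  use: {
    loader: 'babel-loader',
    options: { presets: ['@babel/preset-env'] },
  },
};

// In webpack.config.js: module.exports = { module: { rules: [babelRule] } };
```

The regex-with-lookahead approach keeps the “ignore node_modules” default while carving out exceptions, so you only pay the transpilation cost for the packages that actually need it.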

This problem also exists in the other direction. Take, for example, the whatwg-fetch package, which is a polyfill for window.fetch() that only works in the browser, a fact which is noted about a quarter of the way down the project’s README:

This project doesn’t work under Node.js environments. It’s meant for web browsers only. You should ensure that your application doesn’t try to package and run this on the server.

(For those interested in a polyfill that works in both the Node and browser contexts, you should check out isomorphic-fetch.)

With the sunset of Node 4, and what I am sure will be an increase in the number of npm packages shipping ES6 code that Node 6 supports but Node 4 did not, I predict that this problem is going to get worse. At this point, it’s unlikely that a proposal to split Node packages from browser packages would gain much traction among JavaScript developers, especially since so many packages can be used in isomorphic contexts. Therefore, I would advocate that npm, at a minimum, add a feature to tag modules as “intended for Node,” “intended for the browser,” or “intended for both (isomorphic).” This idea could be further extended by making better use of the browser property in package.json, which indicates that a module is intended specifically for the browser, and by providing warnings when including Node modules that are not ready for the browser out of the box.
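For reference, the browser property is set in package.json; a module targeting both contexts might declare separate entry points, along these lines (names and paths hypothetical):

```json
{
  "name": "my-isomorphic-lib",
  "main": "lib/node-entry.js",
  "browser": "lib/browser-entry.js"
}
```

Bundlers like webpack and Browserify already honor this field when resolving imports, so it is a natural place to hang “browser-ready or not” metadata.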

Until and unless tooling catches up, I would recommend performing IE11 testing after every feature build to catch these issues early. Including untranspiled ES6 in a minified JS bundle yields an extremely unhelpful “syntax error on line 1, column 1623452” error in IE, which does little to point to the specific package that is causing the problem.

Using Jest to Validate JSON Data Shape

I’ve worked on a few projects now that involve storing data in JSON files. The projects were small enough in scope or were slated for inclusion in a larger application, and we couldn’t justify the need for an external data source, so we bundled the data as a JSON file.

However, this led to a challenge—typically, when dealing with an external data store, there are methods in place to validate and enforce data shape, which don’t exist in a freeform JSON file. So, I decided to use Jest (which we were using for writing unit and integration tests already) to test the shape of the data in the JSON file.

Here is an abbreviated example of what I’m talking about, which could easily be extended and modified for different data structures:
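A sketch along those lines, assuming a hypothetical products.json holding an array of id/name/price records (the data is inlined here for brevity; in practice it would be loaded with require('./products.json')):

```javascript
// data.test.js — a sketch; field names are illustrative.
// In a real project: const products = require('./products.json');
const products = [
  { id: 1, name: 'Widget', price: 9.99 },
  { id: 2, name: 'Gadget', price: 19.99 },
];

// A reusable shape check, so the rule lives in one place.
function hasExpectedShape(record) {
  return (
    typeof record.id === 'number' &&
    typeof record.name === 'string' &&
    typeof record.price === 'number'
  );
}

// Guarded so this sketch can also be executed directly with Node,
// outside of a Jest runner.
if (typeof describe === 'function') {
  describe('products.json data shape', () => {
    it('is a non-empty array', () => {
      expect(Array.isArray(products)).toBe(true);
      expect(products.length).toBeGreaterThan(0);
    });

    it('only contains records with the expected shape', () => {
      products.forEach((record) => {
        expect(hasExpectedShape(record)).toBe(true);
      });
    });
  });
}
```

Because the check runs with the rest of the test suite, any change to the JSON file that breaks the expected shape fails the build, just as a schema violation would with a real data store.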


Batch Converting Excel Files to CSV Using LibreOffice

I use LibreOffice Calc for working with CSV files. In my experience, it has the best support for CSV file formats, and it’s extremely fast. You can also use LibreOffice from the command line to run batch operations, such as converting Excel files to CSV format.

I use a Mac, so LibreOffice is installed in an app bundle. The command-line program that you will need to use is within the bundle. I’m assuming that LibreOffice is installed in the standard /Applications directory for this tutorial.

It’s important to note that you need to ensure that LibreOffice is not running when you execute this command. Otherwise, it will fail silently. This includes having the application open with no open windows. If you command+tab and see the application icon, you will need to tab to it and quit it fully.
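A sketch of the conversion command, assuming the standard install location and Excel files sitting on your desktop:

```shell
# Create the output directory, then convert every .xlsx on the
# desktop to CSV using LibreOffice's headless mode.
mkdir -p ~/Desktop/csv
/Applications/LibreOffice.app/Contents/MacOS/soffice \
  --headless \
  --convert-to csv \
  --outdir ~/Desktop/csv \
  ~/Desktop/*.xlsx
```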

The above command will convert all “.xlsx” files on your desktop to CSV files and will place them in a “csv” directory on your desktop. This approach is much easier than opening each one and doing an export, and can be used in test scripts when programmatically checking output.

Running WordPress PHPUnit Tests in PhpStorm

Update 2019-01-02: Added details on configuring the include_path to point to PHPUnit.

I use a virtual machine for doing local development (using Vagrant) and PhpStorm as my IDE. I write unit tests in PHPUnit and want to be able to run those tests after making changes to code. The problem is that the tests need to be run in the virtual machine, so setup isn’t as straightforward as just pointing PhpStorm to my test configuration. It’s not terribly difficult though, and the rewards are well worth it.

1. Add a Remote CLI Interpreter

In order for the command-line PHPUnit process to run inside of your Vagrant box and send results to PhpStorm, you have to give PhpStorm some information on how to connect to the Vagrant box to run commands.

  1. Ensure your Vagrant box is running.
  2. Go to Preferences > Languages & Frameworks > PHP.
  3. Click the three dots next to CLI Interpreter to open the CLI Interpreters dialog.
  4. Click the plus icon to add a new CLI interpreter.
  5. Set the name to whatever you want. I usually do not check the box for Visible only for this project because I use the same interpreter across multiple projects housed on the same Vagrant box.
  6. Under the Remote section, select Vagrant.
  7. Set the Vagrant Instance Folder to the directory that contains your Vagrantfile.
  8. Set the Vagrant Host URL to ssh://vagrant@
  9. Map the PHP executable to the path of the php executable on the Vagrant box. In my case, this was /usr/bin/php.
  10. You may need to add PHPUnit explicitly to the include_path. In my case, I needed to click on the folder next to Configuration Options, add a configuration directive for include_path, and set the value to .:/usr/local/lib/php:/usr/local/src/composer/vendor/phpunit/phpunit.
  11. Click OK and PhpStorm should validate your settings for you.
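If you’re unsure of the SSH details or paths for the steps above, Vagrant can report the connection settings for a running box:

```shell
# Prints the HostName, Port, User, and IdentityFile for the box,
# which map onto PhpStorm's remote interpreter fields.
vagrant ssh-config
```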

2. Add a Run/Debug Configuration for PHPUnit

In order to run PHPUnit tests with a click of a button in PhpStorm, you have to give PhpStorm some information about what you want to run and where you want to run it. This is accomplished by creating a run/debug configuration, which can be done through the Run > Edit Configurations menu item.

  1. Click the plus icon to add a new configuration and select PHPUnit as the type.
  2. Give the configuration a name. I usually name mine Unit Tests or some such.
  3. Set the Test scope to Defined in the configuration file.
  4. Check the box for Use alternative configuration file and point it at the phpunit.xml file in your project directory.
  5. Click the three dots next to Environment Variables. If there are any environment variables defined in your bash profile in Vagrant, they won’t be available as part of the run configuration, unless you specify them here. On my system, I needed to set the following:
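The exact variables depend on your setup; for WordPress test suites they are commonly paths to the test library and WordPress core, along these lines (values are illustrative, not from my actual configuration):

```shell
# Example values — adjust to the paths on your own Vagrant box.
WP_TESTS_DIR=/srv/www/wordpress-develop/tests/phpunit
WP_CORE_DIR=/srv/www/wordpress-develop/src
```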
  6. Click Apply and then close the window.
  7. In the top right, you should now be able to select your unit tests configuration from the dropdown in the debug area, then click the play icon to start the test suite.

3. Running Specific Tests

I will often iterate heavily on a single test, and I don’t want to run the entire test suite every time I make a modification to one test, so I create targeted test configurations to only affect a particular set of tests. To do this:

  1. Go to Run > Edit Configurations.
  2. Select your main unit test configuration and click the copy button.
  3. Update the name of the configuration to include the file you want to test.
  4. Change the Test scope to Class.
  5. Select the File that contains the class you want to test.
  6. Select the Class in the file that you want to test.
  7. Click OK.
  8. You should now be able to select the new configuration from the debug drop down and click the play icon to run just that test.
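For comparison, the same class-scoped run from the command line inside the box would look something like this (the project path and class name are hypothetical):

```shell
# --filter limits PHPUnit to tests whose names match the pattern.
vagrant ssh -c "cd /srv/www/my-project && vendor/bin/phpunit --filter Tests_Sample"
```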