Refactoring As You Go

The “Boy Scout rule” in programming states that we should:

Always check a module in cleaner than when you checked it out.

I follow this rule when I make updates to existing codebases, using each change as an opportunity to refactor a section of the codebase that I’m already modifying. Sometimes these refactors are small, amounting to updating a specific function, or even a few lines within one. Sometimes, however, I will refactor an entire file. In those cases, it’s helpful to follow a few rules:

  1. A refactor should not change the public API. Any functions or class methods that are exposed for use by other code should keep the same signatures and return values.
  2. All tests that were previously passing should pass after the refactor without needing to modify the tests themselves (unless there was an error in the test that was exposed by the refactor, which happens more often than you’d think).
  3. If you’re taking the time to do a refactor, ensure that whatever you touch is brought up to the latest coding standards, including adding or modifying docblocks.

One trick I use when refactoring an entire file is to add a line to mark my place, since I tend to move method definitions around to group them by visibility (in classes) and sort them alphabetically (because I’m a pedant). Most IDEs will highlight TODO lines in a different, brighter color, so I use the following to mark my place:
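A minimal sketch of what I mean (the class and method names are hypothetical; the only important part is the TODO keyword, which the IDE highlights):

```javascript
class ReportBuilder {
  // Public methods: already refactored and alphabetized.
  build() {}
  render() {}

  // TODO: ---- refactoring marker: everything below is still unsorted ----

  format() {}
}
```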

It’s also useful to refactor one method at a time, and run tests after each, to catch any breaking changes right away.

Happy refactoring!

Risks of Using NPM for Front-End Packages

A few years ago, there was a big push to use npm instead of Bower as the preferred package manager for JavaScript packages. The thought was, “it’s all JavaScript, why not put it all in the same place?” What followed was much pooh-poohing of Bower, and many front-end devs jumped on the npm bandwagon. At the time, I was wary of such a move, since npm is the Node package manager, and Bower is the web package manager. However, for the most part, devs have been able to use npm to manage JavaScript dependencies for Node, browser, and isomorphic applications without much difficulty.

Until recently.

Node 4 LTS reached end-of-support in April of this year, which opened the floodgates to packages incorporating more ES6 features into their npm releases destined for Node. I bumped up against this recently when using camelcase, which had recently dropped support for Node 4 and, in the process, moved to ES6 arrow functions. This was problematic in my case because we were using camelcase on a web project, and Sindre Sorhus doesn’t believe in transpiling code to ES5 before publishing to npm. His rationale is sound: he’s publishing a Node module to npm with explicit version requirements listed in the package, and there’s no need to transpile the package to work with the listed Node versions. The problem arises when developers use that package in a web context, for which it was not designed, and in which it will not run natively in any browser that still requires ES5.

Many (most?) Babel configurations ignore the node_modules directory, because a) most of what has historically been loaded from npm has not required transpilation in order to work in the browser and b) project standards differ, so if you run node_modules through Babel and, for example, a module fails a strict standards check, it can fail your build. Plus, transpiling everything in node_modules is expensive, and slows down build and deployment tasks. It is therefore up to the individual developer to know which modules require transpilation, and to whitelist them in the Babel configuration.

This problem also exists in the other direction. Take, for example, the whatwg-fetch package, which is a polyfill for window.fetch() that only works in the browser, a fact which is noted about a quarter of the way down its readme:

This project doesn’t work under Node.js environments. It’s meant for web browsers only. You should ensure that your application doesn’t try to package and run this on the server.

(For those interested in a polyfill that works in both the Node and browser contexts, you should check out isomorphic-fetch.)

With the sunset of Node 4, and what I am sure will be an increase in the number of npm packages shipping ES6 code that is supported by Node 6 but wasn’t by Node 4, I predict that this problem is going to get worse. At this point, it’s unlikely that a proposal to split Node packages from browser packages will gain much traction among JavaScript developers, especially since so many packages can be used in isomorphic contexts. Therefore, I would advocate that npm, at a minimum, include a feature to tag modules as “intended for Node,” “intended for the browser,” or “intended for both (isomorphic).” This idea could be further extended by making better use of the browser property in package.json, which indicates that a module is intended specifically for the browser, and by warning when a project includes Node modules that are not browser-ready out of the box.
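For reference, the browser field already exists in package.json, and bundlers such as webpack will resolve it instead of main when building for the web. A dual-target package might declare something like this (names hypothetical):

```json
{
  "name": "my-isomorphic-module",
  "main": "lib/node.js",
  "browser": "lib/browser.js"
}
```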

Until and unless tooling catches up, I would recommend performing IE11 testing after every feature build to catch these issues early. Including untranspiled ES6 in a minified JS bundle yields an extremely unhelpful “syntax error on line 1, column 1623452” error in IE, which does little to point to the specific package that is causing the problem.

Using Jest to Validate JSON Data Shape

I’ve worked on a few projects now that involve storing data in JSON files. The projects were small enough in scope or were slated for inclusion in a larger application, and we couldn’t justify the need for an external data source, so we bundled the data as a JSON file.

However, this led to a challenge—typically, when dealing with an external data store, there are methods in place to validate and enforce data shape, which don’t exist in a freeform JSON file. So, I decided to use Jest (which we were using for writing unit and integration tests already) to test the shape of the data in the JSON file.

Here is an abbreviated example of what I’m talking about, which could easily be extended and modified for different data structures:
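The sketch below assumes a simple record shape (id, name, tags — all hypothetical), and inlines the data so the snippet is self-contained; in the real project, the data constant would be require('./data.json'):

```javascript
// Stand-in for: const data = require('./data.json');
const data = [
  { id: 1, name: 'Widget', tags: ['hardware'] },
  { id: 2, name: 'Gadget', tags: [] },
];

// Keep the shape rules in one plain predicate so they are easy to extend.
function hasValidShape(record) {
  return (
    typeof record.id === 'number' &&
    typeof record.name === 'string' &&
    Array.isArray(record.tags) &&
    record.tags.every((tag) => typeof tag === 'string')
  );
}

// The Jest test itself (guarded so the snippet also runs outside a test runner).
if (typeof describe === 'function') {
  describe('data.json', () => {
    test('contains only records of the expected shape', () => {
      expect(data.length).toBeGreaterThan(0);
      data.forEach((record) => {
        expect(hasValidShape(record)).toBe(true);
      });
    });
  });
}
```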


Preserving Nested Class Names in CSS Modules

If you’re using CSS Modules to locally scope your classes, but you need to preserve specific nested class names, you need to enclose them in a :global block:
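A minimal sketch (the nested class names are examples, standing in for names applied by third-party code):

```css
/* .wrapper is scoped locally as usual, e.g. .wrapper___3xZa1. */
.wrapper {
  padding: 1em;
}

/* .slick-slide is applied by a third-party carousel, so its name
   must survive compilation unchanged. */
.wrapper :global(.slick-slide) {
  margin: 0 10px;
}

/* Block form, useful for preserving several selectors at once. */
:global {
  .slick-active {
    opacity: 1;
  }
}
```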

The default behavior of css-loader is to take any class name found in a file and scope it locally by appending a partial hash. Selectors nested on tags are left unmodified, but selectors nested on other class names will be rewritten during compilation. To prevent this behavior, simply wrap the class names that you don’t want modified in the :global block, as seen above. This approach is particularly useful when you can’t control the class names being applied to elements, such as when a separate module applies its own class names to your markup.

Batch Converting Excel Files to CSV Using LibreOffice

I use LibreOffice Calc for working with CSV files. In my experience, it has the best support for CSV file formats, and it’s extremely fast. You can also use LibreOffice from the command line to run batch operations, such as converting Excel files to CSV format.

I use a Mac, so LibreOffice is installed in an app bundle. The command-line program that you will need to use is within the bundle. I’m assuming that LibreOffice is installed in the standard /Applications directory for this tutorial.

It’s important to note that you need to ensure that LibreOffice is not running when you execute this command. Otherwise, it will fail silently. This includes having the application open with no open windows. If you command+tab and see the application icon, you will need to tab to it and quit it fully.
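With that caveat out of the way, the conversion is a one-liner. This sketch assumes the standard install location and the Desktop paths described below:

```shell
# Convert every .xlsx file on the Desktop to CSV, writing into ~/Desktop/csv.
mkdir -p ~/Desktop/csv
/Applications/LibreOffice.app/Contents/MacOS/soffice --headless \
  --convert-to csv --outdir ~/Desktop/csv ~/Desktop/*.xlsx
```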

The above command will convert all “.xlsx” files on your desktop to CSV files and will place them in a “csv” directory on your desktop. This approach is much easier than opening each one and doing an export, and can be used in test scripts when programmatically checking output.