Nullability in GraphQL

Whether you prefer “schema first” or “resolver first”, GraphQL development should definitely be “types first”. One aspect of planning your data types is nullability, and this is important to get right. Nullability in GraphQL is different than how we handle null values in other environments, like REST or gRPC APIs.

You see, we expect a RESTful endpoint to either return an object, a list, or other piece(s) of data as a whole – or not at all. If a REST GET operation fails, we expect the entire request to have failed – not just a subset of the operation.

GraphQL is inherently different

However, that’s not how GraphQL rolls. Because your data graph is composed of many micro-services, each responsible for populating a piece of it, it’s entirely acceptable (and likely) that most of your graph will load successfully while a small part fails. But when all of the fields in a type are marked as “non-nullable” (using the ! suffix), a failure can no longer occur gracefully.

Said differently, if any one non-nullable field defined in your type is ever null, or its resolver throws an exception, the entire query fails. Not cool.

 

Consider this example …

We’ve started a new schema for our app, and have defined a VacationRental  and its associated types.  Notice how every field is marked non-nullable, a common yet problematic approach when creating a new schema:
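A first pass might look something like this (the exact field names here are illustrative sketches, based on the types discussed throughout this post):

```graphql
type VacationRental {
  id: ID!
  title: String!
  description: String!
  address: String!
  location: GeoLocation!
  price: Price!
  owner: Owner!
}

type GeoLocation {
  latitude: Float!
  longitude: Float!
}

type Price {
  amount: Float!
  currency: String!
}

type Owner {
  name: String!
  email: String!
  phone: String!
  twitterHandle: String!
}

type Notification {
  timestamp: String!
  message: String!
}

type Query {
  vacationRental(id: ID!): VacationRental!
  inboxNotifications: [Notification!]!
}
```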

Let’s look at some of the problems this can lead to, and how we might better approach this initial schema creation …

Nullability is important to users

Kinda. They don’t care what it is, but it definitely affects their user experience. Using nullable fields in your GraphQL projections means that when (not if) one or more parts of a screen fail, the rest of the screen may remain perfectly usable (yay!).  But it’s a real problem if a less-critical portion of your data becomes unavailable due to a database outage, or a hiccup in your network, or a good ol’ act of god, and suddenly your user cannot be successful in achieving whatever one thing was most important to them on that screen.

Allow your screen to fail gracefully

Think about our Vacation Rental example …

Some parts of the app fail, while others succeed

If the Owner of a vacation rental property is viewing their Property Editor screen, attempting to write an impactful headline and illustrative description for their new property, it’s not important that something small like an Inbox Notification shown in the Top Nav Bar fails. If the Notifications service is down for 20 mins, and the inboxNotifications  field returns null, the rest of the query still succeeds and the Owner can accomplish their primary goal on that screen — playfully word-smithing the content that guests will see when viewing their property.

Let’s update our example to allow inboxNotifications to have a null value, meaning if the Notifications service fails, the property owner can still edit their listing:
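Dropping the ! from that one field is all it takes (a sketch, with illustrative field names):

```graphql
type Query {
  vacationRental(id: ID!): VacationRental!
  # Nullable: if the Notifications service is down, this field
  # resolves to null instead of failing the entire query
  inboxNotifications: [Notification!]
}
```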

If your entire query fails, your UI can’t allow your user to have at least some success, or to be shown a specific and helpful message. When the entire query fails, all you can do is tell your user “Something broke, please try again” (even though you and I both know it won’t work when they try again.)

Users care that your app is resilient, and nullable fields on your GraphQL types allow portions of your screen to fail gracefully while the rest of the screen remains usable.

Think of nullable fields as error boundaries, which will likely align with the API service boundaries behind your GraphQL server   🤔

Offer a frictionless user flow

If we want to allow a property owner to incrementally save their progress while listing their gorgeous home for rent, we can’t require every field to be populated at once.  If a user first enters their listing’s title, then description, then address — and each time the screen auto-saves their progress for a frictionless experience and a saved backup in case their computer and/or network starts acting up — then you must allow those fields to be nullable.

Let’s update our VacationRental type to be more flexible and enable that user experience …
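Something along these lines (field names illustrative):

```graphql
type VacationRental {
  id: ID!
  # Nullable, so a listing can be auto-saved incrementally,
  # one field at a time, as the owner works through the form
  title: String
  description: String
  address: String
  location: GeoLocation
  price: Price
  owner: Owner!
}
```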

Now our typedef allows each field within our UI to be saved automatically as a user progresses through their workflow.  Neato!

Describe your domain

Nullability in GraphQL should be used to accurately describe your business domain rules.  For example, a company may require all Owners to enter their name, email, and phone  so that Guests can get a hold of them.

But perhaps you don’t require every Owner to have a twitterHandle , because not everybody twitters.  Let’s make that change, too:
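A sketch of that change:

```graphql
type Owner {
  name: String!
  email: String!
  phone: String!
  # Optional, because not everybody twitters
  twitterHandle: String
}
```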

Facebook’s best practices

In the GraphQL docs, the Best Practices page advises beginning your type definitions with nullable fields, and only later marking specific fields as non-nullable when that guarantee can actually be made. By default, allowing fields to be nullable improves the resiliency of the larger data graph:

“… in a GraphQL type system, every field is nullable by default. This is because there are many things which can go awry in a networked service backed by databases and other services. A database could go down, an asynchronous action could fail, an exception could be thrown.”

Yes, this might result in more null checks throughout your system, but you should probably be doing some of those anyway 🙂

If you’re working with JavaScript, things will get easier soon. Optional Chaining is at Stage-2 in TC39, and there’s a Babel plugin if you just can’t wait (some languages already have this feature, like Swift). And it’s much more painful trying to ensure that never once will any of your field data return null. (Good luck with that, though!)

But non-nullable is good, too!

Yup, it sure is!  Let’s talk about when it’s a good idea to begin your schema design with non-nullable fields, because I’m definitely not preaching we should “nullable all the things!”

Some types don’t make sense without required fields

Let’s look at the GeoLocation  and Notification  types in the example … 
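As sketched earlier (field names illustrative), those types keep every field non-nullable:

```graphql
type GeoLocation {
  latitude: Float!
  longitude: Float!
}

type Notification {
  timestamp: String!
  message: String!
}

type Price {
  amount: Float!
  currency: String!
}
```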

A GeoLocation  makes no sense at all with just a latitude , or just a longitude.  And a Notification  within your system will always contain a timestamp  and a message , because it’s enforced in your nice type-safe backend.  The amount  of a Price  isn’t enough to generate a transaction if you don’t know what currency  is to be used.

There’s no reason to ever allow nulls in those fields, so we’ll leave them marked as non-nullable.  This reduces your cyclomatic complexity, which directly affects the number of null checks (read: lines of code) you must write in your front-end to avoid errors, as well as the number of unit tests you must write to achieve good test coverage.

This added verbosity, complexity, and testing burden is a great reason to always ask yourself, “Is it worth it to allow this field to actually be null?”  🤷‍♂️

Go faster, and go backwards

When you’re spinning up a new data graph, your project is probably evolving quickly with a team of people. Leaving GraphQL fields nullable frees you up to iterate faster, stay flexible, and defer decisions about your data until the last responsible moment.

Some of those decisions are harder to undo than others.  Beginning your data graph with nullable type fields, then later on converting some fields to non-nullable, is backwards-compatible. However, when you mark those fields as non-nullable from the start, it’s a lot more difficult to make them nullable later on, as there may be code in the wild that calls that data without null checks, and your apps could crash.

This might be the opposite of how your input parameters work, but you’re probably still safer to start off with nullable parameters and enforce those required fields in your code if necessary.

The final schema

So after much pondering, we’ve made some important decisions about our schema.  Here’s what I had as the first version in my head – is it what you would’ve done?
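Pulling together all of the changes above, a final version might look like this (field names illustrative, as throughout):

```graphql
type VacationRental {
  id: ID!
  # Nullable: saved incrementally as the owner edits their listing
  title: String
  description: String
  address: String
  location: GeoLocation
  price: Price
  owner: Owner!
}

type Owner {
  name: String!
  email: String!
  phone: String!
  # Optional: not every owner has one
  twitterHandle: String
}

# These types make no sense with missing fields, so they stay non-nullable
type GeoLocation {
  latitude: Float!
  longitude: Float!
}

type Price {
  amount: Float!
  currency: String!
}

type Notification {
  timestamp: String!
  message: String!
}

type Query {
  # Nullable at the boundary, so partial failures degrade gracefully
  vacationRental(id: ID!): VacationRental
  inboxNotifications: [Notification!]
}
```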

(Got something to say? Just leave me a comment, please and thank you …)

ESLint: “Parsing error: Unexpected token” in Visual Studio Code


Here’s how to fix “Parsing error: Unexpected token” errors from ESLint when working in Visual Studio Code …

While adding the plumbing for a new JavaScript website project, I knew it needed an ESLint config to keep my code linted and clean. So I installed ESLint the usual way:
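That is, the standard init command (assuming the npx flow referenced in the note below):

```shell
npx eslint --init
```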

Note: The npx command requires npm@5.2.0 or newer.  Alternatively you can run ./node_modules/.bin/eslint --init.

After being prompted with a few questions to customize my install, I went along my merry way creating files and writing some modern ES6 and ES7 code.  However, the ESLint plugin in Visual Studio Code was giving me odd errors like this in my React JSX code:

  • Parsing error: Unexpected token = 
  • Parsing error: Unexpected token { 
  • Parsing error: Unexpected token / 

Screen shot of error message

The solution

Unexpected token  errors are caused by incompatibilities between your parser options and the code you’re writing.  In this case, I’m using a number of ES6 language features like arrow functions, destructured variables, and such.

The solution is to specify the parser to use in our ESLint configuration – babel-eslint.  Because ESLint’s built-in parser doesn’t support the more modern JavaScript syntax, we need the babel-eslint parser to turn our code into something ESLint can read and analyze.

Here’s an example of my .eslintrc.json  file with the appropriate parser  specified:
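Something like the following (your env and other settings will likely differ; these are illustrative):

```json
{
  "parser": "babel-eslint",
  "env": {
    "browser": true,
    "es6": true
  },
  "parserOptions": {
    "sourceType": "module",
    "ecmaFeatures": {
      "jsx": true
    }
  }
}
```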

Note: Your ESLint configuration file may also be named .eslintrc , or .eslintrc.js , or .eslint.yml  depending on the format of your config file.

And that’s how I fixed that.  Simple, yeah?  Well it took me an hour to figure it out, so I hope this post helps you fix it faster than I did!

ESLint and EditorConfig in VSCode

To kick off 2019, I wanted to start the new year off with cleaner code, with more automation and less effort.  This post should help you and your team kick your new year off with consistently beautiful code, too!

Most importantly, read through to the end to find out how to turn on the “auto-format on save” settings for ESLint, which allows auto-fixing of many problems every time a file is saved!


Most of this post refers specifically to Visual Studio Code, but works similarly in many code editors.

Why?

ESLint is a popular JavaScript linter tool for identifying and reporting on patterns found in ECMAScript/JavaScript code.  EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs.

Both are useful for teams of developers to write clean code with consistent styling, and these tools can help you identify potential problems with your code while developing right within any editor via plugins, and may even be able to auto-fix some issues for you.

Here’s what you get:

  • See inline help for eslint errors right inside your IDE
  • Automatically fix problems and format files on save
  • Consistent code formatting across both developers and code editors
  • Better code hinting can help you catch accessibility (and other) issues before your code goes to production

Install VSCode plugins

This post focuses specifically on Visual Studio Code, but most of these concepts can be used similarly with other popular code editors.

  1. Install the EditorConfig plugin for VSCode
  2. Install the ESLint plugin for VSCode
  3. Reload VSCode to enable the new plugins.  Bonus: the January 2019 release (version 1.31) no longer requires a restart when installing/updating extensions!

Install NPM dependencies

You’ll need to install some NPM modules as devDependencies to get everything working in your project’s workspace.
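That likely means something like the following (the exact package list depends on your setup; these are the usual suspects for an Airbnb-based ESLint config with Babel):

```shell
npm install --save-dev eslint@latest babel-eslint@latest \
  eslint-config-airbnb@latest eslint-plugin-import@latest \
  eslint-plugin-jsx-a11y@latest eslint-plugin-react@latest
```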

Above, the @latest tag is added to each package to ensure the latest version is installed, even if it is already declared in your package.json.

Configuration

There’s a bit of configuration necessary in order to make the magic happen, but a few commands and some copy/paste is all it takes to get both EditorConfig and ESLint working in VSCode.

EditorConfig

EditorConfig helps maintain consistent coding styles for multiple developers working on the same project across various editors and IDEs. The EditorConfig project consists of a file format for defining coding styles and a collection of text editor plugins that enable editors to read the file format and adhere to defined styles. EditorConfig files are easily readable and they work nicely with version control systems.

EditorConfig.org

Couldn’t have said it better myself.  Sounds great, yeah?  Many IDEs and code editors (including VSCode) only need a simple plugin installed to get the benefits of consistent styling.

You should have already installed the EditorConfig plugin in the first part of this post.  If you don’t already have an .editorconfig file, create one with the following content:
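A reasonable starting point (the style values here are my own preferences, so adjust to taste):

```ini
# Stop looking for .editorconfig files beyond the project root
root = true

[*]
charset = utf-8
end_of_line = lf
indent_style = space
indent_size = 2
insert_final_newline = true
trim_trailing_whitespace = true

# Trailing whitespace is meaningful in Markdown (two spaces = line break)
[*.md]
trim_trailing_whitespace = false
```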

The above code ensures that EditorConfig stops looking for more .editorconfig files at your project’s root folder, rather than continuing outside your project directory.  It also accommodates proper semantics for adding line breaks in Markdown files.

ESLint

Code linting is a type of static analysis that is frequently used to find problematic patterns or code that doesn’t adhere to certain style guidelines.

ESLint.org

This is what helps you and your team write consistent, high-quality code.  You agree on some rules for how code should be formatted, check that configuration into an application’s source control, install the appropriate IDE extensions, and then every developer on the project can more easily adhere to those rules in an automated fashion.

ESLint helps us write higher quality code because it performs static analysis on a codebase to find common errors, typos, potential performance gains, opportunities to remove redundancy, and other potential problems.

So let’s get started.  You should have already installed the ESLint plugin in the first part of this post.  We’re going to build upon the Airbnb style guide, and use the eslint-config-airbnb shared configuration for our base set of rules.  Then I’ll show you how to personalize your rules a bit more.

How to Configure ESLint

  1. Install eslint-config-airbnb and its peer dependencies.  If you are using npm >= 5, run the following command in your terminal:

    If you’re using npm < 5, follow the official instructions on npmjs.org.
  2. If you don’t already have an .eslintrc file in your project’s root directory, create one with the following content:
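The two steps above might look like this (assuming the install-peerdeps helper for step 1, and a minimal Airbnb-only config for step 2):

```shell
# Step 1 (npm >= 5): install eslint-config-airbnb and its peer deps in one shot
npx install-peerdeps --dev eslint-config-airbnb

# Step 2: a minimal .eslintrc that extends the Airbnb rules
echo '{ "extends": "airbnb" }' > .eslintrc
```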

Use Babel as your ESLint parser

If you’re writing modern ES6 and newer JavaScript, you’ll need to specify Babel as your parser so that ESLint can understand your code and analyze it.

First, install the babel-eslint  NPM module by running this in your terminal:
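Presumably the standard install:

```shell
npm install --save-dev babel-eslint
```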

Then, specify  babel-eslint as the parser in your .eslintrc file:
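For example (building on the Airbnb config from earlier):

```json
{
  "parser": "babel-eslint",
  "extends": "airbnb"
}
```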

Specify custom rules

The Airbnb ESLint configuration is pretty reasonable, but many folks will want to override those rules with ones that make more sense for them and their applications.  Here are some custom rules which I prefer for React and ternary operators, just as an example:
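For instance, rules along these lines (the specific rule choices below are just examples, not a recommended set):

```json
{
  "parser": "babel-eslint",
  "extends": "airbnb",
  "rules": {
    "react/jsx-filename-extension": ["warn", { "extensions": [".js", ".jsx"] }],
    "no-nested-ternary": "off",
    "multiline-ternary": ["error", "always-multiline"]
  }
}
```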

You can read the docs for more about custom ESLint rules.

Auto-fix ESLint errors when saving files

The easiest way to adhere to your ESLint rules is to set your editor to auto-fix warnings and errors when files are saved.

In Visual Studio Code, here’s how to do that …

Update your user settings file (Cmd + , on Mac) so that files are formatted on save, and do not conflict with default VSCode settings:
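At the time of writing, that meant settings along these lines (eslint.autoFixOnSave was the relevant ESLint extension setting in early 2019):

```json
{
  "editor.formatOnSave": true,
  "[javascript]": {
    "editor.formatOnSave": false
  },
  "eslint.autoFixOnSave": true
}
```

Turning off the built-in formatter for JavaScript lets ESLint be the single source of truth for how those files are fixed on save.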

Store and share your configuration

Be sure to commit your new .eslintrc  and .editorconfig  files to source control so that they are shared by other team members working in your project.

Try it out!

Add some simple errors to your JavaScript code – for example, remove a few semi-colons or add some crazy whitespace.  Then save your file, and watch many or most of those problems become resolved automatically!

You can also bring up the “Problems” toolbox from VSCode’s status bar, view the identified ESLint issues, and even right-click and fix them with minimal effort.

You may notice some additional unrelated changes when making commits as your ESLint now auto-fixes style problems, but after some time this will subside because your files are now styled more consistently  😀

Nginx all the things!

Developers are increasingly working across multiple projects, and we need a sane method of changing contexts within our local development environment very quickly – or running them all at once – to remain efficient. Keep reading to learn how to run an Nginx proxy on your local machine to shepherd requests to port 80 to other apps running on various ports, using the path of the request to determine which app to forward to.

Our approach is to use an Nginx proxy to forward all requests on localhost:80 to our various applications, each running on their own unique port.

As coders make contributions to upstream dependencies and neighboring apps alike, and as we write more end-to-end automated browser tests that cross application boundaries, running a local Nginx proxy will be a requirement.

Ok, so let’s get you up and running …

Installing Nginx

We’ll use HomeBrew to install Nginx on our Mac.  This makes it easy to upgrade in the future, doesn’t require much manual configuration or running make, and doesn’t mess with any system files.

HomeBrew

Install nginx using the HomeBrew tap and formula.
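Likely just the standard formula:

```shell
brew install nginx
```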

Below is an example of the output you should expect after running this command, including some helpful tips for starting/stopping the server.

Configure Permissions

By default, Nginx will run on port 8080 so as to not require sudo.  But we’re going to want to run on port 80 to better simulate the production experience and route all requests properly, so we’ll need to set permissions on our directory appropriately.
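One common approach is to give the nginx binary the setuid bit so it can bind to port 80 without running your whole shell as root (the path below assumes a default Homebrew install; adjust for your machine):

```shell
# Assumed Homebrew install path -- check `which nginx` on your machine
sudo chown root:wheel /usr/local/opt/nginx/bin/nginx
sudo chmod u+s /usr/local/opt/nginx/bin/nginx
```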

Configure nginx.conf

You’ll need a robust configuration file to dynamically map incoming requests based on URL path to the appropriate apps running on different ports.  Many apps run on port 8080 by default, and some apps are easier than others to change, but we’ll need to run each app on its own port.

If you installed nginx via Homebrew, your nginx.conf file can be found at /usr/local/etc/nginx/nginx.conf.
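A sketch of such a config (app paths, ports, and cert paths here are illustrative; map your own location blocks to wherever each app runs):

```nginx
worker_processes  1;

events {
  worker_connections  1024;
}

http {
  include       mime.types;
  default_type  application/octet-stream;

  server {
    listen 80;
    server_name localhost;

    # Requests under /app-one go to the app on port 3001
    location /app-one/ {
      proxy_pass http://127.0.0.1:3001/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }

    # Requests under /app-two go to the app on port 3002
    location /app-two/ {
      proxy_pass http://127.0.0.1:3002/;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }

  # HTTPS, using the self-signed certs from the next section
  server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate     /usr/local/etc/nginx/ssl/localhost.crt;
    ssl_certificate_key /usr/local/etc/nginx/ssl/localhost.key;

    location / {
      proxy_pass http://127.0.0.1:3001/;
    }
  }
}
```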

Generate Self-Signed SSL Certificates

You’ll need to create a directory for your certs, then run the commands to generate them.  Notice these files are being created in the ssl directory noted in your nginx.conf.
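Something like the following (paths assume a Homebrew install of nginx; the certificate subject is just localhost):

```shell
mkdir -p /usr/local/etc/nginx/ssl

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=localhost" \
  -keyout /usr/local/etc/nginx/ssl/localhost.key \
  -out /usr/local/etc/nginx/ssl/localhost.crt
```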

Start the Server

From your terminal, let’s start up nginx and make sure there are no errors returned:
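For example:

```shell
sudo nginx -t   # check the configuration for syntax errors
sudo nginx      # start the server
```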

Start Your Nginx Proxy

Each time you make changes to your nginx.conf file, you’ll need to reload the web server and ensure no errors were returned:
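For example:

```shell
sudo nginx -t          # verify the updated configuration
sudo nginx -s reload   # reload the running server
```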

Reload Your Nginx Proxy

To stop the server, send the “stop” signal:
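```shell
sudo nginx -s stop
```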

Stop Your Nginx Proxy

Start Up Your Apps

You should now be able to start up each of your apps concurrently!  However, to do so you may still need to start those apps on the ports you specified with location directives, so check your app’s README for how to do that.

As an example, within a typical Node.js app, it’s as simple as setting the port environment variable:
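Assuming the app reads a PORT environment variable (a common Node.js convention), that might be:

```shell
PORT=3001 npm start
```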

That’s the gist of it, we’re simply using Nginx to proxy all requests on port 80 to the various apps running locally on alternate ports.

Got questions, or even a better way to do any of this?  Please let me know in the comments section below!

Run a single unit test with Mocha/Chai

Sure, you can add file globbing patterns to a CLI arg to run a single JavaScript test, or group of tests, but it’s not super convenient and often requires a trip to your README to remember how to do it. Here’s a quicker way.


it.only()

Instead of hunting for the exact CLI params and globs, just add .only  to your it()  as shown below:
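For example (the test bodies here are purely illustrative):

```javascript
const { expect } = require('chai');

describe('vacation rental pricing', () => {
  // Only this test will run; every other it() is skipped
  it.only('calculates the nightly rate', () => {
    const nightlyRate = 1200 / 6;
    expect(nightlyRate).to.equal(200);
  });

  // Skipped while the .only above is present
  it('applies a cleaning fee', () => {
    // ...
  });
});
```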

Take note:  If you use .only()  on more than one it() , it will only run the last test case.

describe.only()

And it works with a single group of tests, as well! Just add .only to your describe() as shown below:
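For example (again, illustrative test groups):

```javascript
// Only the tests inside this group will run
describe.only('vacation rental pricing', () => {
  it('calculates the nightly rate', () => { /* ... */ });
  it('applies a cleaning fee', () => { /* ... */ });
});

// This entire group is skipped
describe('notifications', () => {
  it('formats the timestamp', () => { /* ... */ });
});
```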

Wrap up

There you have it! Single test groups and/or cases without screwing with CLI parameters.

Be sure to remove all instances of .only from your describe()  and it()  statements before committing to a repo, or you risk having your CI/CD pipeline run only a subset of your tests!

How to fix this Atom linter error: “Make sure`phpmd` is installed and on your PATH”

The linter-phpmd plugin for Atom is popular with PHP and WordPress developers, but it relies on having phpmd installed and available on your PATH.  Without it, you might see an error: “[Linter] Error running PHPMD Error: Failed to spawn command `phpmd`. Make sure `phpmd` is installed and on your PATH”


If you’ve seen this error in your Atom Developer Tools, the fix is quite simple.  You just need to:

  1. Install phpmd – we’ll use Composer for this.
  2. Add it to your path – this makes it available anywhere on your command line.

Is Composer already installed globally?

If so, consider running composer self-update  to be sure you’re on the latest version.

If not, jump over to getcomposer.org to install it globally on your local system.  Then come back and continue with the steps in this post.

You can check if it’s installed by attempting to get the version.  It should return something like “Composer version 1.4.2 2017-05-17 08:17:52”.
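```shell
composer --version
```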

Install phpmd

Installing phpmd globally is required to ensure it is available for Atom to use.  Start by running the following command:
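```shell
composer global require phpmd/phpmd
```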

You can now run phpmd using the full path to the installed binary:
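Assuming Composer's default global install location of the era (~/.composer), that looks like:

```shell
~/.composer/vendor/bin/phpmd --version
```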

That’s helpful, but let’s make sure the command is available everywhere without the full path.

Add phpmd to your PATH

You could add just phpmd to your path, but I find it’s much easier to add all of your globally-installed Composer binaries to your path, so that they are always available to you.  (Hey, they’re supposed to be global, right?!)

To do that, use your favorite terminal editor to edit your ~/.bash_profile (or ~/.bashrc) file.

You’ll need to add the following lines somewhere in the file:
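Something along these lines (assuming the default ~/.composer location):

```shell
# Make all globally-installed Composer binaries available on the command line
export PATH="$HOME/.composer/vendor/bin:$PATH"
```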

Then, be sure to reload your environment variables in your terminal by running:
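```shell
source ~/.bash_profile
```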

Verify it works!

Try to run the raw phpmd command to get the currently installed version:
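```shell
phpmd --version
```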

Those pesky Atom phpmd linter errors should now have melted away 🙂

WaitFor in WebDriver.io


Have you ever seen your WebDriver test fail with generic errors such as element is not clickable at point, other element would receive the click, or maybe just a timeout when trying to locate an element on your page?

Well whether you have or you haven’t (yet), this post will help you minimize random failures while running automated browser tests via WebDriver.io and Selenium.

Best practices

There are some best practices in WebDriver.io that you should follow when interacting with elements on your pages.  Below are some suggestions for improving the quality and stability of your e2e tests.

Consider the following basic Page Object example which references a “Next” button and an “Email” input box:
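A sketch of such a Page Object, using WebDriver.io's synchronous API (the class name and selectors here are illustrative):

```javascript
class CheckoutPage {
  // Selectors are assumptions for illustration
  get nextButton() { return $('#next-button'); }
  get emailInput() { return $('#email'); }

  clickNextButton() {
    this.nextButton.click();
  }

  setEmailInputValue(value) {
    this.emailInput.setValue(value);
  }
}

module.exports = new CheckoutPage();
```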

You must wait for an element to be visible before you interact with it.

The example above has a function which simulates the “Next” button’s click, however, it doesn’t check to make sure the element is on the page.  In WebDriver, an element can exist, but not be visible, and therefore not able to be interacted with.

If our “Next” button is hidden by default, and perhaps only shown after a user interacts with other parts of the page, we’ll need to wait for the element to be shown before we click it using the element.waitForVisible() method.

Let’s improve our clickNextButton() function:
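Something like this:

```javascript
clickNextButton() {
  // Wait (up to the default timeout) for the button to become visible,
  // then click it
  this.nextButton.waitForVisible();
  this.nextButton.click();
}
```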

You must wait for an element to be enabled before you click or type.

There are times when you must wait for other conditions to be true, as well.  For example, we can’t call the setValue() method on an input element if it’s disabled, the same way a user can’t type in a disabled textbox.  We’ll use the element.waitForEnabled() accordingly.

Let’s improve our setEmailInputValue() function:
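For example:

```javascript
setEmailInputValue(value) {
  // The textbox must be both visible and enabled before we can type in it
  this.emailInput.waitForVisible();
  this.emailInput.waitForEnabled();
  this.emailInput.setValue(value);
}
```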

Above, we’ve now made the setEmailInputValue() more stable because our interaction method checks to be sure the textbox is both visible and enabled before it attempts to set the value.

Set custom timeouts when using waitFor*()

There is a default timeout set for your WebDriver session, but for certain operations, you may need to increase the timeout when waiting for elements.  In this case, simply specify the number of ms to wait as the first parameter, as shown below.
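For example (15 seconds is an arbitrary illustration):

```javascript
// Wait up to 15 seconds, instead of the default, for a slow-loading element
this.nextButton.waitForVisible(15000);
```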

Scrolling to elements

There are also times when the element is off the page and must be scrolled to in order to click.  Consider using the element.scroll() function to ensure the element is within the viewport before interacting with it.
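For example:

```javascript
// Bring the element into the viewport, then interact with it
this.nextButton.scroll();
this.nextButton.waitForVisible();
this.nextButton.click();
```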

waitFor() all the things

There are plenty more waitFor*() functions built into WebDriver, just check out the API for a comprehensive list.  And should one of those convenient utility functions not fit your particular use case, you can further customize things by using the waitUntil() function to wait for any condition you like.

Next up: disabling CSS animations

In my next post, I’ll cover how to disable CSS animations in your app to improve stability and reduce the number of random failures in WebDriver.io.

Review a PR then get back to work … faster!

I never mind reviewing PRs from my coworkers, but I do want to minimize interruptions knowing I typically have other tasks in-flight. To help make the overhead of switching contexts (and branches) more efficient, try using the git pr command found in https://github.com/tj/git-extras …


You’ll probably want to install git-extras before you continue with this post.  (It’s super quick and easy using Homebrew, as described below.)

Installation

For most Mac users, this is all you need:
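```shell
brew install git-extras
```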

Note: Installing from Homebrew will not give you the option to omit certain git-extras  commands if they conflict with existing git aliases. To have this option, build from source.

Show me how!

Let’s say you’re chugging along on your own feature branch and you’ve reached a stopping point where you can safely switch contexts and help keep your team’s PRs from piling up …

  1. Commit or stash your changes.  You don’t want to lose any work, and trying to checkout another branch will result in an error, so finish up what you’re doing and commit/stash first.  Also, you might consider that if you’re not ready to commit the code changes you’ve made, you might not be cognitively ready to switch contexts anyway  🤓
  2. Pull the PR down for review.  Just run $ git pr 123  from within your existing working directory, replacing 123  with the appropriate PR number, or even the full URL to the PR.  If you’re working from a fork, you can also specify a different remote, e.g., $ git pr 123 upstream.  (See the git pr docs for more information.)  Doing so will automatically check out the PR branch as pr/123.  Bonus!  If no new packages have been added/updated in package.json, you might not even need to restart your web server when checking out other branches.  (Lingua changes may still require a restart, however.)
  3. Do your review thang.  Bring the site up, test it thoroughly, leave good feedback on GitHub … (you know the drill)
  4. Switch back to your branch.  You’ll need to switch back to your own feature branch before cleaning up the PR branch you pulled down.
  5. Clean it up.  Run $ git pr clean  to delete the PR branch and continue with your own feature work.

You should now be back in your own lane after helping to get your team’s commits reviewed and merged, having a strong sense of pride and accomplishment, and a minimum amount of time spent.

Further reading

Check out the entire list of commands and aliases added by git-extras, or check out this Vimeo screencast on some of the more popular git-extras commands.

“The box ‘bento/ubuntu-16.04’ could not be found” error when spinning up a new Trellis project

I was spinning up a new website using one of my favorite WordPress stacks built on Trellis and Vagrant, when I encountered the following error: “The box ‘bento/ubuntu-16.04’ could not be found or could not be accessed in the remote catalog.”

I had recently updated Vagrant from 1.8.5 to 1.8.7, and had also recently started using Ubuntu 16.04 for my new projects, updating from the previous LTS version 14.04 I had relied on for years.

Here is how I fixed it …

So that gave me two avenues to go down – was it the newer Vagrant version or the updated Ubuntu version that was breaking things?

Troubleshooting

The Trellis docs say that “Vagrant >= 1.8.5” is required, so my new version 1.8.7 should work just fine.  On the Roots.io discussion forum, many users found that rolling back to Vagrant 1.8.5 worked for them.  But I typically want to use the latest version of software, so I didn’t stop there.

Then I came across this issue on Vagrant’s GitHub page, detailing how Vagrant’s embedded version of curl was causing a conflict with macOS Sierra on my laptop.  Many folks found that removing that embedded version, or linking it to the Mac’s version, was a good solution.

The Solution(s)

To work around this error, you’ll want to either remove that embedded curl file, or re-link it to your Mac’s version.  Here’s how to do that …

Remove Vagrant’s embedded curl

This simply removes the embedded curl library, and seems to cause Vagrant to fall back to the macOS version.
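The command itself was dropped from this post; it was most likely the widely shared fix below (the embedded path is where Vagrant installs its bundled binaries on macOS):

```shell
# Delete Vagrant's embedded curl binary so Vagrant falls back
# to the curl that ships with macOS
sudo rm -rf /opt/vagrant/embedded/bin/curl
```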

or …

Link Vagrant’s embedded curl to the Mac host

This more specifically forces Vagrant to call the macOS version of curl directly using a symlink.
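Again the command was lost from the post; a symlink-based version of the fix looks like this (assuming curl lives at the standard `/usr/bin/curl` on your Mac):

```shell
# Replace the embedded curl with a symlink to the macOS version
sudo rm /opt/vagrant/embedded/bin/curl
sudo ln -s /usr/bin/curl /opt/vagrant/embedded/bin/curl
```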

Either of these workarounds should fix your issue.  However, if you continue to have problems starting up a new Vagrant/Trellis box, please leave a comment below!

Why I chose Ember over React for my next web app project

A week ago, I decided to spend 7 full days learning a couple of the more popular MV* frameworks, after which I would write a little about my learnings and make a choice for my newest project. I had watched a presentation called Comparing Hot JavaScript Frameworks: AngularJS, Ember.js and React.js by Matt Raible, and was inspired to quantify my framework selection a little more thoroughly, even if inevitably I make my choice from the gut.

The Ember vs React discussion is still quite lively. Recently I began work on a redesign of a responsive mobile app, and I was once again faced with decisions about which frameworks, tools and methods to commit to. I firmly believe in using the right tools for the right jobs, but there are a whole lot of factors to consider – each with varying importance or relevance to any new project. If you are in the same spot as I am, you might find my discoveries helpful on your own learning journey.

A one-week experiment

So I performed my experiment on two of the top three JavaScript frameworks – Ember.js and React.js.  I’ve played in the sandbox with AngularJS in the past, and have attended a number of developer meetups and sessions as well, so I already had an idea of what developing with AngularJS was like.  Here are the factors I considered:

  • Learning Experience (LX)
  • Developer Experience (DX)
  • Testability
  • Security
  • Build process/tools
  • Deployment
  • Debugging
  • Scalability
  • Maintenance
  • Community (aka Sharability)

This is the same list Matt used in his presentation, and it works great.  The important thing to recognize is that these factors will weigh differently for you than for me.  Consider assigning each of the 10 factors a decimal weight between 0 and 1 based on fit for any project before you actually fill out your own matrix, and then apply that weighting to your final scores.  Doing this with basic formulas in an Excel doc or Google Spreadsheet is trivial, so use one of those to make it easy on yourself.
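If a spreadsheet feels like overkill, the same weighted-score arithmetic is only a few lines of code. This is just a sketch – the factor names come from the list above, but the weights and raw scores here are made-up numbers, not my actual matrix:

```javascript
// Weighted framework scoring: assign each factor a weight between
// 0 and 1, then sum weight * rawScore for each candidate framework.
const weights = { dx: 1.0, community: 0.9, security: 0.5, deployment: 0.3 };

const rawScores = {
  ember: { dx: 9, community: 9, security: 7, deployment: 8 },
  react: { dx: 8, community: 8, security: 7, deployment: 7 },
};

function weightedTotal(scores, weights) {
  // Sum over every weighted factor
  return Object.keys(weights).reduce(
    (total, factor) => total + weights[factor] * scores[factor],
    0
  );
}

console.log(weightedTotal(rawScores.ember, weights)); // ≈ 23.0
console.log(weightedTotal(rawScores.react, weights)); // ≈ 20.8
```

Change the weights to reflect what matters for your project, and the "winner" can easily flip – which is exactly the point of the exercise.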

I spent 3 days creating the first screens of my new app in React, and then 3 days creating the same screens in Ember (and for those doing the math, 1 day composing this post.)  For 10+ hours each day I enveloped myself in tutorials, videos, docs and podcasts in an effort to learn as much as I could about each framework.  It’s important to actually write code during your evaluation process!  Don’t assume that because a framework works well for others that it will be the best fit for you or your projects.

TL;DR: Ember won.  The areas where Ember really racked up points in my selection matrix were Developer Experience and Community, and instead of talking about why React and Angular didn’t win, I’d like to talk more about why I chose Ember as the best fit for my application redesign.

Developer Experience

DX is a play on the UX term, and as you can imagine, it refers to how a particular tool or library is designed to make developers’ lives easier.  The Learning Experience factor is heavily entwined with the DX factor, especially as your knowledge grows and you move into more advanced code and real-life challenges.

Here’s how Ember.js ♥’s developers.

ember-cli

The ember-cli tool didn’t seem to be a huge benefit for me when I first looked at Ember.js, but being able to ember serve  a new project immediately was encouraging.  As I began generating new routes, templates and models I realized how much time the CLI tool was saving me.
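The generator command itself was lost from this post; it was likely a single `resource` generator along these lines (the User field names here are illustrative, not from the original):

```shell
# Generate a route, template, and model (with typed properties)
# for a User resource in one command
ember generate resource users name:string email:string
```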

The command above will generate code for your User class’ route, template, and model (with properties), saving you from writing, copy/pasting and search/replacing code manually.

What’s interesting is that ember-cli has become the expected way of generating code, testing apps, and even serving the development environment.  Much of the documentation for Ember.js 2.5 has been updated to use the CLI tool, so the Ember.js team is betting big on it.  I expect I’ll put the tool through its paces as I begin setting up tests and adding more to my build process.

LTS release channel

When I first began building and maintaining production Linux environments – especially clustered environments with multiple layers of load-balancing, reverse proxies, caching and lots of dependencies – I learned the importance of Long Term Support (LTS) releases in Ubuntu.  When the Heartbleed and POODLE vulnerabilities in SSL surfaced, for example, patching OpenSSL was critical.  But if you were running on a non-LTS 13.10 version of Ubuntu like I was, suddenly patching a security vulnerability meant upgrading your entire operating system.  Yikes!

Ember has adopted the mantra of “stability without stagnation”, and this resonates loudly with me.  Ember 2.4 was the first LTS release, and every fourth release will also be LTS.

LTS releases will receive critical bugfixes for 6 release cycles (36 weeks). This means we will support Ember 2.4 LTS with critical bugfixes until Ember 2.10.0 is released, around November 4, 2016.

LTS releases will receive security patches for 10 release cycles (60 weeks). This means we will support Ember 2.4 LTS with security patches at least until Ember 2.14.0 is released, around April 21, 2017.

An LTS release channel means addon developers know where to concentrate their efforts, and Ember.js users can upgrade less frequently and more confidently, while still having access to the latest features, bug fixes and security patches.

The Learning Team

Because learning is more than just docs.  The Ember.js Learning Team is responsible for all the different ways that users learn and grow with Ember, and for ensuring that the learning experience remains a central component of future releases.  At EmberConf 2016, Yehuda Katz & Tom Dale announced the role of the Core Team and the subteams.

Watch a video of Ricardo Mendes talking about the new Ember.js Learning Team at EmberConf 2016.

Browser dev tools

The Ember Dev Tools make understanding how your application is working a snap, and they are available as extensions for both Chrome and Firefox.  You can also use a bookmarklet for other browsers.

Want those same dev tools on your mobile device?  Check out ember-cli-remote-inspector, which makes it easy to debug your app remotely over websockets using the Ember browser extension on your desktop.

ES6

I like TypeScript, but it doesn’t feel like I’m writing JavaScript, and I never really got into CoffeeScript – so I really enjoy the idea of writing standard ES6 code.  Since that is the direction browsers and the community are heading anyway, why not start writing that code now?  Tools like Babel.js make authoring in ES6 or ES7 easier by transpiling to JavaScript that current browsers can understand, without requiring full browser support for the latest versions of the language.
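For a taste of what that buys you, here are a few of the ES6 features you can write today and transpile with Babel – classes, destructuring, template literals, and arrow functions. The Rental example is my own illustration, not from the Ember docs:

```javascript
// A handful of ES6 features in one small example
class Rental {
  constructor({ city, bedrooms }) {
    // destructured constructor argument instead of an options object
    this.city = city;
    this.bedrooms = bedrooms;
  }

  get summary() {
    // template literal instead of string concatenation
    return `${this.bedrooms}-bedroom rental in ${this.city}`;
  }
}

const rentals = [
  new Rental({ city: 'Portland', bedrooms: 2 }),
  new Rental({ city: 'Seattle', bedrooms: 3 }),
];

// arrow function in place of function () { ... }
const summaries = rentals.map((r) => r.summary);
console.log(summaries);
// → ['2-bedroom rental in Portland', '3-bedroom rental in Seattle']
```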

The Ember.js Community

Here’s where I’ve fallen in love with Ember.js – the community.  As an active WordPress developer, WordCamp speaker and blogger, I’ve become accustomed to the WordPress community, and maybe even spoiled by it.  The community constantly amazes me with the number of willing contributors to core, to plugins, and to documentation and learning.

Learning

There are some great community resources for learning Ember.js.

Search Meetup.com and you’ll find Ember groups near you who meet often to learn from each other and freely share their knowledge and love of Ember.js.

Forums

Check out http://discuss.emberjs.com/ for active forums using the robust Discourse forum platform (my favorite forum software, and also built on Ember.js!).

Watch the video

Take a look at the video; it’s long, but it does a great job demonstrating the thought process behind selecting the right JavaScript framework for any project or organization.  The weighting of the factors for Matt or for me may be very different from yours, so be sure to go through the exercise yourself and see which framework best fits your requirements.