Why I chose Ember over React for my next web app project

The Ember vs React discussion is still quite lively.  Recently I began work on a redesign of a responsive mobile app, and I was once again faced with the decisions about which frameworks, tools and methods I would commit to using.   I firmly believe in using the right tools for the right jobs, but there are a whole lot of factors to consider – each of them having a varying amount of importance or relevance to any new project.  If you are in the same spot as I am, you might find my discoveries helpful to your own learning journey.

A one-week experiment

A week ago, I decided to spend 7 full days learning a couple of the more popular MV** frameworks, after which I would write a little about my learnings and make a choice for my newest project.  I had watched a presentation called Comparing Hot JavaScript Frameworks: AngularJS, Ember.js and React.js by Matt Raible, and was inspired to quantify my framework selection a little more thoroughly, even if inevitably I make my choice from the gut.

So I performed my experiment on two of the top three JavaScript frameworks – Ember.js and React.js.  I’ve played in the sandbox with AngularJS in the past, and have attended a number of developer meetups and sessions as well, so I already had an idea of what developing with AngularJS was like.  Here are the factors I considered:

  • Learning Experience (LX)
  • Developer Experience (DX)
  • Testability
  • Security
  • Build process/tools
  • Deployment
  • Debugging
  • Scalability
  • Maintenance
  • Community (aka Sharability)

This is the same list Matt used in his presentation, and it works great.  The important thing to recognize is that these factors will weigh differently for you than for me.  Consider assigning each of the 10 factors a decimal weight between 0 and 1 based on fit for any project before you actually fill out your own matrix, and then apply that weighting to your final scores.  Doing this with basic formulas in an Excel doc or Google Spreadsheet is trivial, so use one of those to make it easy on yourself.

I spent 3 days creating the first screens of my new app in React, and then 3 days creating the same screens in Ember (and for those doing the math, 1 day composing this post.)  For 10+ hours each day I enveloped myself in tutorials, videos, docs and podcasts in an effort to learn as much as I could about each framework.  It’s important to actually write code during your evaluation process!  Don’t assume that because a framework works well for others that it will be the best fit for you or your projects.

TLDR; Ember won.  The areas where Ember really racked up points in my selection matrix were in Developer Experience and Community, and instead of talking about why React and Angular didn’t win, I’d like to talk more about why I chose Ember as the best fit for my application redesign.

Developer Experience

DX is a play on the UX term, and as you can imagine, it refers to how a particular tool or library is designed to make developers’ lives easier.  The Learning Experience factor is heavily entwined with the DX factor, especially as your knowledge grows and you move into more advanced code and real-life challenges.

Here’s how Ember.js ♥’s developers.

ember-cli

The ember-cli tool didn’t seem to be a huge benefit for me when I first looked at Ember.js, but being able to ember serve  a new project immediately was encouraging.  As I began generating new routes, templates and models I realized how much time the CLI tool was saving me.
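For example, scaffolding a user resource looks something like this (the resource name and properties here are just illustrative):

  # scaffolds the route, template, and model for a user
  ember generate resource user firstName:string lastName:string email:string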

The command above will generate code for your User class’ route, template, and model (with properties) saving you from writing, copy/pasting and search/replacing code manually.

What’s interesting is that ember-cli has become the expected way of generating code, testing apps, and even serving the development environment.  And much or most of the documentation for Ember.js 2.5 has been updated to use the CLI tool, so the Ember.js team is betting big on it.  I expect that I’ll put the tool through its paces more as I begin setting up tests and adding more to my build process.
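In day-to-day use that boils down to a handful of short commands, for example:

  ember serve                              # local development server with live reload
  ember test                               # run the test suite
  ember build --environment=production     # production build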

LTS release channel

When I first began building and maintaining production Linux environments – especially clustered environments with multiple layers of load-balancing, reverse proxies, caching and lots of dependencies – I learned the importance of Long Term Support (LTS) releases in Ubuntu.  When the Heartbleed and POODLE vulnerabilities in SSL surfaced, for example, patching OpenSSL was critical.  But if you were running on a non-LTS 13.10 version of Ubuntu like I was, suddenly patching a security vulnerability meant upgrading your entire operating system.  Yikes!

Ember has adopted the mantra of “stability without stagnation”, and this resonates loudly with me.  Ember 2.4 was the first LTS release, and every fourth release will also be LTS.

LTS releases will receive critical bugfixes for 6 release cycles (36 weeks). This means we will support Ember 2.4 LTS with critical bugfixes until Ember 2.10.0 is released, around November 4, 2016.

LTS releases will receive security patches for 10 release cycles (60 weeks). This means we will support Ember 2.4 LTS with security patches at least until Ember 2.14.0 is released, around April 21, 2017.

An LTS release channel means addon developers know where to concentrate their efforts, and Ember.js users can upgrade less frequently and more confidently, while still having access to the latest features, bug fixes and security patches.

The Learning Team

Because learning is more than just docs.  The Ember.js Learning Team is responsible for all the different ways that users learn and grow with Ember, and for ensuring that the learning experience remains a central component of future releases.  At EmberConf 2016, Yehuda Katz & Tom Dale announced the role of the Core Team and the subteams.

Watch a video of Ricardo Mendes talking about the new Ember.js Learning Team at EmberConf 2016.

Browser dev tools

The Ember Dev Tools make understanding how your application is working a snap, and they are available as extensions for both Chrome and Firefox.  You can also use a bookmarklet for other browsers.

Want those same dev tools on your mobile device?  Check out the ember-cli-remote-inspector which makes it easy to debug your application on your mobile device using your Ember browser extension through websockets.
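Installing the addon is the usual one-liner:

  ember install ember-cli-remote-inspector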

ES6

I like TypeScript, but it doesn’t feel like I’m writing JavaScript, and I never really got into CoffeeScript.  What I really enjoy is the idea of writing standard ES6 code.  Since that is the direction browsers and the community are heading anyway, why not start writing that code now?  Tools like Babel.js make authoring in ES6 or ES7 easier by transpiling to JavaScript that current browsers can understand, without requiring full browser support for the latest versions of the language.

The Ember.js Community

Here’s where I’ve fallen in love with Ember.js – the community.  As an active WordPress developer, WordCamp speaker and blogger, I’ve become accustomed to the WordPress community, and maybe even spoiled by it.  The community constantly amazes me with the number of willing contributors to core, to plugins, and to documentation and learning.

Learning

There are some great community resources for learning Ember.js:

Search Meetup.com and you’ll find Ember groups near you who meet often to learn from each other and freely share their knowledge and love of Ember.js.

Forums

Check out http://discuss.emberjs.com/ for active forums using the robust Discourse forum platform (my favorite forum software, and also built on Ember.js!)

Watch the video

Take a look at the video; it’s long, but it does a great job demonstrating the thought process behind selecting the right JavaScript framework for any project or organization.  The weighting of the factors for Matt or me may be very different from yours, so be sure to go through the exercise yourself and see which framework is best for your requirements.

 

npm install returns an “Unsupported URL Type” error

While working on an Ember.js app today, I hit an odd snag.  Running a standard npm install was bombing terribly on my Mac.  And 10 minutes of my google-fu wasn’t turning up any good leads.  I had searched for “npm install ERR! Unsupported URL Type”, among other variants, but I had missed something.

The Solution

What I had missed was a woefully out of date npm version.  Node.js was already at latest (4.4.3 at the time of this post), but I remember screwing around with re-installing npm on this machine the other day and somehow I was running npm v2.15.1.
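Updating npm itself is a one-liner (add sudo if your global install requires it):

  npm install -g npm
  npm --version    # verify you're on the latest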

Updating to the latest version of npm fixed the errors I encountered with npm install.  And I’m sure there’s another lesson in there somewhere about nuking and reinstalling core software tools, but maybe that will be detailed in a future post.

TLDR; — I won’t be detailing that in a future post.

PHP class not found while using Composer

[error] FastCGI sent in stderr: “PHP message: PHP Fatal error: Class ‘gkn\App’ not found.

Here’s how to solve one of the more frustrating auto-loader errors I’ve encountered in Composer.

Composer is more of a dependency manager than a package manager because it only manages packages on a per-project basis.  That being said, it’s almost always made my life easier when referencing other PHP libraries as well as my own.  Until I did something stupid.

The problem

Earlier today, an updated build script overwrote the “vendor” folder on the remote server I was deploying to.  Suddenly my app started throwing HTTP 500 errors, and I found the fatal “Class ‘gkn\App’ not found” error shown at the top of this post in the nginx log file.

Hmm, Composer is supposed to have created an auto-loader for this, so I simply tried to re-install my dependencies via Composer on the remote server.  I typed out:
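  # re-install and update dependencies (the standard Composer workflow)
  composer install
  composer update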

The installation and update completed normally with no errors. But the class not found error persisted when refreshing my web app in the browser. The problem is that Composer generates a PHP ClassLoader class to auto-load your files, and that auto-generated PHP class has a unique name. Open your vendor/autoload.php file and you’ll see it; a quick way to spot the generated class name from the shell:
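  # the generated loader class name includes a unique hash
  # (e.g. ComposerAutoloaderInitf3a1b2c4..., the hash shown here is illustrative)
  grep ComposerAutoloaderInit vendor/autoload.php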

The solution

A colleague pointed out my error almost immediately via Slack; it’s obviously something he’s run up against before (thanks, Paul Frazee!)  Simply put, I also needed to force a new dump of the autoload files by typing:
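  # regenerate the autoload files without re-running install or update
  composer dump-autoload --optimize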

The composer dump-autoload --optimize command regenerates your autoload files without having to run install or update.  The --optimize argument is especially important when running in your production environment for performance reasons. The optimization will speed up the time it takes for the auto-loader to load your classes, which can be relatively slow and cause your pages’ performance to suffer.  It does this by converting the PSR-0/4 packages into classmap packages, which isn’t as friendly while in development, but gives you faster requests on the web server.

Have a similar issue, or found a better solution?  Leave a comment below!

The .1 Release is the Listening Release

Developers love to launch.  It’s the culmination of weeks or months of work (if it’s years, you better be building an operating system) and the public is about to see what you’ve created.  But it’s far from the end of your big release.

And if you’re juggling multiple projects, it’s tempting to wipe your hands clean after a site or app launches, and change your full focus to a new project.  This is especially true when your team has truly done their best to maintain quality throughout the design and development process.  You’ve thought of everything, tested with a small group of trusted users, and you only had so much time booked in the calendar, so you call it done.

What may be missing from your schedule is the “.1 release”.

A recent launch of an intranet portal is a great example.  Our team did a fantastic job of innovating during the design & dev phases, as well as leveraging some cool new interaction methods and technical integrations (new to us anyway!)  We estimated our development time reasonably well, and set the expectations with ourselves that we were delivering a 1.0 release, and we’d come back soon to address all of the experiential things that we knew users would have to tolerate for a bit (e.g., slower-than-ideal authentication/authorization on first load) while we focused on a different project with a totally different set of end users.

That was almost 6 months ago.

The problem is that because there was another high-priority project waiting in the queue, we consciously skipped the .1 release of the previous project.  But the .1 release is the release where you monitor the application and the experience your users are having, and respond to their needs quickly.  The .1 release is where we make good on all those IOUs we made while disclaiming “we must deliver something now, we committed to a date and we’ll get in trouble if we miss it” and promising users that each of their individual needs will be addressed in the next version.  This is all fairly standard and valid, as long as you actually plan the time in your project schedule necessary to listen to your users and improve their experience.

When we constantly deliver at 100% (or greater), we don’t give ourselves enough buffer.  Not just setting aside time to mitigate known and unknown risks throughout a project (the latter is frequently forgotten), but buffer to sit back and listen after a major release.  We should have confidence that we couldn’t possibly have gotten everything perfect for every user, so allocating time to ensure that even their smaller needs are addressed before we move on to another project is imperative.  The small things may determine whether or not a user finds value in your product, and a healthy user growth curve is supported by happy users.

The .1 Release is the “Listening Release”

Once you’ve launched that first or other major release, celebrate the win!  Get a good night’s sleep.  Then come to work and just listen.  Certainly we’ve all built multiple feedback loops into our applications, right?  We have New Relic analytics for our server performance, exceptions logged to a database for analysis, a feedback form and forums for users to contact us … but why did we build all of that into our app in the first place?  We did so to enable us to know more about how our app is doing, and more importantly how our users are doing, so that we could take action where necessary.  Listening but failing to follow up with improvements and fixes right away could potentially be worse than not listening at all because you’ve set users’ expectations incorrectly.

There are very objective reasons for dedicating time and effort to listen to, and fully understand, your users. Such as meeting expectations better, and being more efficient by avoiding rework. But there’s also an emotional component to the exercise itself — it builds trust with users, and demonstrates a genuine empathy for their needs.

Failing to Listen Guarantees You Are Forever Playing Catch-Up

Most of us don’t get to set all of our own priorities – they are dictated or influenced by customers, managers, colleagues, holiday shopping seasons, and such.  We have multiple projects and products in our workshop, and we constantly switch context to whatever is most important right now.  And while there are some environments where that works well, context-switching in software development is expensive in terms of time.  The constant drumbeat of “deliver, deliver, deliver” often means that important phases of an application’s life cycle get omitted.

Think about whose priorities we’re catering to when the consequences of delivering later versus earlier are greater than the consequences of not delivering the best experience to our users.  (Hint: it’s not your users’ priorities you’re optimizing for …)

When we don’t schedule the time in the project plan to follow up with a .1 release, users feel unheard and abandoned.  Sometimes for a long time.  Yes, you hit your deadline, and you’re on to put out the next fire (because every project is the most important project, right?)  However, we’ve only served our own priorities at the expense of ignoring those of our users.  And as long as we continue to release without listening, and without following up immediately with a “listening release”, we are guaranteed to always have a growing backlog and worse, a growing number of frustrated users.

As Ron Swanson said, “Never half-ass two things.  Whole-ass one thing.”

A Better Experience for Everyone

And I do mean everyone!  The best way to ensure that what you’ve created is actually improving the lives of people (because at our core, makers of all disciplines want to build things that impact people) is to release your best work, then take the time to listen to users, set correct expectations, and address their needs as best as you can.  Don’t declare “mission accomplished” and move on before your users are happy.  This builds trust with your users, which is a very necessary component to user adoption.

The triple constraint of project management means that increasing quality isn’t free.  But decide early on that the definition of success is based on a level of quality and user acceptance, not just the delivery date, and that you will spend the additional time and cost required to make users happy.  That decision will make building and using your product a much more enjoyable experience for everyone.

Just remember who you are ultimately designing for, and spend most of your time serving their needs and priorities.

(Featured image courtesy of Flickr.com/Rumpleteaser)

How to fix “PHP Fatal error: Call to undefined function imageconvolution()”

If you’re having trouble with uploading images to WordPress (or other PHP frameworks) and seeing blank spots where images should be, you may need to be sure that LibGD is installed.  Check your Apache or Nginx logs for fatal PHP errors which occur when trying to call undefined functions, for example:

  • PHP Fatal error:  Call to undefined function imageconvolution()
  • PHP Fatal error:  Call to undefined function imagerotate()
  • PHP Fatal error:  Call to undefined function imagecreatefromjpeg()
  • … and other newer GD functions

Here’s an example I gathered from an Nginx log after attempting to rebuild some thumbnails in WordPress using the AJAX Thumbnail Rebuild plugin – it was the imageconvolution() fatal error listed above.

So, let’s fix it!

Even if you’re on the latest Ubuntu 14.04, you may find yourself in a situation where your installed version of PHP was not compiled with libgd, or it is not installed via your package manager.  In that case, it’s easy to install!  For the most part, you shouldn’t have to recompile PHP, so try installing/updating libgd using your package manager.

From the libgd FAQs page:

If you want gd for a PHP application, just do (for Fedora):
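  # Fedora: the php-gd package provides GD support for PHP
  yum install php-gd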

Or, for Red Hat Enterprise Linux or CentOS:
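  sudo yum install php-gd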

Then do:
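  # restart Apache (httpd), or php-fpm if that serves your PHP, so the new extension is loaded
  sudo service httpd restart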

If your system is Debian based (Debian/Ubuntu/…) then you need to install php5-gd package:
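  sudo apt-get install php5-gd
  # then restart your web server or PHP service, for example:
  sudo service apache2 restart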

Backticks and end-of-line characters in vvv-init.sh

If you’ve ever chased your tail for half an hour on vague errors in your vvv-init.sh files in the popular Varying Vagrant Vagrants development tool for WordPress, here’s a few things to check.

Backticks in SQL statements

If the MySQL statements you’re executing have object names with special characters, you’ll most likely need to use backticks (`) to enclose them within your script.  Be sure to escape those backtick characters with backslashes (\`) to ensure errors aren’t generated while VVV is executing your provisioning script.

Here’s an example of escaped backticks in vvv-init.sh that works:
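  # illustrative snippet: the database name, user, and password are examples only
  mysql -u root --password=root -e "CREATE DATABASE IF NOT EXISTS \`my-example-db\`"
  mysql -u root --password=root -e "GRANT ALL PRIVILEGES ON \`my-example-db\`.* TO wp@localhost IDENTIFIED BY 'wp';"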

If you don’t escape backticks, the shell will treat them as command substitution and you’ll likely see confusing “command not found” errors while the provisioning script runs.

Unix/Linux/OSX line ending characters

If you’re editing your vvv-init.sh file in Windows, make sure you’re saving those files with the correct Unix-style line endings, and not Windows-style (or the older Mac-style).  A file saved with Windows-style CRLF endings will likely produce vague errors like “$'\r': command not found” when the provisioning script runs.
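If a file has already been saved with CRLF endings, converting it is quick (assuming dos2unix is available from your package manager):

  dos2unix vvv-init.sh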

Cluster Fudge: Recipes for WordPress in the Cloud (WordCamp Austin 2014)

About a month ago, I gave a talk at WordCamp Austin 2014 about running enterprise-class WordPress in clustered, cloud-hosted environments.  Thanks to all who attended, and for your great questions! While it was standing room only for “Cluster Fudge: Recipes for WordPress in the Cloud“, I hope that everyone who wanted to get in was able to see my presentation.

I’d love to keep the discussion going, so feel free to offer your own best practices and tips for success in running WordPress in the cloud at scale!  You can leave comments at the bottom of the page.

Overview

Your self-hosted WordPress site is quickly growing in popularity and page views. Or maybe you want to get away from that costly enterprise CMS currently on your plate and adopt a delectable, open-source platform. There are many reasons you might need the performance and redundancy of a clustered server solution, and I’ll show you how to mix up the ingredients needed to throw together a successful cloud-hosted WordPress environment that’s right for you.

We’ll talk about common multi-server configurations, from cheap and quick for the cost-conscious business, to robust and complex for the high level of control an enterprise demands. You will leave with a better knowledge of which web server makes sense for your requirements, and learn some tips and tricks to better caching without sacrificing the dynamic nature of WordPress.

Downloadable code snippets and example config files will help get you started in your own cloud environment.

Example Code and Configuration Files

Visit my GitHub repo for examples of server configuration files optimized for WordPress, and PDF versions of my slides with speaker notes.

Slides

If you missed it, you can view my slides embedded from SlideShare below.

 

Adjusted Bounce Rate WordPress Plugin released!

I’ve just released a new WordPress plugin for better tracking of adjusted bounce rate, time on page, and time on site metrics in Google Analytics!

The problem is that Google Analytics does not properly track some important engagement metrics like Avg Time on Site, Avg Session Duration, and Bounce Rate. This plugin enhances a commonly-accepted JavaScript method of improving the accuracy of these stats, but with some extra features and options.

This plugin addresses the issues as identified by the Google Analytics team at http://analytics.blogspot.com/2012/07/tracking-adjusted-bounce-rate-in-google.html.

Features

  • Set the engagement tracking event interval. (Defaults to 10 secs.)
  • Set the max engagement time, which allows you to customize when the session should be considered abandoned. (Defaults to 20 mins.)
  • Set the minimum engagement time, which can be used to set an initial amount of time required to count the user as having engaged. (Defaults to 10 secs.)
  • Customize the event Category, Action and Label names to be displayed in Google Analytics.
  • Uses either the old pageTracker code, the newer asynchronous code, or the newest Universal Analytics code.
  • Choose header or footer placement for the code.
  • Compatible with Yoast’s Google Analytics for WordPress. For example, detects if analytics were loaded, or if they are disabled because of the currently logged in user’s role.

Download

This plugin is available from the WordPress Plugin Repository at http://wordpress.org/plugins/adjusted-bounce-rate/, and from GitHub at https://github.com/grantnorwood/adjusted-bounce-rate. Please submit issues to the GitHub repo for the fastest response!

Screen Shot

Adjusted Bounce Rate WordPress Plugin

A fix for outbound & download links not working in Yoast’s Google Analytics for WordPress

The Problem

I just noticed on one of my sites that some download links to PDF, zip, and other assets had an almost total drop in download events in Google Analytics after a recent Yoast Google Analytics for WordPress plugin update.  So I began to troubleshoot …

Track outbound clicks and downloads is enabled in the Yoast plugin settings.

The firing of trackEvent() is handled by the popular Yoast Google Analytics plugin, which automatically adds onClick() javascript handlers to fire the correct event for <a /> tags, using the GA Event Category of “download” and the domain or full URL to the file (determined by your plugin settings) as the Event Action.  It had worked for over a year, and is working normally on our other web properties, all of which use the same latest version of the Google Analytics for WordPress plugin.

I enabled Debug Mode with the Yoast Google Analytics plugin and set the option to log all users. I typically prefer not to skew my reporting with administrator traffic, but in order to see the Debug Mode output in your browser’s javascript console, you must select the “Ignore no-one” option on the settings page.

Set ignore users option to “Ignore no-one”.
Enable debug mode for browsers with Javascript consoles, or alternatively, you can use the Firebug Lite feature.

I was then able to monitor the console and see the Google Analytics pageview event fire, however, I did not see the “download” event fire.  I was stuck.

The Solution

It was a partner of mine, Colin Alsheimer from Weber Shandwick (@colinize on Twitter), who figured out the root cause:  relative URLs in the download links.

The tracking beacon finally fires successfully when using fully qualified URLs, not relative URLs.

When I viewed the source code of my site, I could see that only the download & outbound links with fully-qualified URLs had onclick handlers attached in order to properly fire _trackEvent(), and not the links with relative URLs. After updating my page to use the full URLs, those download links immediately began working again, and I was able to see the event fire in the debug console as well as show up in my GA real-time event tracking.

Success!  The root cause of this seems to be a bug in the Yoast plugin, as GA should allow any text string – relative URL or not – as an event action, and I’ve reported the issue to them in the WordPress forums.  It seems that the “link sanitization” feature that rewrites relative URLs with full URLs was added to the plugin in v4.0.2; however, since then it has stopped adding the click event handlers to links with relative URLs.

The moral of the story is that relative links rarely come back and bite you, and they are so convenient when moving content and code between environments.  But the cost of that convenience is a very small chance that not using a full URL will break something, and it may or may not fit your individual tolerance for risk.

Have a comment about using relative links in Google Analytics event tracking?  Help others out by posting below!

Googlebot can’t access your site (Scary, right?)

Yesterday, many Google Webmaster Tools users received unpleasant notifications that their websites were suddenly inaccessible to Google.  After 2 hours of aggressive troubleshooting last night, and another couple hours spent this morning, it seems that this may be an issue on Google’s side.  Search Engine Roundtable just posted an article confirming more reports of problems with Google accessing robots.txt.

In my own case, the naked msdf.org/robots.txt URL is accessible from every other browser, device, and third-party tool in my arsenal, yet Google has about an 80% error rate in accessing my robots.txt file.  While the www version is working perfectly with no crawl errors or problems fetching as Googlebot, the non-www version is having much less success.  (Please note, you may also receive duplicate “Googlebot can’t access your site” errors for both www and non-www versions.)
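A quick check from the command line confirms whether the file itself is reachable:

  # fetch just the response headers for the robots.txt in question
  curl -I http://msdf.org/robots.txt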

Attempting to use the Fetch as Google tool within Webmaster Tools was helpful in understanding the problem, but ultimately the problem seems to be with Google, and your site’s index status is likely just fine. (Whew!)  But use caution in writing off warnings from Google; you could very well be receiving these email warnings for good reason, especially if you received them before yesterday (on or before April 25, 2013).

Matt Cutts commented on the Google forum discussion earlier, acknowledging this could be an issue with Google, so I recommend you check that forum thread out.  I’ll keep an eye on this until a resolution comes around, but if you’ve already tested your site in the Fetch as Google tool and all other bots are working normally, you may actually be able to do something you almost never (ever) want to do – ignore a Google Webmaster Tools alert.

 

UPDATE  (2013-Apr-27)

Google’s John Mueller has indicated that this issue should now be resolved.  https://productforums.google.com/d/msg/webmasters/mY75bBb3c3c/ARQqAWOf_6YJ

 
