Run a single unit test with Mocha/Chai

Sure, you can add file globbing patterns to a CLI arg to run a single JavaScript test, or group of tests, but it’s not super convenient and often requires a trip to your README to remember how to do it. Here’s a quicker way.

it.only()

Instead of hunting for the exact CLI params and globs, just add .only  to your it()  as shown below:
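(A minimal sketch; the test names and assertions are just placeholders.)

```js
const { expect } = require('chai');

describe('math', () => {
  // Only this test will run while .only is present
  it.only('adds numbers', () => {
    expect(1 + 1).to.equal(2);
  });

  it('subtracts numbers', () => {
    expect(2 - 1).to.equal(1); // skipped while the test above has .only
  });
});
```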

Take note:  In older versions of Mocha, using .only() on more than one it() runs only the last one marked; Mocha 3 and later will run every test marked with .only().

describe.only()

And it works with a single group of tests, as well! Just add .only to your describe() as shown below:
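(Again, a minimal sketch with placeholder tests.)

```js
const { expect } = require('chai');

describe.only('math', () => {
  // Every test in this group runs; all other groups are skipped
  it('adds numbers', () => {
    expect(1 + 1).to.equal(2);
  });
});

describe('strings', () => {
  it('concatenates', () => {
    expect('a' + 'b').to.equal('ab'); // skipped while the group above has .only
  });
});
```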

Wrap up

There you have it! Single test groups and/or cases without screwing with CLI parameters.

Be sure to remove all instances of .only from your describe()  and it()  statements before committing to a repo, or you risk having your CI/CD pipeline run only a subset of your tests!

How to fix this Atom linter error: “Make sure `phpmd` is installed and on your PATH”

The linter-phpmd plugin for Atom is popular with PHP and WordPress developers, but it relies on having phpmd installed and available on your PATH.  Without it, you might see an error like this in your Atom Developer Tools:

[Linter] Error running PHPMD Error: Failed to spawn command `phpmd`. Make sure `phpmd` is installed and on your PATH

If you’ve seen this error, the fix is quite simple.  You just need to:

  1. Install phpmd – we’ll use Composer for this.
  2. Add it to your path – this makes it available anywhere on your command line.

Is Composer already installed globally?

If so, consider running composer self-update  to be sure you’re on the latest version.

If not, jump over to getcomposer.org to install it globally on your local system.  Then come back and continue with the steps in this post.

You can check if it’s installed by attempting to get the version.  It should return something like “Composer version 1.4.2 2017-05-17 08:17:52”.
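For example:

```sh
$ composer --version
Composer version 1.4.2 2017-05-17 08:17:52
```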

Install phpmd

Installing phpmd globally is required to ensure it is available for Atom to use.  Start by running the following command:
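```sh
# Installs phpmd into Composer's global vendor/bin directory
$ composer global require phpmd/phpmd
```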

You can now run phpmd using the full path to the installed binary:
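(The exact vendor path can vary by Composer version and OS; on macOS it’s typically under ~/.composer.  The src/ directory and rulesets below are just an example.)

```sh
# Lint a directory using the built-in "cleancode" and "codesize" rulesets
$ ~/.composer/vendor/bin/phpmd src/ text cleancode,codesize
```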

That’s helpful, but let’s make sure the command is available everywhere without the full path.

Add phpmd to your PATH

You could add just phpmd to your path, but I find it’s much easier to add all of your globally-installed Composer binaries to your path, so that they are always available to you.  (Hey, they’re supposed to be global, right?!)

To do that, use your favorite terminal editor to edit your ~/.bash_profile (or ~/.bashrc) file.

You’ll need to add the following lines somewhere in the file:
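(This assumes Composer’s global binaries live under ~/.composer; on some setups the directory is ~/.config/composer instead.)

```sh
# Make globally-installed Composer binaries (including phpmd) available everywhere
export PATH="$HOME/.composer/vendor/bin:$PATH"
```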

Then, be sure to reload your environment variables in your terminal by running:
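```sh
$ source ~/.bash_profile
```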

Verify it works!

Try to run the raw phpmd command to get the currently installed version:
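(The version number below is just an example; yours will differ.)

```sh
$ phpmd --version
PHPMD 2.6.0
```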

Those pesky Atom phpmd linter errors should now have melted away 🙂

WaitFor in WebDriver.io

Have you ever seen your WebDriver test fail with generic errors such as element is not clickable at point, other element would receive the click, or maybe just a timeout when trying to locate an element on your page?

Well, whether you have or you haven’t (yet), this post will help you minimize random failures while running automated browser tests via WebDriver.io and Selenium.

Best practices

There are some best practices in WebDriver.io that you should follow when interacting with elements on your pages.  Below are some suggestions for improving the quality and stability of your e2e tests.

Consider the following basic Page Object example which references a “Next” button and an “Email” input box:
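(A minimal sketch of such a Page Object; the selectors are illustrative.)

```js
class FormPage {
  get nextButton() { return $('#next-button'); }
  get emailInput() { return $('#email'); }

  clickNextButton() {
    this.nextButton.click();
  }

  setEmailInputValue(value) {
    this.emailInput.setValue(value);
  }
}

module.exports = new FormPage();
```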

You must wait for an element to be visible before you interact with it.

The example above has a function which simulates the “Next” button’s click; however, it doesn’t check to make sure the element is actually visible on the page.  In WebDriver, an element can exist but not be visible, and therefore can’t be interacted with.

If our “Next” button is hidden by default, and perhaps only shown after a user interacts with other parts of the page, we’ll need to wait for the element to be shown before we click it using the element.waitForVisible() method.

Let’s improve our clickNextButton() function:
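(Same Page Object as sketched above, now with the wait added.)

```js
// In FormPage
clickNextButton() {
  // Wait for the button to become visible (up to the default timeout) before clicking
  this.nextButton.waitForVisible();
  this.nextButton.click();
}
```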

You must wait for an element to be enabled before you click or type.

There are times when you must wait for other conditions to be true, as well.  For example, we can’t call the setValue() method on an input element if it’s disabled, the same way a user can’t type in a disabled textbox.  We’ll use the element.waitForEnabled() accordingly.

Let’s improve our setEmailInputValue() function:
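(Again building on the sketch above.)

```js
// In FormPage
setEmailInputValue(value) {
  // Wait for the textbox to be both visible and enabled before typing into it
  this.emailInput.waitForVisible();
  this.emailInput.waitForEnabled();
  this.emailInput.setValue(value);
}
```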

Above, we’ve now made the setEmailInputValue() more stable because our interaction method checks to be sure the textbox is both visible and enabled before it attempts to set the value.

Set custom timeouts when using waitFor*()

There is a default timeout set for your WebDriver session, but for certain operations, you may need to increase the timeout when waiting for elements.  In this case, simply specify the number of ms to wait as the first parameter, as shown below.
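(The 10-second value here is just an example.)

```js
// Wait up to 10 seconds (10000 ms) instead of the default timeout
this.nextButton.waitForVisible(10000);
```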

Scrolling to elements

There are also times when the element is off the page and must be scrolled to in order to click it.  Consider using the element.scroll() function to ensure the element is within the viewport before interacting with it.
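For example:

```js
// Scroll the element into the viewport, then interact with it
this.nextButton.scroll();
this.nextButton.click();
```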

waitFor() all the things

There are plenty more waitFor*() functions built into WebDriver, just check out the API for a comprehensive list.  And should one of those convenient utility functions not fit your particular use case, you can further customize things by using the waitUntil() function to wait for any condition you like.
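For example, something along these lines (the condition and message are illustrative):

```js
// Wait until an arbitrary condition becomes true
browser.waitUntil(
  () => this.nextButton.getText() === 'Finish',
  5000,                                      // timeout in ms
  'expected the button label to change to "Finish" within 5s'
);
```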

Next up: disabling CSS animations

In my next post, I’ll cover how to disable CSS animations in your app to improve stability and reduce the number of random failures in WebDriver.io.

Review a PR then get back to work … faster!

I never mind reviewing PRs from my coworkers, but I do want to minimize interruptions knowing I typically have other tasks in-flight.  To help make the overhead of switching contexts (and branches) more efficient, try using the git pr  command found in https://github.com/tj/git-extras!

You’ll probably want to install git-extras before you continue with this post.  (It’s super quick and easy using Homebrew, as described below.)

Installation

For most Mac users, this is all you need:
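```sh
$ brew install git-extras
```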

Note: Installing from Homebrew will not give you the option to omit certain git-extras commands if they conflict with existing git aliases. To have this option, build from source.

Show me how!

Let’s say you’re chugging along on your own feature branch and you’ve reached a stopping point where you can safely switch contexts and help keep your team’s PRs from piling up …

  1. Commit or stash your changes.  You don’t want to lose any work, and trying to checkout another branch will result in an error, so finish up what you’re doing and commit/stash first.  Also, you might consider that if you’re not ready to commit the code changes you’ve made, you might not be cognitively ready to switch contexts anyway  🤓
  2. Pull the PR down for review.  Just run $ git pr 123 from within your existing working directory, replacing 123 with the appropriate PR number, or even the full URL to the PR.  If you’re working from a fork, you can also specify a different remote, e.g., $ git pr 123 upstream.  (See the git pr docs for more information.)  Doing so will automatically check out the PR branch as pr/123.  Bonus!  If no new packages have been added/updated in package.json, you might not even need to restart your web server when checking out other branches.  (Lingua changes may still require a restart, however.)
  3. Do your review thang.  Bring the site up, test it thoroughly, leave good feedback on GitHub … (you know the drill)
  4. Switch back to your branch.  You’ll need to switch back to your own feature branch before cleaning up the PR branch you pulled down.
  5. Clean it up.  Run $ git pr clean  to delete the PR branch and continue with your own feature work.

You should now be back in your own lane after helping to get your team’s commits reviewed and merged, with a strong sense of pride and accomplishment and a minimal amount of time spent.

Further reading

Check out the entire list of commands and aliases added by git-extras, or check out this Vimeo screencast on some of the more popular git-extras commands.

“The box ‘bento/ubuntu-16.04’ could not be found” error when spinning up a new Trellis project

I was spinning up a new website using one of my favorite WordPress stacks built on Trellis and Vagrant, when I encountered the following error: The box ‘bento/ubuntu-16.04’ could not be found or could not be accessed in the remote catalog.

I had recently updated Vagrant from 1.8.5 to 1.8.7, and had also recently started using Ubuntu 16.04 for my new projects, updating from the previous LTS version 14.04 I had relied on for years.  So that gave me two avenues to go down – was it the newer Vagrant version or the updated Ubuntu version that was breaking things?

Troubleshooting

The Trellis docs say that “Vagrant >= 1.8.5” is required, so my new version 1.8.7 should work just fine.  On the Roots.io discussion forum, many users found that rolling back to Vagrant 1.8.5 worked for them.  But I typically want to use the latest version of software, so I didn’t stop there.

Then I came across this issue on Vagrant’s GitHub page, detailing how Vagrant’s embedded version of curl was causing a conflict with macOS Sierra on my laptop.  Many folks have found that removing that embedded version, or linking it to the Mac’s own version, is a good solution.

The Solution(s)

To work around this error, you’ll want to either remove that embedded curl file, or re-link it to your Mac’s version.  Here’s how to do that …

Remove Vagrant’s embedded curl
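Assuming the default Vagrant install location on macOS:

```sh
$ sudo rm -f /opt/vagrant/embedded/bin/curl
```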

This simply removes the embedded curl library, and seems to cause Vagrant to fall back to the macOS version.

or …

Link Vagrant’s embedded curl to the Mac host
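Again assuming the default paths:

```sh
$ sudo rm /opt/vagrant/embedded/bin/curl
$ sudo ln -s /usr/bin/curl /opt/vagrant/embedded/bin/curl
```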

This more specifically forces Vagrant to call the macOS version of curl directly using a symlink.

Either of these workarounds should fix your issue.  However, if you continue to have problems starting up a new Vagrant/Trellis box, please leave a comment below!

Why I chose Ember over React for my next web app project

The Ember vs React discussion is still quite lively.  Recently I began work on a redesign of a responsive mobile app, and I was once again faced with decisions about which frameworks, tools and methods I would commit to using.  I firmly believe in using the right tools for the right jobs, but there are a whole lot of factors to consider – each of them having a varying amount of importance or relevance to any new project.  If you are in the same spot as I am, you might find my discoveries helpful to your own learning journey.

A one-week experiment

A week ago, I decided to spend 7 full days learning a couple of the more popular MV** frameworks, after which I would write a little about my learnings and make a choice for my newest project.  I had watched a presentation called Comparing Hot JavaScript Frameworks: AngularJS, Ember.js and React.js by Matt Raible, and was inspired to quantify my framework selection a little more thoroughly, even if inevitably I make my choice from the gut.

So I performed my experiment on two of the top three JavaScript frameworks – Ember.js and React.js.  I’ve played in the sandbox with AngularJS in the past, and have attended a number of developer meetups and sessions as well, so I already had an idea of what developing with AngularJS was like.  Here are the factors I considered:

  • Learning Experience (LX)
  • Developer Experience (DX)
  • Testability
  • Security
  • Build process/tools
  • Deployment
  • Debugging
  • Scalability
  • Maintenance
  • Community (aka Sharability)

This is the same list Matt used in his presentation, and it works great.  The important thing to recognize is that these factors will weigh differently for you than for me.  Consider assigning each of the 10 factors a decimal weight between 0 and 1 based on fit for any project before you actually fill out your own matrix, and then apply that weighting to your final scores.  Doing this with basic formulas in an Excel doc or Google Spreadsheet is trivial, so use one of those to make it easy on yourself.
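If you’d rather script the math than spreadsheet it, it’s just a weighted sum (the factors, weights, and scores below are made up):

```js
// Weight each factor 0–1 by how much it matters to *your* project,
// then score each framework 0–10 per factor and sum weight * score.
const weights = { dx: 1.0, community: 0.9, testability: 0.7, security: 0.4 };
const scores  = { dx: 9,   community: 10,  testability: 7,   security: 6 };

const total = Object.keys(weights)
  .reduce((sum, factor) => sum + weights[factor] * scores[factor], 0);

console.log(total); // weighted total for one framework
```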

I spent 3 days creating the first screens of my new app in React, and then 3 days creating the same screens in Ember (and for those doing the math, 1 day composing this post.)  For 10+ hours each day I enveloped myself in tutorials, videos, docs and podcasts in an effort to learn as much as I could about each framework.  It’s important to actually write code during your evaluation process!  Don’t assume that because a framework works well for others that it will be the best fit for you or your projects.

TLDR; Ember won.  The areas where Ember really racked up points in my selection matrix were in Developer Experience and Community, and instead of talking about why React and Angular didn’t win, I’d like to talk more about why I chose Ember as the best fit for my application redesign.

Developer Experience

DX is a play on the UX term, and as you can imagine, it refers to how a particular tool or library is designed to make developers’ lives easier.  The Learning Experience factor is heavily entwined with the DX factor, especially as your knowledge grows and you move into more advanced code and real-life challenges.

Here’s how Ember.js ♥’s developers.

ember-cli

The ember-cli tool didn’t seem to be a huge benefit for me when I first looked at Ember.js, but being able to ember serve  a new project immediately was encouraging.  As I began generating new routes, templates and models I realized how much time the CLI tool was saving me.
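For example, a single resource generator call along these lines (the model properties are illustrative):

```sh
# Scaffolds the User model (with the given attributes), its route, and its template
$ ember generate resource user name:string email:string
```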

The command above will generate code for your User class’ route, template, and model (with properties) saving you from writing, copy/pasting and search/replacing code manually.

What’s interesting is that ember-cli has become the expected way of generating code, testing apps, and even serving the development environment.  Much, if not most, of the documentation for Ember.js 2.5 has been updated to use the CLI tool, so the Ember.js team is betting big on it.  I expect I’ll put the tool through its paces more as I begin setting up tests and adding more to my build process.

LTS release channel

When I first began building and maintaining production Linux environments – especially clustered environments with multiple layers of load-balancing, reverse proxies, caching and lots of dependencies – I learned the importance of Long Term Support (LTS) releases in Ubuntu.  When the Heartbleed and POODLE vulnerabilities in SSL surfaced, for example, patching OpenSSL was critical.  But if you were running on a non-LTS 13.10 version of Ubuntu like I was, suddenly patching a security vulnerability meant upgrading your entire operating system.  Yikes!

Ember has adopted the mantra of “stability without stagnation”, and this resonates loudly with me.  Ember 2.4 was the first LTS release, and every fourth release will also be LTS.

LTS releases will receive critical bugfixes for 6 release cycles (36 weeks). This means we will support Ember 2.4 LTS with critical bugfixes until Ember 2.10.0 is released, around November 4, 2016.

LTS releases will receive security patches for 10 release cycles (60 weeks). This means we will support Ember 2.4 LTS with security patches at least until Ember 2.14.0 is released, around April 21, 2017.

An LTS release channel means addon developers know where to concentrate their efforts, and Ember.js users can upgrade less frequently and more confidently, while still having access to the latest features, bug fixes and security patches.

The Learning Team

Because learning is more than just docs.  The Ember.js Learning Team is responsible for all the different ways that users learn and grow with Ember, and for ensuring that the learning experience remains a central component of future releases.  At EmberConf 2016, Yehuda Katz & Tom Dale announced the role of the Core Team and the subteams.

Watch a video of Ricardo Mendes talking about the new Ember.js Learning Team at EmberConf 2016.

Browser dev tools

The Ember Dev Tools make understanding how your application is working a snap, and they are available as extensions for both Chrome and Firefox.  You can also use a bookmarklet for other browsers.

Want those same dev tools on your mobile device?  Check out the ember-cli-remote-inspector which makes it easy to debug your application on your mobile device using your Ember browser extension through websockets.

ES6

I like TypeScript, but it doesn’t feel like I’m writing JavaScript, and I never really got into CoffeeScript; what I really enjoy is the idea of writing standard ES6 code.  Since that is the direction browsers and the community are heading anyway, why not start writing that code now?  Tools like Babel.js make authoring in ES6 or ES7 easier by transpiling to JavaScript that current browsers can understand, without requiring full browser support for the latest versions of the language.

The Ember.js Community

Here’s where I’ve fallen in love with Ember.js – the community.  As an active WordPress developer, WordCamp speaker and blogger, I’ve become accustomed to the WordPress community, and maybe even spoiled by it.  The community constantly amazes me with the number of willing contributors to core, to plugins, and to documentation and learning.

Learning

There are some great community resources for learning Ember.js.

Search Meetup.com and you’ll find Ember groups near you who meet often to learn from each other and freely share their knowledge and love of Ember.js.

Forums

Check out http://discuss.emberjs.com/ for active forums using the robust Discourse forum platform (my favorite forum software, and also built on Ember.js!)

Watch the video

Take a look at the video; it’s long, but it does a great job demonstrating the thought process behind selecting the right JavaScript framework for any project or organization.  The weighting of the factors for Matt or myself may be very different from yours, so be sure to go through the exercise yourself and see which framework is best for your requirements.


npm install returns an “Unsupported URL Type” error

While working on an Ember.js app today, I hit an odd snag.  Running a standard npm install was bombing terribly on my Mac, and 10 minutes of my google-fu wasn’t turning up any good leads.  I had searched for “npm install ERR! Unsupported URL Type”, among other variants, but I had missed something.

The Solution

What I had missed was a woefully out of date npm version.  Node.js was already at latest (4.4.3 at the time of this post), but I remember screwing around with re-installing npm on this machine the other day and somehow I was running npm v2.15.1.
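A typical way to get back on the latest npm (assuming Node.js itself is already current):

```sh
$ npm install -g npm@latest
$ npm --version
```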

Updating to the latest version of npm fixed the errors I encountered with npm install.  And I’m sure there’s another lesson in there somewhere about nuking and reinstalling core software tools, but maybe that will be detailed in a future post.

TLDR; — I won’t be detailing that in a future post.

PHP class not found while using Composer

Here’s how to solve one of the more frustrating auto-loader errors I’ve encountered in Composer.

Composer is more of a dependency manager than a package manager because it only manages packages on a per-project basis.  That being said, it’s almost always made my life easier when referencing other PHP libraries as well as my own.  Until I did something stupid.

The problem

Earlier today, an updated build script overwrote the “vendor” folder on the remote server I was deploying to.  Suddenly my app started throwing HTTP 500 errors, and I found the following in the nginx log file:
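```
[error] FastCGI sent in stderr: "PHP message: PHP Fatal error: Class 'gkn\App' not found"
```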

Hmm, Composer is supposed to have created an auto-loader for this, so I simply tried to re-install my dependencies via Composer on the remote server.  I typed out:
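```sh
# Re-install and refresh the project's dependencies
$ composer install
$ composer update
```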

The installation and update completed normally with no errors. But the class not found error persisted when refreshing my web app in the browser. The problem is that Composer generates a PHP ClassLoader class to auto-load your files, and that auto-generated PHP class has a unique name. Open your vendor/autoload.php file and you’ll see an example:
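It looks something like this (the hash suffix is unique to every generated autoloader, so yours will differ):

```php
<?php

// autoload.php @generated by Composer

require_once __DIR__ . '/composer/autoload_real.php';

// The hash below is illustrative; Composer generates a unique one per project
return ComposerAutoloaderInitb6d254015e39cb6a0ccf4d1af4e4cc32::getLoader();
```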

The solution

A colleague pointed out my error almost immediately via Slack; it’s obviously something he’s run up against before (thanks, Paul Frazee!).  Simply put, I also needed to force a new dump of the autoload files by typing:
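```sh
$ composer dump-autoload --optimize
```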

The composer dump-autoload --optimize command regenerates your autoload files without having to run install or update.  The --optimize argument is especially important when running in your production environment for performance reasons. The optimization will speed up the time it takes for the auto-loader to load your classes, which can be relatively slow and cause your pages’ performance to suffer.  It does this by converting the PSR-0/4 packages into classmap packages, which isn’t as friendly while in development, but gives you faster requests on the web server.

Have a similar issue, or found a better solution?  Leave a comment below!

The .1 Release is the Listening Release

Developers love to launch.  It’s the culmination of weeks or months of work (if it’s years, you better be building an operating system) and the public is about to see what you’ve created.  But it’s far from the end of your big release.

And if you’re juggling multiple projects, it’s tempting to wipe your hands clean after a site or app launches, and change your full focus to a new project.  This is especially true when your team has truly done their best to maintain quality throughout the design and development process.  You’ve thought of everything, tested with a small group of trusted users, and you only had so much time booked in the calendar, so you call it done.

What may be missing from your schedule is the “.1 release”.

A recent launch of an intranet portal is a great example.  Our team did a fantastic job of innovating during the design & dev phases, as well as leveraging some cool new interaction methods and technical integrations (new to us anyway!)  We estimated our development time reasonably well, and set the expectations with ourselves that we were delivering a 1.0 release, and we’d come back soon to address all of the experiential things that we knew users would have to tolerate for a bit (e.g., slower-than-ideal authentication/authorization on first load) while we focused on a different project with a totally different set of end users.

That was almost 6 months ago.

The problem is that because there was another high-priority project waiting in the queue, we consciously skipped the .1 release of the previous project.  But the .1 release is the release where you monitor the application and the experience your users are having, and respond to their needs quickly.  The .1 release is where we make good on all those IOUs we made while disclaiming “we must deliver something now, we committed to a date and we’ll get in trouble if we miss it”, and that each of their individual needs will be addressed in the next version.  This is all fairly standard and valid, as long as you actually plan the time in your project schedule necessary to listen to your users and improve their experience.

When we constantly deliver at 100% (or greater), we don’t give ourselves enough buffer.  Not just setting aside time to mitigate known and unknown risks throughout a project (the latter is frequently forgotten), but buffer to sit back and listen after a major release.  We should have confidence that we couldn’t possibly have gotten everything perfect for every user, so allocating time to ensure that even their smaller needs are addressed before we move on to another project is imperative.  The small things may determine whether or not a user finds value in your product, and a healthy user growth curve is supported by happy users.

The .1 Release is the “Listening Release”

Once you’ve launched that first or other major release, celebrate the win!  Get a good night’s sleep.  Then come to work and just listen.  Certainly we’ve all built multiple feedback loops into our applications, right?  We have New Relic analytics for our server performance, exceptions logged to a database for analysis, a feedback form and forums for users to contact us … but why did we build all of that into our app in the first place?  We did so to enable us to know more about how our app is doing, and more importantly how our users are doing, so that we could take action where necessary.  Listening but failing to follow up with improvements and fixes right away could potentially be worse than not listening at all because you’ve set users’ expectations incorrectly.

There are very objective reasons for dedicating time and effort to listening to, and fully understanding, your users, such as meeting expectations better and being more efficient by avoiding rework. But there’s also an emotional component to the exercise itself: it builds trust with users, and demonstrates a genuine empathy for their needs.

Failing to Listen Guarantees You Are Forever Playing Catch-Up

Most of us don’t get to set all of our own priorities – they are dictated or influenced by customers, managers, colleagues, holiday shopping seasons, and such.  We have multiple projects and products in our workshop, and we constantly switch context to whatever is most important right now.  And while there are some environments where that works well, context-switching in software development is expensive in terms of time.  The constant drumbeat of “deliver, deliver, deliver” often means that important phases of an application’s life cycle get omitted.

Think about whose priorities we’re catering to when the consequences of delivering later versus earlier are greater than the consequences of not delivering the best experience to our users.  (Hint: it’s not your users’ priorities you’re optimizing for …)

When we don’t schedule the time in the project plan to follow up with a .1 release, users feel unheard and abandoned.  Sometimes for a long time.  Yes, you hit your deadline, and you’re on to put out the next fire (because every project is the most important project, right?)  However, we’ve only served our own priorities at the expense of ignoring those of our users.  And as long as we continue to release without listening, and without following up immediately with a “listening release”, we are guaranteed to always have a growing backlog and worse, a growing number of frustrated users.

As Ron Swanson said, “Never half-ass two things.  Whole-ass one thing.”

A Better Experience for Everyone

And I do mean everyone!  The best way to ensure that what you’ve created is actually improving the lives of people (because at our core, makers of all disciplines want to build things that impact people) is to release your best work, then take the time to listen to users, set correct expectations, and address their needs as best you can.  Don’t declare “mission accomplished” and move on before your users are happy.  This builds trust with your users, which is a very necessary component of user adoption.

The triple constraint of project management means that increasing quality isn’t free.  But decide early on that success is defined by a level of quality and user acceptance, not just the delivery date, and that you will spend the additional time and cost required to make users happy; doing so will make building and using your product a much more enjoyable experience for everyone.

Just remember who you are ultimately designing for, and spend most of your time serving their needs and priorities.

(Featured image courtesy of Flickr.com/Rumpleteaser)

How to fix “PHP Fatal error: Call to undefined function imageconvolution()”

If you’re having trouble with uploading images to WordPress (or other PHP frameworks) and seeing blank spots where images should be, you may need to be sure that LibGD is installed.  Check your Apache or Nginx logs for fatal PHP errors which occur when trying to call undefined functions, for example:

  • PHP Fatal error:  Call to undefined function imageconvolution()
  • PHP Fatal error:  Call to undefined function imagerotate()
  • PHP Fatal error:  Call to undefined function imagecreatefromjpeg()
  • … and other newer GD functions

Here’s an example I gathered from an Nginx log after attempting to rebuild some thumbnails in WordPress using the AJAX Thumbnail Rebuild plugin: it was the same imageconvolution() fatal error listed above.

So, let’s fix it!

Even if you’re on the latest Ubuntu 14.04, you may find yourself in a situation where your installed version of PHP was not compiled with libgd, or it is not installed via your package manager.  In that case, it’s easy to install!  For the most part, you shouldn’t have to recompile PHP, so try installing/updating libgd using your package manager.

From the libgd FAQs page:

If you want gd for a PHP application, just do (for Fedora):
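Presumably the standard package install, e.g.:

```sh
$ sudo dnf install php-gd    # yum on older Fedora releases
```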

Or, for Red Hat Enterprise Linux or CentOS:
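```sh
$ sudo yum install php-gd
```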

Then do:
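Presumably a web-server restart so PHP picks up the extension:

```sh
$ sudo service httpd restart    # or: sudo systemctl restart httpd
```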

If your system is Debian based (Debian/Ubuntu/…) then you need to install php5-gd package:
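```sh
$ sudo apt-get install php5-gd
$ sudo service apache2 restart    # restart PHP-FPM instead if you're on Nginx
```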