Cheat sheet for systemd

This post is intended to give you the basics of using systemd: starting and stopping services, adjusting how existing services boot, overriding system services, and creating new services. systemd is certainly complex and powerful, but the above probably describes 90% of what an admin is looking to do.

systemctl supersedes service, example:
systemctl restart nginx vs service nginx restart

Note: on CentOS7, you can still type ‘service nginx restart’ and it will map to ‘systemctl restart nginx’

In systemd, all the pieces are called ‘units’: services, mount points, devices, sockets are all considered units. Everything referenced here is referring to services – I did not have to get into any of the other unit types.

Basic commands:
systemctl – lists all units
systemctl status – detailed list of units’ status
systemctl show nginx – show the unit’s config details
systemctl start nginx
systemctl stop nginx

Other commands: restart, reload, status, is-enabled, enable, disable, mask, unmask, help

Good basic documentation:
Advanced commands:
Other systemd documentation:

Things to know
– /etc/init.d still exists, but there’s little to nothing there. If you REALLY need to do something with SysV, you can still put scripts in /etc/init.d and symlink from rc2.d/rc3.d but it’s really going rogue
– /etc/init.d/ how-to:
– The systemd system directory is: /usr/lib/systemd/system – this is where the system files are
– The “user-installed” or override directory is: /etc/systemd/system

We have a script that we want to execute when the server is booting up. We’re going to keep it in /etc/init.d/ even
though that directory isn’t really “processing” scripts from there.
– Create /etc/systemd/system/foobar.service (unit files need the .service suffix) with the following text:


[Unit]
Description=foobar boot script

[Service]
Type=oneshot
ExecStart=/etc/init.d/foobar start
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target


– systemctl daemon-reload
– systemctl enable foobar

Pay attention to the ‘Type’ of program:
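In short, ‘Type’ tells systemd how the program daemonizes, so it knows when the service counts as “started”. A unit-file fragment illustrating the common values (the ExecStart path here is just an example):

```ini
[Service]
# Type=simple  - the default; systemd considers the unit started as soon as the process is forked
# Type=forking - the program daemonizes itself; systemd waits for the parent process to exit
# Type=oneshot - the process runs to completion before follow-up units start
#                (a good fit for init.d-style start scripts, usually with RemainAfterExit=yes)
Type=forking
ExecStart=/usr/sbin/nginx
```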

Good example of postfix.service file:

Other unit config example:
Documentation on unit config file:
List of directives in unit config file:
ENV variable examples:
More ENV examples:

NOTE: any time you make a change to any file in either /usr/lib/systemd/system or /etc/systemd/system, you need to run: ‘systemctl daemon-reload’

– multi-user.target is the equivalent of run-level 3; I didn’t bother to learn how to do the other crap
– More:

– run: systemctl edit --full [unit]
– to override /usr/lib/systemd/system/[unit]
– create /etc/systemd/system/[unit]
– then run: systemctl reenable [unit]

– run: systemctl edit [unit] (which creates /etc/systemd/system/[unit].d/override.conf)
– to add/edit a snippet for /usr/lib/systemd/system/[unit]
– create: /etc/systemd/system/[unit].d
– put a .conf file with the unit config overrides (ex: /etc/systemd/system/[unit].d/override.conf)
– systemctl daemon-reload

CAVEAT: For ‘ExecStart’ changes, you need to clear the variable first in the snippet, then define it again. Example:

ExecStart=
ExecStart=new command


iOS ‘Storage is Full’ error

Symptom: you get the dreaded ‘Storage is Full’ error, even though you confirm that you have multiple gigs of space remaining.

Probable resolution: there’s a bug in iOS where, when a new iOS version is automatically downloaded to your device, it chews up any/all disk space for some reason. If you delete the downloaded iOS update, it will clear up the problem. This bug has bitten me several times and it’s annoying.

CS Cart Devops and Deployment

I’ve been working with CS Cart quite a bit these past few months, and while it’s been painful (which is readily apparent to those PHP 5.5+ programmers who’ve had to work with it), I’m starting to get parts of it wrangled into place and I’d like to share some of those tips with you.

[NOTE: the version of CS Cart I’m working with has some significant internal data structure modifications to better align the product with our needs, but almost everything I discuss going forward should be applicable]

First, trying to get a proper devops development -> staging -> QA -> production environment has been a struggle. It’s very similar in deployment to WordPress in that most of the configuration is stored in a database table, and it’s hostname-based.

Single Developer Devops

In a single developer environment, your devops flow is pretty simple – you’re developing locally (hopefully in something like a Vagrant environment), and you may or may not be syncing your production database to your local dev environment, transmogrifying the domain. In fact, you don’t even have to include Github in the loop – you could push directly to the production server.

Multiple Developer, Staging/QA/Production

In our multiple-developer setup, each developer has his/her own local dev environment as they see fit. I personally use Vagrant to run Ubuntu, and I configure my Vagrant environment with the same Ansible scripts that I use to create/configure the staging, QA and production servers, so my local dev environment exactly matches what the code is actually going to be running on.

One caveat in our circumstance is that we feel the developers do not and should not have access to production data in either their local dev environments or on the staging server, both for security purposes as well as data leakage (hint: think about accidentally emailing to actual customers or accidentally adding/changing/deleting billing info).

So I develop and test locally, and when I’m ready to have others review my work, I push to Github. It’s important to have a git hook that lints your commit(s) before you push them up. Github has a Webhook to auto-POST to a script on our staging server which initiates a Github pull from the staging server. Some people would discourage an auto-pull but the staging server for us is a “throwaway” shared server that we use to allow others to test out functions and features, so we don’t mind if a commit breaks it.

[Since there’s only two developers, we’ve been doing most of our development into ‘master’, but you can also develop into a new branch, push that new branch through and (manually) switch to that branch on staging for testing]

The biggest thing that tripped me up in CS Cart development was remembering to “flip” any addons that have changed, and I wrote some scripts to automatically do that for me.

When we’re ready to test Github code against production data, we run a production sync script on the QA server that pulls down a snapshot of the production database (while changing the CSC hostname) and also the images directory, so the production product data will display properly.
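A hedged sketch of what such a sync script can look like – the hostnames and paths below are made up, and the heavyweight dump/load steps are shown commented out so only the hostname-rewrite step actually runs:

```shell
#!/usr/bin/env bash
# refresh_db sketch (hypothetical hosts/paths): pull a production snapshot,
# rewrite the production hostname for the QA environment, then load it.
PROD_HOST=shop.example.com
QA_HOST=qa.example.com

# ssh prod "mysqldump cscart" > /tmp/prod.sql       # pull the snapshot
# sed -i "s/$PROD_HOST/$QA_HOST/g" /tmp/prod.sql    # rewrite the hostname
# mysql cscart < /tmp/prod.sql                      # load into QA

# The rewrite step in isolation, on a sample line:
echo "INSERT INTO settings VALUES ('host','$PROD_HOST');" \
  | sed "s/$PROD_HOST/$QA_HOST/"
```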

If cost is an issue, it should be noted that you could eliminate the QA server and just “test” on staging with production data. We’ve found it handy to have staging as a “punching bag” where we can deploy and test with wild abandon, and QA being the server that mirrors production for the most realistic simulation of code and data, but your mileage may vary.

Once we feel the new code is tested out thoroughly on QA, we go ahead and pull over to production and test thoroughly as well. I would feel much better about this process if CS Cart had unit tests but unfortunately there aren’t any at this point.

This is our current devops flow for developing CS Cart code. I’m sure there are other ways to do it, but this works for now. There are certainly other changes that I’d like to make, including modifying the ‘refresh_db’ script to discard production customer data and perhaps swap it with dummy info.

One last thing I’d like to point out is that this same devops workflow could be used for WordPress code deployments, using the Search and Replace script in place of ‘refresh_db’.

CS Cart git hook script to auto-flip

One of the biggest problems I’ve had in CS Cart development was remembering to disable and then re-enable any addons that had changes. I’ve created some scripts to help the devops development flow in CS Cart:

‘cli_lib.php’ is a stripped down version of ‘admin.php’ that I use to bootstrap into CSC from the command line. ‘clear_cache.php’ should be self-explanatory – it just clears the cache from the command line.

HINT: if you’re having any kind of problem with CS Cart, always try clearing the cache. And you may consider doing a “hard reset” (rm -rf var/cache/* in your root CSC directory) because there are files that don’t get cleared out with a standard “clear cache” command.

So the magic starts with a git hook script ‘post-merge’ (.git/hooks/post-merge):


#!/usr/bin/env bash

export ROOT=/path/to/cscart/base

rm -rf $ROOT/var/cache/*

cd $ROOT/app/lib
composer install


Pretty standard. It’s worth noting that you can add your own Composer packages in the existing composer.json file and use them throughout CSC. I’ll do a future post about how I added some custom logging to CS Cart (lack of file-based logging is another of my huge CSC pet peeves).


export ROOT=/path/to/cscart

# Addons whose addon.xml changed in the merge
ADDON_LIST=`cd $ROOT; git diff "HEAD@{1}" --name-only | egrep 'app/addons/.+?/addon.xml'`

if [ ! -z "$ADDON_LIST" ]; then
    for file in $ADDON_LIST; do
        arrItems=(${file//\// })                  # split the path on '/'
        ADDONS=("${ADDONS[@]}" ${arrItems[2]})    # addon name is the 3rd path element
    done
fi

# Addons whose templates changed in the merge
ADDON_LIST=`cd $ROOT; git diff "HEAD@{1}" --name-only | grep 'var/themes_repository/basic/templates/addons/'`

if [ ! -z "$ADDON_LIST" ]; then
    for file in $ADDON_LIST; do
        arrItems=(${file//\// })
        ADDONS=("${ADDONS[@]}" ${arrItems[5]})    # addon name is the 6th path element
    done
fi

function join { local IFS="$1"; shift; echo "$*"; }

if [ ! -z "$ADDONS" ]; then
    if [ ${#ADDONS[@]} -gt 1 ]; then
        CHANGED=($(printf "%s\n" "${ADDONS[@]}" | sort -u))
    else
        CHANGED=("${ADDONS[@]}")
    fi
    LIST=`join : "${CHANGED[@]}"`

    echo "php $ROOT/bin/php_flip_addon.php $LIST"
    php $ROOT/bin/php_flip_addon.php $LIST
fi

Basically, this script just determines which addon files have changed, and builds a (unique) colon-separated list to pass to ‘php_flip_addon.php’.
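The dedupe-and-join step can be seen in isolation with some sample addon names (illustrative data – the real script collects these from `git diff`):

```shell
#!/usr/bin/env bash
# Build a unique, colon-separated addon list from a bash array.
ADDONS=(my_addon seo my_addon banners)

function join { local IFS="$1"; shift; echo "$*"; }

CHANGED=($(printf "%s\n" "${ADDONS[@]}" | sort -u))   # dedupe, sorted
LIST=$(join : "${CHANGED[@]}")
echo "$LIST"    # banners:my_addon:seo
```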

I won’t post ‘php_flip_addon.php’ but it takes the colon-separated addon list, starts building a stack of the addons that need to be “flipped”, taking into account any dependencies. It uninstalls all those addons in the proper order, then re-installs them in the reverse order. If there’s a problem uninstalling, it will immediately reverse the order to try and get back to the “known good” state.

Ansible trick for spinning up a new server

So I think everyone and their mother has fallen in love with Ansible – I know I have! Mostly because I’m not really a fan of Ruby, and Ansible is just so simple and basic to operate – only ssh is required.

I’ve got a bunch of Ansible roles defined: apache-php5, nginx-fpm, nginx-hhvm, etc. It’s nice to be able to spin up a server and test things out. Here’s a quick little script I use to execute roles against a server not listed in the /etc/ansible/hosts file:


#!/usr/bin/env bash

if [ -z "$2" ]; then
  echo "$0 [fqdn] [role]"
  exit 1
fi

ansible-playbook $2.yml -i "$1," --extra-vars "fqdn=$1"

The trick is the “$1,” which allows you to define the host on the fly instead of having to define it in the ‘hosts’ file.

All my roles use the ‘aws’ module to spin up a new EC2 instance, create a new DNS A record, and configure the new host to the specified role as the referenced FQDN.
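For reference, the script above expects a playbook named after the role; a minimal one might look like this (role and file names here are illustrative):

```yaml
# nginx-fpm.yml – minimal role playbook, applied to whatever host
# was passed on the command line via the "$1," inventory trick
- hosts: all
  become: yes
  roles:
    - nginx-fpm
```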

Bash script to lint only changed PHP files in git before commit

I usually use Phing to manage my lint and unit tests, but I’m dealing with a rather large (existing) project, and Phing is taking too long to lint all the files in the project. I noticed that most of the suggested “only modified files” scripts used PHP, which is fine I guess but it seems like such a waste when a simple Bash script in .git/hooks/pre-commit can suffice:

#!/usr/bin/env bash

if [ "$(id -u)" == "0" ]; then
  echo "You cannot commit as root" 1>&2
  exit 1
fi

FILES=`git diff --cached --name-status --diff-filter=ACM | awk '{ if ($1 != "D") print $2 }' | grep -e \.php$`

for x in $FILES; do
  CMD="php -l $x"
  echo $CMD
  RES=`$CMD 2>&1`
  if [ $? -gt 0 ]; then
    echo $RES
    exit 1
  fi
done

Laravel 4 – non-standard username/passwd fields in user auth table

I’ve been doing a lot of development in Laravel 4 these days – just a great, great framework! But I’m learning it has certain expectations as far as naming conventions go. An example would be the way authentication is done via Eloquent (ORM). Eloquent’s default authentication fields are  ‘username’ and ‘password’. If you want to have something different, you need to extend some functions in order to return the information you want:

<?php
// app/models/User.php

use Illuminate\Auth\UserInterface;
use Illuminate\Auth\Reminders\RemindableInterface;

class User extends Eloquent implements UserInterface, RemindableInterface {

    protected $fillable = array('name','passwd','email','status','timezone','language','notify');
    protected $hidden = array('passwd');

    protected $table = "users_t";
    protected $primaryKey = "uid";

    public static $rules = array(
        'name' => 'required',
        'passwd' => 'required',
        'email' => 'required'
    );

    public function getAuthIdentifier() {
        return $this->getKey();
    }

    public function getAuthPassword() {
        return $this->passwd;
    }

    public function getReminderEmail() {
        return $this->email;
    }

    public static function validate($data) {
        return Validator::make($data,static::$rules);
    }
}
The key parts are ‘getAuthIdentifier()’ and ‘getAuthPassword()’ – if you notice, I use a non-standard table ‘users_t’ and I use ‘uid’ as the primary key instead of the expected ‘id’. getKey() picks up the $primaryKey variable.
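One related knob: the Eloquent auth driver is told which model to use in ‘app/config/auth.php’; with the model above, the relevant entries would look something like this (excerpt only, values illustrative):

```php
<?php
// app/config/auth.php (excerpt) – point the auth driver at the custom model
return array(
    'driver' => 'eloquent',
    'model'  => 'User',
    'table'  => 'users_t',
);
```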

PHPUnit fails silently – figuring out the problem

Quick post about PHPUnit – if you’re making changes on your existing tests and all of a sudden PHPUnit starts failing silently, the first thing to do is check the error code:

$ phpunit
$ echo $?
255

The error code ‘255’ indicates a parse problem, and it’s probably not displaying because you have ‘display_errors = false’ in your php.ini, but this is easily fixable! Just add the following code to your ‘phpunit.xml’ file:

<ini name="display_errors" value="true"/>

And you should see what’s causing the problem.
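That directive belongs in the <php> block of your ‘phpunit.xml’; a minimal example (the bootstrap filename here is illustrative):

```xml
<!-- phpunit.xml (excerpt) – force parse errors to the console during test runs -->
<phpunit bootstrap="bootstrap.php">
    <php>
        <ini name="display_errors" value="true"/>
        <ini name="error_reporting" value="-1"/>
    </php>
</phpunit>
```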

Setting up DNS on Route 53

I have always managed my own DNS on servers that I control because it was something I was capable of doing, and I hate being dependent on someone else when something needs to be done. But I’ve been periodically working on the load time for my blogs (this one and others), and the performance for my primary and backup DNS servers just isn’t cutting it any more.

My DNS server hosts are small, inexpensive, out-of-the-way hosting companies, and the servers are either older, slower servers or VPS – great on the pocketbook, but not great for performance. The hosts are far from the backbone, so it takes a number of hops just to get to major exchange points, all things that slow down DNS lookups.

The easiest solution to this problem is to host your primary domains’ DNS at Amazon’s Route 53.

I ran a series of tests, and the best DNS performance I could muster was 122ms (the average was around 350ms). Once I changed over to Route 53, the average DNS lookup was around 25ms, which makes a significant difference in the page load time.

Note: this significant packet delay carries through to all aspects of the page loading. Each packet coming in and going out is subject to this same delay. The solution is to host your website on fast servers on or close to major NAPs, but for me, I’m happy hosting everything on a cheap server that I fully control.

Openssh 6.2 allows for both public key and password authentication

The concept of using public/private keys to bypass password entry requirements always sounds good in theory, but my security conscience would never allow me to do so, for fear that someone who has access to one server can serially access the rest of your server installations.

I do use public key crypto for certain things, like having a separate Subversion user/key so I’m not prompted for a password when I’m committing code.

I always thought, why can’t we have both public key and password authentication on an account? I knew there were patches to make that happen, but who wants to deal with patches every time openssh is updated?

The latest version of openssh (6.2) has answered my prayers. You can enable the requirement that the public key be valid AND that the user authenticates with a password. Add the following line to your ‘sshd_config’ file:

AuthenticationMethods publickey,password publickey,keyboard-interactive

I highly encourage all sysadmins to enable this. I used to watch my system logs getting blasted on a daily basis by brute-force guessing on my sshd daemon, but it comforts me greatly to know hackers aren’t even getting a chance to brute-force passwords unless they have the proper public key:

error: Received disconnect from a.b.c.d: 11: Bye Bye [preauth] : 2460 time(s)
error: Received disconnect from e.f.g.h: 11: Bye Bye [preauth] : 1428 time(s)

Those requests were rejected before even getting a chance to authenticate. I still get prompted for a password from my main computer, so there’s not an open link to my servers from this computer. Also, you can selectively disable the password requirement for certain accounts. I added the following lines to ‘sshd_config’ as well:

Match User subversion_user
AuthenticationMethods publickey

This allows the ‘subversion_user’ (a limited access user) to authenticate ONLY with the public key and not be prompted for a password.