A Few Recent Articles…

Using a Redis session provider with Ratchet and Symfony

As I've posted before, I play around a fair bit with WebSockets. Although I maintain Wrench, I've been using Ratchet for my latest project. I'm displaying on <canvas> to really make the most of the low latency.

One thing I've been playing with is Ratchet's Session Provider support. The intention is to get your normal session data available for your Websocket application. I've had to work around a few frustrating limitations; they're no fault of the brilliant libraries I've been using though.

One limitation is that the session isn't writeable. They say "please do not try and write to the session" right in the documentation for the SessionProvider. This seems to be because the read-only view of the session they're providing is deserialized by code in Ratchet itself. If they get the (de)serialization wrong, the session may no longer be readable by your main app -- yuck. The only safe way to provide writeable sessions would be to require the whole Symfony Session component. And Ratchet has a much lighter set of dependencies than that, which is nice.

But that also means that if you're using Symfony2 for your main app, you already have the full Session component, so you can write to sessions from your websocket application; you just have to obtain a Session instance the Symfony way, rather than the Ratchet way. Here's how I did it. First, I defined a SessionFactory:

namespace Application\WebsocketBundle\Services;

use Symfony\Component\HttpFoundation\Session\Attribute\AttributeBagInterface;
use Symfony\Component\HttpFoundation\Session\Flash\FlashBagInterface;
use Symfony\Component\HttpFoundation\Session\Session;
use Symfony\Component\HttpFoundation\Session\Storage\SessionStorageInterface;

class SessionFactory
{
    protected $storage;
    protected $attributes;
    protected $flashes;

    public function __construct(SessionStorageInterface $storage, AttributeBagInterface $attributes = null, FlashBagInterface $flashes = null)
    {
        $this->storage = $storage;
        $this->attributes = $attributes;
        $this->flashes = $flashes;
    }

    /**
     * @param string $id
     *
     * @return Session
     */
    public function getInstance($id)
    {
        $session = new Session($this->storage, $this->attributes, $this->flashes);
        $session->setId($id);

        return $session;
    }
}
Then, I injected what the factory needed:

<?xml version="1.0" encoding="utf-8"?>
<container xmlns="http://symfony.com/schema/dic/services"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://symfony.com/schema/dic/services http://symfony.com/schema/dic/services/services-1.0.xsd">
    <services>
        <service id="websocket.session_factory" class="Application\WebsocketBundle\Services\SessionFactory">
            <argument type="service" id="session.storage" />
            <argument type="service" id="session.attribute_bag" />
            <argument type="service" id="session.flash_bag" />
        </service>
    </services>
</container>

Finally, in my server Command, I get an instance of the session factory and pass it into my application for use. Here's what that bit looks like:

// in ListenCommand.php
$factory = $container->get('websocket.session_factory');
// ... then injected into the application

// in ChatApplication.php
if ($name != 'anonymous' && ($session_id = $connection->Session->getId())) {
    $session = $this->factory->getInstance($session_id);
    $session->set('', $name);
}

And that works pretty well. Unified sessions between Symfony2 and websockets, that you can write to when you need to.

But the next limitation you run into is that the session provider doesn't do any specialized exception handling for networked session handlers. Most such handlers (for example, the RedisSessionHandler from SncRedisBundle) don't do any either.

When a network connection isn't used for a period of time, it'll be timed out (usually by a "send" or "write" timeout). By itself, this goes unnoticed: the program only finds out the next time something tries to use the connection. In effect, on the next WebSocket connection, the SessionProvider leaks an exception from its $conn->Session->start() call in onOpen(). That's what we have to handle ourselves.

RedisSessionHandler has a nice, permissive protected-API, so it's ripe for subclassing. Here's what I do:

namespace Application\WebsocketBundle\Session\Storage\Handler;

use Predis\CommunicationException;
use Snc\RedisBundle\Session\Storage\Handler\RedisSessionHandler as SncRedisSessionHandler;

class RedisSessionHandler extends SncRedisSessionHandler
{
    const CHECK_EVERY = 20; // seconds

    /**
     * @var int timestamp of the last connection check
     */
    protected $checked = null;

    public function read($sessionId)
    {
        $this->checkConnection();
        return parent::read($sessionId);
    }

    public function write($sessionId, $data)
    {
        $this->checkConnection();
        return parent::write($sessionId, $data);
    }

    public function destroy($sessionId)
    {
        $this->checkConnection();
        return parent::destroy($sessionId);
    }

    /**
     * Simple caching logic, so we don't actually ping on every call to
     * read/write/destroy
     */
    protected function checkConnection()
    {
        $now = time();

        if (!$this->checked || $this->checked < $now - self::CHECK_EVERY) {
            $this->ping();
        }

        // Even if we haven't pinged the connection this time, we know the
        // connection will have been written to shortly after we return from
        // here, so we can bump the timestamp anyway
        $this->checked = $now;
    }

    protected function ping()
    {
        try {
            if (!$this->redis->isConnected()) {
                $this->redis->connect();
            }
            $this->redis->ping();
        } catch (CommunicationException $e) {
            // If it was just a send timeout, the ping will have provoked
            // the disconnection, and reconnecting will succeed
            $this->redis->disconnect();
            $this->redis->connect();
        }
    }
}

Puppet Tips

I've been using Puppet to configure systems for a few years now; right back to the 0.24 days. Here's some distilled experience:

  1. Forget about subscribe, require and notify. Immediately.

    These are "meta-parameters" you can pass to resources like file and service. There's no reason to even consider them. Use a new version of Puppet and the chaining resources syntax everywhere, all the time. It's more flexible and completely replaces these old parameters.
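    To make that concrete, here's the same file-notifies-service relationship written both ways (the resource names are illustrative, not from any particular module):

    ```puppet
    # Old style: the relationship is buried in a meta-parameter
    service { 'nginx':
      ensure    => running,
      subscribe => File['/etc/nginx/nginx.conf'],
    }

    # Chaining style: the relationship is a separate, reusable statement
    # (~> means "notify on change"; -> is a plain ordering dependency)
    File['/etc/nginx/nginx.conf'] ~> Service['nginx']
    ```

    The chained form also works between collections and classes, which is where the old meta-parameters really fall down.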

  2. Don't skimp on ensure => absent

    Every parameterized class should implement this contract (let's call it "ensurable"):

    • The class accepts an argument called $ensure.
    • The argument has a default value of 'present', or any value with similar semantics.
    • The class reacts accordingly if $ensure is set to 'absent'.
    • The class may react to any other value as it sees fit. ('running', 'stopped', etc.)
    • The class passes a sensible $ensure value to all the resources and parameterized classes it declares.

    Because Puppet classes have named parameters, you can add this $ensure parameter after the fact; there's no need to add it when you first define your class. But what I've (too slowly) learned is: you should define it immediately. Don't be lazy. Do it now.

    Why? Well, when you're starting out writing the definitions for your servers, you will make mistakes. You'll define services on boxes that you want to migrate later. You'll reprovision. You'll accidentally copy and paste chunks of your node definitions and set up an extra service. It happens.

    If your class doesn't define $ensure, it's a problem waiting to happen. For example, you could be left with a stray concat fragment, because you couldn't pass the correct $ensure value into its declaration when you wanted it gone. Simple enough to avoid, but if it happened with an exported resource, it could leave a couple of extraneous lines in a config file on another server. This is a pain in the ass to debug. I guarantee it'll take you at least half an hour to trace it back into the fragments directory and to "that mistake I made last week that was only deployed for a minute". Save that debugging time now. Do the right thing, and make all your classes ensurable.
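    A minimal ensurable class might look like this (the class name, package and path are illustrative):

    ```puppet
    class myapp($ensure = 'present') {
      # Translate the class-level $ensure into a value the file type
      # understands: 'present'/'file' for most values, 'absent' for removal
      $file_ensure = $ensure ? {
        'absent' => 'absent',
        default  => 'file',
      }

      package { 'myapp':
        ensure => $ensure,
      }

      file { '/etc/myapp.conf':
        ensure  => $file_ensure,
        content => template('myapp/myapp.conf.erb'),
      }
    }
    ```

    Every resource the class declares receives a sensible ensure value, so `class { 'myapp': ensure => absent }` cleans up everything the class ever created.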

  3. Consider architecture

    If you're configuring 32-bit systems at all, you'll need to deploy packages with different binaries - in APT speak, the packages will have a different arch. When you define your package, the provider will just hand the problem over to the system's packaging, and everything will 'just work': each system will get the correct package for its architecture.

    However, the contents of those packages will often differ in ways you might not expect; it's not just the binaries that might be different. For example, the default PHP extension_dir contains a build identifier, and that can change on different architectures. So, if you're serving php.ini from Puppet, you'll need to check $::architecture and template it accordingly.
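    A sketch of what that template check might look like (the extension_dir paths and build identifiers here are made up for illustration):

    ```erb
    ; php.ini.erb -- extension_dir differs per architecture
    <% if @architecture == 'i386' -%>
    extension_dir = "/usr/lib/php5/20100525-i386"
    <% else -%>
    extension_dir = "/usr/lib/php5/20100525-amd64"
    <% end -%>
    ```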

  4. Bookmark the documentation

    These two resources provide you the most help:

    • When you come across a new part of the language, or are considering using a feature, have a look at the Language Guide.
    • In your day-to-day module building, you'll want the Type Reference open in a tab permanently. I find it far more useful than any "cheat sheet" I've seen: you need information about what different providers do (and how they do it), and a few sentences of context are invaluable: the exceptions mentioned implicitly define the rule, if you like - they help you get a firm mental model of what each resource represents.

WebSockets with Symfony2

I've started playing around with WebSockets in PHP, integrating them into my pet Symfony2 project.

The first library I grabbed was known as php-websocket. It was a bit of a mess. The interface for applications into the server was stuffy, and I ended up having to go to great lengths to integrate it into the services container. This integration work was done in a bundle, VarspoolWebsocketBundle. I later greatly cleaned up the upstream, renaming php-websocket to Wrench.

I've just got done reimplementing all my WebSocket work in Ratchet. And I'm pleased to say that I now consider Wrench and VarspoolWebsocketBundle completely deprecated; I definitely wouldn't consider using them for new projects, and I'm considering posting a warning in the README.

Ratchet needs very little effort to use with Symfony. I'm using a Redis session handler, and Ratchet provides a light integration there. It also works well with the Command component. Here's my command:

class ListenCommand extends ContainerAwareCommand
{
    protected function configure()
    {
        $this
            ->setDescription('Listen for websocket requests (blocks indefinitely)')
            ->addOption('port', 'p', InputOption::VALUE_REQUIRED, 'The port to listen on', 8000)
            ->addOption('interface', 'i', InputOption::VALUE_REQUIRED, 'The interface to listen on', '');
    }

    protected function execute(InputInterface $input, OutputInterface $output)
    {
        $application = new AggregateApplication();
        $server      = new WampServer($application);

        // Wrap server in a session provider
        $handler = $this->getContainer()->get('session.handler');
        if ($handler instanceof \SessionHandlerInterface) {
            $server = new SessionProvider($server, $handler);
        }

        $server = new WsServer($server);

        $server = IoServer::factory(
            $server,
            $input->getOption('port'),
            $input->getOption('interface')
        );

        $server->run();
    }
}


Short, simple, and easy to extend with further dependencies, options and arguments.

Ratchet still has a few limitations (notably, it's not easy to run multiple applications with a single server). But I can get around that, and they're looking at integrating the Symfony2 Routing component to help.

In the meantime, I use a magic AggregateApplication class that farms out events to multiple other applications with __call() and call_user_func():

class AggregateApplication implements WampServerInterface
{
    protected $children;

    public function __construct()
    {
        $this->children = array(
            new ChatApplication(),
            new IndicatorApplication()
        );
    }

    public function __call($name, array $arguments)
    {
        foreach ($this->children as $child) {
            call_user_func_array(array($child, $name), $arguments);
        }
    }

    /**
     * @see \Ratchet\Wamp\WampServerInterface::onSubscribe()
     */
    public function onSubscribe(ConnectionInterface $connection, $topic)
    {
        return $this->__call(__FUNCTION__, func_get_args());
    }

    // ... other callbacks
}

This AggregateApplication is instantiated in the Command class, which is ContainerAware. So, you can see how easy it would be to collect tagged application classes out of the service container, or inject sets of dependencies.
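For instance, a compiler pass along these lines could collect every tagged application service (the tag name, service id and addChild() method here are assumptions for illustration, not part of my actual code):

```php
namespace Application\WebsocketBundle\DependencyInjection\Compiler;

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

class CollectApplicationsPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        if (!$container->hasDefinition('websocket.aggregate_application')) {
            return;
        }

        $definition = $container->getDefinition('websocket.aggregate_application');

        // Every service tagged 'websocket.application' gets passed to a
        // hypothetical AggregateApplication::addChild() at compile time
        foreach ($container->findTaggedServiceIds('websocket.application') as $id => $tags) {
            $definition->addMethodCall('addChild', array(new Reference($id)));
        }
    }
}
```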

As for performance, I strongly recommend you just use a WAMP interface to your application code. This will allow you to take advantage of middleware, and have something other than PHP serve your WebSockets in production. Ratchet makes this easy.


Spotify has finally arrived in New Zealand. Years late, and bundled in with the Australian launch. So, I've finally been able to subscribe. Some New Zealand acts I've found waiting for me:

I already own albums from most of those artists on CD. Others I've listened to via free streams (their sites, Myspace, music blogs, that sort of thing) or, where those weren't available, pirated to listen to before going down to The Warehouse to pick up albums at $25. Estimates of how much an artist gets from a CD album sale vary around $1 or $2, with the most savvy artists retaining maybe $9 or $10.

So, I wanted to see how Spotify stacks up against my current listening habits. This article from the ABC was interesting: Artist anger as Spotify launches in Australia. Here's the crucial maths from Nick O'Byrne, the general manager of the Australian independent record labels association, who says Spotify's streaming makes artists about a third of a cent per stream:

They might be being cagey about it but in the end we do know that it's less than a third of a cent and probably sometimes - depending on which labels you are and which artist and what deal you have done with them - it may be less than one tenth of a cent.

If you do the maths on it, if you sell a single song on iTunes you might get paid about a dollar by the time iTunes has taken their cut, and that goes to the label and then gets divided amongst the artist and the label themselves.

I've been tracking my listening for seven years now. Over that time, I've accrued 60,000 listens, not counting the various mobile devices I've had that didn't support scrobbling. I'd say a conservative estimate is that I listen to about 222 tracks a week.

The top artists have had about 2000 listens from me. If I'd been listening to Radiohead this whole time, they'd have earned $9.33 from my streaming. SJD, about $5. Seems about equivalent to buying traditional CDs: that's about the amount of money they'd have received from an infinite number of listens from me had they been under traditional recording contracts and I was still trundling off to The Warehouse.

Of course, I've paid them both more in tickets and merchandise. That won't change. And they'll be receiving money from all my ongoing listens to Kid A and Southern Lights, albums that I bought years ago.

Even if you take quite a sceptical view of digital royalties, compared to commercial CDs, it only takes a bit of long term listening to make up the difference in royalties: if someone likes your music, I think it's quite normal for their number of plays of that artist to be in the thousands. So, quoting per-stream rates is necessarily misleading.

I think Spotify is about the status quo for artists (albeit a sucky status quo), and an absolute bargain for listeners. Highly recommended.