Pullets Forever

“You have to treat the chickens pretty well, because they've got a very tough union.”
– Kermit the Frog

Unicode Chicken Dot TK

August 08, 2011

A few weeks back Panic registered Poopla (http://💩.la), which is believed to be The World's First Emoji Domain. Outstanding! Ever since (and despite) reading John Gruber's Star-Struck post, I have wanted my own iconic domain name. That day I registered:


Not exactly my iconic chicken, Ginger, but nothing a custom font couldn't solve if I really wanted to be OCD about it.* The registration process was quite simple: I went to Dot TK, where you can register a free domain name for up to 12 months by logging in with one of the many supported OpenID accounts. Since this was my first experience with the registrar, I set up the domain as a free one. After seeing how easy it was to configure and manage domain names with .tk, I decided to register directly with them.

The next step was to configure Apache so this would be an alias for my existing domain. While there are many things Apache does well, handling non-ASCII characters in its configuration files is not one of them. When the IDN standard (RFC 5890) was introduced, it wisely adopted the Punycode algorithm for translating full Unicode into ASCII. This means that in my Apache configuration, and in many other situations, the URL is represented in its ASCII (Punycode) form.
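For the curious, the Punycode form of the domain can be derived with a short Python sketch. It uses the standard library's punycode codec; the xn-- prefix and the .tk suffix are assembled by hand, since the codec only handles a single label:

```python
# Convert the emoji label to its Punycode form, then build the
# ASCII-compatible (ACE) domain name that Apache actually sees.
label = '🐔'.encode('punycode').decode('ascii')
ace_domain = 'xn--' + label + '.tk'
print(ace_domain)  # xn--co8h.tk
```

This is the same transformation browsers apply under the hood before doing the DNS lookup.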

Beautiful, because as it happens, that is how I need to write it for most social networking clients to be able to handle the URL. To start with, if you are using Safari under Lion you will see:

Emoji URL in Safari

My first thought was that this would work in all Lion browsers, since emoji is just a Unicode font in Lion. Then I saw Chrome:

Emoji URL in Chrome

Firefox was not much better:

Emoji URL in Firefox

Since social networking clients all have different levels of support for Unicode, I found that posting the link in its raw form of 'http://🐔.tk' resulted in varying link identification. Twitter saw it as a valid URL, but many clients rendered it as '' or 'http:// .tk'; via Twitter itself, though, the link seemed to work every time. On Facebook the link showed on my wall as 'http://🐔.tk' but was not clickable as a link. On Google+ it was reported as an invalid link while I was composing my message, and thus not clickable when the message was posted.

Despite all this, I have decided to use the new URL as my shortlink URL. It is shorter than my full domain name and no more obscure than a shortlink. On Twitter I can use the pretty emoji icon: those who can see it get all the benefit, while those who cannot will see what looks like a random shortlink as usual, unless their Twitter client expands shortlinks, in which case they will see the full URL for Pullets Forever. On Facebook and Google+ I don't do much in the way of posting links to my rarely updated blog, and if I did, they allow somewhat more than 140 characters, so this is not a huge issue for me.

*Someone really should make a font rendering all the Unicode domain names as relevant icons, ideally starting with the ones that have already been registered.

MAKE RSS Feed Generator

August 19, 2010

I created a simple tool that will auto generate a URL to the RSS feed of your subscription to MAKE. To learn how I did it, read on.

Recently, the RSS feed I created started returning 404 errors. This was completely expected, as Sean Michael Ragan had pointed out that CoverLeaf was working around the clock on a fix. Fearing the worst, I logged into the CoverLeaf site and found the PDF download link was still there.

Unfortunately, CoverLeaf still does not have an RSS feed that you can subscribe to in iTunes for easy syncing to your iPad. Curiosity got the best of me: I wanted to figure out how they had secured the download. I clicked the link, and my copy of MAKE was downloaded from a URL similar to:

That is some security! The first hexadecimal number I have yet to figure out; it seems to be a randomly generated ID. The second is a UNIX timestamp, likely used to mark when the link should expire. The URL continues with the magazine name, issue identifier, and payload. Finally there is an lm variable on the end. Again, this is a mystery, but the number is the same on most links in the magazine, so I assume it is for tracking purposes. A Google search for the number turns up hits only on CoverLeaf's own site, and it appears to be optional.
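As a quick sanity check on the timestamp theory, the lm value that appears in the form action below (1280161230000) decodes to a plausible date if read as milliseconds since the UNIX epoch. That interpretation is an assumption on my part, not something CoverLeaf documents:

```python
from datetime import datetime, timezone

lm = 1280161230000  # the lm value from the download form action
ts = datetime.fromtimestamp(lm / 1000, tz=timezone.utc)
print(ts.isoformat())  # 2010-07-26T16:20:30+00:00
```

A late-July 2010 date fits neatly with a mid-August post about the current issue.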

But where was this URL coming from? I opened up Safari's trusty Web Inspector and started poking around under the hood of the web app and quickly found the following code:

<!-- navbar/download/inputCustomHeader.ftl -->
  <div id="download">
  <div >
    <form name="download_form" id="download_form" action="/make/vol23/Download_submit.action?lm=1280161230000" method="post">
         <input type="radio" name="download" id="download_all" value="all" checked style="display:none;" />
         <label for="download_all"><strong>pdf format (73Mb)</strong></label>
        <div id="download_pdf">
          <span>begin pdf download</span>
<script type="text/javascript">
<!-- navbar/download/input.ftl -->
<!-- navbar/download/inputCustomFooter.ftl -->
<!-- AjaxContainer.ftl -->

This is a snippet of HTML that the app downloads using an XMLHttpRequest, commonly referred to as an AJAX request. The form submits a request to the server for a PDF that matches the download criteria, which are hard-coded to all pages. The form is set to POST (which is odd, because this is semantically a GET request), but the backend code that CoverLeaf uses accepts either request form interchangeably, so I was able to construct a GET request like:
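Rewriting the POST as a GET amounts to folding the form's fields into the query string of its action URL. A small Python illustration, using only the path from the form above (the real host is not reproduced in this post):

```python
from urllib.parse import urlencode

# The action URL from the download form, plus its one radio field,
# flattened into a single GET query string.
action = '/make/vol23/Download_submit.action?lm=1280161230000'
fields = {'download': 'all'}

get_url = action + '&' + urlencode(fields)
print(get_url)  # /make/vol23/Download_submit.action?lm=1280161230000&download=all
```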

All well and good, but when I deleted my cookies using the Web Inspector, the URL returned a login page rather than my issue of MAKE. Yes! Finally, CoverLeaf has some sort of security on the download. But wait: the form in question just asks for an email address. In fact, if you look at the code:

<form method="post">
  <input type="text" name="email" value="Email" onclick="javascript:this.focus();;">
  <input type="submit" value="LOG IN">
  <p>If you have any questions, please contact<a id="button_digital_support" title="digital support" href="/make/vol23/DigitalSupport_input.action?lm=1280161230000"><span>digital support</span></a></p>

You will notice another POST form, but this time with no action URL. That tells the browser that when the form is submitted, it should be sent back to the current URL. The current URL is the address of the PDF, so I was able to change the POST to a GET again and add the email parameter to the end of my URL:

With a valid email address this URL just returns a cookie with your login credentials and a 302 redirect back to the original page. The download page is just a simple HTML page with a JavaScript onload handler that initiates the download.

Simple enough to turn into an RSS feed for iTunes so I can read MAKE on my iPad. I chose to write it in PHP, as my web host supports that language, but it could have been done in almost any language. The steps are pretty straightforward:

  1. Create a PHP script that takes a URL like make.php?email=<address> and returns an RSS feed whose links contain the email address.
  2. Create a proxy script that takes the email address, magazine, and issue information and scrapes CoverLeaf's pages to return a 302 redirect to the actual URL where the PDF can be found.
  3. Use mod_rewrite to make the URL pretty so iTunes does not complain.
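Step 1 can be sketched roughly as follows. This is an illustrative Python version rather than the PHP I actually used, and the feed title, item fields, and download path are stand-ins:

```python
from xml.etree import ElementTree as ET

def make_feed(email: str) -> str:
    """Build a minimal RSS 2.0 feed whose enclosure URL embeds the email."""
    rss = ET.Element('rss', version='2.0')
    channel = ET.SubElement(rss, 'channel')
    ET.SubElement(channel, 'title').text = 'MAKE (unofficial feed)'
    item = ET.SubElement(channel, 'item')
    ET.SubElement(item, 'title').text = 'MAKE vol23'
    # The enclosure is what iTunes downloads; its URL carries the email
    # address through to the proxy script in step 2.
    ET.SubElement(item, 'enclosure', {
        'url': f'https://example.org/download/{email}/make/vol23.pdf',
        'type': 'application/pdf',
    })
    return ET.tostring(rss, encoding='unicode')

print(make_feed('you@example.com'))
```

The real feed would emit one item per issue, but the shape is the same.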

Step 2 is where all the magic happens:


/* STEP 0. Grab the parameters from the query string */
    $magazine = $_GET['magazine'];
    $issue = $_GET['issue'];
    $email = $_GET['email'];

    // NOTE: the CoverLeaf hostname was elided from this post; this value is assumed
    $host = 'www.coverleaf.com';

/* STEP 1. Create a cookie file to store the session cookies for this request */
    $cookie_file = tempnam ("/tmp", "coverleaf-cookie-");

/* STEP 2. Log in to CoverLeaf using cURL (client URL library) with the email to set the cookie properly */
    $ch = curl_init ("http://$host/$magazine/$issue/Download_submit.action?pgs=all&lm=1273130943000&email=$email");

    // Tell curl to store any cookies in the file
    curl_setopt ($ch, CURLOPT_COOKIEJAR, $cookie_file);

    // Return rather than output the results of the curl request
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, true);

    // Use the user's HTTP User Agent so CoverLeaf can keep track of things properly
    curl_setopt ($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);

    // Do the request
    $output = curl_exec ($ch);
    curl_close ($ch);

/* STEP 3. Get the URL to download the issue from */
    $ch = curl_init ("http://$host/$magazine/$issue/Download_submit.action?pgs=all&lm=1273130943000");

    // Use the cookie file from the previous request
    curl_setopt ($ch, CURLOPT_COOKIEFILE, $cookie_file);

    // Return rather than output the results of the curl request
    curl_setopt ($ch, CURLOPT_RETURNTRANSFER, true);

    // Use the user's HTTP User Agent so CoverLeaf can keep track of things properly
    curl_setopt ($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);

    // The output of this request is the HTML containing the download link
    $output = curl_exec ($ch);
    curl_close ($ch);

    /* Find the anchor tag in the returned HTML by parsing it into a DOM */
    $dom = new DOMDocument;
    @$dom->loadHTML($output); // suppress warnings about malformed markup
    $pdfURL = $dom->getElementsByTagName('a')->item(0)->getAttribute('href');

/* STEP 4. Tell the User Agent to redirect to the URL that was found */
    header("Location: http://$host$pdfURL"); // assuming the href is site-relative

Then a few simple mod_rewrite rules to keep iTunes happy:

# RSS Feeds normally end with .rss, some readers expect this, so put email address in URL
RewriteRule feeds/(.*)/make.rss /path/to/cgi-bin/make.php?email=$1

# iTunes requires that content types match the file extension, so put parameters in URL
RewriteRule download/(.*)/(.*)/(.*)\.pdf /path/to/cgi-bin/download_magazine.php?email=$1&magazine=$2&issue=$3

In testing, I noticed that the download links don't always work, whether from the CoverLeaf site or from my RSS feed. Clicking download a few times seems to get around this. I think CoverLeaf has a race condition in their timestamp URL generator that ends up handing out invalid URLs.
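Since a failed link usually succeeds on a later try, a feed consumer could paper over the race with a simple retry loop. This is a generic sketch, not part of the original scripts; fetch stands in for whatever actually downloads the PDF:

```python
import time

def fetch_with_retry(fetch, attempts=3, delay=1.0):
    """Call fetch() until it succeeds, sleeping between failed attempts."""
    last_error = None
    for i in range(attempts):
        try:
            return fetch()
        except Exception as err:  # e.g. an HTTP error from an expired link
            last_error = err
            if i < attempts - 1:
                time.sleep(delay)
    raise last_error
```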

CoverLeaf magazines can still be read for free

August 19, 2010

In my hacking for the RSS Feed Generator, I noticed that the images of the magazine pages used by the iPad version of CoverLeaf are still freely available to anyone who wants them.

Using the same pattern as the PDFs that I described earlier you can construct URLs for any CoverLeaf magazine you want to read: