Category Archives: Technologies

Gathered on foot, just for you.

For a previous blog, I had a script that collected all of the links I bookmarked on that day and put them in a post. As time went on and I wrote less and less, those link roboposts became about 90% of the content.

That was bad for the blog, so I stopped doing that. Here, I’m doing something that feels similar, but it’s game stuff that happened to be presented to me in person. So, it’s as if I went out and physically gathered these links for you. Appreciate!

Summoner Wars

I discovered this at PAX East. It looked like any other card game, except it was played on a grid. There was a lot of orcy fantasy art on it, with the fonts that customarily accompany that kind of art. My friend Tim and I were walking by its booth, and the game’s designer invited us to try it out. I said sure but was skeptical.

The designer, Colby Dauch, did a great job of walking us through a first turn, and it did turn out to be a very good game. It’s an elegant tactical combat game that centers largely around positioning, as most tactical combat does, but also involves resource management and acquisition. Like in chess, you win by defeating the enemy’s key piece. Like in Magic and Dominion, you have a deck of cards that provides your guys, all of which have different abilities that can be coordinated in many different ways. The guys in your deck can be summoned using your magic points, which are obtained by killing your opponent’s guys.

You can build your own decks, which adds another dimension to the game, but we played with the prepackaged decks, all of which had a very distinct flavor. We played the hell out of this game at PAX, and I think it’s the best game I played there. Colby said an iOS version was in the works, so I’m looking out for that.

Spell Tower

Spell Tower’s another game I saw at PAX. It’s an iOS game in which you make words out of a tower of letters. When you connect a string of serially adjacent letters to make a word, they pop off, and the rest of the letters fall to fill the void they leave. It’s vaguely Tetris-esque. You have to consider where you’re making words because you can cause letters to pile up in concentrated spots. A tall pile is bad because a pile that reaches the top of the screen ends the game. Making words in this context is fun, but also compelling. And by “compelling,” I mean it can create compulsion, which I’m ambivalent about.

The developer, Zach Gage, talked to me for a bit about its development. He got a working version in a surprisingly short amount of time using Open Frameworks. This was a surprise to me because I didn’t even know there was an iOS port of OF. Zach’s made a wide variety of software art with it and has a library for working with sprite sheets.

I was tempted to get into it, but I have enough fluency in Objective-C right now to express myself fairly well and am generally short on time. If ofxiPhone had been around three years ago, I would have been all over it, the same way Ruby people are all up ons RubyMotion. If you’re coming to iOS development from a C++ background, you should check it out.


Finally, a couple weeks ago, I went to a Game Dev Night where I met other people making tile-based game maps with ASCII in plain text. The host, Greg Smith, presented us with Letterbrush. Plain text is relatively easy and simple to work with, but it does involve some annoying arrow key-dancing to specific columns and rows. Letterbrush gives you well-known drawing tools so you can skip all that foofawing.

Well, I think there were more, some non-game ones, but I’ve forgotten them. So, I hope you enjoyed those.

A static WordPress

If you have been within earshot of any technology blogs in the last few years, you’ve probably heard about static web sites being a good way to power a blog.

It makes sense. Most weblogs are just for reading. They need to change only when there’s an update, unlike a web app like Health Month or Mint that needs to change every time you visit. At most, updates need to happen whenever a new comment is posted, but that’s if you have comments and if your comments aren’t handled by an external service like Disqus.

Why should a bunch of PHP stuff and database queries run every time someone wants to read something? All that does is slow things down, and if you had a lot of traffic, it would cost you money.
Over a bit of last weekend and some of today, I moved Death Mountain to a static weblog. I didn’t truly need to do so; I don’t post that often, and I don’t get much traffic. However, I do have concerns about my current web host, and I’d like to not be tied to hosting that provides WordPress.

Mostly, I think that I wanted to do a bit of low-stakes messing around. Non-sequitur tinkering, you could call it. It’s sort of like working on your car, or the Hackintosh hijinks discussed in the Salad Days episode. (Or making a bunch of stew even though your wife doesn’t want any. Like I’m doing right now.)
Jekyll is a nice, lean static blog generator. However:

  • I already have this blog looking the way I want, and I don’t want to painstakingly recreate it.
  • There’s also the fact that should Death Mountain leave a web host that runs WordPress, Bravest Ever would have to leave it, too, and I don’t want to mess with the way Katt does posts (via WordPress).
  • Also, I liked posting using the WordPress iOS app the one time I’ve used it so far.
  • I like WordPress’s thorough cross-linking by dates and categories.

I wanted to keep the WordPress design and input methods while also having a static site. Maciej, the Pinboard guy, said something about this in passing quite a while ago.

You can use a program like wget or curl to generate a flat HTML version of your website from this local version, and then upload these files to your public server to share them with the world.

Here’s how to do this in practice:

First, get a copy of your WordPress blog running locally.

  1. Transfer the directory containing your instance of WordPress (the one containing index.php and various wp-* subdirectories) to your local machine.
  2. Transfer your database entries to your local MySQL instance. Export the database named after your WordPress blog. Some hosting services provide a web-based way to do this, but you can also use a shell command like mysqldump -u[username] -p [db name] > mysql.dump. To import that file into your local MySQL instance, use mysql -u[username] -p [db name] < mysql.dump or the equivalent. (I use MAMP to run Apache, MySQL, and PHP locally. It’s simple.)
  3. The tricky thing about running WordPress locally is that it uses a stored value to determine its base URL rather than using relative paths. So, if “WordPress Address” and “Site Address” in the WordPress dashboard (or in wp-config.php) are set to your real blog URL, all the links to other parts of the site will point at that URL rather than at http://localhost.

    You could set those properties to http://localhost, and then all the links would work on your local instance of your blog. However, when you take a static snapshot of your site, the links would point to localhost, which is no good.

    So, you need to keep these WordPress properties pointing to your real blog URL but get requests for that URL on your local machine to go to localhost. To do that, add an entry to your /private/etc/hosts file like so:
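A hosts entry is just an IP address followed by the hostnames that should resolve to it. Assuming, purely for illustration, that your blog lives at example.com, the entry would look like:

```
127.0.0.1    example.com www.example.com
```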

    If you’re on Lion, make sure you put that above the “fe80::1%lo0” line. And don’t forget to flush your DNS cache afterward. On OS X, dscacheutil -flushcache will do it.

    Now, you will be able to run your WordPress blog locally, using your real URL. You can shut down your Internet connection to make sure it’s working.

    Oh, also: You will need to run Apache (or whatever web server you use) on port 80. To do that, it needs to run as root, so start it with sudo.

  4. Now, use wget to get a static snapshot of your site. Unlike running WordPress locally, wget isn’t tricky at all. Just go to a clean directory and run:
    wget -m -e robots=off [your blog URL]
    Within seconds, you’ll have your whole site as static files. It will be from the non-admin’s perspective as well, so none of the admin stuff will be there.
  5. If you want, that can be the end of the line. When you want to post, just start your MAMP/LAMP stack, write your post, run wget, then upload the files wget gets to the server. I’d prefer not to upload the whole thing every time, though, so I use git to handle just sending the diffs. It also has the side benefit of letting me revert to specific versions, plus everything else git does. (If you are not already familiar with git, though, this is probably not worth it.) Since I’m already git-centric, I created a git repo in the directory in which I put wget’s output: the static snapshot. Then, I followed this awesome howto to set up a way to update the web site just by pushing with git. (If you’re using github for hosting, you can just push straight to github and skip all that.)

    So, my update workflow goes like this:

    1. Start MAMP, if it’s not already running.
    2. Go to my blog’s URL (which will actually resolve to my local instance of the site) and post.
    3. Run wget in my static site directory. (Which is a git repo.)
    4. Commit the changes in git.
    5. Push the git repo to the web host.

That’s it! I’m posting with this method now. It’s two extra no-brainer steps. It’ll serve up faster, and I’ll be able to move web hosting easily now. It’s not the most detailed explanation, but hopefully, you get the idea. I’d go into further detail, but my stew is almost done.

The dearth of electronics manufacturing in the US: More than met my eye

This article provoked quite a few thoughts, but I don’t think it’s worth the time to write an essay. I don’t really have any solutions to these problems, so an essay-style piece would just be pretty wrapping for fragments anyway.

But here are my fragments:

– I thought manufacturing in Shenzhen was mostly a matter of costs for technology companies. It’s not.

In particular, companies say they need engineers with more than high school, but not necessarily a bachelor’s degree. Americans at that skill level are hard to find, executives contend.

Apple’s executives had estimated that about 8,700 industrial engineers were needed to oversee and guide the 200,000 assembly-line workers eventually involved in manufacturing iPhones. The company’s analysts had forecast it would take as long as nine months to find that many qualified engineers in the United States.

I would have loved to buy a “fair-trade” iPhone that cost $600 or so. But it turns out not even that is possible.

– Why don’t we have these technical workers? Well, “many reasons” is always the right answer, but I think our fetishization (or maybe “fetishization” is an over-emphatic way of saying “over-emphasization”) of Making It to the Top is part of it. Our insistence that everyone strive to be important millionaires makes vocational jobs (I know – redundant, but I can’t think of how better to describe them) seem like loser business, so people go for bachelor’s degrees in something they can’t get work doing. We’re forcing too many variously-shaped pegs through round holes.

Have you made fun of DeVry? I know I have. Yet, it’s good work, and people could be happy doing it.

– Factories are in China, not just because of the labor cost and available skill, but because everything else is also there, which makes logistics easier and cheaper.

“The entire supply chain is in China now,” said another former high-ranking Apple executive. “You need a thousand rubber gaskets? That’s the factory next door. You need a million screws? That factory is a block away. You need that screw made a little bit different? It will take three hours.”

I’d actually heard about a pro-US effect of proximity last week: American stringed-instrument factories are still competitive with Chinese ones because of the prohibitive cost of shipping cellos and double basses overseas. Yup, didn’t think of this one, either.

So, it’s not just, oh, we tweak this or we tweak that, and we get manufacturing work back. There has to be a manufacturing “community” in place. And to get that, we’d have to commit to developing for decades.

I have doubts about our ability as a nation to commit to anything for decades.

– I don’t think that we necessarily need to bring back electronics manufacturing in order to prosper. (We do, however, need to use our work force in better and more varied ways.) However, I’ve heard people, when discussing how well the American economy is doing, point to Apple and Google or some other fantastically successful company.

“If you scale up from selling one million phones to 30 million phones, you don’t really need more programmers,” said Jean-Louis Gassée, who oversaw product development and marketing for Apple until he left in 1990. “All these new companies — Facebook, Google, Twitter — benefit from this. They grow, but they don’t really need to hire much.”

So, what does that get the country as a whole? It gets us prestige, which is not worth nothing. But the success of multinational corporations that started in the US doesn’t really help you or me (yeah, some of you work for these companies, so it does help you, but you know what I’m saying) all that much.

Steve Jobs

At first, I thought, well, Steve Jobs led a super fantastic life, and I’m usually not in the business of mourning people I didn’t know who have led fantastic lives. No need to be sad for him. (Which is true, if you, like me, didn’t realize he was 56.)

But today, I’m a bit sad for ourselves, which is, of course, selfish. Around 2007, I had worked as a software developer for seven years and was ready to be done. Shit seemed largely fucked up and unrewarding. But my friend Dan, who had been going on about Linux for a decade, had gotten into Macs lately, so after yet another fdisk/reinstall incident, I took a look in that direction, despite my long-held biases against them.

There, in OS X, I found reassurance that shit can be good. And solid and complete and not ugly. A couple of years down the line, I got a MacBook Pro, and it had the same class of “quality-feel” as some really fine chef’s knife, despite the great difference in complexity. Inconceivable to a guy that had been using a ThinkPad three years prior.

I’m going to keep it brief about the iPhone and iPad. But I do remember the day before the iPhone’s launch. I was arguing with a guy who said it wouldn’t even gross $1 million because it lacked tactile feedback. He even went home that night and wrote a long-ass blog post about it. Ha, the fool, you might say. But he was not the only person saying this. That the iPhone could work and work well was inconceivable in the heads of millions.

Steve Jobs didn’t come up with the idea of touchscreen interfaces. However, he got the iPhone made despite plenty of naysaying from experts, and probably from within Apple as well. Why not just make more computers with slight variations? They had a good thing going.

There are hundreds of incidents like that in his career. Jobs is painted as a force of nature in them. But he was definitely human, and I highly doubt that pushing forward weird ideas and products in the face of very vocal detractors, ranging from industry experts to Internet commenters, is an easy thing.

Without Steve Jobs, there will still continue to be people that come up with things everyone else says is stupid until they try it and love it, but I don’t know if they’ll be at the top like he was and able to shift the technological state of the world from there. Everyone that uses a smartphone, tablet, or computer with a mouse has benefited from Steve Jobs taking weird stuff and pushing it out there, and not just dumping it out, but championing real craftsmanship. We need this kind of high-powered advocate.

This isn’t to say this big wave of progress is going to crash. But I can’t help but feel as though it’s going to calm a bit, at least at the worldwide scale. If that’s the case, maybe we can make up for it by putting the things we imagine forward more often in spite of the inevitable pile-on. Easier said than done, I know all too well. But obviously, it can be done.

ALAssetsLibrary and threads

I’m working on an iOS app right now with a feature that uses images from the Photo Library. This was all solid for me, and I had worked with it for nearly a month before putting it before my alpha testers.

With a setup like that, you know where this is going: It totally did not work for them. At all. After a user picked a photo from the library, the spinner letting them know an image was being loaded would sit there forever, and eventually, this would show up in the console logs:

May 19 14:51:17 THE-MOON SpringBoard[27] : MultitouchHID(1ed4d440) uilock state: 0 -> 1

May 19 14:52:00 THE-MOON SpringBoard[27] : jotunheim[725] has active assertions beyond permitted time:
identifier: CoreLocationRegistration process: jotunheim[725] permittedBackgroundDuration: 600.000000 reason: finishTask owner pid:725 preventSuspend preventIdleSleep ,
identifier: CoreLocationRegistration process: jotunheim[725] permittedBackgroundDuration: 600.000000 reason: finishTask owner pid:725 preventSuspend preventIdleSleep

May 19 14:52:00 THE-MOON SpringBoard[27] : Forcing crash report of jotunheim[725]...

For the life of me, I could not reproduce this bug on my phone or my girlfriend’s phone. Which, of course, is bewildering. Googling pointed to a lot of problems related to threading, and indeed I was using a dispatch queue of my own making to do the image work.

I know there are things that absolutely must be started on the main thread: network calls (which end up on the web thread) and UI stuff. But I wasn’t doing anything with the network or the UI, as far as I knew. And why would this only happen on my users’ devices and not on devices in my household?

I’ll spare you a recounting of the red herrings I surveyed.

It’s because the first time you try to get stuff out of the Photo Library with ALAssetsLibrary, it asks the user if your app can have access to location data. (Photo metadata can contain location data.) But iOS can’t show a UIAlertView from a thread other than the main thread, so things will just stall out.

My phone and my lady friend’s phone had previous builds of the app on them that used ALAssetsLibrary from the main thread. So, back then, that dialog was able to show, and location data access permission was saved. Deleting the app doesn’t revoke that permission. That’s why the current build, which used ALAssetsLibrary from a non-main thread, ran into no problems on our phones: it already had the permission and didn’t need to show any dialogs.
The lessons I can see are:

1. Doing work in helper queues is great, but think twice about whether or not the things you do there are going to lead to UI or network stuff.

If I had read carefully, I would have noted that the documentation says:

When the asset is requested, the user may be asked to confirm the application’s access to the library. If the user denies access to the application, or if no application is allowed to access the data, the failure block is called.

2. Things that affect your app get saved outside of your app and don’t get cleared when you delete your app.

I hope this saves someone somewhere some time.

We just wanted to watch the Daily Show.

When we moved, we decided not to get cable because there were only a few new shows we watched, and we could get them online. So, we got a previous-generation-at-the-time Mac Mini to hook up to the TV (and to serve as the house server in general) and watched our shows on Hulu Desktop. It was a comfortable system. (My old PC was slated to do the job originally, but it did not survive the cross-country move.)

Then, one day, Comedy Central decided to stop letting Hulu show the Daily Show and the Colbert Report. The dark times arrived. We entered an era of mild inconvenience, which, of course, felt like total hell because we’d often try to watch these shows during dinner. After you’ve gotten dinner ready, you feel like eating, not messing with stuff to get your show to play.

The problem was that we’d have to go to the Comedy Central site to watch the Daily Show, and we had to watch it through the Safari Flash plug-in. You may have heard a thing or two lately about how Flash is problematic on Macs. Myself, I hadn’t really noticed, other than some very occasional freezing, because I had been running it on machines with no less than 2 GB of RAM. The Mini, though, has 1 GB of RAM, and whoa buddy. Flash is not pleasant over there.

The Flash 10 plugin wasn’t good under low memory conditions. It would outright crash when we’d try to play Daily Shows. Then, Flash 10.1 came out, and we gave that a shot. No crashes, but still quite pokey, and the audio would get messed up (terrible echoes) if it had been running for a bit. So, to watch an episode, I’d have to restart Safari (it seemed to do even worse in Chrome), get to the web site, and wait a few minutes for the video to load. It was one of those processes that made you feel as though clicking at the wrong time or too many times while it wasn’t responding would result in a crash, and you’d have to start the whole process over again.

It made me miss Hulu – and cable TV – a bit.

Using the mouse and opening browsers isn’t really inconvenient in the Greater Scheme of the Universe, but man, it is way harder than turning on the TV and flipping to a channel. Or opening Hulu Desktop and hitting a few menu items up with the Apple Remote.

That’s pretty much what media center apps like Boxee let you do. I tried Boxee half a year ago, and while it looked cool, it failed to open most of my media files and didn’t have access to the streaming media that Hulu and show-specific web sites did.

So, then. What is the point?


The point is that it’s now the future! Those media center applications are:

1. They’re better at identifying and playing media on your local drives.
2. They can now play Flash video!
3. They still have convenient remote-based navigation that doesn’t require you to get off the couch or do any screen sharing.

I’ve now got Plex installed on the Mini. It supports plugins that it calls “Apps” – basically video sources that have a bit of Python code that tells Plex how to play them. Two of these apps exist for the Daily Show and the Colbert Report.

As I understand it, when Plex plays Flash media, it often just goes to the web site presenting the video, presents itself as Safari, strips everything out but the video, then plays it in a little WebKit-based browser. The difference between that and playing it on the web site in a real browser? I haven’t done any real analysis, but I’d conjecture that the little browser, which does nothing but play video, takes up much less memory than Safari or Chrome, and so the Flash plugin isn’t put into that low-memory situation it deals with so poorly.

(As of now, however, Plex doesn’t support Flash 10.1. So, you have to use this to uninstall Flash, then install Flash 10.)

The takeaways?

1. If you wait long enough, someone will solve your problem for you.
2. Plex (and probably Boxee) will now let you easily watch the Daily Show using a remote.

More on the adventure of Mac Mini media centers at tl;dr. (Which, incidentally, is the newest blog of “Pants McCracky,” who is kind of like the Fedor of bloggers.)
And now it’s your turn to speak! What have been your experiences with media center applications? Do you have any tips and tricks?

Haha, just kidding. This isn’t the part where you speak (you can say something to me at @deathmtn on Twitter if it occurs to you, though), and we don’t do SEO-style “community building” here at Death Mountain. Instead, we have lots of these guys going around adding value:


TaskPaper to html conversion script/A less painful resume updating process

I couldn’t find a TaskPaper-to-html script out there, even though I thought this would have been done a million times by now. (Could be that I’m just getting worse at Googling. Let me know if I’m wrong!) So, I wrote a Perl script to do it:

Usage: perl biglist.taskpaper bigstyle.css

You give it a .taskpaper file and (optionally) a css file, and it’ll generate an html file that’ll contain ul and li elements (you can change the code to use blockquotes and divs, if you want), each set to a class corresponding to the item type – project, task, or note. You can then style those elements and classes in the css file.
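If you’re curious what the conversion amounts to, here’s a minimal sketch of the idea in Python. (My real script is Perl and does more; the flat classification rules, function names, and skipped nesting here are my own simplification.)

```python
import html

def item_type(line):
    """Classify a TaskPaper line by its simple cues."""
    s = line.strip()
    if s.startswith("- "):
        return "task"      # tasks start with a dash
    if s.endswith(":"):
        return "project"   # projects end with a colon
    return "note"          # everything else is a note

def taskpaper_to_html(text):
    """Turn TaskPaper text into a flat ul of li elements,
    each classed by item type so a css file can style them."""
    items = []
    for line in text.splitlines():
        if not line.strip():
            continue
        kind = item_type(line)
        label = line.strip().removeprefix("- ").rstrip(":")
        items.append('  <li class="%s">%s</li>' % (kind, html.escape(label)))
    return "<ul>\n" + "\n".join(items) + "\n</ul>"
```

You’d then style li.project, li.task, and li.note in the css file.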

TaskPaper is a super simple, yet surprisingly effective application for organizing things into hierarchical lists. It’s intended to be used for to-do lists, but I hadn’t used it for that until yesterday.

For the past couple months, I’ve been mostly using it to just jot down, then arrange, ideas I had about various projects, which is what I’ve been doing with plain text files for years. The difference between text files and TaskPaper files is that TaskPaper provides formatting for items based on simple cues, like a line starting with a dash, ending with a colon, or having a @done “tag” on it. It uses those formatting cues to change colors and font sizes. It turns out that just doing that makes lists much easier to read and much more attractive. You actually want to look at these lists. That, along with making it very easy, almost unconscious, for the user to format items, makes for a rather compelling product, believe it or not.

Anyway, I started using it for my resume. I haven’t touched my resume in a while, but I do remember the maintainability messes I used to have. I’d make the base copy in Word or OpenOffice, fighting it to format it the way I wanted. Then, I’d save it as a PDF to mail to people (because that way I could be sure it displayed the way I intended). More often than not, a recruiter would ask for it to be in doc format, so I’d send the original file. And then sometimes, a web form would ask me to paste in my resume in plain text. So, I’d copy it out of the doc file, paste it into a text editor, see that it looked terrible, then mess with it until it looked right. I’d also make an html version, which I’d have to hand code because the “save as html” features on Word and OpenOffice sucked.

Inevitably, I’d have to update the resume, which meant updating three different versions.

This time, I initially decided to just use TaskPaper to organize my editing, without having to fight Open Office. Soon, though, the idea of using just TaskPaper took hold of me. A .taskpaper file, after all, is just a text file with “- ” and “:” and various @tags in it. So, there’s the text version. I could use a script to convert it to html, then a css file to style it. Then, once I had it open in the browser, I could use the OS X print dialog to save it as a PDF.

So far, I’m pretty satisfied with the system. I really like that the formatting and content are separated. I’m not dreading the next edit or update. Of course, it does have a big flaw: I assumed that there’d be something out there that’d convert either html+css or a PDF to a Word doc. I was wrong, so I’ve got more research to do. If you happen to know how to handle this, please let me know!

How RGB is related to HSV, and how to implement hue shifting on the iPhone

Last weekend, I wrote some code that shifted hues in existing colors. To do that, I needed to improve my understanding of hue, saturation, and lightness. I had to go on a journey of learning, so I thought I’d share my rainbow voyage with you here.
I started by poking around in Photoshop. If you’re familiar with Photoshop, you may recognize this dialog:

If you move around the Hue slider, your image changes color without changing either saturation (which can be described as ‘colorfulness’, sort of) or value (lightness). That is exactly the functionality I wanted to implement. In Photoshop, it’s also fun stuff that can make your photos look psychedelic.

Then, there’s this guy, the Color Picker:

To pick your color, you click around in the box on the left and move the rainbow slider up and down. The number-containing boxes in the middle show you the RGB (red green blue) values of the color currently specified, which is handy. The rainbow slider is a hue controller, and the big box on the left represents lightness on its Y-axis and saturation on its X-axis.
When you sit down to code something like this, though, you can’t manipulate the hue directly like that, at least not on the iPhone. You can’t just say “give me this existing color’s hue” and then use that to compose a new color. Whether you’re using UIColor, CGColor, or cocos2d’s ccColor3B, the only way to manipulate the hue of an existing color is via RGB. Thus, you need to figure out how RGB affects HSV (hue saturation value/lightness).

Moving the hue slider in Photoshop’s color picker and noting the RGB value changes is a good way to do this. Try it, and you’ll notice that among the RGB values, there’s always a constant highest number, a constant lowest number, and a variable middle number.

Let’s say the color you start off with has an RGB of 57-181-107 (a mildly-bluish green). You can move the hue slider anywhere you want, but two of the RGB values will always be 57 and 181. If you slide it toward the orange spectrum, you’ll end up with something like 181-107-57. 181 and 57 are still in the mix, yet it’s an entirely different color.

When moving the hue slider, the limits remain constant because changing the upper and lower limits will affect the lightness and saturation of a color. If you move the cursor in the saturation/lightness color panel, however, those limits will change.


On a display, if you max out red, green, and blue, the result is pure white. It follows that raising the upper limit among the RGB components of a given color will move it closer to white, whereas lowering the lower limit will move it further away from white. “Moving a color further away from white” is also known as “darkening a color.”


Saturation is also affected by the upper and lower RGB limits, but by the two limits’ relative distance from each other, rather than by the limits’ absolute values. In RGB terms, gray occurs when all of the components’ values are close to one another. 177-171-161 is a kind of gray, for example. When all three components of a color have matching values, as in 177-177-177, that color is said to be totally desaturated.

Conversely, the further apart the RGB components are, the more saturated the colors become. e.g. 255-112-17 is a very intense orange. 255 and 17 are about as far apart as you can get.
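These two relationships are exactly how the standard RGB-to-HSV conversion defines value and saturation, which you can check with Python’s colorsys module (the 0–255 scaling wrapper here is mine; colorsys wants 0.0–1.0 floats):

```python
import colorsys

def hsv_of(r, g, b):
    """RGB (0-255) to HSV. Value is the upper limit, scaled to 0-1;
    saturation is the limits' distance relative to the upper limit."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

# 177-177-177: the limits coincide, so saturation is 0 (totally desaturated).
# 255-112-17: the limits are far apart, so saturation is (255-17)/255, near 1.
```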


If we want to change the hue while freezing saturation and lightness, we can’t change the upper and lower limits, as explained above. That leaves us just the following:

1. We can change the variable middle value. e.g. 112 in a color like 112-67-233. If we change 112 to 150, we get 150-67-233.

2. We can’t change the upper and lower limits, but we can swap their positions. e.g. In 112-67-233, we can swap 67 and 233 and get 112-233-67.

Now we know how to change the hue without messing with saturation or lightness, but we’re changing it haphazardly. To find out how to move the hue along the spectrum the way the Photoshop slider does, look at the RGB value changes that happen when the slider is slowly moved in the red-to-blue direction. You’ll see that:

1. One value (either R, G, or B) changes at a time.
2. That value either moves up until it hits the upper limit, or it moves down until it hits the lower limit.
3. Once a value hits a limit, that value stops changing momentarily, and a different value starts moving.

Here’s a graph of the changes in the RGB values as the color is moved along the spectrum:

(I apologize for the graph’s crapulence. I bet you there’s a good version of this somewhere on the web, but it wasn’t on the first page of search results, so I just scriggled this out.)

The lines represent the amounts of R, G, and B components that are present in each point along the spectrum. The vertical bars at the top are the colors that (approximately) result from those RGB combinations.

The graph shows the following behavior for each component (R, G, and B):

– If the component is not at the lower limit and the next component (e.g. the next component after R is G; after B, it wraps around to R) is at the upper limit, that component will descend.

– If the component is not at the upper limit and the next component is at the lower limit, that component will ascend.

– Otherwise, that component doesn’t change.

I’m sure there’s some clean mathematical formula that describes this, but I haven’t done any real math in like twelve years, and now my math organ is shriveled and useless. So, you’ll have to live with the algorithm.


Once you know the above, implementation is fairly straightforward. If you want an example, though, here’s my cocos2d-oriented implementation in Objective-C on github.

OGColorTools contains the code that does the color shifting. The meat of the code is in the method color:shiftHue:inDirection:. OGHueShiftExampleLayer is a simple example layer that creates a sprite and shifts its color every time the layer is tapped. You will need to provide your own bitmap and load it into the sprite for this to work.

Enjoy your hue shifting!


I went to the store yesterday to try out the iPad, and I (more or less) impulse bought it on the spot. Technically, the iPad held no surprises. Experience-wise, it did.

With no further ado, the surprises, enumerated:

1. Browsing the web on the iPad is quite possibly better than browsing the web on a computer. Technically, it’s just like browsing the web on the iPhone, except that the viewport is much bigger. This is a simple, simple change that inspired many an “it’s just a big iPhone” comment. When it comes to browsing, those comments hold true, in a technical sense. Yet, the embiggening really transforms the experience.

On the iPad, web pages look just like they would on your MacBook (or Dell, Lenovo, Alienware XL2bunchofnumbers, or what have you). On the iPhone, web pages looked truer to the HTML than they did on other phones, but they were still, of course, miniaturized.

So, browsing on the iPad looks the same as it does on a computer. Then, what’s the difference? The method of interaction.

On a computer, you move your mouse to links and other interactive objects, then click a mouse button to make things happen. On the iPad, you just touch them. Also, you can put your finger directly on the page and push it up and down. Again, this is a small difference on paper that changes the experience wholly. It feels more intuitive. For me, it feels better.

(Incidentally, I’ve had three different people tell me that their two-year-old kids quickly learned how to use an iPhone. The whole “touch an object, get a reaction” way of operating is what allows this to happen. It’s difficult for someone that age to understand that moving a mouse and hitting keys with certain markings causes a change on the screen, but they do understand the idea of things reacting when you touch them.)

2. You know how there’s a lot of iPhone development work these days centered around creating an iPhone app version of a web site or application? That’s not necessary for the iPad. There’s plenty of space, so most web stuff does not need to be re-presented in smaller bites.

The best way to use Facebook on the iPad is not through a Facebook app, but through Safari. (Well, probably – I haven’t tried Facebook Ultimate.) The best Twitter application for the iPad is Brizzly, through Safari. Flickr in Safari is better than the Flickr app.

3. The same thing I said above about web browsing, except with games. I thought Final Fantasy I for iPhone was well done, but there’s no way to avoid some cramping when porting a game of a certain complexity to a phone. On the iPad, it’s spacious and detailed. You can touch things without worrying about accidentally hitting the wrong thing. It’s a better game than it was on the NES or PlayStation. To sort of give you an idea of how it looks and feels:

(I tried to play it with one hand while holding the camera with the other, and my girlfriend walked in in the middle of it and made fun of me for doing this. It is also upside down. Yet, I think there’s still a chance that it’ll help you get the idea.)

Final Fantasy I is just a remake of an old game, so it might seem like it’s not that relevant that it’s good on the iPad. (Although it did debut as the top grossing game the week it was released, so a lot of people out there do care about remakes.) However, if simply giving a game five times the area improves things this much, it’s a great sign for future games made specifically for the platform.

4. The keyboard is rather good. I’m not going to write a novel on it, nor even a blog post, if my MacBook is nearby. However, it’s perfectly fine for writing tweets or typing terms into a search field. Being able to spread your fingers out is a big deal.

The iPhone touch-based user interaction model was a huge step forward. But in some ways, the iPhone’s phone-sized frame muted just how huge it was.

In a way, the iPhone is like Jigoro Kano, the creator of judo. He was innovative and capable, but being 5′2″ and somewhat of a pacifist, there were certain things he just couldn’t do. The iPad is like Masahiko Kimura, a much bigger and stronger judoka who defeated fighters all over the world, illustrating more clearly what Kano’s techniques could do.

Of course, spending much of his life traveling around to beat people up limited Kimura in ways that Kano was not limited, in the same way that the iPhone can go many places that the iPad cannot.

(Wow, that post took a pretty weird turn at the end. OK, I’m done here. Excelsior!)