metamerist

Friday, October 12, 2007

Sometimes dumb is smart...

I've rambled on the subject a bit before, but all too often these days I see people trying to create machines smarter than the people who use them, and--all too often--the designers fail miserably.

My wife recently purchased a new microwave and, in spite of its good points, this machine (like the previous one) tweets periodically and relentlessly until the object of nuke-age is removed. As annoying as this is, it does spare me from a fate to which presumably millions have succumbed: forgetting to eat.

The underlying algorithm seems to work like this:

State #1: Microwave stops.
State #2: Wait thirty seconds for user to open door.
State #3: Beep and go to State #2.

And this appears to go on ad infinitum. I'm left wondering what sort of person might find this system most useful. My conclusion: the most forgetful individual on the planet. Perhaps there's some cook who will finally catch the memory lapse on the hundredth beep cycle, but the food is going to be cold by that point anyway.
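
For fun, here's the loop sketched in C++ (the function name and the door/beep callbacks are my own inventions, not anything gleaned from the manufacturer's firmware):

    #include <unistd.h>  // for sleep() on POSIX systems

    // Nag forever: wait thirty seconds, check the door, beep, repeat.
    void nag_until_door_opens(bool (*door_is_open)(), void (*beep)())
    {
        while (!door_is_open()) {   // State #2: wait for the user
            sleep(30);
            if (!door_is_open())
                beep();             // State #3: beep and go to State #2
        }
    }

Note the absence of any exit condition other than the door opening: the loop, like the machine, never gives up.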

Cast one vote for: If you're smart enough to get something inside a microwave and you're smart enough to program a microwave, you're probably smart enough to determine the optimal time to remove a heated item from a microwave.

Another problem with some smart devices is that they're unpredictable. A while back, I was asked to come up with a response to a "smart" photo retouching tool in a competitor's application. As smart as the competitor was, my biggest beef with the "smart" tool was that it was quite unpredictable. Sometimes it was smart and worked well. Sometimes it was too smart for its own good and the results were random and undesirable. The behavior was even acknowledged in the product's Help file, which suggested that if the tool didn't work right the first time, one should just undo the operation and try again--which seems to imply the tool is unpredictable to the point of randomness.

My answer to the smart but unpredictable tool was a tool that was dumb but predictable. One admitted impetus behind this was the short amount of time I was given to come up with a solution, but another force impelling me toward the comparatively dumb approach was a strong belief in the importance of creating tools with predictable behaviors.

What seems to be forgotten with many "smart" tools is that humans are smart--much smarter than any so-called "smart" tools devised so far--and because we are so smart, our intelligence can often more than compensate for the dumbness of our tools.

Consider what Michelangelo did with David using tools no smarter than a hammer and chisel. If the sculpture had been attempted with a "smart" hammer and a "smart" chisel that randomly tried to guess the master's intentions, I doubt the sculpture would have turned out as well.

The sculptor's tools are dumb, but they are governed by consistent physical laws, and that makes it possible for the sculptor to learn and compensate for their deficiencies. When tools behave randomly and unpredictably, there is nothing consistent to learn, and compensating becomes impossible.

Inspiring me to tap out this post is a run-in with branch prediction on Intel CPUs. Today's CPUs work kind of like little assembly lines. The machine instructions our computers execute go on the equivalent of a little conveyor belt (they call 'em 'pipelines').

Usually, this all works fine and well, but problems occur when the CPU comes to a fork in the path of execution. When that happens, the CPU has to guess which path will be taken; once the guess is made, the instructions from the guessed path are transferred to the conveyor belt (this is called "branch prediction").

If the CPU guesses the wrong path, it's bad, because the conveyor belt has to be swept clean and reloaded with the other path. If you want your code to run as quickly as possible, you don't want this to happen--it's an unnecessary waste of time.
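
If you'd like to feel the cost yourself, here's a toy benchmark of my own devising (not anything from Intel's documentation): it runs the same branchy loop first over random data, where the branch is a coin flip, and then over sorted data, where the branch becomes predictable.

    #include <algorithm>
    #include <cstdio>
    #include <cstdlib>
    #include <ctime>
    #include <vector>

    // Sum only the "small" values; the if() below is the branch under test.
    long long sum_small(const std::vector<int>& data)
    {
        long long sum = 0;
        for (size_t i = 0; i < data.size(); ++i)
            if (data[i] < 128)  // ~50/50 on random data: the predictor can only guess
                sum += data[i];
        return sum;
    }

    void time_it(const char* label, const std::vector<int>& data)
    {
        std::clock_t t0 = std::clock();
        long long checksum = 0;
        for (int rep = 0; rep < 100; ++rep)
            checksum += sum_small(data);
        std::clock_t t1 = std::clock();
        std::printf("%s: %.0f ms (checksum %lld)\n", label,
                    1000.0 * (t1 - t0) / CLOCKS_PER_SEC, checksum);
    }

    int main()
    {
        std::vector<int> data(1 << 20);
        for (size_t i = 0; i < data.size(); ++i)
            data[i] = std::rand() % 256;

        time_it("random", data);  // branch outcome is effectively a coin flip
        std::sort(data.begin(), data.end());
        time_it("sorted", data);  // identical work, but now the branch is predictable
        return 0;
    }

The sorted pass typically runs severalfold faster even though it does exactly the same work; the difference is the price of all those swept conveyor belts.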

One of the rules for guessing the paths least and most traveled used to be fairly simple: assume backward jumps will happen and forward jumps won't. (Backward jumps are usually the bottoms of loops, and loops usually repeat, so it's a sensible assumption.) What's nice about this is that it's fairly easy for a developer with a knowledge of the probability distribution of branch cases to write code that optimally leverages this simple and predictable branching rule.
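
For instance, a developer could arrange for rare cases to sit on forward jumps. Here's a sketch, assuming GCC: __builtin_expect is a real GCC builtin, while the likely/unlikely macro names are just a common convention (famously used in the Linux kernel), not anything Intel-specific.

    // Hint that the error case is rare, so the compiler lays it out as a
    // forward branch, exactly the kind the static rule assumes won't be taken.
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    long long sum_positive(const int* data, int n)
    {
        long long sum = 0;
        for (int i = 0; i < n; ++i) {   // loop back-edge: backward jump, assumed taken
            if (unlikely(data[i] < 0))  // rare case: forward jump, assumed not taken
                continue;
            sum += data[i];
        }
        return sum;
    }

With hints like these, the common path falls straight through and the rare path is tucked away on a forward jump.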

Now consider the behavior for static branches in the Pentium M and Core 2 Duo: "These processors do not use static prediction. The predictor makes a random prediction the first time a branch is seen... There is simply a 50% chance of making the right prediction..." (Agner Fog, The Microarchitecture of Intel and AMD CPUs, p. 27)

In fairness, this isn't quite as bad as it sounds: in most cases these CPUs learn from their mistakes pretty quickly when the paths are executed repeatedly inside loops. Still, I'm frustrated by the shift in behavior from predictable to random.

It's hard to write code that compensates for a coin flip. In their repeated attempts to make CPUs smarter about running software, CPU manufacturers seem to be making it harder for developers to write smart software. Speaking as a developer, it's much easier to exploit relatively dumb rules consistently applied than it is to entertain all sorts of complicated smart variations.

Sometimes dumber is smarter, because dumb is often predictable and, consequently, human intelligence can compensate for it.

2 Comments:

Anonymous said...

You make a lot of sense. In almost any system I would much rather have it stop or backtrack than go forward without being certain of the problem. It's more productive and less frustrating. And the microwave cycle observation was hilarious.

10:37 PM  
Anonymous said...

Oh, and I am just a random visitor who was looking up a translation of a song by Beck. I have no idea what this site is. I'm going to explore, though.

10:44 PM  
