Sunday, November 19, 2006

Church of the Wholly Consumer

Many years ago, I was:
a) Living in the United States
b) Living in the suburbs
c) Living in a house
d) Living with 3 or 4 (at various times) people who shared groceries
e) Eating a lot of prepackaged, frozen, and generally bad-for-me food

At that time in my life, it almost made sense to go to Costco. We would make occasional trips there, get large boxes of stuff we didn't really need, put them in the huge pantry at home, and all was well.

For the past 10 years, I have been:
a) Living in Canada
b) Living in the downtown of a vibrant city
c) Living in a small apartment
d) Living with one other person, and we very rarely entertain
e) Trying to lose weight and stay healthy by eating lots of fresh food and staying away from the prepackaged stuff, which always seems to have too much sugar, fat, salt, and random toxins.

So, for some very strange reason, a couple of weeks ago I bought a Costco membership. They opened up a store near me (quite the feat, managing to find space for a Costco in downtown Vancouver). My company paid half of the $55 fee, and the deal included $25 in gift cards. So my membership was effectively only a few bucks for the year, and I figured I'd get that much entertainment value out of it.

After a few weeks of looking at my card and thinking "what have I done?", we finally went to the store.

First problem: we needed to drive about 10 blocks to the store. For city people, it's a big deal when we need to use the car. We like to walk. We chose to live in the city because we think cars and burning gasoline are things to be avoided. But I knew that there was no way I was going to be able to carry the 5-gallon jar of mayonnaise home, so we had to drive. The parking lot was the first hint of the "supersize" experience. Those of you who drive SUVs and have a really hard time parking in normal lots would love this place. The parking stalls are at least a foot wider than standard, and there's an extra one-foot "median" painted between stalls. They fit 4 cars where anywhere else there would be at least 5, maybe 6. We parked the car between a pillar and a wall (normally a challenging feat), and there were meters of clearance all around.

On the way in, there are shopping carts. Just like at Safeway, but the carts are at least 50% bigger.

Inside: a huge, cavernous room. Aisles were at least 2 meters wide. Everything in excess. I needed some zip-lock sandwich bags. About half a year ago, I bought a box of 100 bags at Safeway, and it's now empty. At Costco, I could get a package of 4 boxes of 150 bags each. I wouldn't use 600 bags in two years. Likewise, I could get a kilogram of crushed garlic. I did choose the smallest package of peanut butter I could find: 2 kg. We looked at the produce, but everything we saw came in such large units it would have gone bad before we could eat it. Robyn was quite disappointed that we couldn't buy a huge 5 kg tube of toothpaste (they sell toothpaste in normal-sized tubes, but you have to buy them 4 at a time).

All in all, it was an interesting experience. If I had 5 or 6 people to feed, and a large pantry to keep these boxes of food in, it might make sense to shop there.

At the checkout, the final shock: they don't take normal credit cards. Believe it or not, the only credit card Costco takes is American Express. That's just silly. American Express is well known as the card almost nobody accepts. It's the card you carry if you want to pretend to be willing to pay the bar tab, because you can order rounds of drinks, give your AmEx to the server, have it rejected, and then claim that you tried to pay, but couldn't. Fortunately, I had enough cash in my wallet to pay.

It was fun. I don't think I'll be going back there.

Monday, October 02, 2006

It's not as bad as it used to be

I've been quite bothered lately by how badly people of Middle Eastern descent have been treated in Canada and the US (elsewhere too, I expect, but I don't see that news). We have "no fly lists", mostly populated by people guilty of having brown skin. We have people kicked off aircraft (and arrested, I think) for praying "in a foreign language".

Slightly more than a half-century ago, Canada and the US were at war with a race of people who were visually distinct and had a reputation for being loyal to their family and race. Many people of that race were Canadian or US citizens, and had done absolutely nothing to harm their new home countries. But we suspected them based on their race, and we feared what they might do. So those people were arrested and sent to concentration camps, and all their property was stolen by the government (and sold to white people for very little).

Although I'm not happy with how Canada and the US are treating Middle Eastern people today, I am delighted that we have at least progressed: we are not treating them the way we treated the Japanese during WW2.

Thursday, August 10, 2006

Outlook: Malice

In direct contrast to the title of this blog, I do assume that Microsoft Office applications are specifically designed to make my life miserable.

I assume this because:
a) I have several friends who actually work on MS Office. They have not yet shown a tendency toward this kind of prank, but that only means they are devious.
b) The behaviour is just too bizarre to be a bug.

Two days ago, Outlook decided that it should group my mailbox by flag status. I have been unable to change its mind. Whenever I open any mailbox, it is sorted by flag. I click on the date column, and it sorts by date. Then the next time I navigate away from the folder and back to it, we're back to being grouped by flag. It's maddening!

Tuesday, August 08, 2006

New twist on an old concept

This is a flash animation, but I wonder how long it will be before this is actually done in hardware?

http://www.brl.ntt.co.jp/people/hara/fly.swf

Saturday, August 05, 2006

New Blog

Not that this blog is really interesting or valuable to society, but I've created a new one, which is even less interesting and less valuable. http://wadefit.blogspot.com

Are VMs the new OSs?

A brief history of programmable electronic computers:
1) Back in the early history, the application software ran directly "on the hardware". Every program needed to know how to access hardware devices, etc. Writing a program was a lot of work, because you needed to code everything!
2) Then came the invention of reusable code. First the static library and then the dynamic library allowed a program to re-use common boilerplate code, particularly the code needed to access hardware and the like.
3) Then came the OS. The OS served 2 purposes: it provided standardized, high-level access to hardware (like a library does), and it allowed multiple programs to share the hardware without conflicting with each other.
4) Then came the REALLY BIG OS. After the original operating systems that were really about just providing fair access to hardware, some companies started putting more and more functionality in the OS. Around the same time as the REALLY BIG OS, the idea of dynamically linked libraries (which started in step 2) became really popular. Any given application depends on a very large number of dynamic libraries, some of which are part of the OS, and some of which aren't. Not that the distinction really matters.
5) Then came OS patches. As the OS gets more and more complex, the number of places where it can go wrong increases. So there are more bugs, and more patches. Along with OS patches, there are other dynamic library patches, since those are also complex and therefore prone to bugs.

This is the state commonly known as "DLL Hell", and quite frankly, we're still there. To elaborate a bit more on DLL Hell: in theory, all patches are perfectly backwards compatible, so that if your computer has the absolutely latest version of all components, everything will be peaches and cream. In reality, patches frequently introduce new bugs, and a given software application depends on a specific version of the OS and DLLs. Older versions won't work, and neither will newer ones.

Of course, this leads to a new problem: what if you are running two different apps on a computer, but those two apps require different versions (patch levels) of the OS? You're stuck. If you set up the patches for one, the other won't work. If you set up for the other, the first won't work. This, in a nutshell, is DLL Hell.

The fix to this problem is an interesting step backwards. Many IT companies are dedicating hardware to a single application. Instead of having a single "big iron" mainframe doing everything, they have a fleet of smaller computers, and each computer is configured for a single app. This is how IT managers get out of DLL Hell.

But that's expensive. So to reduce the cost, IT departments are starting to cut down on the number of physical computers they use, simulating the many computers with virtual machines (such as VMware, MS Virtual Server, Xen, or the like). They have one big computer running a VM monitor, which simulates several standardized machines and shares the resources between those machines.

Sound familiar? It is. The "new" VM monitors are effectively old-school operating systems, which only standardize the hardware and fairly divide hardware resources between the apps.

So now we have this:
1) At the bottom, the real hardware.
2) Running on the real hardware, the "host OS".
3) Running on the host OS, the VM monitor, which creates several virtual machines.
4) Running in one of the virtual machines, the "guest OS".
5) Running in the guest OS, the single application.

This way, each guest OS can be patched and configured to the level required by the single application that runs on it. The "host OS" can be patched and configured to what is required by the VM monitor. And in the end, we're back to having several applications running on one chunk of real hardware. But at a significant performance cost, as everything now goes through 5 levels.

I think there are some good lessons learned here. I think the lessons are:
1) DLLs cause more problems than they solve.
2) The OS should be minimal and simple.

What should be in the future?

I think where we should go from here is to modify the VM monitor so that it runs directly on the hardware, making the host OS unnecessary. VMware, Inc. has already done this, Microsoft will never do this (that would be admitting Windows wasn't needed), and Xen might do it some day. The next step is to lose the DLLs and go back to statically linked libraries, with no OS. The functionality of the OS can be pulled directly into the application via code libraries, so that the application can and should run directly on the virtual machine. Since it's going to be the only thing running in that VM image, there's no point in having an OS to share resources.

What will be in the future?
Basically, I expect we'll continue to live with the 5 layers, even though two of them no longer serve any purpose, because hardware is getting cheaper all the time, and it's easier to leave it alone than to make big changes.

But I do predict that software companies will start selling their software as pre-configured VM images, rather than as applications. They will need to work through some licensing issues, but the amount this will simplify support will make it worthwhile.

Saturday, March 11, 2006

The FLCR saga

I had a really simple problem: every once in a while, my router would hang. Normally when the router hangs, I reboot it by unplugging and replugging it. I wanted to be able to do this while I was in another country, without having to ask a neighbor to come over and unplug it.

My lovely (but shy) wife came up with the idea of building a robot to unplug and replug the device, controlled by telephone (so that when I noticed the problem, I could call home and reboot the machine).

I improved on the idea by having the computer monitor its own internet connection and decide to reboot the router whenever it can no longer find the internet. And I thought that a relay switch would be more practical than an arm that unplugs and re-plugs the device.

Then I started trying to answer the question "How do I get my computer to switch on or off a device that's plugged in?"

The obvious, but expensive, answer was home automation technology, like X10 or the newer replacements. Many hundreds of dollars later, I could have a system which was capable of switching off any lamp or dimmer in the house, and maybe also my router. That research is for another blog entry, but for now, let's say it's way too much overkill for my problem.

Then I started researching computer-controlled relays. I was almost ready to break out the soldering iron with some simple plans that would control a 125 volt relay from the parallel port, when I noticed this: http://www.thinkgeek.com/gadgets/electronic/6ee4/

"Mmmmm", thought I, "this is interesting. It is a 120 volt switch, controlled by the computer. Now how can I switch my USB port on or off?"

After some research, I discovered that the USB spec says it's possible to control the power to individual ports. Then I discovered that not all USB controllers support this, and I discovered that I've used all my USB ports already, so I don't have a free port to plug this into.

After yet more research I discovered how to get the Linux USB driver to enable/disable power on a specific port, and I discovered which brand of USB hub supports this feature.

Finally, I bought a new USB hub that supports power control, a "Mini power minder", and I wrote some custom code to monitor the network and drive the power to the USB port. "FLCR-2" ("Fast, Lightweight, Cablemodem Reconnector") was born!
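The idea is simple enough to sketch. Here's a minimal version of that kind of watchdog loop in Python, not the original custom code. The ping target and especially the `hub-ctrl` command for toggling hub port power are assumptions; the exact way to switch a port depends on your hub and driver.

```python
# Sketch of a connectivity watchdog that power-cycles a router by
# toggling power on a USB hub port. The hub-ctrl invocation below is
# a hypothetical example, not a verified command line.
import subprocess
import time

CHECK_INTERVAL = 60         # seconds between connectivity checks
FAILURES_BEFORE_REBOOT = 3  # don't power-cycle over one dropped ping
OFF_TIME = 10               # seconds to leave the router unpowered

def internet_ok(host="example.com"):
    """One connectivity check: a single ping with a short timeout."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "5", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def should_reboot(consecutive_failures):
    """Reboot only after several checks in a row have failed."""
    return consecutive_failures >= FAILURES_BEFORE_REBOOT

def set_port_power(on):
    """Toggle power on hub 0, port 1 (hypothetical hub-ctrl syntax)."""
    subprocess.run(["hub-ctrl", "-h", "0", "-P", "1",
                    "-p", "1" if on else "0"], check=True)

def watchdog():
    failures = 0
    while True:
        failures = 0 if internet_ok() else failures + 1
        if should_reboot(failures):
            set_port_power(False)   # cut power to the router...
            time.sleep(OFF_TIME)
            set_port_power(True)    # ...and bring it back
            failures = 0
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    watchdog()
```

Requiring several consecutive failures before rebooting matters: a single lost ping happens all the time, and you don't want the watchdog power-cycling a router that's actually fine.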

What are the lessons learned?
  1. It's OK to search far and wide for ideas, when you really don't know what you're looking for.
  2. Linux is better than Windows, because many things that are simply impossible in Windows are difficult but possible in Linux. Windows can't switch off a USB port on demand, Linux can.

Thursday, March 09, 2006

Canadian TV

I hope this doesn't turn into a pointless ranting blog, but I do have a pointless rant for today.

What is wrong with Canadian TV?

Years ago, you could identify a Canadian TV show after about 5 minutes of dialog, it was so badly written and awkward. Now the dialog is more realistic, but the shows are DEPRESSING!

Even the ones that are supposed to be comedies are just so sad. I watched the first episode of "At the Hotel" today, and now I want to drink until I can't think anymore. I liked the first season of "This is Wonderland", but now I don't watch it anymore because it's too depressing. Da Vinci's Inquest was a great show for a few seasons; then it wandered into "let's make really bad stuff happen to everyone" land. The new "Da Vinci's City Hall" is all about people stabbing each other in the back, and has never had any of the good qualities that made Inquest a good show.

Note to Canadian television writers: Real life is depressing enough, particularly in the Canadian winters. We don't need more reasons to be sad.

Tuesday, February 28, 2006

JavaScript & spellcheckers & malice

I couldn't spell-check my last post, because there was no spellchecker button on my post edit page. I was assured that blogspot has a spellchecker, yet it did not appear. I spent a few minutes going through all the options, to no avail.

It didn't take long for me to realize what the problem was: I have a JavaScript blocker installed in my web browser. I told it to enable JavaScript from "blogger.com" and "blogspot.com", and then the "check spelling" button appeared. Once I also enabled popups, I had a working spellchecker.

So what's the big deal? Am I an idiot for installing software that blocks JavaScript and popups, and then complaining that spellchecking doesn't work? Of course not. Not really.

There are some real problems with defaulting to allowing JavaScript. The JavaScript interpreter is a large quantity of complex code, and requiring more complex code to execute in order to render a web page adds more risk of exploitable bugs (e.g. buffer overflows and stack-smashing attacks). In addition to the possibility of bugs in the interpreter which allow the author of a malicious web page to take over your computer, there is also the simple fact that JavaScript is a fairly full-featured programming language, and it's never a good idea to run any random code that you happen to find on the web. Even if the code does not exploit a bug in the interpreter to take over your computer, there's a good chance it can do something annoying.

All this risk is (not) balanced by the fact that JavaScript rarely adds much to the page. Often, the main "benefit" of the JavaScript code is to make an advertisement more noticeable. Good for the advertiser, but not so good for those who'd rather not be bothered by ads. Occasionally, as on blogspot, the JavaScript is actually useful and makes my life easier.

So, I generally disable JavaScript (and other active scripting technologies), and only enable it for pages where there is a real advantage to me, and I feel the advantage outweighs the risk.

Final thought: there are many browsers out there with JavaScript intentionally turned off. If your web site has useful functionality that requires JavaScript, you should put a notice on the page telling those users that a much richer experience is available if they enable it.

Why blog?

That's a good question, which each person needs to answer for themselves. In my case, I know deep in my heart why I need to create and use this blog, but I'm not ready to share it yet. It can be my little mystery.

Meanwhile, I need to find the spellchecker, because I can't spell "mystery".