Video: Behind the Scenes at Giant Keck Telescopes

Keck in Motion from Andrew Cooper on Vimeo.

OK, I've said before that I worked on the Keck Telescope (see this post). I love seeing what is still a favorite project of my professional life. Those 36 (now 72) mirrors were the result of a ton of great engineering and a lot of painstaking work. I'm proud to have been part of it.

Keep an eye out for all the maintenance required to keep the telescope operating at its peak. Also, remember that even though Keck is located at arguably the best place on the planet for an optical telescope, it still only operates at its absolute best for one day a year. The conditions have to be just right.

Enjoy!

More Bletchley Park

I saw another post about Bletchley Park getting a grant to fund restoration efforts and new exhibits. As a techie and a student of history, I've always found Bletchley Park and Building 26 fascinating. This is where Turing (of Turing Machine fame) got his start. This is where the modern computer was born. This is where thousands of lives were saved by cracking codes during WWII. Great stuff. Some interesting links turned up on Boing Boing today as well.

What is truly amazing about all this is that some of it is still classified, and don't forget that everything was destroyed after the war to preserve secrecy. Rebuilding the bombes and creating the exhibits required a lot of investigation. If you can, read the acknowledgements in The Secret in Building 26.

An Amateur Astronomer's View of the Space Shuttle Discovery Cozying Up to the ISS

An Amateur Astronomer's View of the Space Shuttle Discovery Cozying Up to the ISS. This is pretty amazing. This guy, Rob Bullen, guided his 8.5" telescope with a Canon 40D attached BY HAND to get this photo of the shuttle about to dock with the ISS. The level of precision required to guide a telescope at a moving target 190 miles up is pretty impressive. You breathe a little hard, and you'll move the telescope too far.

Nice work, Rob!

Now that this is the last shuttle mission, I will need to find another reason to get up in the middle of the night to catch a glimpse of the orbiting shuttle. I never saw a launch in person, though back in August 2009 a night launch trajectory took the shuttle over New England, and I got to see the powered flyby. It was impressive.

Another fond childhood memory is being at Kennedy Space Center before the Shuttle program really kicked off. We got to wander around the VAB and all over Pad 39A. Standing on the pad where the Apollo missions started was a highlight for a kid who spent hours in the library reading everything he could about Mercury, Gemini, and Apollo.

Media Server Fun

Back when I moved into my current house, I took the opportunity to set up a media server. In my old place, there really wasn't a good spot for one, so I would dig out a CD or two (out of 1000+) each time I wanted to hear something. As you might expect, that got old, and I didn't listen to music as much as I'd like. Enter the HTPC. Now, I'm more than a little bit of a music and movie buff. Plus, I'm a computer geek. Why not go all the way? I started out by buying parts for two PCs. The first was to be a media server, and it was a desktop unit with an audio-component look. The second was my home PC (this was before I switched back to Mac). In addition to storing all my music, I wanted to source music for three zones in the house and up-convert my DVDs for a large projection screen. Finally, by combining the HTPC with a Pronto remote, I could also control the lights and such. Here are the parts I used:

  • J. River - Media Center - Think of this as iTunes on steroids. It's designed for very large media libraries. It also supports images, video, 10-foot interfaces, etc. Most important for me is that it supports multiple audio zones and bit-perfect playback. MS Windows has a nasty habit of resampling your 44.1kHz CDs to 48kHz, and I didn't want any extra conversions to affect audio quality.
  • Girder - This is a pretty awesome macro tool that lets you do just about anything you want, and it can be controlled via IR.
  • SuperNudeList - It's not what you think! This handy tool will take an export from J. River Media Center and turn it into Pronto commands. That lets me choose all my albums from the remote without having to fire up a monitor on the HTPC. In fact, the HTPC never had a permanent keyboard or mouse. Just so you know, the tool's original author went by the name Nudel. That's where the name came from.
  • TheaterTek - This is a fantastic DVD player for the PC. Unlike iTunes or others, it operates like a normal DVD player, and I could easily control it via my Pronto.
  • ffdshow - This little number hooked into the video output stream and converted a standard DVD video stream to 720p or 1080p for my projector. To say it is a tweaker's dream would be the understatement of the year.
  • Roku Soundbridge - This UPnP/DLNA player gives me access to my entire library anywhere in the house. Unfortunately, Roku doesn't make them anymore.

Now I had some hardware, but no content. I had plenty of disk space (no RAID yet, though), so I went with a lossless encoding (FLAC). Again, this ensured that I wasn't missing anything as I played it through my high-end system. This was a chore and a half: I ended up with three CD drives in two PCs ripping in parallel, and it still took over a month to get everything ripped. Of course, now you can ship everything off to a CD ripping service, and they will take care of it for about $0.25 per disc.

Eventually, I had a system that could play music in three areas of the house, supported UPnP and DLNA clients, acted as my primary DVD player with up-conversion, and controlled other areas (lights, etc.) in the house. It worked pretty well, but I did occasionally receive phone calls from my wife when things went funny.

I did my share of tweaking things over time:

  • Endless hours with ffdshow trying to get the best picture possible. You really can go crazy here.
  • Hacked my HTPC to allow for software RAID on Windows XP. The last thing I wanted to do was rip all those CDs again.
  • Constantly trying to get the Pronto right. I wanted it to be as easy as possible. One problem was that every time a new CD was added, I had to re-run SuperNudeList, and I did that a lot.
  • Used it as a Time Machine backup destination. It works, but Apple prefers the AFP file protocol, and I kept having to reset my backups, which was a pain.

Why am I telling you all this now, given that I did it years ago? Because I now have none of the above anymore. It's all been replaced by simple systems. I had a blast building it all myself, and I saved a ton of cash doing so. Commercial media servers and up-converting DVD players using algorithms like those in ffdshow cost many thousands of dollars; I did all of this with less than $100 in software and a cheap PC. What happened?

The first to go was TheaterTek. As good as it was, it did occasionally get into a strange state, and that made people nervous. Oppo came out with a DVD player for less than $200 that had the same up-converting chip found in $5000 players. Score! I get my video quality, and the family gets a player that doesn't require a geek around just in case.

More recently, I started to notice a drive on the HTPC acting up. Rather than replace it, I decided it was a good time to look around for other options. After all, the server was several years old, and the drive was probably the first of many parts to start failing. What did I settle on this time?

  • Netgear ReadyNAS NVX - This gives me a lot more space to play with. In addition to being a large RAID drive, it also replaces almost everything I did on my HTPC:
    • Natively supports Time Machine. I haven't had to think about it since turning it on for our two Macs.
    • Has a DLNA server so my network-ready DVD player can show videos and photos. An added benefit is that my kids' iTouch units can access it using PlugPlayer.
    • Has an iTunes server built in for my Roku Soundbridge and my two Macs.
    • Lots of other features
  • Sonos - I can't say enough good things about the Sonos system. There is a player available for each of my zones, and it works perfectly with the ReadyNAS. It literally took less than 5 minutes to set everything up and have multiple audio zones playing in the house. The sound is great. Sonos has their own controller, but you can use a PC, Mac, or iPhone/iTouch to control everything. There are other options (Logitech SqueezeBox, etc.), but I liked the Sonos. In addition to all the other great features of Sonos, it uses its own mesh network. There are two advantages to this. First, your Sonos players don't have to be near your network access point. They only have to be near another Sonos unit. Second, when you are playing music, you are not impacting your normal wireless network's bandwidth.
  • Harmony Remote - Like many Pronto units, mine eventually had its touchscreen stop working well. I was lucky: many people only got a couple of years out of theirs, but mine lasted far longer. Plus, it died right around the time the HTPC was going south. The Harmony is nowhere near as flexible as the Pronto, but it works for what I need now.

I've gone on too long here. Let me end by saying I now have a great system that just works, and it's incredibly simple to use. I miss that I can't add home control, and I can't really tweak anymore. At least without the option, I'm not tempted to tweak, so I actually use the system more.

John Harrison and the Longitude Problem

It's been a while since I read Longitude by Dava Sobel, but I happened to catch another documentary on the story recently. It reminded me again what an amazing story of science and perseverance this is. As an engineer, I can't help but admire John Harrison for his scientific method. It's also a great example of an iterative process. Granted, the time periods are years rather than weeks, but the idea is the same.

Before I describe why this is such an amazing story, some background on longitude and the longitude problem is in order. Using latitude and longitude, you can describe any location on earth. Both are expressed in degrees -- as an angle from a reference line. Latitude is the north-south position relative to the earth's equator, with north or south added to differentiate. Longitude is expressed east or west from the prime meridian, which runs north-south through Greenwich, London.

Latitude is relatively easy to find and calculate. Very simply, if you measure the angle between the horizon and the sun at its highest point (noon), you can figure out your latitude. A sextant is usually the tool of choice here. Remember, I'm talking about the time before GPS devices.

Longitude is another matter, however. Until John Harrison came along, there wasn't a practical way to calculate longitude from a ship. Most often, navigators would combine frequent speed measurements with an hourglass. Speed was usually measured by dropping a rope over the stern with a wooden plate on the end. This rope had knots tied at precise distances. Using a 30-second sandglass, a sailor would count the number of knots pulled out over the 30 seconds. This count was reported as "knots." Hence, you get n knots as a nautical speed (1 knot = 1.852 km/h). Using the number of turns of the hourglass and the frequent knot readings, the navigators would attempt to calculate their longitude. Unfortunately, neither of these pieces of data was very accurate. As a result, a lot of ships found themselves aground or sunk due to bad longitude information. This is where the Longitude Prize came from.

The British government offered the prize for a simple and practical method of determining a ship's longitude. It was a HUGE prize in its day -- enough to retire on. The prize was administered by the Board of Longitude, and this is where the problem came in. The Board of Longitude was made up mostly of astronomers who were convinced that the sky was the key. John Harrison's realization was that if you had an extremely accurate clock, calculating longitude was pretty simple. With a clock onboard set to the time at your home port, you can figure out noon on your ship at sea using the sun. If it's noon on the ship and your clock reads 2:00pm, you know precisely where you are (think time zones, which use the same idea). The trouble was that no one knew how to build a clock accurate enough, especially one that would run on a ship that rocked and was often very humid.
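To make the arithmetic concrete (my numbers, not the book's): the earth turns 360 degrees in 24 hours, or 15 degrees of longitude per hour, so a two-hour gap between ship's noon and the home-port clock puts you 30 degrees of longitude from home. A toy sketch:

# Rough longitude from the time difference between local noon and home-port time.
# A toy calculation only; it ignores the equation of time and other corrections.
DEGREES_PER_HOUR = 360.0 / 24   # the earth rotates 15 degrees of longitude per hour

def degrees_west_of_home(hours_past_noon_on_home_clock)
  hours_past_noon_on_home_clock * DEGREES_PER_HOUR
end

puts degrees_west_of_home(2)    # => 30.0 degrees west of the home port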

John Harrison worked with wood and had no training in clock making, so his first few clocks were made of wood. Some of these clocks still work today -- without lubrication, thanks to the natural oils in the wood. At the time, they were considered the most accurate clocks built to date. They were pendulum clocks, though, and of no use on a ship. Even so, he invented several techniques and mechanisms still in use today.

Moving from wood to metal and putting himself into the running for the Longitude Prize started him on a 40+ year quest to claim it. No one ever officially received the prize, but John Harrison did eventually receive a monetary award from Parliament for his achievements. What it took to get there is a fascinating story, one of those you simply can't make up.

If you can, check out the book or any of the documentaries. Not only is it a great story, but it's a great description of an iterative, scientific process to solve a very difficult problem.

Software: Utility vs. Joy

How many software packages do you use that give you joy? How about simple utility? How about both? My guess is there are far fewer that provide joy and fewer still that provide both. I have dozens of applications that provide utility only. For example, I use OmniFocus and EverNote constantly. Both apps have become part of my daily routine of getting things done and keeping notes on what's happening. However, they are both far from joyful. There are times when they're quite the opposite -- the "to do" list is too long, or the EverNote iPhone app crashed again.

Some apps appear to provide joy, but it's not the app; it's the data. If I'm checking the weekend weather, it's not the app that provides joy. It's the report of perfect weather that gives me joy. Joy is pretty easy to get, but I find it usually is limited to games or streaming media apps. My personal geeky pleasures for joy are Moon Globe and Starmap for iPhone. There is no utility in knowing that the orange "star" I see every night when I take my dog out is actually Mars, or that a particular clear area on the Moon is where Apollo 11 landed, but it does make me stop and check out the night sky. It's nice to know where the planets are or what's up on our planetary neighbor. Yes, it is the data that gives me joy, but the ease with which I can get it also contributes.

The ultimate is to figure out a way to build an application that provides both joy and utility. How many of those are there? The only one that comes to mind is Google Earth. I can spend an hour just checking out parts of the world, but I also use it for real utility -- how far is it to the water from here, etc. For your own applications, you can aim for joy and utility by first doing something useful and then making it usable. You still might not create joy, but you at least have a chance.

Any apps out there that provide both utility and joy? Your browser is another easy example.

BTW, I've been away for a while. Back in December, I started as CTO of Wimba. Wimba focuses on collaboration solutions that empower educators and engage students; it's all about collaboration in the education market. That has kept me a little on the busy side these days and has greatly reduced my coding time, but it has also opened up several new topics to discuss, so stay tuned.

Software Lifespan

How long should our software last? I'm sure there are software packages out there that were built decades ago, but I'm talking about packages still being actively updated and sold. Personally, I figure if I get 5-7 years out of a system before a major refactor of some part of it, then I'm doing great. Even if you continuously refactor, you will eventually get to the point where the cost of new features in legacy code becomes prohibitive. Shortcuts are taken or information is lost, and the result is code that is tough to maintain and update. How long do you think code lasts before it's too expensive to update?

The recent Mars Lander story made me think about this. Talk about a huge pat on the back for those NASA engineers. They built a system meant to last 90 days that ended up lasting almost 2000 days. Who knows? If it can survive the Martian winter, it may be able to keep going. The Devil's Advocate in me might say that they grossly over-engineered it, but mostly I'm supremely impressed that a group of engineers built a machine that survived in a very hostile environment for 5+ years.

From http://xkcd.com/: Mars Lander Chronicles (comic)

YAGNI and the Crystal Ball of Software Architecture

How often have you been involved in a project, and someone starts a statement with "It would be really cool if ... ?" The second I hear that, I find myself evaluating what comes next with a high degree of skepticism. First of all, it usually would be "really cool," but that doesn't mean we should do it. Too often these ideas solve a problem that you won't ever have or will not have in the foreseeable future.

YAGNI = You aren't going to need it

Sure, it would be pretty cool to have a full plugin architecture, but do you really need it now? Let's gain some traction, iterate, and then we'll determine if it's really necessary. Doing it because it's cool only wastes time if you figure out that the users don't really care. YAGNI.

Always design for current needs, leaving yourself open for the foreseeable future. Forget about using the crystal ball to guess what your users will want a year from now. It's far better to get something in front of your users sooner and find out what they really want now. Even if a year from now you have to do a major refactor because something in the crystal ball came true, you will have a user base now and a good reason to make the change. You did iterate to get there, didn't you?

Using Thinking Sphinx

I recently had an instance where I wanted to add full-text search to an application. I've used Lucene, Solr, and a few others in past lives, but this time I wanted something just as functional but a little more lightweight. After looking around I settled on Sphinx, and so far it's worked great. By itself, Sphinx is not hard to use, but since I'm in Rails, I figured someone must have a gem or plugin for this. Sure enough, I found Thinking Sphinx. Now, it's really simple. Let's get things installed.

To install Sphinx on Linux (See doc for others):

  1. Download Sphinx 0.9.8
  2. tar xzvf sphinx-0.9.8.tar.gz
  3. cd sphinx-0.9.8
  4. ./configure
  5. make
  6. sudo make install

To install Thinking Sphinx:

First, install the gem. There is a plugin available, but I prefer the gem.

sudo gem install freelancing-god-thinking-sphinx \
  --source http://gems.github.com

Add to your config/environment.rb:

config.gem(
  'freelancing-god-thinking-sphinx',
  :lib         => 'thinking_sphinx',
  :version     => '1.1.12'
)

Finally, to make all the rake tasks available to your app, add the following to your Rakefile:

require 'thinking_sphinx/tasks'

Now we need to use it, but first a brief introduction to some Sphinx terms is necessary. Sphinx builds an index based on fields and attributes. Fields are the actual content of your search index, and they are always strings. If you want to find content by keyword, it must be a field. Attributes are part of the index, but they are only used for sorting, grouping, and filtering. Attributes are ignored for keyword searches, but they are very powerful when you want to limit a search. Unlike fields, attributes support multiple types. The supported types are integers, floats, datetimes (as Unix timestamps -- and thus integers anyway), booleans, and strings. Take note that string attributes are converted to ordinal integers, which is useful for sorting but not much else.

Thinking Sphinx adds the ability to index any of your models. To set up an index, you simply add a define_index block. For example:

class Company < ActiveRecord::Base
  define_index do
    indexes :name, :sortable => true
    indexes description
    indexes city
    indexes state
    indexes country
    indexes area_code
    indexes url
    indexes [industry1, industry2, industry3], :as => :industry
    indexes [subindustry1, subindustry2, subindustry3], :as => :subindustry

    has fortune_rank, created_at, updated_at, vendor_updated_at, employee_bucket, revenue_bucket
    has "reviewed_at IS NULL", :as => :unreviewed, :type => :boolean

    set_property :delta => WorklingDelta
  end
end

Most of this should be pretty self-explanatory. To index content (fields), you use the "indexes" keyword. As you can see, you can have compound fields by using an array. Note that :name and :id must be passed as symbols or Thinking Sphinx will get confused. You can also use some SQL in your indexes statements.
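For example, if I remember the syntax right, a raw SQL snippet works much like the SQL string passed to "has" in the index above (the lowercase_name field here is made up purely for illustration):

indexes "LOWER(name)", :as => :lowercase_name, :sortable => true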

To add attributes, you use the "has" keyword. Thinking Sphinx is pretty good about determining the type of an attribute, but sometimes you need to tell it using :type.

I will explain the set_property :delta => WorklingDelta later.

To build your index, simply run:

rake thinking_sphinx:index

After processing each model, you will see a message like the one below. Ignore it. Everything is working fine. Really.

distributed index 'company' can not be directly indexed; skipping.

However, if you have made structural changes to your index (which is anything except adding new data into the database tables), you'll need to stop Sphinx, re-index, and then re-start Sphinx -- which can be done through a single rake call.

rake thinking_sphinx:rebuild

Once you have your index set up, you can search really easily.

Company.search "International Business Machines"

This will perform a keyword search across all the indexed fields for Company. If you want to limit your search to a specific field, use :conditions.

Company.search :conditions => { :description => "computers" }

To filter searches using your attributes, use :with.

Company.search :conditions => { :description => "computers" },
               :with => { :employee_bucket => 2 }

:with can also accept arrays and ranges. See the doc for more information.
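For example, something like this (the values are hypothetical, reusing the employee_bucket and updated_at attributes defined above):

# Match any of several attribute values
Company.search :conditions => { :description => "computers" },
               :with => { :employee_bucket => [1, 2, 3] }

# Or filter on a range (companies updated within the last year)
Company.search :conditions => { :description => "computers" },
               :with => { :updated_at => 1.year.ago..Time.now }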

Back to the set_property above. One issue with Sphinx vs. Solr or Lucene is that a Sphinx index is fixed: if you update your model, the change will not be reflected until you rebuild the entire index. To get around this, Sphinx supports delta indexes. A delta index allows you to make a change and have it show up in searches without rebuilding the entire index. That said, rebuilding an index is not a big deal with Sphinx; I can rebuild the Company index defined here in under two minutes (1.6 million records).

What does set_property :delta => WorklingDelta do? First, it adds an after_save callback to your model that will use WorklingDelta to perform the delta index step. Given that Workling is in the name, you've probably guessed that I hooked this up to Workling so delta indexing happens asynchronously.
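One assumption worth calling out: WorklingDelta builds on the default boolean-column delta behavior, so the table needs a delta column for Thinking Sphinx to flag changed rows. If you don't already have one, a migration along these lines (the migration name is just an example) adds it:

class AddDeltaToCompanies < ActiveRecord::Migration
  def self.up
    # Thinking Sphinx toggles this flag on save to mark rows for the delta index
    add_column :companies, :delta, :boolean, :default => true, :null => false
  end

  def self.down
    remove_column :companies, :delta
  end
end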

Add lib/workling_delta.rb:

class WorklingDelta < ThinkingSphinx::Deltas::DefaultDelta
  def index(model, instance = nil)
    # Skip if delta updates are disabled, or this instance hasn't been flagged for delta indexing
    return true unless ThinkingSphinx.updates_enabled? && ThinkingSphinx.deltas_enabled?
    return true if instance && !toggled(instance)

    # Hand the actual work off to a Workling worker so it runs asynchronously
    doc_id = instance ? instance.sphinx_document_id : nil
    WorklingDeltaWorker.asynch_index(
      :delta_index_name => delta_index_name(model),
      :core_index_name  => core_index_name(model),
      :document_id      => doc_id
    )

    return true
  end
end

Add app/workers/workling_delta_worker.rb:

class WorklingDeltaWorker < Workling::Base
  def index(options = {})
    logger.info("WorklingDeltaWorker#index: #{options.inspect}")

    # Rebuild the delta index, then flag the old version of the document
    # in the core index as deleted so it doesn't show up twice in results
    ThinkingSphinx::Deltas::DeltaJob.new(options[:delta_index_name]).perform
    if options[:document_id]
      ThinkingSphinx::Deltas::FlagAsDeletedJob.new(options[:core_index_name], options[:document_id]).perform
    end

    return true
  end
end

Now, whenever a Company object is created, updated, or destroyed, the WorklingDeltaWorker will be called to update the delta index.
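In practice (hypothetical data, and assuming a Workling runner is listening), a plain save is all it takes:

# Fires the after_save hook and queues WorklingDeltaWorker#index
company = Company.find_by_name("Initech")
company.update_attributes(:city => "Austin")

# Once the worker has run, the change shows up in search results
Company.search "Austin"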

If you need to perform powerful searches over hundreds of thousands (or even millions) of records, give Sphinx and Thinking Sphinx a try. There are some minor feature omissions, but I think the trade-offs for most applications more than make up for them. BTW, scale is not one of the omissions: boardreader.com, one of the largest Sphinx installations, uses it to index over 2 billion documents, and Craigslist.org is probably the busiest at around 50 million queries per day.