The Intel NUC Computers, AWS, and racing cars

It has been over five years since my last post about software and technology.  It’s not that I stopped using it.  I just stopped talking about it.  Lately I have been on a bit of a streak.  I have been working on the MARRS Points tracking app in AWS for over a year now.  It will now be the official points tracking application for the 2016 season across all race classes in the Washington DC Region (WDCR) of the SCCA.  I have actually done something mildly productive with my spare time!

An AWS Project Was In Order

It was mainly by happenstance that I got the app going.  I wanted to work in the Amazon AWS cloud a bit to understand it better.  I had managed teams using it for years at various companies, so it seemed like a reasonable learning experience.  I could have easily chosen Microsoft Azure or the Google Cloud, but AWS has the deepest legacy and I started there.  Once I logged in and started to play with AWS, they let me know my first year was FREE if I kept my usage below specific CPU and memory levels.  Sure, no problem.  But what to build, what to do?  I remembered I had built an old Java/JSP app as a framework for a racing site for my brothers and me, called cahallbrosracing.com.  GoDaddy had taken their Java support down and it had been throwing errors for years.  So I decided that was the perfect domain to try, and grabbed the skeleton code.  It would be some type of Java/JSP racing application that used a MySQL database backend.  But for now, I just needed to see if I could configure AWS to let me get anything live.

EC2, RDS, a little AWS security magic…

I provisioned an EC2 node, downloaded Tomcat and Oracle Java, and went to work.  In no time, I had the fragments of the old site live and decided I should put my race schedule online.  The schedule would not come from a static HTML page.  It would use a JSP template and a Java object to get the data from the database.  Then each year I would just add new events to the database and display them by year.  Quickly the MySQL DB was provisioned, network security configured, DB connectivity assembled, and the schedule was live.  OK – AWS was EASY to use and I now had a public facing Java environment.  I had always been too cheap to pay for a dedicated host, or to sort out a real public facing Java environment that let me control the Linux services so I could start and stop Tomcat as needed.  But FREE was right up my alley.
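
The data layer behind that schedule page is nothing fancy.  Here is a minimal sketch of the kind of lookup the JSP template calls into, assuming a hypothetical race_events table and placeholder RDS connection details (the names are illustrative, not the real schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class ScheduleDao {
    // Hypothetical RDS endpoint and credentials -- placeholders only.
    private static final String URL  = "jdbc:mysql://example.rds.amazonaws.com:3306/racing";
    private static final String USER = "appuser";
    private static final String PASS = "secret";

    // Returns the events for a given season in date order.
    public List<String> eventsForYear(int year) throws Exception {
        List<String> events = new ArrayList<>();
        String sql = "SELECT event_date, event_name FROM race_events "
                   + "WHERE YEAR(event_date) = ? ORDER BY event_date";
        try (Connection conn = DriverManager.getConnection(URL, USER, PASS);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, year);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    events.add(rs.getDate("event_date") + " - " + rs.getString("event_name"));
                }
            }
        }
        return events;
    }
}

The JSP just loops over the returned list, so a new season means inserting rows, not editing pages.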

So there I was, developing Java, JSP, and SQL code right on the “production” AWS Linux server.  Who needs Maven or Ant?  I was building it right in the server directories!  Then I started to realize I did not have backups and was not using a source code repository.  It could all go away, like a previous big app I lost when both of my RAID drives failed in the great 2005 Seattle wind storm.  Not a good idea.

Intel NUCs (and GitHub) to the rescue!

Enter the NUCs!!!  I had learned about the Intel NUC series and bought a handful of them to make a home server farm for Hadoop and Cassandra work.  These units are mostly the i5 models with 16GB of RAM running Ubuntu 14.04.4 LTS.  I realized I needed to do the development at home, keep the code in a GitHub repository, and then push updates to AWS when the next version was ready for production.  My main Java development NUC has been awesome.  It is a great complementary setup: an AWS “production” environment in the cloud, a Linux development environment at home, and the source code repository also in the cloud.  I even installed VMware Workstation on my laptop so I have Linux at the track, which lets me pull the code from GitHub and make changes between race sessions.  It’s almost like I have made it to 2013 or something.

Why software is never “done”

Well, once I got going, I wanted to track my points in the MARRS races.  So I made some tools to allow manual entry of schedules, race results, etc.  This manual process clearly did not scale well.  The discovery of Race Monitor and its REST APIs solved that issue.  I wrote code to pull the results back from Race Monitor using Google’s GSON parser, which let me marshal the JSON data into the objects used in the Java code.  Unfortunately, Race Monitor does not pass a piece of critical data: the SCCA ID for each racer.  The next step was to work with the Washington DC Region and the fine people at MotorsportReg.com to use their REST APIs to get that data for each race.  The simple Java app had become complex, with two REST APIs and tools to manage them.
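
For a rough idea of the GSON side of it, here is a minimal sketch; the endpoint, class names, and fields are made up for the example and do not match the actual Race Monitor payload:

import com.google.gson.Gson;

import java.io.InputStreamReader;
import java.io.Reader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class RaceResultsClient {
    // Plain data classes that mirror the JSON we expect back.
    static class Racer {
        String name;
        String carClass;
        int position;
    }

    static class RaceResult {
        String raceName;
        List<Racer> racers;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical results endpoint -- illustrative only.
        URL url = new URL("https://api.example.com/races/12345/results");
        try (Reader reader = new InputStreamReader(url.openStream(), StandardCharsets.UTF_8)) {
            // GSON marshals the JSON straight into the Java objects above.
            RaceResult result = new Gson().fromJson(reader, RaceResult.class);
            for (Racer r : result.racers) {
                System.out.println(r.position + ". " + r.name + " (" + r.carClass + ")");
            }
        }
    }
}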

The rest is history.  The tool can now also import CSV result files from the MyLaps Orbits software.  A simple CMS was added to publish announcements and steward’s notes per race.  All of the 2015 season has been pulled into the application across all of the classes and drivers.  Many features, bells, and whistles have been added thanks to Lin Toland’s sage advice.  Check out the 2015 season SSM and SM Championship pages.  A ton of data and a lot of code go into making those look simple.
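
The CSV import is equally unglamorous.  A minimal sketch, assuming a hypothetical export with position, car number, driver, and class columns (the real Orbits export has more columns and a different layout):

import java.io.BufferedReader;
import java.io.FileReader;

public class OrbitsCsvImporter {
    public static void main(String[] args) throws Exception {
        // Hypothetical file name and column order -- illustrative only.
        try (BufferedReader in = new BufferedReader(new FileReader("orbits_results.csv"))) {
            in.readLine(); // skip the header row
            String line;
            while ((line = in.readLine()) != null) {
                String[] cols = line.split(",");
                int position     = Integer.parseInt(cols[0].trim());
                String carNumber = cols[1].trim();
                String driver    = cols[2].trim();
                String raceClass = cols[3].trim();
                // The real app inserts these into the results tables; here we just echo them.
                System.out.printf("%d. #%s %s (%s)%n", position, carNumber, driver, raceClass);
            }
        }
    }
}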

Racing into the future with MARRS

I am really looking forward to being able to help all of the WDCR MARRS racers track their season starting in April.  Let’s hope I can drive my car better than last year and race as well as I have coded this application.

It is kind of odd to think that my desire to play with AWS caused me to build something useful for hundreds of weekend racing warriors.  Now the next question, should I make it work for every racing group across the world?  I mean multi-tenant, SaaS, world domination?  Hmmm…  Maybe I should try to finish better than 6th this year…

Ted Cahall

Windows 7 – huge upgrade from XP

Nice hardware helps

I just realized that I bought my “new” Windows 7 machine way back in late January.  The thing is amazing: 8GB RAM, i7 860 Quad Core CPU, 3.0Gbps RAID-1 SATA drives, etc.  I recently went out and bought a 30-inch Samsung monitor so I could put the video card in 2560×1600 mode.  The speed, video, stability, etc. of this machine are incredible!

The most amazing thing is the OS.  I skipped Vista due to all of the bad press – coupled with the fact that XP mostly did everything I needed from a desktop OS.  Mostly was the key part of that sentence.  It really could not handle more than about 2GB of memory efficiently – and I had some leaky open-source apps that regularly gobbled that up since I rarely reboot…

Free Microsoft Software!

Additionally, Microsoft has tossed in some FREE apps as part of their Windows Live Essentials program that were not available under XP.  The most significant of those apps (to me) is Movie Maker.  I regularly edit and upload portions of my SCCA Club Racing videos using Movie Maker.  It is simple and easy – which fits my video skill level really well.  I am also in the process of adding a TV tuner card so I can really utilize the Windows Media Center software that came with my Windows 7 Ultimate version.  That should make it even more interesting to connect to my Xbox 360 (which now gives my Apple TV a run for its money in renting movies from the Internet).

Windows 7 handles memory well

I now regularly run over 3GB of apps without any issues on the machine whatsoever.  I have not added all the DB servers, app servers, etc. that I used to run on my various Windows desktops.  That is because I never retire my old machines and they are still on the network somewhere.  I finally have created what is mostly a desktop machine used as a desktop.

No question, Windows 7 is a really fantastic OS.  It will continue to be my main machine for accessing all the servers running in my home data center.

Ted Cahall

Geek Evolution – a home Hadoop cluster

Evolution of the Geek

Over the years, the definition of a geek has evolved. I guess it started with a pretty high bar (think Wozniak in a garage with wire-wrapped motherboards in the ’70s), and then dipped for a while.  Does it mean running Hadoop at home now?

Build your own PCs, add a network, DNS…

For a while it simply meant you had a PC at home (probably early to mid ’80s).   It then moved back up-scale to building your own PCs from components: case, motherboard, CPU, heat sink, drives, memory, etc.  It moved along to the requirement of having a couple of PCs at home that shared an Internet connection.  Eventually you needed a few servers for file & print – and maybe a database or web server or two… Need a little internal DNS for any of that?

I have generally felt I was reasonably eligible for at least honorary geek status.  At 15 years old, I wrote my first software on an IBM mini-computer back in the mid-’70s, had a PC in the early to mid-’80s (and two EECS degrees), built my own desktops and servers from components in the mid-’90s, added a server cabinet and network in the early-’00s, etc.  Not sure if the fact that I have a Cisco PIX and know how to configure it from the command line counts for anything.

Home Hadoop Cluster

Using a few hours over the last three-day weekend, I was able to bring up a Hadoop cluster on 3 CentOS nodes in my basement cabinet, and things are heading for a six-node cluster.  The “single-node cluster” was working in about 10-15 minutes.  I have always scratched my head at the concept of a “single-node” cluster.  Seems like an oxymoron to me.

Single-node “cluster” up and running – this is easy (I thought)…  The hard part was getting the distributed version working.  It is always some simple thing that hangs you up.  In this case, it was the fact that CentOS maps the machine’s hostname to the loopback address in the /etc/hosts file.  This caused Java to bind to the loopback address (127.0.0.1) when it was listening for the NameNode and JobTracker.  It worked fine in a single-node configuration because the DataNodes and TaskTrackers were also looking for the loopback address on that machine.

Thank goodness for the invention of the search engine.  This handy little post saved me a lot of time debugging the issue:
http://markmail.org/message/i64a3vfs2v542boq#query:+page:1+mid:rvcbv7oc4g2tzto7+state:results

After tailing the logs on the DataNodes, I could see they could not connect to the NameNode.  Linux netstat showed that the NameNode was binding to the loopback address.  I just was not thinking clearly enough to see that it was not also bound to the static IP address of the NameNode host.  Splitting the loopback address and the static IP address into two lines in the /etc/hosts file did the trick.  I thought the days of editing /etc/hosts were long over with the use of DNS.
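
For anyone hitting the same wall, the change amounts to something like this (the hostname and address here are made up for the example):

# Before: the hostname rides on the loopback line, so Java binds to 127.0.0.1
127.0.0.1    localhost    hadoop-master

# After: loopback and the box's static IP each get their own line
127.0.0.1    localhost
192.168.1.10 hadoop-master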

The bar used to measure a geek

I guess in 2010 the bar for being a home computer geek is running distributed processing from a rack in your basement.  Now on to a little MapReduce, Pig and Hive work this weekend.

Ted Cahall

USB Connectors and Memory Cards

Not all USB connectors are equal

This morning I decided to grab a few photos off the camera I borrowed from a friend when I went to the World Class Driving 200 MPH EXTREME event last weekend.  After all, the camera has a mini-USB connector (I thought), and I have dozens of cables from the myriad devices I have purchased over the years.

Enter micro-USB connector

Much to my surprise, the Olympus FE-370’s mini-USB connector is very “mini”.  In fact, it is so mini, it is called “micro” USB.  It is just slightly smaller than mini and will not accept any of the many mini cables that I own.  With literally two and a half feet of fresh snow outside, and not being the type to give up easily, I fished around for a few of my memory card readers and removed the memory card from the camera.

My handy little Transcend RDP8 memory card reader can read four different formats.  This should be no problem.  Denied!  It turns out the Olympus uses a special memory card called an xD Picture Card.  These are probably more common than I think – but not common enough for the Transcend reader I bought for my 16GB CF card.  My SanDisk reader (2 formats) would not accept it either.

Looks like I need to trudge out into the cold and snow to borrow the cable for the camera.  I should probably invest in a micro-USB cable of my own and a newer memory card reader as well.

Ted Cahall

Kubuntu and Wubi

Linux desktop variations

After playing with Debian and Ubuntu, I wanted to see what the latest in KDE looked like. I have mostly been a GNOME user and had read some interesting tidbits on KDE 4.3 in Linux Journal. I did not want to “pollute” my Ubuntu installation by downloading all of the KDE parts into it.   So I decided I would add a Kubuntu partition to my Ubuntu box, and also test Kubuntu on my 64-bit Windows machines using Wubi.

I was surprised to see that the installers for Ubuntu and Kubuntu are not really from the same code base. The installation on my 32-bit Ubuntu box went off without a hitch. I had a spare drive on it and used that for the new partition. I needed to manually change the partitions with the partition manager so it would leave the old Ubuntu 9.04 and 9.10 versions where they were. Even this was simple and straightforward.

Wubi letdown

I guess my biggest surprise was that Wubi does not install Kubuntu/Ubuntu to run “on top of Windows” as I thought it would. I had thought there was an additional VXLD layer or something that let Linux run as a guest OS on top of Windows XP. This would have been really cool – sort of like Cygwin on steroids. This may sound ridiculous, but a colleague from long ago, Bill Thompson, wrote such a VXLD for Windows back in the mid-’90s that allowed x86 versions of Unix to run on top of Windows.

I searched around the web, Facebook, and LinkedIn to see if I could find Bill. With much digging I found him on LinkedIn. His start-up was called “nQue”. He was also a file system guru who wrote a lot of CD-ROM file system drivers, etc. after the start-up went south.

Needless to say, I think if that feature could be added to the Wubi concept – Ubuntu running right on the Windows desktop as an application environment without requiring a reboot – a lot more people might try it. I know Wubi does not alter the Windows partitions, so it is still a fairly painless way to try Ubuntu without risking much. Users can always uninstall it as they would any Windows application if they are not happy with it. I just prefer to rarely reboot my systems if I can avoid it.

Ted Cahall

Ubuntu and Debian Installation Fun

Home Data Center Saga continues…

The “home data center” is getting a bit crazy to maintain. It is a good thing I have so much free time on my hands (not). I did finish a couple of the projects on my list last weekend. I wanted to upgrade one of my P4 3.0GHz “home brew” machines from Fedora Core 3 to Ubuntu and put Debian server on one of the “new” used Dell 2850 servers I bought from work. I am now the proud systems administrator of both of these machines – with plenty of fun along the way.

Upgrading from Fedora to Ubuntu

I started with the Fedora Core 3 to Ubuntu conversion by making sure all of the applications I had written in Java, PHP, Perl, and sh, as well as the MySQL databases, had been successfully ported to another CentOS machine and regression tested. That took longer than expected (of course). I had already used BitTorrent (thanks, Bram Cohen) to download Ubuntu 9.04. I like their numbering scheme, as even I can figure out how new/old the rev is. I then went and installed it on my “home brew” Intel motherboard based system. It worked like a charm and I was checking out its slick UI and features within minutes. So far, so good.

Next I decided to see how the graphics worked and whether I could get it into 1920×1280 mode with my 24″ monitor. That was a tad trickier – but I was pleased to see that it went out on the Internet and figured out where to get the latest NVidia driver that supported the video card I had bought years ago. That was slick and the graphics were awesome. In high-res mode it even adds some transparency to windows and gives them “momentum distortion” as you move the window. Not sure how useful that is – but it looks pretty cool.

VNC for graphical UI across machines

I like to sit in my home office and use VNC to access the 7 Linux boxes running in my basement and other rooms (versus running between them to try things). I know that “real systems administrators do not use VNC”, as told to me by one of our real systems administrators at AOL (and at CNET years ago). I am not embarrassed to say I am not a real systems administrator!  I like the graphical UI access to all of these machines. It makes working on them so much easier with 4 or 5 windows open at a time. So here is where the rub is. I enabled VNC, ran back to my office, and tried it. No luck. I made sure SSH worked and I could get to the box – that was all set and good to go. I checked that the machine was listening on port 5901 – that was good too. A little snooping in the VNC log file let me know it could not execute the /etc/X11/xinit/xinitrc script. I thought that was odd, but I enabled execute permissions on the file and everything worked.

Upgrading versions of Ubuntu to 9.10

As I performed a routine update of the OS and files, it let me know that Ubuntu 9.10 was now out (as it was past October – month 10 in year 2009). I had downloaded 9.04 a month earlier when I began thinking of the project. A 9.10 upgrade sounded great – so I decided to “go for it”. Bad decision. After the upgrade the video would not work in graphics mode and I could only bring the system up in text mode. Not a big deal for a “real systems administrator” but definitely not what I was looking for – especially on a desktop machine where I wanted to check out the cool graphics in Ubuntu.

Video driver hell

Since the machine had no real work on it and I did not feel it was worth my time to figure it all out in 80×24 text mode while troubleshooting the X Window System, I simply put 9.04 back on the machine and got it working where it was before the upgrade. This would be my fallback in a worst-case scenario. I then used BitTorrent to get 9.10 on a DVD. Ubuntu allows you to add multiple OS versions by partitioning the drive, so I shared the drive between 9.04 and 9.10 and performed the installation. 9.10 came up and worked from scratch – but the video upgrade would not work. When I tried to get it to go out and upgrade the video driver as it had in 9.04, it kept telling me that there were no drivers and that the upgrade was not possible. This did not let me use the 1920×1280 graphics mode of the card or monitor.

After playing with the software update tool, I was able to find some NVidia drivers that were available and downloaded those. Once I did that, the system finally let me do the upgrade to enhanced video mode and use 1920×1280. I am not sure why the 9.10 version was not able to automatically find these drivers as the 9.04 version was, but clearly this was why the upgrade had failed when I tried to go from 9.04 to 9.10 “in place”. The VNC issue with xinitrc still existed, and I again corrected that. Project complete!

And on to the Debian upgrades

The Debian 5.0.3 server install for my Dell 2850 proved to be less frustrating – but not without hiccups. I had downloaded the first 3 DVDs for Debian and proceeded to the basement to start the install. That is when I noticed that this 2850 came with a CD-ROM drive and not a DVD-ROM drive! I had already put CentOS on the other Dell 2850 months ago – so I “assumed” that both machines had DVD-ROM drives. Bad assumption… The nice thing about Debian is that it allows a “net install” CD to be burned that is fairly small. It then downloads the rest of what it needs as it goes along. So this is the route I chose for the Debian server. From there the install was fairly straightforward. The graphics are nowhere near as nice as Ubuntu’s – but this is a server install and I don’t have a fancy video card in the 2850 anyway. The VNC issue with xinitrc also exists in this version of Debian – no surprise, as Ubuntu is a downstream distribution of Debian. Another project complete, and now I have systems to compare different OS features and issues and to keep up with some of the pilot projects we are doing at work to streamline software distribution, etc.

Ted Cahall

Windows, MacOS, and editing race cam videos…

Windows 7 solves so many issues

I finally got around to installing Windows 7 on a used Dell Precision 360 w/ 1GB RAM that I bought from work. During installation I somehow fried the AGP video card’s DVI port. I was able to still get the VGA port to work – and was impressed with the graphics and performance.

I went out and bought a new AGP card and am now really impressed with the “Aero” themes and video effects. The system is amazingly fast.

Windows 7 and video editing

I figured I would look at the Microsoft Live extensions, including the Movie Maker download. I was able to get the software up and running, and edited one of the MPEG videos from my TraqMate race cam within minutes. This was really interesting to me, as everyone says the Mac and Final Cut are the way to go. Movie Maker was FREE – while Final Cut Express was $199.00 at the Apple Store. 🙁

Mac Snow Leopard and Final Cut Express

I recently bought the Snow Leopard upgrade for my Intel-based Mac and Final Cut Express 4.0 for editing videos. On Final Cut Express (not sure about Pro), the version of MPEG that the car cam shoots is not recognized. I need to read in the video with the software provided by TraqMate. The “fun” part about MPEG is that the file extension does not say it all – there are several versions of MPEG video. It seems that the TraqMate shoots MPEG2, which Final Cut Express does not recognize. TraqMate makes a video conversion utility that I have not tried yet: http://traqmate.com/downloads/videoconverter/TQConvertInstall.exe.  There are several other free utilities out there as well.

A pay program from Apple should have at least the minimum features of the FREE program from Microsoft…

And a Ted Cahall Racing Video is Born!

When I was done, I went to post the video to YouTube.  But – YouTube was down! I first tried at 11:15AM ET today and it was down for a while; it was back up when I checked again at 11:30AM. Movie Maker posts directly to YouTube, so here it is.

Ted Cahall

AOL Wins Green IT Award from Uptime Institute

It was great to be up in NYC last Wednesday representing AOL.  The Uptime Institute awarded AOL its Green IT Award for “Data Center Energy Efficiency Improvement: IT”.

Great work by Brad Allison in creating SUMO, and by the data center and SA teams for pushing its usage.  This tool allows AOL to identify underutilized servers and either decommission them or bundle them up onto virtualized hosts.

It is great to work with dedicated people who are not only smart, but also care about the environment.

Ted Cahall

Internet Architecture Video

Back sometime in 2004/2005, when I was the CIO/SVP of Engineering for CNET Networks, they shot a video of me explaining “Scaling out an Internet Architecture”.  I was thinking about the current publishing system at AOL, DynaPub, which we developed in 2007 after I arrived.  It was interesting, after watching the video again, how closely DynaPub follows the key principles described in it.

The only parts the video does not cover are:

  • Use of Lucene as the search engine and SOLR as the container to hold Lucene (we invented SOLR while I was at CNET).
  • Use of XML over HTTP as the transport layer between the app servers and the DBs / search engine (a rough sketch follows this list).
  • Use of denormalized MySQL tables for speed.
  • The main tool, the CMS, and its very specialized table structure for high performance.
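
As a rough sketch of what “XML over HTTP” means in practice – the endpoint, query parameters, and response tags below are hypothetical and not the actual DynaPub or SOLR interfaces – an app server call to the search tier looks roughly like this:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class SearchTierClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical internal search endpoint -- illustrative only.
        URL url = new URL("http://search.internal.example.com/select?q=title:racing&wt=xml");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "text/xml");
        try (InputStream in = conn.getInputStream()) {
            // Parse the XML response and count the matching documents.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            NodeList docs = doc.getElementsByTagName("doc");
            System.out.println("Matched documents: " + docs.getLength());
        }
    }
}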

The AOL Publishing system is the fourth generation publishing system that I have been involved in either designing or managing.  IMHO, most of the bloated, over-designed, needlessly complex issues of previous publishing systems have been eradicated in DynaPub.  It also has ZERO licensing or maintenance costs, as it is all built on open source – including the operating system – as mentioned in a previous blog post here.

Ted Cahall

Employee Purchase Program – more servers

I recently ordered and received a couple more servers from the AOL Employee Purchase Program.  They are a couple of Dell 2850s with 4GB of RAM.  I also picked up a nice Dell desktop for $100.  Can never have enough of those.  I grabbed some speakers for $5 to hook up to one of my Apple AirPort Express units to allow music in a remote room through iTunes.

On to installing CentOS on the servers and Windows on the PC.  This should complete my home data center.  I am really racking up a power bill.

Ted Cahall