Home Storage? Back-up? Instant File Sync? Synology!

Synology NAS Servers

One of the most important components of my home data center is my set of Network Attached Storage (NAS) servers.

Synology DS1517+ Consumer NAS

I have had my old NetGear ReadyNAS unit for at least 9 or 10 years now. It has a whopping 1.3TB of storage across 3 drives in a RAID 5 configuration.  NAS units are great for storing my racing videos that no one will ever watch, old photos (now that everyone with a phone collects thousands of photos a year), and copies of my important tax, mortgage, and legal documents.  Some of my friends store TBs of pirated videos from the dark web.  I am a Netflix and Apple TV guy, so that saves me a few TBs.

Goodbye NetGear, hello Synology

While the ReadyNAS served me well, it was long in the tooth and short on TBs.  It was also missing some interesting new features that I did not even know I was living without until I bought my first Synology NAS back in 2015: the DS1515+.  These guys have done the whole consumer NAS thing really well.

Main attraction

The main feature I use and like is the immediate file sync of directories on my Linux servers (and one of my Windows 10 desktops as well).  Once I configured this option and selected the directories I wanted synced, all of those files are continuously and safely stored on the NAS.  No backup jobs to schedule; it copies the files to the NAS file system immediately upon edit or save.  It is also a nice way to move files from one machine to another, since the systems can all see the disk replicas across the servers.

This does not mean I do not do backups.  I have Amazon Glacier storage, and the critical legal, tax, and mortgage files are sent out to Glacier from the Synology NAS once a week.  The great thing is that Synology provides the service that runs on the NAS to do the Glacier backup.  Really simple integration.

Built-in Servers (services)

The disk drives are even “hot swappable”.  No downtime if a drive goes bad.  Aside from rock-solid hardware, another amazing thing about Synology is the application ecosystem they provide on the NAS server.  They want you to make this your “server” for everything and anything you do in your home.  Want a VPN server?  It has that.  DNS? Yep.  Connect with my Macs, Windows, and Linux machines in their native network protocols? Of course.  It has email servers, video security servers (I bought two cameras to test and they are great), plus video, photo, and audio servers.  There are Active Directory, email, network management, print, content management, WordPress, MediaWiki, e-commerce, Docker, Git, web, Plex, application (Tomcat), and database servers!  These all run natively on the NAS.  Not just on the disk, but in memory and on the quad-core CPU.

I cannot possibly list all of the features and servers these new Synology NAS units supply.  I have tested many of them, and they are rock solid and dependable.  I never envisioned using my NAS as a “server” other than as a network attached storage server.  Now it can work as so much more.

The more the merrier

The Synology product has me so hooked, I bought my second unit!  A DS1517+ with 8GB of main memory and 30TB of storage (5 disks @ 6TB each).  I used this for the security video storage and as a snap backup of the first unit.  Had I planned it better, I could have arranged these two Synology units in an active-passive mirrored configuration.  This would allow one to take over if the other crashed.  Clearly I do not need that at home.  But it is nice to know that simple consumer-grade products offer these features now.

These really are a great complement to my home data center’s NUC-based Linux servers.  But I do not use the NAS for Cassandra or Hadoop storage.  That all lives on the SSD drives in each of the NUC units for performance reasons.  I back them up to the NAS during off hours.

Highly Recommend Synology NAS

I fully and highly recommend these Synology NAS products.  They do not sell direct.  I recommend finding them on Amazon after you spend hours like I did on their product site comparing models and features.

[Update] One cool thing I forgot to mention before I hit “publish”, is that this unit of course runs Linux.  It is a 3.10 kernel version modified by Synology.  This is the reason so many of these services (servers) are available as a stock part of the unit.  Synology chose to make Linux the engine to run the NAS and  brought along many of the Linux services.  With simple configuration, you can ‘ssh’ into the NAS and work on it as though it were a plain old Linux box.  It is really well done.

Ted Cahall


Surfing to the end of the Internet

Keeping (too) busy

Since I left Digital River at the end of February, I have been working closely with Scott Scazafavo on a stealth start-up idea we had been kicking around.  Most mornings I hit my office early and attempt to further the research or code base.  I worked on some Java REST API code I wanted to improve from its early usage at marrspoints.com.  I remembered there was a simple test site that gave canned responses to HTTP GET and POST requests, along with cookies and the like.  After a tad of searching, I found it again: httpbin.org – what a nice tool.  Simple yet elegant, and great for testing out HTTP code samples where you just need a simple endpoint.  Tutorials on the Internet should just use this site in their examples, as it likely will not change much.
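If you have never used it, here is a minimal sketch of the kind of throwaway test it is good for, in plain Java using only the built-in HttpURLConnection (the class name and query parameter are just for illustration):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

/** Minimal smoke test against httpbin.org -- handy when you just need a canned endpoint. */
public class HttpBinSmokeTest {
    public static void main(String[] args) throws Exception {
        // httpbin.org/get echoes the request headers and query args back as JSON.
        URL url = new URL("https://httpbin.org/get?tool=smoke-test");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        System.out.println("HTTP status: " + conn.getResponseCode());

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // the JSON body httpbin echoes back
            }
        }
        conn.disconnect();
    }
}
```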

The dangers of the Internet

This is where the danger began…  As I was done using it for the simple testing I was doing, and was ready to move onto the next phase, I noticed that it had the author’s name with a hyperlink.  Since I wished I had written such a useful “demo” or example.com website, I wanted to see a tad more about him.  Through Kenneth Reitz, I learned that I comparatively don’t have many cool hobbies or talents (I am not that great of an auto racer, and I have not written books, published music, been a professional speaker, or even an amateur photographer).  That is all on top of his enormous contribution to the Open Source space.  This guy is REALLY talented.  Through a link on his personal values page, I saw another link stating that “Life is not a Race, but it has No Speed Limits”.  Of course that deserved a click!

Through Kenneth and that link, I met (online, so to speak) Derek Sivers and read his axiom that “Life Has No Speed Limits”.  And through that story, I learned about the life of Kimo Williams and why focus matters.  Focus?  On the Internet, with so many lessons to learn?

Saying “Hell Yeah!”

It was great to “meet” three SUPER TALENTED people on the Internet this morning.  People I will likely never meet in person or even exchange emails with.  Yet, people from whom I have already learned.  While perusing Derek’s site, I found another life lesson to which I truly try to adhere: no “yes.”  Either “HELL YEAH!” or “no.”

OK- back to that focus thing and getting some work done.

Ted Cahall

Using Postman for API consumption

Being a caveman

So what is wrong with curl?  Nothing.  But Postman (at getpostman.com) is simply one of the best tools I have used while developing code that consumes APIs.  This is another case where I was using caveman tech (curl) to do a job so elegantly handled by a company that makes a desktop app that runs on Linux, macOS, and Windows (and syncs across them).

Sometimes you just need an API

My coding and racing adventures led me to develop, and win an award for, the marrspoints.com application.  The app consumes two different APIs: race-monitor.com and motorsportreg.com.  I used curl to do the testing dirty work for these, as one of them did not publish the response formats I needed for my JSON parser.

I have been playing with a stock/equities “demo app” for my Cassandra cluster.  The app required me to replace the old Yahoo quotes feed.  I had to do testing on the new feed I chose, and I was still doing it with curl.

Even a stealth API…

I am now working on a stealth start-up idea with an even more stealth cohort of mine in the financial space.  The data company we have tentatively selected (and their API documentation) pointed me to Postman.  It is awesome.  I have deeply tested the financial access, accounts, instruments, etc.  This was accomplished on my own accounts in only a couple hours of work and research.  Postman is scriptable, has variable replacement, etc.  Oh, and the best part: a single developer license is FREE.  My favorite price.

To think Sam Morris at Digital River talked about Postman dozens of times. It never occurred to me to go look at it.  That cost me a lot of wasted time. Especially since I know Sam is “the man”.  Thank you Sam – the second time I heard of it, I knew to go get a copy and learn it quickly.

Ted Cahall

Gnome desktop coming to Ubuntu :-(

Unity vs Gnome

I hate to think of myself as a tech Luddite.  Being an Ubuntu Linux fan has made me very familiar with the Unity desktop.  Recently, I have been playing with 17.10 to see what is coming in 18.04 LTS.  I never thought I would defend the Unity desktop, as my earliest Linux days were split between the Gnome and KDE desktops.  But I wish I had my old Unity back.  Yes, I know I can return to it in 17.10, but it is becoming mostly unsupported.  Incremental scaling is essential with today’s 4K monitors.  Or I need Lasik.  Uber-Lasik in my case.

Why I like LTS.1

I never actually run the initial release of an LTS version.  I waited for 16.04.1 before putting anything real on 16.04 LTS.  It seems the Gnome desktop has a big memory leak, and it likely will not be fixed in the initial 18.04 LTS release in April.

OK, scratch moving to 18.04 LTS in April on anything I need.  I already am a desktop memory hog as it is and finally upgraded my new desktop machine to 32GB of RAM.

A Gnome future in Ubuntu

I know this is all for the good.  That change thing.  Moving to Gnome in this case.  It is far more widely supported and used across more variants of Linux.  I used to be a CentOS champion as I loosened the evil grip of Red Hat subscription fees back in my AOL cost-cutting days.  My home data center has since become almost exclusively Ubuntu.  Seems I will be straddling Gnome and Unity for a year or so.  One other word of caution: the Gnome 3.26 desktop (used in 17.10) does not truly support incremental UI scaling yet.  This is a problem for people like me with a 4K laptop screen or large 4K desktops.  There is a workaround.  However, it is not clear if fractional scaling will make it into Gnome 3.28, which ships with 18.04 LTS.

Happy times.  It is really hard to see my shell windows in a non-scaled up Gnome desktop on a 4K laptop screen.

Ted Cahall

Intel NUCs make a perfect home cluster

Getting my latest NUC

I am pretty psyched to get my latest Intel NUC.  The NUC7i7DNKE has an 8th generation Intel® Core™ i7 vPro™ processor (quad core, 4.2 GHz “Turbo”) with 32GB of DDR4 2400 MHz RAM and a 1TB SSD drive.  Not to mention built-in 4K UHD video with HDMI ports and USB 3.0.

My home data center NUC cluster

I will use this as my main development machine.  It is crazy that I tend to run out of RAM on my 16GB machines running Ubuntu.

This will be my 9th NUC.  Maybe I am a little too in love with these things.  They make great clusters for home research and development on distributed technologies such as Cassandra and Hadoop.  I have three nodes running Cassandra and Hadoop today – and am looking to add a 4th node when I free up my current development machine NUC.

Quiet, Low Power, great for clustering!

They are whisper quiet and use very low power.  There are 5 in a stack sitting on my desk next to me as I write this, and they make less noise than a single standard PC.  In fact, they seem to make no noise at all.

I also run Windows 10 on one as a home theater type of PC connected to a Samsung UHD TV via HDMI.  These NUCs are awesome.  I gave my old Core i3 media NUC to my younger brother as a gift.

Here is an old picture of my early stack of NUCs.  They are each 4″ x 4″.

Ted Cahall

Ted Cahall’s “new” tech blog

New Blog along with some old content

As a past media executive at companies such as CNET Networks, Microsoft’s MSN, AOL, and the early social network Classmates.com, I have operated a blog here and there over the years, mostly to test out SEO ideas, cross-link my sites, etc.

Started on LiveJournal in 2004

One of my unfortunate SEO decisions was using LiveJournal.com for my tech postings.  In 2004 as CTO of CNET Networks, I was fortunate enough to meet Brad Fitzpatrick who invented LiveJournal (as well as memcached).  Since we made a (failed) bid to buy the site, I decided I should use it and get to know it a bit.  I had used it to blog about some of my non-proprietary experiences with technology and software from time to time.

My last post there was almost two years ago to the day.  I was musing at the intersection of my auto racing hobby and my technology hobby.  It was the lack of automation of the points standings in my auto racing league that finally brought these two passions together.  This was all enabled by Open Source, the Intel NUC computers (home data center), and Amazon’s AWS hosting facility, resulting in the creation of the marrspoints.com race points tracking web application.

LiveJournal did not seem to get the SEO juice

Compared to modern blogging sites such as WordPress (which this blog is built on), LiveJournal never got the great SEO features that it deserved. Therefore today, I am moving my LiveJournal information over to a new home here at cahall-labs.com.   All of the posts have been successfully moved here as of this post.

Open Source and my Home Data Center

I have a few tech topics that are of interest to me. They include:

Cassandra and Hadoop

The marrspoints.com site was simple to build, but the back-end tools to ingest all of the race data were a lot more work.  I occasionally look at ways to change the data ingestion or analytics.  Therefore I play with tools such as Cassandra and Hadoop on my NUC cluster in my home data center.  In general, I will try NOT to blog about racing in this blog.  That will move to a blog at either cahallracing.com or cahall.com.

Thank you LiveJournal – hello WordPress

So thank you to LiveJournal for the tools and time.  It was a good 14-year run.  There is also an old, outdated racing blog on WordPress.  It will likely be moving to a new home in the next month or two.  It will be good to get back to using the tool Matt Mullenweg built (WordPress).  I had the opportunity to work with Matt at CNET when he spent time there for a year on his way to becoming famous.  Clearly I wish I had made a blog tool.  Some day I may even blog about Gavin Hall and Alex Rudloff, who built Blogsmith, which powers TMZ.com and most of the AOL blogs.  I guess I have met most of the people that built blogs…  Very, very smart and talented people.

Ted Cahall

Ted Cahall receives SCCA Award

Regional Exec Award

Ted Cahall receives Regional Executive Award
Paul Anderson, Regional Executive of the Washington DC Region of the SCCA, presents Ted Cahall with the Regional Executive Award for the development of the marrspoints.com web application.

This month I was awarded the Washington DC Region – SCCA Regional Executive’s Award by Paul Anderson for the development and management of the marrspoints.com web application.  Building marrspoints was such a great way to join my two hobbies: auto racing and software development.

It gave me something useful to work on, and something other racers could benefit from as well.

Standing on the shoulders of giants

What an honor to be recognized.  But these things do not happen in isolation.  I could not have done it without the help and guidance of Lin Toland.  Lin was there providing the feature requests and feedback on the design and functionality.  He also did a lot of unpaid QA for my early roll-out.  You are a first-class leader, Lin – thank you.

Lin still helps me navigate the WDCR SCCA region and look at new feature requests, including Bracket Racing with Chuck Edmondson.

Thank you for the start!

I would also like to thank Mike Collins of Meathead Racing for getting me involved in racing with the SCCA.  It’s like putting cash in a coffee can and lighting it on fire!

Ted Cahall

The Intel NUC Computers, AWS, and racing cars

It has been over five years since my last post about software and technology.  It’s not that I stopped using it.  I just stopped talking about it.  Lately I have been on a bit of a streak.  I have been working on the MARRS Points tracking app in AWS for over a year now.  It will now be the official points tracking application for the 2016 season across all race classes in the Washington DC Region (WDCR) of the SCCA.  I have actually done something mildly productive with my spare time!

An AWS Project Was In Order

It was mainly by happenstance that I got the app going.  I wanted to work in the Amazon AWS cloud a bit to understand it better.  I had managed teams using it for years at various companies, so it seemed like a reasonable learning experience.  I could have easily chosen Microsoft Azure or the Google Cloud, but AWS has the deepest legacy and I started there.  Once I logged in and started to play with AWS, they let me know my first year was FREE if I kept my usage below specific CPU and memory levels.  Sure, no problem.  But what to build, what to do?  I remembered I had built an old Java/JSP app as a framework for a racing site for my brothers and me, called cahallbrosracing.com.  GoDaddy had taken their Java support down and it had been throwing errors for years.  So I decided that was the perfect domain to try, and grabbed the skeleton code.  It would be some type of Java/JSP racing application that used a MySQL database backend.  But for now, I just needed to see if I could configure AWS to let me get anything live.

EC2, RDS, a little AWS security magic…

I provisioned an EC2 node, downloaded Tomcat and Oracle Java, and went to work.  In no time, I had the fragments of the old site live and decided I should put my race schedule online.  The schedule would not come from a static HTML page.  It would use a JSP template and a Java object to get the data from the database.  Then each year I would just add new events to the database and display them by year.  Quickly the MySQL DB was provisioned, network security configured, DB connectivity assembled, and the schedule was live.  OK – AWS was EASY to use and I now had a public-facing Java environment.  I was always too cheap to pay for a dedicated host, too cheap to sort out a real public-facing Java environment that allowed me to control the Linux services so I could start and stop Tomcat as needed.  But FREE was right up my alley.
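For anyone curious what that “Java object behind the JSP” idea looks like, here is a rough sketch using plain JDBC.  The table name, column names, connection URL, and credentials below are invented for the example, and the MySQL Connector/J driver is assumed to be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

/** Sketch: pull the race schedule for a given year out of MySQL so the JSP never
 *  hard-codes events. Table and column names (race_schedule, event_date, track,
 *  event_name) and the credentials are made up for illustration. */
public class ScheduleDao {
    private static final String JDBC_URL =
            "jdbc:mysql://localhost:3306/racing";   // an RDS endpoint in the real setup

    public List<String> eventsForYear(int year) throws Exception {
        List<String> events = new ArrayList<>();
        String sql = "SELECT event_date, track, event_name "
                   + "FROM race_schedule WHERE YEAR(event_date) = ? ORDER BY event_date";
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "appuser", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, year);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    events.add(rs.getString("event_date") + "  "
                             + rs.getString("track") + "  "
                             + rs.getString("event_name"));
                }
            }
        }
        return events;   // the JSP template just loops over this list
    }
}
```

Adding a new season then really is just inserting rows and letting the page display them by year.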

So there I was, developing Java, JSP, and SQL code right on the “production” AWS Linux server.  Who needs Maven or Ant?  I was building it right in the server directories!  Then I started to realize I did not have backups.  I was not using a source code repository.  It could all go away like a previous big app I wrote, which I lost when both of my RAID drives failed in the great 2005 Seattle wind storm.  Not a good idea.

Intel NUCs (and GitHub) to the rescue!

Enter the NUCs!!!  I had learned about the Intel NUC series and bought a handful of them to make a home server farm for Hadoop and Cassandra work.  These units are mostly the i5 models with 16GB of RAM running Ubuntu 14.04.4 LTS.  I realized I needed to do the development at home, keep the code in a GitHub repository, and then push updates to AWS when the next version was ready for production.  My main Java development NUC has been awesome.  It is a great complementary setup: an AWS “production” environment in the cloud and a Linux environment at home, with the source code repository also in the cloud.  I even installed VMware Workstation on my laptop so I have Linux at the track.  This allows me to pull the code from GitHub down to my laptop and make changes from the track.  It’s almost like I have made it to 2013 or something.

Why software is never “done”

Well, once I got going, I wanted to track my points in the MARRS races.  So I made some tools to allow manual entry of schedules, race results, etc.  This manual process clearly did not scale well.  The discovery of Race Monitor and their REST APIs solved that issue.  I wrote code to pull the results back from Race Monitor, using Google’s GSON parser.  GSON let me marshal the JSON data to objects used in the Java code.  Unfortunately, Race Monitor does not pass a piece of critical data: the SCCA ID for each racer.  The next step was to work with the Washington DC Region and the fine people at MotorsportReg.com to use their REST APIs to get that data for each race.  This simple Java app has become complex, with two REST APIs and tools to manage them.
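To give a flavor of that GSON marshaling step, here is a minimal, self-contained sketch.  The field names and the sample JSON are made up for illustration; they are not Race Monitor’s actual response schema:

```java
import com.google.gson.Gson;
import java.util.List;

/** Sketch of marshaling a JSON results payload into plain Java objects with GSON.
 *  The JSON shape and field names below are illustrative only. */
public class ResultsParser {

    // Plain data holders that mirror the JSON shape we expect back.
    static class RaceResult {
        int position;
        String racerName;
        String carClass;
        int laps;
    }

    static class RaceResponse {
        String raceName;
        List<RaceResult> results;
    }

    public static void main(String[] args) {
        String json = "{\"raceName\":\"MARRS 1\",\"results\":["
                + "{\"position\":1,\"racerName\":\"T. Cahall\",\"carClass\":\"SSM\",\"laps\":18}]}";

        // GSON maps JSON fields onto the matching object fields by name.
        RaceResponse response = new Gson().fromJson(json, RaceResponse.class);
        for (RaceResult r : response.results) {
            System.out.println(r.position + "  " + r.racerName + "  " + r.carClass);
        }
    }
}
```

Once the data is in plain objects like these, the rest of the app never has to touch raw JSON.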

The rest is history.  The tool can now also import CSV files from the MyLaps Orbits software.   A simple CMS was added to publish announcements and steward’s notes per race.  All of the 2015 season has been pulled into the application across all of the classes and drivers.  Many features, bells and whistles have been added thanks to Lin Toland’s sage advice. Check out the 2015 season SSM and SM Championship pages.  A ton of data and a lot of code go into making those look simple.
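The CSV import is conceptually the simplest piece.  The sketch below shows the general idea of reading an exported results file row by row; the file name and column order are assumed for illustration, and a real importer should be checked against the actual Orbits export header:

```java
import java.io.BufferedReader;
import java.io.FileReader;

/** Sketch of a results CSV import: read an export line by line and split each row
 *  into fields. The column order used here (position, car number, driver, class,
 *  best lap) is assumed for the example, not taken from the Orbits documentation. */
public class OrbitsCsvImporter {
    public static void main(String[] args) throws Exception {
        try (BufferedReader reader = new BufferedReader(new FileReader("orbits-export.csv"))) {
            String header = reader.readLine();           // first row names the columns
            System.out.println("Columns: " + header);

            String line;
            while ((line = reader.readLine()) != null) {
                // Naive split; a real importer should handle quoted fields.
                String[] cols = line.split(",");
                if (cols.length < 5) continue;           // skip malformed rows
                System.out.printf("P%s  #%s  %s (%s)  best lap %s%n",
                        cols[0].trim(), cols[1].trim(), cols[2].trim(),
                        cols[3].trim(), cols[4].trim());
            }
        }
    }
}
```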

Racing into the future with MARRS

I am really looking forward to being able to help all of the WDCR MARRS racers track their season starting in April.  Let’s hope I can drive my car better than last year and race as well as I have coded this application.

It is kind of odd to think that my desire to play with AWS caused me to build something useful for hundreds of weekend racing warriors.  Now the next question, should I make it work for every racing group across the world?  I mean multi-tenant, SaaS, world domination?  Hmmm…  Maybe I should try to finish better than 6th this year…

Ted Cahall

Kubuntu and Wubi

Linux desktop variations

After playing with Debian and Ubuntu, I wanted to see what the latest in KDE looked like. I have mostly been a Gnome user and had read some interesting tidbits on KDE 4.3 in Linux Journal. I did not want to “pollute” my Ubuntu installation by downloading all of the KDE parts into it, so I decided I would add a Kubuntu partition to my Ubuntu box.  I would also test Kubuntu on my 64-bit Windows machines using Wubi.

I was surprised to see that the installers for Ubuntu and Kubuntu are not really from the same code base. The installation on my 32-bit Ubuntu box went off without a hitch. I had a spare drive on it and used that for the new partition. I needed to manually change the partitions with the partition manager so it would leave the old Ubuntu 9.04 and 9.10 versions where they were. Even this was simple and straightforward.

Wubi letdown

I guess my biggest surprise was that Wubi does not install Kubuntu/Ubuntu to run “on top of Windows” as I thought it would. I had thought there was an additional VXLD layer or something that was written to let Linux run as a guest OS on top of Windows XP. This would have been really cool, sort of like Cygwin on steroids. This may sound ridiculous, but a colleague from long ago, Bill Thompson, wrote such a VXLD for Windows back in the mid ’90s that allowed x86 versions of Unix to run on top of Windows.

I searched around the web, Facebook, and LinkedIn to see if I could find Bill. With much digging I found him on LinkedIn. His start-up was called “nQue”. He was also a file system guru who wrote a lot of CD-ROM file system drivers, etc. after the start-up went south.

Needless to say, I think if that feature could be added to the Wubi concept, a lot more people might try Ubuntu, adding it right onto their Windows desktop as an application environment without requiring a reboot. I know Wubi does not alter the Windows partitions, so it is still a fairly painless way to try Ubuntu without risking much. Users can always uninstall it as they would any Windows application if they are not happy with it. I just prefer to rarely reboot my systems if I can avoid it.

Ted Cahall

Ubuntu and Debian Installation Fun

Home Data Center Saga continues…

The “home data center” is getting a bit crazy to maintain. It is a good thing I have so much free time on my hands (not). I did finish a couple of the projects on my list last weekend. I wanted to upgrade one of my P4 3.0GHz “home brew” machines from Fedora Core 3 to Ubuntu and put Debian server on one of the “new” used Dell 2850 servers I bought from work. I am now the proud systems administrator of both of these machines – with plenty of fun along the way.

Upgrading from Fedora to Ubuntu

I started with the Fedora Core 3 to Ubuntu conversion by making sure all of the applications I had written in Java, PHP, Perl, and sh, as well as the databases in MySQL, had been successfully ported to another CentOS machine and regression tested. That took longer than expected (of course). I had already used BitTorrent (thanks, Bram Cohen) to download Ubuntu 9.04. I like their numbering scheme, as even I can figure out how new/old the rev is. I then went and installed it on my “home brew” Intel motherboard based system. It worked like a charm and I was checking out its slick UI and features within minutes. So far, so good.

Next I decided to see how the graphics worked and if I could get it into 1920×1280 mode with my 24″ monitor. That was a tad trickier – but I was pleased to see that it went out on the Internet and figured out where to get the latest NVidia driver that supported the video card I had bought years ago. That was slick and the graphics were awesome. In high res mode it even puts some transparency to windows and gives them “momentum distortion” as you move the window. Not sure how useful it is – but it looks pretty cool.

VNC for graphical UI across machines

I like to sit in my home office and use VNC to access the 7 Linux boxes running in my basement and other rooms (versus running between them to try things). I know that “real systems administrators do not use VNC”, as told to me by one of our real systems administrators at AOL (and CNET years ago). I am not embarrassed to say I am not a real systems administrator!  I like the graphical UI access to all of these machines. It makes working on them so much easier with 4 or 5 windows open at a time. So here is where the rub is. I enabled VNC, ran back to my office, and tried it. No luck. I made sure SSH worked and I could get to the box – that was all set and good to go. I checked that the machine was listening on port 5901 – that was good too. A little snooping in the VNC log file let me know it could not execute the /etc/X11/xinit/xinitrc script. I thought that was odd, but I enabled execute permissions on the file and everything worked.

Upgrading versions of Ubuntu to 9.10

As I performed a routine update of the OS and files, it let me know that Ubuntu 9.10 was now out (as it was past October – month 10 in year 2009). I had downloaded 9.04 a month earlier when I began thinking of the project. A 9.10 upgrade sounded great – so I decided to “go for it”. Bad decision. After the upgrade the video would not work in graphics mode and I could only bring the system up in text mode. Not a big deal for a “real systems administrator” but definitely not what I was looking for – especially on a desktop machine where I wanted to check out the cool graphics in Ubuntu.

Video driver hell

Since the machine had no real work on it, and I did not feel it was worth my time to figure it all out in 80×24 text mode while troubleshooting the X Window System, I simply put 9.04 back on the machine and got it working where it was before the upgrade. This would represent my fallback case in a worst-case scenario. I then used BitTorrent to get 9.10 on a DVD. Ubuntu allows you to add multiple OS versions by partitioning the drive. I did that, shared the drive between 9.04 and 9.10, and performed the installation. 9.10 came up and worked from scratch – but the video upgrade would not work. When I tried to get it to go out and upgrade the video driver as it had in 9.04, it kept telling me that there were no drivers and that the upgrade was not possible. This did not let me use the 1920×1280 graphics mode of the card or monitor.

After playing with the software update tool, I was able to find some NVidia drivers that were available and downloaded those. Once I did that the system finally let me do the upgrade to enhanced video mode and use the 1920×1280. I am not sure why the 9.10 version was not able to automatically find these drivers as the 9.04 version was, but clearly this was why the upgrade had failed when I tried to go from 9.04 to 9.10 “in place”. The VNC issue for xinitrc still existed and I again corrected that. Project complete!

And on to the Debian upgrades

The Debian 5.0.3 server install for my Dell 2850 proved to be less frustrating – but not without hiccups. I had downloaded the first 3 DVDs for Debian and proceeded to the basement to start the install. That is when I noticed that this 2850 came with a CD-ROM drive and not a DVD-ROM drive! I had already put CentOS on the other Dell 2850 months ago, so I “assumed” that both machines had DVD-ROM drives. Bad assumption…  The nice thing about Debian is that it allows a “net install” CD to be burned that is fairly small. It then downloads the rest of what it needs as it goes along. So this is the route I chose for the Debian server. From there the install was fairly straightforward. The graphics are nowhere near as nice as Ubuntu’s – but this is a server install and I don’t have a fancy video card in the 2850 anyway. The VNC issue for xinitrc also exists with this version of Debian – which is no surprise, as Ubuntu is a downstream distribution of Debian. Another project complete, and now I have systems to compare different OS features and issues and keep up with some of the pilot projects we are doing at work to streamline software distribution, etc.

Ted Cahall