WolkeWerks Alpha Goes Live

Today marks another step along my journey as a co-founder (chief bottle washer?) of a FinTech start-up – we are ready to announce our WolkeWerks Alpha Launch! It has been an interesting and rewarding experience, to say the least. My co-founder, Scott Scazafavo, and I have spent most weekdays with at least one video meeting as we hash through the details of our product and the problems it solves for our consumers. Only two people in the company, and yet we still have multiple locations and a two-hour timezone gap. Flexibility is a key to success.

To me, we have too much polish and too many features for an Alpha. For Scott, the bar is that he no longer “cringes” when showing the product to his friends and family. The joys of being co-founders.

We are really fast, er, not that fast

As usual, things always take longer than one would like or even optimistically estimate. After Scott and I determined the initial high-level plan, we selected a data provider and I was able to produce a proof-of-concept / prototype in one week. WE are REALLY fast, we thought. A day later, I had skinned the “consumer” version in Bootstrap 3. OMG, we are SUPER fast! The prototype made it clear we had all of the building blocks we would need (aside from an army of software engineers, designers, research assistants, etc.).

If I was able to write a fully functioning, Bootstrap-skinned prototype based on a data service’s REST API in under two weeks, surely we could get our Alpha product live by April? May at the very latest.

Where did it all go Blanche?

Hmmm. Oddly, as we launch today, July 5th, it is interesting to see where the time went. Figuring out the features for the Alpha took longer than expected as we scoped the ideas down to the bare essentials. We spent over a month looking at technologies and products that we eventually realized would not be needed until the MVP phase. A huge help was the book “The Lean Startup” by Eric Ries. It helped us focus on much less and test our way into changes incrementally (and thus an Alpha, Beta, MVP, and then the major releases).

Of course, there is also the fact that we have only one software engineer (me). I like to think I can code fairly quickly, but I am also the AWS systems administrator, the Apache and Tomcat administrator, the MySQL DBA, the front-end web developer (JavaScript, jQuery, Bootstrap 4, HTML5, CSS, JSP), and the core services developer (Java). Oddly, as has been the case my whole career, product management generated ideas and features faster than engineering could build them. I was doing some product management along with Scott – but I was the only one doing any coding or system administration. Clearly I learned to stop making my own backlog bigger fairly quickly. 😉

In the beginning…

In the early phases we needed to agree on the basic functionality. We knew the long-term product would use distributed processing, AI, and machine learning. These are of great interest to me, so I poured myself into learning them more deeply (and getting them working in my lab) as fast as possible. This was going to be a super cool product and possibly even more fun to build!

A dollop of Hadoop and a sprinkle of Spark

What a dream job! I was a full-time student again. One of our product’s main goals is to help consumers manage their online subscriptions. My quest to build a Hadoop-based AI engine allowed me to add at least five more online subscriptions to my credit cards! I was a super-user and the product was not even built yet. Courses from Udemy, Lynda.com, Coursera, and Pluralsight were great! I quickly outpaced the top courses on Treehouse but had fun looking at them. These paid services were in addition to my regular free sites such as w3schools.com. I was suddenly an online training expert. Visions of blogging about the various online training sites and their relative merits sadly danced in my brain.

I took courses on Eclipse, Docker containers, AWS S3, Hadoop, MapReduce, Spark, and stuff I cannot even recall at this point. All of this while building out and upgrading my Hadoop and Cassandra clusters and testing my various theories on how to make the product sing. Then a dose of reality hit as I was working my way through “The Lean Startup”. Oops. I had gone way too far down that path when we were not even sure of the core viability of our product. There were MUCH simpler ways to achieve the product viability testing we would need without the AI engine working on day one. Well, that was at least one month of “fun” research that was pointless for our Alpha. Ouch. That was a big chunk of time lost – regardless of how much fun it was.

Time for a road trip

Once I realized I had lost my way according to the “start-up bible”, I quickly re-focused my interactions with Scott. We decided we needed to spend some time in the same city (with a real whiteboard) to flesh out the phases of the product roadmap. Scott and I chose Denver as it was between Seattle and Minneapolis. We used Trello (for free) and carved out what the Alpha, Beta, and MVP versions of the product needed to be (knowing even the Alpha would shift as we continued to determine feasibility).

Real collaboration – at the code level

The next issue was linking Scott’s product features and design to my coding. Sounds simple, but we needed to agree on basic things such as a front-end framework and possibly a tool that would allow Scott to design mock-ups that could be implemented with relative ease inside the chosen framework. With much angst and encouragement, we agreed Scott would learn a little Bootstrap and select a Bootstrap tool to build the “unauthenticated” portion of the Alpha site, since my version was a white page with a login form (I thought it looked great!).

Since Bootstrap 4 was available, Scott picked a tool that generated Bootstrap 4 code. No worries, Bootstrap 4 is backward compatible with the Bootstrap 3 code used in the prototype – right? Um, no. 🙁

I made the mistake of doing a five-minute hack of the consumer site (integrating the authenticated prototype and the new unauthenticated code from Scott). It produced a hybrid Bootstrap 3/4-ish site that I showed to Scott. I think I almost killed the company. It looked OK-ish (to me). Scott was so depressed at how bad it looked that he was on LinkedIn looking at Junior Product Manager roles.

Staying positive and building momentum

I realized that incremental crap-ism might not be the way to encourage Scott along the Bootstrap 3 to 4 migration journey. I quickly researched all the issues with making the site fully Bootstrap 4 and re-coded the prototype so it looked proper in both the authenticated and unauthenticated states. Slowly I coaxed Scott back off the ledge as he saw his designs working pixel-by-pixel, just as he had designed them.

Moving from Bootstrap 3 to 4 and creating a working rhythm with Scott as he generated new pages and updates to the prototype took some time, but the pages started looking better and better. Eventually Scott learned how to use GitHub and started making some UI changes directly himself. Now we had put down our rocks and sticks and were cooking with gas.

There were some data service issues we needed to address. Then some AWS issues we needed to investigate and correct, plus redirects, SSL certs, password standards, and a lot of other things that were correctly deemed important for a FinTech Alpha.

Then the real fun began. The product was getting close enough, but there were some key features we felt would make the product more useful and pleasing to our consumers, and there was an issue with “the browser wars” that was likely to cause confusion for our Alpha users. Should we add the features and try to fix the browser-wars issue? In the end we decided yes. The additional features required about a day of coding. Well worth it.

FireFox, FireFox, why, why, why?

We decided we needed to make a valiant effort to correct an issue where browsers were auto-populating our site’s credentials into a third-party access credentials form. We allow our consumers to access their own data through our product, but they need to securely pass their credentials on to their banking service. If the browser auto-populated these fields with our site’s login credentials, the user would likely pass the wrong credentials to the back-end banking service and the connection would fail. You would think it would be easy to stop a browser from auto-populating a form field. Unfortunately, again, no.

Here is where the “browser wars” come into play again. The HTML5 specification says there is an attribute called “autocomplete” that can be used on the <form> and <input> tags. By setting autocomplete=”off”, the browser should know not to populate that password field with the current site’s credentials. Perfect. Except none of the browsers honor it!
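
For reference, here is a minimal sketch of that approach (the element ids are made up for illustration – they are not our real markup). The autocomplete attribute can be written straight into the HTML, but setting it from JavaScript shows the same idea:

```javascript
// Hypothetical ids for the third-party credential form and its fields.
var bankForm = document.getElementById('bankLinkForm');
var bankUser = document.getElementById('bankUser');
var bankPass = document.getElementById('bankPass');

// Per the HTML5 spec, this should tell the browser not to fill these fields
// with the current site's saved login. In practice, the browsers ignore it.
bankForm.setAttribute('autocomplete', 'off');
bankUser.setAttribute('autocomplete', 'off');
bankPass.setAttribute('autocomplete', 'off');
```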

Protecting the installed base – of course!

There is a significant hurdle browser makers have built so users are less likely to consider switching browsers. Many users allow the browser to remember all of their passwords for the many banking, media, and merchant sites they use. Smart users do not use the same password for their favorite recipe site as for their investment or banking sites. Who can remember all of those passwords? Let Chrome, Firefox, and Edge remember them for you! But once a user has done that, it is way too much work to start over with a new browser and migrate all of those passwords (which they really don’t remember) along with them. Great product management by the Chrome, Firefox, and Edge teams in protecting their installed bases.

This is all fine and dandy, until a site needs to be sure the consumer is not confused when a different set of credentials is being requested. Auto-populating credentials (especially when they should be different from those of the site the user is on) may cause less sophisticated users to submit the wrong credentials. This further confounds the user experience and raises frustration levels.

A partial fix for now – and full fix in Beta

Interestingly enough, there is an article on the Mozilla Developer Network that explains a simple way to stop the browser from auto-populating the form fields. And guess what? It works for the latest versions of Chrome and Edge – but NOT FIREFOX!? Mozilla explains how to correct the issue (autocomplete=”new-password”) – but then explains that Firefox ignores that as well.

Why would Mozilla do this? Because they have been losing market share to Chrome for years – and they need to be more aggressive in their product design to capture and keep new users by storing their passwords at all costs. Sad.

So we launch with Firefox users having some confusion when we (securely) need their credentials for their bank services. There is a complex JavaScript fix that we will eventually implement: it randomizes the field names on load, then changes them back to “password” and “username” just prior to form submit. It is sad that we need to resort to that. But we will make that part of the Beta and just pre-warn our Firefox Alpha users. After all, it is an Alpha, and we decide who signs up and what issues we are asking them to avoid.
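
For the curious, the eventual fix looks roughly like this – a sketch only, with made-up element ids, not the code we will ship. The idea is that a browser cannot match a saved login against field names it has never seen:

```javascript
document.addEventListener('DOMContentLoaded', function () {
  var form = document.getElementById('bankLinkForm');
  var user = document.getElementById('bankUser');
  var pass = document.getElementById('bankPass');

  // Give the inputs throwaway names on load so the browser's saved-login
  // matching finds nothing to auto-populate.
  var salt = Math.random().toString(36).slice(2);
  user.name = 'u_' + salt;
  pass.name = 'p_' + salt;

  // Restore the names the back end expects just before the form is submitted.
  form.addEventListener('submit', function () {
    user.name = 'username';
    pass.name = 'password';
  });
});
```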

A curvy roadmap – and then a right turn

So after a month’s travel down the wrong road while developing our roadmap, we have finally gotten to the day of our Alpha release. We need to remind ourselves it is not an MVP or a Beta – it is just an Alpha. But to me, it is an Alpha that is pretty slick and does a lot of what we will need it to do as we roll towards Beta and the MVP.

There are many features left for Beta and the MVP as well as dusting off the machine learning and AI code – but we are on our way!

Ted Cahall

 

Zoom – FREE P2P Video Conference

Scott Scazafavo and I have been working full-time on our new start-up, WolkeWerks.com. This often places me in my home office reaching out to colleagues for advice and collaboration. My communication tool of choice has been free peer-to-peer (P2P) video conferencing. Scott and I have used Skype and FaceTime, but experienced the common video lags and garbled voices. These were frustrating experiences, needless to say.


It only takes a garage to fall on me

My dad used to say, “I don’t need an entire house to fall on me to learn something, it only takes a garage”.  I think he was telling me to learn from trends when they are still small – and even a garage hurts when it falls on you.

The second time someone (OK, a nice recruiter) asked me to connect with Zoom, I realized it was a high-quality service – no video choppiness or garbled voices. I did not look into pricing, as I figured it was another service used by larger corporations a la WebEx or BlueJeans. When a colleague in Berlin sent an invite with it, I thought it was odd that he was willing to pay for a service just to chat with me. This was on top of another colleague having a pending Zoom call scheduled. Why does everyone want to see a bald guy on video when it is such a frustrating technology?

FREE Zoom P2P Video Conferencing

Because it really isn’t frustrating anymore. At least based on my sample set of four calls so far: one to Boston, one for two hours to Berlin, and two to different people in Seattle. But the biggest surprise: it is FREE for two people with unlimited connectivity. It is also free for three or more people for up to 40 minutes. FREE is my favorite word, as I am an unabashed open source bigot. But FREE that really works well is amazing.

I love the idea of getting people to try something for free for personal use so that, once they fall in love with it, they are happy to pay for it in other circumstances. I have not tried a call with three or more participants yet. I suspect they have this technology so dialed in that once you do a 30-minute call with three or four people that runs long, you get hooked on how well it worked and add your credit card to the account.

Check out Zoom

The folks at Zoom also have connectivity modules and upgrades for H.323/SIP systems, LifeSize, Polycom, and Cisco gear in corporations. It is all on their website. They seem to have really nailed the tech on this so far.

I have a call to London with another colleague on Monday. I would never have asked him to use video conferencing in the past. Too clunky and messy. But we are set up on Zoom, and it will be good to see his face for the first time in a year – even though we catch up nearly every month.

Check out Zoom. All it takes is a laptop, iPad or a mobile phone.

Ted Cahall

MARRSPOINTS gets some SEO love

The marrspoints.com racing application recently got some SEO updates. These were long overdue in terms of getting better rankings in Google. Now a driver’s season results URLs include the driver’s name (example for Mike Collins) and the race results include the race name and classes (example for the 2017 MARRS 5 SM Feature race). Most importantly, the Points Leaderboards now have the class name and season as part of the URL.

On top of all of that, I automated the sitemap to build nightly and worked with the Google Search Console to fix duplicate title tags and content descriptions.

Enter Tuckey – SEO URL Rescue!

This all should have been done long ago, but features were my first priority. I used the Tuckey UrlRewriteFilter for all of the friendly-URL magic. It really is awesome, and I am glad I remembered it from all the way back in my CNET days, when we used it on a project there.

I still have some cleanup to do where pages are selected via form drop-down menus. My sitemap tool does not include these paths, and I know Google is a lot happier not seeing parameters on the URLs any longer. It is a LOT of JavaScript magic to rewrite the form action to use the rewrite destination, so that may be left for another year or two until it works its way up the stack in terms of importance.
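
When I do get to it, the gist will be something like the sketch below – element ids and the URL pattern are made up for illustration, not the real marrspoints.com markup. Instead of letting the form post its raw parameters, JavaScript builds the friendly path from the drop-down selections and navigates there:

```javascript
// Hypothetical leaderboard form with season and class drop-downs.
document.getElementById('leaderboardForm').addEventListener('submit', function (e) {
  e.preventDefault();
  var season = document.getElementById('seasonSelect').value;
  var raceClass = document.getElementById('classSelect').value;

  // e.g. /leaderboard/2017/SM instead of /leaderboard?season=2017&class=SM,
  // matching the kind of friendly URL the rewrite filter already serves.
  window.location.href = '/leaderboard/' + encodeURIComponent(season) +
                         '/' + encodeURIComponent(raceClass);
});
```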

Ted Cahall

Surfing to the end of the Internet

Keeping (too) busy

Since I left Digital River at the end of February, I have been working closely with Scott Scazafavo on a stealth start-up idea we had been kicking around. Most mornings I hit my office early and attempt to further the research or the code base. I worked on some Java REST API code I wanted to improve from its early usage at marrspoints.com. I remembered there was a simple test site that gave canned responses to HTTP GET and POST requests, along with cookies and the like. After a tad of searching, I found it again: httpbin.org – what a nice tool. Simple yet elegant – and great for testing out HTTP code samples where you just need a simple endpoint. Tutorials on the Internet should just use this site in their examples – as it likely will not change much.
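
To give a flavor of why it is so handy, here is a tiny sketch (the query parameters are just examples): hit the /get endpoint and httpbin echoes your arguments and headers back as JSON, so you can see exactly what your client sent without standing up a server of your own.

```javascript
// Ask httpbin.org to echo the request back as JSON.
fetch('https://httpbin.org/get?driver=42&raceClass=SM')
  .then(function (response) { return response.json(); })
  .then(function (body) {
    // body.args holds the query parameters, body.headers the request headers.
    console.log(body.args, body.headers);
  });
```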

The dangers of the Internet

This is where the danger began… As I finished the simple testing I was doing and was ready to move on to the next phase, I noticed that it had the author’s name with a hyperlink. Since I wished I had written such a useful “demo” or example.com website, I wanted to see a tad more about him. Through Kenneth Reitz, I learned that I comparatively don’t have many cool hobbies or talents (I am not that great of an auto racer, and I have not written books, published music, been a professional speaker, or even an amateur photographer). That is all on top of his enormous contribution to the Open Source space. This guy is REALLY talented. Through a link on his personal values page, I saw another link stating that “Life is not a Race, but it has No Speed Limits”. Of course that deserved a click!

Through Kenneth and that link, I met (online, so to speak) Derek Sivers and read his axiom that “Life Has No Speed Limits”. And through that story, I learned about the life of Kimo Williams and why focus matters. Focus? On the Internet, with so many lessons to learn?

Saying “Hell Yeah!”

It was great to “meet” three SUPER TALENTED people on the Internet this morning. People I will likely never meet in person or even exchange emails with. Yet they are people from whom I have already learned. While perusing Derek’s site, I found another life lesson to which I truly try to adhere. No “yes.” Either “HELL YEAH!” or “no.”

OK – back to that focus thing and getting some work done.

Ted Cahall

Using Postman for API consumption

Being a caveman

So what is wrong with curl? Nothing. But Postman (at getpostman.com) is simply one of the best tools I have used while developing code that consumes APIs. This is another case where I was using caveman tech (curl) to do a job so elegantly handled by a service that makes a desktop app that runs on Linux, macOS, and Windows (and syncs across them).

Sometimes you just need an API

My coding and racing adventures led me to develop, and win an award for, the marrspoints.com application. The app consumes two different APIs: race-monitor.com and motorsportreg.com. I used curl to do the dirty testing work for these, as one of them did not publish the response formats I needed for my JSON parser.

I have been playing with a stock/equities “demo app” for my Cassandra cluster.  The app required me to replace the old Yahoo quotes feed.  I had to do testing on the new feed I chose, and I was still doing it with curl.

Even a stealth API…

Currently, I am working on a stealth start-up idea with an even more stealth cohort of mine in the financial space. The data company we have tentatively selected (and their API documentation) pointed me to Postman. It is awesome. I have deeply tested the financial access, accounts, instruments, etc. This was accomplished on my own accounts in only a couple of hours of work and research. Postman is scriptable, has variable replacement, etc. Oh, and the best part: a single developer license is FREE. My favorite price.

To think Sam Morris at Digital River talked about Postman dozens of times, and it never occurred to me to go look at it. That cost me a lot of wasted time – especially since I know Sam is “the man”. Thank you, Sam – the second time I heard of it, I knew to go get a copy and learn it quickly.

Ted Cahall

Gnome desktop coming to Ubuntu :-(

Unity vs Gnome

I hate to think of myself as a tech Luddite. Being an Ubuntu Linux fan has made me very familiar with the Unity desktop. Recently, I have been playing with 17.10 to see what is coming in 18.04 LTS. I never thought I would defend the Unity desktop, as my earliest Linux days were split between the Gnome and KDE desktops. But I wish I had my old Unity back. Yes, I know I can return to it in 17.10 – but it is becoming mostly unsupported. Incremental scaling is essential with today’s 4K monitors. Or I need Lasik. Uber-Lasik in my case.

Why I like LTS.1

I never actually run the initial release of an LTS version. I waited for 16.04.1 before putting anything real on 16.04 LTS. It seems the Gnome desktop has a big memory leak, and it likely will not be fixed in the initial 18.04 LTS release in April.

OK, scratch moving to 18.04 LTS in April for anything I need. I am already a desktop memory hog as it is, and I finally upgraded my new desktop machine to 32GB of RAM.

A Gnome future in Ubuntu

I know this is all for the good. That change thing. Moving to Gnome in this case. It is far more widely supported and used across more variants of Linux. I used to be a CentOS champion as I loosened the evil grip of Red Hat subscription fees back in my AOL cost-cutting days. I have since become an almost exclusively Ubuntu home data center. It seems I will be straddling Gnome and Unity for a year or so. One other word of caution: the Gnome 3.26 desktop (used in 17.10) does not truly support incremental UI scaling yet. This is a problem for people like me with a 4K laptop screen or large 4K desktops. There is a workaround. However, it is not clear if fractional scaling will make it into Gnome 3.28, which ships with 18.04 LTS.

Happy times.  It is really hard to see my shell windows in a non-scaled up Gnome desktop on a 4K laptop screen.

Ted Cahall

Ted Cahall’s “new” tech blog

New Blog along with some old content

As a past media executive at companies such as CNET Networks, Microsoft’s MSN, AOL, and the early social network Classmates.com, I have operated a blog here and there over the years – mostly to test out SEO ideas, cross-link my sites, etc.

Started on LiveJournal in 2004

One of my unfortunate SEO decisions was using LiveJournal.com for my tech postings. In 2004, as CTO of CNET Networks, I was fortunate enough to meet Brad Fitzpatrick, who invented LiveJournal (as well as memcached). Since we made a (failed) bid to buy the site, I decided I should use it and get to know it a bit. I used it to blog about some of my non-proprietary experiences with technology and software from time to time.

My last post there was almost two years ago to the day. I was musing on the intersection of my auto racing hobby and my technology hobby. It was the lack of automation of the points standings in my auto racing league that finally brought these two passions together. This was all enabled by Open Source, the Intel NUC computers (home data center), and Amazon’s AWS hosting – resulting in the creation of the marrspoints.com race points tracking web application.

LiveJournal did not seem to get the SEO juice

Compared to modern blogging sites such as WordPress (which this blog is built on), LiveJournal never got the great SEO features that it deserved. Therefore, today I am moving my LiveJournal content over to a new home here at cahall-labs.com. All of the posts have been successfully moved here as of this post.

Open Source and my Home Data Center

I have a few tech topics that are of interest to me. They include:

Cassandra and Hadoop

The marrspoints.com site was simple to build, but the back-end tools to ingest all of the race data were a lot more work. I occasionally look at ways to change the data ingestion or analytics, so I play with tools such as Cassandra and Hadoop on my NUC cluster in my home data center. In general, I will try NOT to blog about racing in this blog. That will move to a blog at either cahallracing.com or cahall.com.

Thank you LiveJournal – hello WordPress

So thank you to LiveJournal for the tools and the time. It was a good 14-year run. There is also an old, outdated racing blog on WordPress. It will likely be moving to a new home in the next month or two. It will be good to get back to using the tool Matt Mullenweg built (WordPress). I had the opportunity to work with Matt at CNET when he spent time there for a year on his way to becoming famous. Clearly I wish I had made a blog tool. Someday I may even blog about Gavin Hall and Alex Rudloff, who built Blogsmith, which powers TMZ.com and most of the AOL blogs. I guess I have met most of the people that built blogging platforms… Very, very smart and talented people.

Ted Cahall

The Intel NUC Computers, AWS, and racing cars

It has been over five years since my last post about software and technology.  It’s not that I stopped using it.  I just stopped talking about it.  Lately I have been on a bit of a streak.  I have been working on the MARRS Points tracking app in AWS for over a year now.  It will now be the official points tracking application for the 2016 season across all race classes in the Washington DC Region (WDCR) of the SCCA.  I have actually done something mildly productive with my spare time!

An AWS Project Was In Order

It was mainly by happenstance that I got the app going. I wanted to work in the Amazon AWS cloud a bit to understand it better. I had managed teams using it for years at various companies, so it seemed like a reasonable learning experience. I could have easily chosen Microsoft Azure or the Google Cloud, but AWS has the deepest legacy, so I started there. Once I logged in and started to play with AWS, they let me know my first year was FREE if I kept my usage below specific CPU and memory levels. Sure, no problem. But what to build, what to do? I remembered I had built an old Java/JSP app as a framework for a racing site for my brothers and me, called cahallbrosracing.com. GoDaddy had taken their Java support down and it had been throwing errors for years. So I decided that was the perfect domain to try, and grabbed the skeleton code. It would be some type of Java/JSP racing application that used a MySQL database backend. But for now, I just needed to see if I could configure AWS to let me get anything live.

EC2, RDS, a little AWS security magic…

I provisioned an EC2 node, downloaded Tomcat and Oracle Java, and went to work. In no time, I had the fragments of the old site live and decided I should put my race schedule online. The schedule would not come from a static HTML page; it would use a JSP template and a Java object to get the data from the database. Then each year I would just add new events to the database and display them by year. Quickly the MySQL DB was provisioned, network security was configured, DB connectivity was assembled, and the schedule was live. OK – AWS was EASY to use, and I now had a public-facing Java environment. I was always too cheap to pay for a dedicated host – too cheap to sort out a real public-facing Java environment that allowed me to control the Linux services so I could start and stop Tomcat as needed. But FREE was right up my alley.

So there I was, developing Java, JSP, and SQL code right on the “production” AWS Linux server. Who needs Maven or Ant? I was building it right in the server directories! Then I started to realize I did not have backups and was not using a source code repository. It could all go away, like a previous big app I wrote that was lost when both of my RAID drives failed in the great 2005 Seattle wind storm. Not a good idea.

Intel NUCs (and GitHub) to the rescue!

Enter the NUCs!!! I had learned about the Intel NUC series and bought a handful of them to make a home server farm for Hadoop and Cassandra work. These units are mostly the i5 models with 16GB of RAM running Ubuntu 14.04.4 LTS. I realized I needed to do the development at home, keep the code in a GitHub repository, and then push updates to AWS when the next version was ready for production. My main Java development NUC has been awesome. It is a great complementary setup: an AWS “production” environment in the cloud and a Linux development environment at home, with the source code repository also in the cloud. I even installed VMware Workstation on my laptop so I have Linux at the track. This allows me to pull the code from GitHub down to my laptop and make changes from the track. It’s almost like I have made it to 2013 or something.

Why software is never “done”

Well, once I got going, I wanted to track my points in the MARRS races. So I made some tools to allow manual entry of schedules, race results, etc. This manual process clearly did not scale well. The discovery of Race Monitor and their REST APIs solved that issue. I wrote code to pull the results back from Race Monitor using Google’s GSON parser, which let me marshal the JSON data into objects used in the Java code. Unfortunately, Race Monitor does not pass a piece of critical data: the SCCA ID for each racer. The next step was to work with the Washington DC Region and the fine people at MotorsportReg.com to use their REST APIs to get that data for each race. This simple Java app has become complex, with two REST APIs and tools to manage them.

The rest is history.  The tool can now also import CSV files from the MyLaps Orbits software.   A simple CMS was added to publish announcements and steward’s notes per race.  All of the 2015 season has been pulled into the application across all of the classes and drivers.  Many features, bells and whistles have been added thanks to Lin Toland’s sage advice. Check out the 2015 season SSM and SM Championship pages.  A ton of data and a lot of code go into making those look simple.

Racing into the future with MARRS

I am really looking forward to being able to help all of the WDCR MARRS racers track their season starting in April.  Let’s hope I can drive my car better than last year and race as well as I have coded this application.

It is kind of odd to think that my desire to play with AWS caused me to build something useful for hundreds of weekend racing warriors.  Now the next question, should I make it work for every racing group across the world?  I mean multi-tenant, SaaS, world domination?  Hmmm…  Maybe I should try to finish better than 6th this year…

Ted Cahall

Windows 7 – huge upgrade from XP

Nice hardware helps

I just realized that I bought my “new” Windows 7 machine way back in late January. The thing is amazing: 8GB RAM, an i7 860 quad-core CPU, 3.0Gbps RAID-1 SATA drives, etc. I recently went out and bought a 30-inch Samsung monitor so I could put the video card in 2560×1600 mode. The speed, video, stability, etc. of this machine are incredible!

The most amazing thing is the OS. I skipped Vista due to all of the bad press – coupled with the fact that XP mostly did everything I needed from a desktop OS. “Mostly” was the key part of that sentence. It really could not handle more than about 2GB of memory efficiently – and I had some leaky open-source apps that regularly gobbled that up, since I rarely reboot…

Free Microsoft Software!

Additionally, Microsoft has tossed in some FREE apps that were not available under XP as part of their Windows Live Essentials program. The most significant of those apps (to me) is Movie Maker. I regularly edit and upload portions of my SCCA Club Racing videos using Movie Maker. It is simple and easy – which fits my video skill level really well. I am also in the process of adding a TV tuner card so I can really utilize the Windows Media Center software that came with my Windows 7 Ultimate version. That should make it even more interesting to connect to my Xbox 360 (which now gives my Apple TV a run for its money in renting movies from the Internet).

Windows 7 handles memory well

I now regularly run over 3GB of apps without any issues on the machine whatsoever. I have not added all the DB servers, app servers, etc. that I used to run on my various Windows desktops. That is because I never retire my old machines, and they are still on the network somewhere. I have finally created what is mostly a desktop machine used as a desktop.

No question, Windows 7 is a really fantastic OS. It will continue to be my main machine for accessing all the servers running in my home data center.

Ted Cahall

Kubuntu and Wubi

Linux desktop variations

After playing with Debian and Ubuntu, I wanted to see what the latest in KDE looked like. I have mostly been a Gnome user and had read some interesting tidbits on KDE 4.3 in Linux Journal. I did not want to “pollute” my Ubuntu installation by downloading all of the KDE parts into it. So I decided I would add a Kubuntu partition to my Ubuntu box, and also test Kubuntu on my 64-bit Windows machines using Wubi.

I was surprised to see that the installers for Ubuntu and Kubuntu are not really from the same code base. The installation on my 32-bit Ubuntu box went off without a hitch. I had a spare drive on it, and I used that for the new partition. I needed to manually change the partitions with the partition manager so it could leave the old Ubuntu 9.04 and 9.10 versions where they were. Even this was simple and straightforward.

Wubi letdown

I guess my biggest surprise was that Wubi does not install Kubuntu/Ubuntu to run “on top of Windows” as I thought it would. I had thought there was an additional VXLD layer or something that was written to let Linux run as a guest OS on top of Windows XP. That would have been really cool – sort of like Cygwin on steroids. This may sound ridiculous, but a colleague from long ago, Bill Thompson, wrote such a VXLD for Windows back in the mid ’90s that allowed x86 versions of Unix to run on top of Windows.

I searched around the web, Facebook, and LinkedIn to see if I could find Bill. With much digging, I found him on LinkedIn. His start-up was called “nQue”. He was also a file system guru who wrote a lot of CD-ROM file system drivers, etc., after the start-up went south.

Needless to say, I think that if that feature could be added to the Wubi concept – running Ubuntu right on the Windows desktop as an application environment without requiring a reboot – a lot more people might try it. I know Wubi does not alter the Windows partitions, so it is still a fairly painless way to try Ubuntu without risking much. Users can always uninstall it as they would any Windows application if they are not happy with it. I just prefer to rarely reboot my systems if I can avoid it.

Ted Cahall